\section{Introduction} The nonlinear BSDE theory formulated by Pardoux and Peng \cite{PP1} has many theoretical and practical applications, ranging from economics (see e.g. El Karoui, Peng and Quenez \cite{KPQ}) to PDEs (see e.g. Pardoux and Peng \cite{PP2}, Peng \cite{Peng1}). Based on BSDEs, Peng \cite{Peng2} introduced the nonlinear $g$-expectation theory as a nontrivial generalization of classical linear expectations. Indeed, the $g$-expectation is described by a class of equivalent probability measures. In view of this property, Chen and Epstein \cite{CE} studied stochastic differential recursive utility. However, many economic and financial problems involve model uncertainty which is characterized by a family of non-dominated probability measures. Motivated by these questions, Peng \cite{P3,P4,P7} introduced a nonlinear expectation, called $G$-expectation, which can be regarded as the upper expectation of a specific family of non-dominated probability measures. Under this framework, the corresponding nonlinear Brownian motion, called $G$-Brownian motion, is established. Briefly speaking, $G$-Brownian motion is a continuous process with independent and stationary increments under $G$-expectation. Moreover, the stochastic calculus with respect to (symmetric) $G$-Brownian motion and forward and backward stochastic differential equations driven by $G$-Brownian motion ($G$-SDEs and $G$-BSDEs for short) are also obtained, see also Gao \cite{Gao}, Hu, Ji, Peng and Song \cite{HJPS1}. As is well known, according to Lusin's theorem, random variables on a classical probability space are quasi-continuous (see Section 2 for the definition). But this is no longer true in the $G$-expectation framework, since the elements of the probability family that represents the $G$-expectation are mutually singular. So an important problem for the $G$-expectation theory is the quasi-continuity property of random variables, especially of stopping times, which play a major role in classical stochastic analysis but tend to be more discontinuous. The purpose of this paper is to study the properties of exit times for $G$-SDEs, among which the most important one is that, under mild conditions, the exit times of $G$-SDEs have the quasi-continuity property, so that they belong to the proper nonlinear $G$-expectation space. Here, the corresponding $G$-SDEs are given by \begin{equation} dX^{x}_{t}=b(X_{t}^{x})dt+\sum_{i,j=1}^dh_{ij}(X_{t}^{x})d\langle B^i,B^j\rangle_t+\sum_{j=1}^{d}\sigma_j(X_{t}^{x})dB^j_t,\ X_{0}^{x}=x; \ \ \ t\geq 0. \end{equation} Different from the usual case of symmetric $G$-Brownian motion, which involves only volatility uncertainty, in the above equation $B$ is a generalized $G$-Brownian motion, which has both mean and volatility uncertainty. Thus we need to study the corresponding stochastic calculus theory first, and one can refer to \cite{GP,GPP,Nu} for related discussions. Next we consider the exit time of $G$-SDEs from an open set $Q$: $$ {\tau}_Q^x:=\inf\{t\geq 0:X^x_t(\omega)\in Q^c\}. $$ Since we cannot expect the $G$-SDEs to have sufficient continuity with respect to $\omega$, we introduce an alternative approach, which considers the image space of the $G$-SDEs, to study the properties of ${\tau}_Q^x$. We also utilize the weak compactness method from \cite{Song1}, where the quasi-continuity property of hitting times for symmetric $G$-martingales was considered, and the strong Markov property of $G$-SDEs from \cite{HJL}.
These properties of exit times may play an important role in the applications of $G$-SDEs in many fields involving a stopping rule. The well-known Feynman-Kac formula tells us that stochastic differential equations driven by linear Brownian motion (SDEs) provide a probabilistic representation for a class of linear PDEs (with Dirichlet boundary), see, e.g., \cite{Fre}. With the help of $G$-BSDEs, in \cite{P7,HJPS2} the authors obtain a stochastic representation for fully nonlinear parabolic PDEs in $\mathbb{R}^n$. Inspired by these results, as an application of our results on the exit times, we establish a probabilistic interpretation for a large class of fully nonlinear elliptic PDEs with Dirichlet boundary via $G$-SDEs. We also note that Lions and Menaldi \cite{LM} (see also Buckdahn and Nie \cite{BN}) gave a representation for a class of fully nonlinear elliptic equations with Dirichlet boundary via stochastic control theory under the linear expectation framework. In their construction, every admissible control corresponds to a trajectory of SDEs. Compared with the aforementioned results, the trajectories in our representation are universally defined for all probability measures. Moreover, we prove that the induced probability measures of $G$-SDEs are weakly compact, and hence the supremum in the upper expectation representation can be realized (Corollary \ref{supremum realization}). This kind of property can be applied to the study of the first-order differentiability of viscosity solutions of fully nonlinear PDEs (see \cite{HPS,Song2}), which is also our future work. The paper is organized as follows. In Section 2, we present some preliminaries for nonlinear expectation theory and the related spaces of random variables. In Section 3, we give the stochastic integral and differential equations with respect to generalized $G$-Brownian motion. Section 4 is devoted to studying the properties of exit times for $G$-SDEs. In Section 5, we provide the probabilistic representation for fully nonlinear elliptic equations with Dirichlet boundary. \section{Preliminaries} The main purpose of this section is to recall some preliminary results about the upper expectation and the corresponding capacity theory. More details can be found in \cite{DHP}. For each Euclidean space, we denote by $\langle\cdot,\cdot\rangle$ and $|\cdot|$ its scalar product and the associated norm, respectively. Let $\Omega_d:=C([0,\infty);\mathbb{R}^d)$ and $B_t(\omega):=\omega(t)$ be the canonical space and the canonical process, equipped with the distance $$\rho_d(\omega^1,\omega^2):=\sum_{i=1}^\infty\frac{1}{2^i}[(\max_{t\in[0,i]}|\omega^1_t-\omega^2_t|)\wedge 1].$$ The corresponding natural filtration of $B$ is given by $\mathcal{F}_t:=\sigma \{B_{s}:s\leq t\}$ for $t\geq 0$. Let $\mathcal{P}$ be a given family of probability measures on $(\Omega_d, \mathcal{B}(\Omega_d))$. Denote by $\mathcal{L}(\Omega_d,\mathcal{P})$ the space of all $\mathcal{B}(\Omega_d)$-measurable random variables $X$ such that $E_P[X]$ exists for each $P\in \mathcal{P}$. Next we define the corresponding upper expectation by \begin{equation} \mathbb{\hat{E}}_{\mathcal{P}}[X]:=\sup_{P\in \mathcal{P}}E_P[X],\ \ \ \ \text{for}\ X \in \mathcal{L}(\Omega_d,\mathcal{P}). \end{equation} Then it is easy to check that the triple $(\Omega_d, \mathcal{L}(\Omega_d,\mathcal{P}), \mathbb{\hat{E}}_{\mathcal{P}})$ forms a sublinear expectation space (see Peng \cite{P7}).
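For orientation, the following minimal numerical sketch approximates the upper expectation $\mathbb{\hat{E}}_{\mathcal{P}}$ for a finite family of measures, each being the law of $\sigma W_1$ for a constant volatility $\sigma$; the family, the test function and the sample size are purely illustrative assumptions, and a finite sub-family of constant volatilities can of course only produce a lower bound for the supremum over a larger class such as the family in the example below.
\begin{verbatim}
# Minimal sketch (illustrative assumptions only): the upper expectation
# E_hat[X] = sup_{P in P} E_P[X] over a FINITE family of measures,
# each P being the law of sigma*W_1 for a constant sigma in [0.5, 1.0].
import numpy as np

rng = np.random.default_rng(0)
sigmas = np.linspace(0.5, 1.0, 11)    # finite sub-family of volatilities
W = rng.standard_normal(100_000)      # samples of W_1, reused for every P

def upper_expectation(phi):
    # sup over the finite family of the Monte Carlo estimates of E_P[phi]
    return max(np.mean(phi(s * W)) for s in sigmas)

phi = lambda x: np.maximum(x, 0.0)
print(upper_expectation(phi))                       # about 1/sqrt(2*pi)
# Sublinearity: E_hat[X] + E_hat[-X] >= 0, strictly positive here
print(upper_expectation(phi) + upper_expectation(lambda x: -phi(x)))
\end{verbatim}
The last line illustrates the sublinearity of $\mathbb{\hat{E}}_{\mathcal{P}}$: one always has $\mathbb{\hat{E}}_{\mathcal{P}}[X]+\mathbb{\hat{E}}_{\mathcal{P}}[-X]\geq 0$, with strict inequality for a non-degenerate family, which is what distinguishes the upper expectation from a linear expectation.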
In this setting, we can also introduce the notions of identical distribution and independence: \begin{itemize} \item[$\cdot$] two $n$-dimensional random vectors $X=(X_{1},...,X_{n})$ and $Y=(Y_{1},...,Y_{n})$ are called identically distributed, denoted by $X\overset {d}{=}Y$, if for each $\varphi\in C_{b.Lip}(\mathbb{R}^{n})$, $ \hat{\mathbb{E}}_{\mathcal{P}}[\varphi(X)]=\hat{\mathbb{E}}_{\mathcal{P}}[\varphi(Y)]; $ \item[$\cdot$] an $m$-dimensional random vector $Y$ is said to be independent of an $n$-dimensional random vector $X$ if for each $\varphi\in C_{b.Lip}(\mathbb{R}^{n+m})$, $ \hat{\mathbb{E}}_{\mathcal{P}}[\varphi(X,Y)]=\hat{\mathbb{E}}_{\mathcal{P}}[\hat{\mathbb{E}}_{\mathcal{P}}% [\varphi(x,Y)]_{x=X}], $ \end{itemize} where $C_{b.Lip}(\mathbb{R}^l)$ is the space of all bounded Lipschitz functions defined on $\mathbb{R}^l$, $l\geq1$. \begin{example}{\upshape Let $0\leq \underline{\sigma}\leq \bar{\sigma}$ be two given constants. Suppose $W$ is a 1-dimensional standard Brownian motion defined on a Wiener space $(\Omega^0,(\mathcal{F}^0_t)_{t\geq 0},P^0)$, and set \[ \mathcal{P} := \{P_{\theta} : P_{\theta}= P^0\circ X^{-1},\ X_t = \int^t_0 \theta_sdW_s,\ \theta\in\mathcal{A}_{[\underline{\sigma}, \bar{\sigma}]}\},\] where $\mathcal{A}_{[\underline{\sigma}, \bar{\sigma}]}$ is the collection of all adapted processes taking values in $[\underline{\sigma}, \bar{\sigma}]$. Then on the sublinear expectation space $(\Omega_1, \mathcal{L}(\Omega_1,{\mathcal{P}}), \mathbb{\hat{E}}_{\mathcal{P}})$, the canonical process $B$ is a symmetric $G$-Brownian motion ($\hat{\mathbb{E}}_{\mathcal{P}}[B_t]=-\hat{\mathbb{E}}_{\mathcal{P}}[-B_t]=0$) with $G(a)=\frac{1}{2}(\bar{\sigma}^2a^+-\underline{\sigma}^2a^-)$ for each $a\in\mathbb{R}$, see \cite{DHP}.} \end{example} Now, based on the family $\mathcal{P}$, we introduce the following capacity, called the upper probability, $$c_{\mathcal{P}}(A):=\sup_{P\in\mathcal{P}} P(A),\ \ A\in \mathcal{B}(\Omega_d).$$ It is obvious that \begin{equation}\label{myq1} c_{\mathcal{P}}(A) =\sup \{ c_{\mathcal{P}}(K):\ K\makebox{ is compact in }\ \mathcal{B}(\Omega_d), \ K\subset A\}, \ \ \forall A\in \mathcal{B}(\Omega_d). \end{equation} Then we can introduce the language of ``$\mathcal{P}$-quasi-surely'': \begin{itemize} \item[$\cdot$] A set $A\in\mathcal{B}(\Omega_d)$ is called $\mathcal{P}$-polar if $c_{\mathcal{P}}(A)=0$, and a property is said to hold ``$\mathcal{P}$-quasi-surely'' ($\mathcal{P}$-q.s.) if it holds outside a polar set. As usual, we do not distinguish between two random variables $X$ and $Y$ if $X=Y$ $\mathcal{P}$-q.s. \item[$\cdot$] A function $X:\Omega_d \rightarrow \mathbb{R}$ is called $\mathcal{P}$-quasi-continuous ($\mathcal{P}$-q.c.) if for each $\varepsilon>0$, there exists a closed set $F$ with $c_{\mathcal{P}}(F^{c})<\varepsilon$ such that $X|_{F}$ is continuous. We say that $Y:\Omega_d \rightarrow \mathbb{R}$ has a $\mathcal{P}$-quasi-continuous version if there exists a $\mathcal{P}$-quasi-continuous function $X:\Omega_d \rightarrow \mathbb{R}$ such that $Y=X$ $\mathcal{P}$-q.s. \end{itemize} We define the $L^p$-norm of random variables as $||X||_{p,\mathcal{P}}:=(\mathbb{\hat{E}}_{\mathcal{P}}[|X|^p])^{\frac{1}{p}}$ for $p\geq 1$ and set \[ {L}^{p}(\Omega_d;\mathcal{P}):=\{X\in\mathcal{B}(\Omega_d): ||X||_{{p,\mathcal{P}}}<\infty \}. \] Then ${L}^{p}(\Omega_d,\mathcal{P})$ is a Banach space under the norm $||\cdot||_{{p,\mathcal{P}}}$. Let $C_b(\Omega_d)$ (resp. $B_b(\Omega_d)$) be the space of all bounded, continuous functions (resp.
bounded, $\mathcal{B}(\Omega_d)$-measurable functions) on $\Omega_d$. We denote the corresponding completion under the norm $||\cdot||_{p,\mathcal{P}}$ by ${L}_{C}^p(\Omega_d,\mathcal{P})$ (${L}_{b}^p(\Omega_d,\mathcal{P})$, resp.). The following result characterizes the spaces ${L}_{C}^p(\Omega_d,\mathcal{P})$ and ${L}_{b}^p(\Omega_d,\mathcal{P})$ in terms of measurability and integrability. \begin{theorem}[\cite{DHP}] \label{LG characteriazation theorem}For each $p\geq1$, we have \[ L_{b}^{p}(\Omega_d,\mathcal{P})=\{X\in\mathcal{B}(\Omega_d): \lim_{n\rightarrow \infty}\mathbb{\hat{E}}_{\mathcal{P}}[|X|^{p}I_{\{|X|>n\}}]=0\}, \] \[ L_{C}^{p}(\Omega_d,\mathcal{P})=\{X\in\mathcal{B}(\Omega_d):X\ \text{has a}\ \mathcal{P}\text{-q.c. version, }\lim_{n\rightarrow \infty}\mathbb{\hat{E}}_{\mathcal{P}}[|X|^{p}I_{\{|X|>n\}}]=0\}. \] \end{theorem} Moreover, we have the following monotone convergence results, which are different from the linear case. \begin{proposition}[\cite{DHP,Song1}]\label{downward convergence proposition} Suppose $X_n$, $n\geq 1$, and $X$ are $\mathcal{B}(\Omega_d)$-measurable. \begin{description} \item[(1)] Assume $X_n\uparrow X$ q.s. on $\Omega_d$ and $E_{P}[X_1^-]<\infty$ for all $P\in\mathcal{P}$. Then $ \mathbb{\hat{E}}_{\mathcal{P}}[X_n]\uparrow\mathbb{\hat{E}}_{\mathcal{P}}[X]. $ \item[(2)] Assume $\mathcal{P}$ is weakly compact. \begin{itemize} \item [(a)] If $\{X_n\}_{n=1}^\infty$ in ${L}_{C}^{1}(\Omega_d,\mathcal{P})$ satisfies $X_n\downarrow X$ $\mathcal{P}$-q.s., then $ \mathbb{\hat{E}}_{\mathcal{P}}[X_n]\downarrow\mathbb{\hat{E}}_{\mathcal{P}}[X]. $ \item [(b)] For each closed set $F\in \mathcal{B}(\Omega_d)$, $ c_{\mathcal{P}}(F) =\inf \{ c_{\mathcal{P}}(O):\ O\makebox{ open in}\ \mathcal{B}(\Omega_d),\ F\subset O\}. $ \end{itemize} \end{description} \end{proposition} \begin{remark}\label{remark on tightness guarantee maximum and on closure} \upshape{ If ${{\mathcal{P}}}$ is weakly compact, then the supremum is attained for elements of $L^1_C(\Omega_d,{{\mathcal{P}}})$, i.e., \begin{equation*} \mathbb{\hat{E}}_{\mathcal{P}}[X]=\max_{P\in \mathcal{P}}E_P[X],\ \ \ \ \text{for each}\ X\in L^1_C(\Omega_d,{{\mathcal{P}}}). \end{equation*} For a family $\mathcal{P}_0$, we denote by ${{\mathcal{{P}}}}$ its closure under weak convergence, and it holds that \begin{equation}\label{2134354544} \mathbb{\hat{E}}_{\mathcal{P}_0}[X]=\mathbb{\hat{E}}_{{{\mathcal{{P}}}}}[X],\ \ \ \ \text{for each}\ X\in L^1_C(\Omega_d,{{\mathcal{{P}}}}). \end{equation} } \end{remark} \section{The SDEs driven by generalized $G$-Brownian motion} Let $\mathbb{S}(d)$ be the space of all $d\times d$ symmetric matrices. Consider a fixed sublinear function $G(\cdot,\cdot):\mathbb{S}% (d)\times\mathbb{R}^d\rightarrow \mathbb{R}$, which is monotonic in the first variable. Then there exists a bounded and closed set $\Theta\subset \mathbb{R}^{d\times d}\times \mathbb{R}^d$ such that \begin{equation}\label{generalized G definition} G(A,p)=\sup_{(\gamma,\mu)\in \Theta}[\frac{1}{2}\langle A,\gamma\gamma^{T}\rangle+\langle p,\mu\rangle]. \end{equation} In the sequel, we shall introduce an upper expectation on $(\Omega_d, \mathcal{B}(\Omega_d))$ such that the canonical process $B$ is the so-called generalized $G$-Brownian motion.
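The following special case, included only for orientation, makes the correspondence between $G$ and $\Theta$ in (\ref{generalized G definition}) explicit. \begin{example}{\upshape Let $d=1$ and $\Theta=[\underline{\sigma},\overline{\sigma}]\times[\underline{\mu},\overline{\mu}]$ with $0\leq\underline{\sigma}\leq\overline{\sigma}$ and $\underline{\mu}\leq\overline{\mu}$. Then (\ref{generalized G definition}) reads \[ G(a,p)=\sup_{\gamma\in[\underline{\sigma},\overline{\sigma}],\ \mu\in[\underline{\mu},\overline{\mu}]}\Big[\frac{1}{2}\gamma^{2}a+\mu p\Big]=\frac{1}{2}(\overline{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-})+\overline{\mu}p^{+}-\underline{\mu}p^{-},\ \ \ \ a,p\in\mathbb{R}. \] In particular, for $\underline{\mu}=-\overline{\mu}$ the drift part is $\overline{\mu}|p|$, while for $\underline{\mu}=\overline{\mu}=0$ we recover the function $G(a)=\frac{1}{2}(\overline{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-})$ of the symmetric $G$-Brownian motion from the example in Section 2.} \end{example}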
Following the argument of \cite{DHP}, we consider a linear standard $d$-dimensional Brownian motion $W$ on some probability space $(\Omega^{0},\mathcal{F}^{0},(\mathcal{F}^{0}_t)_{t\geq 0}, P^{0})$ with \[ \mathcal{F}^0_{t}:=\sigma \{W_{s},0\leq s\leq t\} \vee \mathcal{N}^{P^{0}}, \] where $ \mathcal{N}^{P^{0}}$ is the collection of all $P^{0}$-null sets. Denote by $\mathcal{A}^{\Theta}$ the collection of all $\mathcal{F}^0_t$-adapted processes $(\gamma,\mu)$ taking values in $\Theta$ on $\lbrack0,\infty)$. For each fixed $(\gamma,\mu) \in \mathcal{A}^{\Theta}$ and $0\leq t\leq T<\infty$, we define \[ B_{T}^{t,\gamma,\mu}:=\int_{t}^{T}\gamma_{s}dW_{s}+\int_{t}^{T}\mu_{s}d{s}. \] Then we can obtain a family $\mathcal{P}_0$ of measures: \begin{equation}\label{Generalized BM P0} \mathcal{P}_0:=\{P_{\gamma,\mu}:P_{\gamma,\mu}=P^{0}\circ(B_{\cdot}^{0,\gamma,\mu}% )^{-1},(\gamma,\mu)\in \mathcal{A}^{\Theta}\}, \end{equation} which is tight by Kolmogorov's criterion (see \cite{KS}). We define its closure under weak convergence as $\mathcal{P}$, which is weakly compact by Prokhorov's theorem. Then we can establish the capacity theory corresponding to $\mathcal{P}$ through the results in Section 2. In the following, for this $\mathcal{P}$, we will abbreviate $\mathbb{\hat{E}}_{\mathcal{P}}$, $\mathcal{P}\text{-q.s.}$, $c_{\mathcal{P}}$, ${L}_{C}^p(\Omega_d,\mathcal{P})$ as $\mathbb{\hat{E}},\text{q.s.},c,{L}_{C}^p(\Omega_d)$, etc., for notational simplicity. \begin{lemma}\label{integration transformation} For each $ X\in L_C^1(\Omega_d)$, \begin{equation} \mathbb{\hat{E}}[X]=\sup_{P \in \mathcal{P}_{0}}E_{P}[X]=\sup_{(\gamma,\mu) \in \mathcal{A}^{\Theta}}E_{P^0}[X(B^{0,\gamma,\mu}_{\cdot})]. \end{equation} \end{lemma} \begin{proof} The proof is immediate from Remark \ref{remark on tightness guarantee maximum and on closure}. \end{proof} \begin{remark}\label{L1G contain l,lip r.v.}\upshape{ From Theorem \ref{LG characteriazation theorem} and the above lemma, we can get that $\varphi(B_{t_{1}},B_{t_{2}},\cdots,B_{t_{n}})\in L_C^1(\Omega_d)$, where $\varphi\in C(\mathbb{R}^{n\times d}) $ is of polynomial growth.} \end{remark} The upper expectation $\mathbb{\hat{E}}$ corresponding to $\mathcal{P}$ is called the $G$-expectation, under which the canonical process $B=(B^1,\cdots,B^d)$ is called a ($d$-dimensional) generalized $G$-Brownian motion, see \cite{P7}. Indeed, we have \begin{proposition}\label{properties of generalized G-BM} Under $\mathbb{\hat{E}}$, the canonical process $B$ is a generalized $G$-Brownian motion, i.e., \begin{itemize} \item [(1)] $B_0=0$ q.s. and $\lim_{t\rightarrow 0}\mathbb{\hat{E}}[|B_t|^3]/t=0$; \item [(2)] $B$ has stationary increments: $B_{t+s}-B_s\overset{d}{=}B_t$, and independent increments: $B_{t+s}-B_t$ is independent of $(B_{t_1},\cdots,B_{t_n})$ for any $t_1<\cdots<t_n\leq t$ and $s\geq 0$. \end{itemize} Moreover, $\hat{\mathbb{E}}[\langle p,B_t\rangle]\leq G(0,p) t$, which implies $|\hat{\mathbb{E}}[\langle p,B_t\rangle]|\leq [G(0,p)\vee G(0,-p)] t$, for each $p\in \mathbb{R}^d$. \end{proposition} \begin{proof} The two assertions in $(1)$ can be easily proved by Lemma \ref{integration transformation} and Remark \ref{L1G contain l,lip r.v.}.
Moreover, by the definition of $G$ and the observation that $(\gamma_s,\mu_s)$ takes values in $\Theta$, we deduce that for each $p\in \mathbb{R}^d$ \begin{equation}\label{33334342342} \mathbb{\hat{E}}[\langle p,B_t\rangle] =\sup_{(\gamma,\mu) \in \mathcal{A}^{\Theta}}E_{P^0}[\langle p,\int_0^t\mu_sds\rangle] \leq G(0,p)t, \end{equation} from which we get $|\hat{\mathbb{E}}[\langle p,B_t\rangle]|\leq [G(0,p)\vee G(0,-p)] t$. Now we are going to prove (2). By a similar analysis as in Lemma 43 of \cite{DHP}, we derive that for $\varphi \in C_{b.Lip}(\mathbb{R}^{d})$ and $t,s\geq 0$, \begin{align*} \sup_{(\gamma,\mu) \in \mathcal{A}^{\Theta}}E_{P^0}[\varphi(B^{0,\gamma,\mu}_{t})]=\sup_{(\gamma,\mu) \in \mathcal{A}^{\Theta}}E_{P^0}[\varphi(B^{s,\gamma,\mu}_{t+s})], \end{align*} which indicates that \begin{equation}\label{elementary BM distribution property} \hat{\mathbb{E}}[\varphi(B_{t})]=\hat{\mathbb{E}}[\varphi(B_{t+s}-B_s)]. \end{equation} Next, for each $\varphi \in C_{b.Lip}(\mathbb{R}^{(n+1)\times d})$, $(\gamma, \mu)\in \mathcal{A}^{\Theta}$ and $t,s\geq 0$, taking $\xi:=(B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n})$ for $t_1<\cdots<t_n\leq t$ and using the argument in Lemma 44 of \cite{DHP}, we have \begin{align*} \esssup_{(\overline{\gamma},\overline{\mu}) \in \mathcal{A}^{\Theta}}% E_{P^0}[\varphi((B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n}),B^{t,\overline{\gamma},\overline{\mu}}_{t+s})|\mathcal{F}^0_{t}]& =(\esssup_{(\overline{\gamma},\overline{\mu}) \in \mathcal{A}^{\Theta}}% E_{P^0}[\varphi(x,B^{t,\overline{\gamma},\overline{\mu}}_{t+s})|\mathcal{F}^0_{t}])_{x=(B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n})}\\&=(\sup_{(\overline{\gamma},\overline{\mu}) \in \mathcal{A}^{\Theta}}% E_{P^0}[\varphi(x,B^{t,\overline{\gamma},\overline{\mu}}_{t+s})])_{x=(B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n})}. \end{align*} Taking first the expectation $E_{P^0}$ and then the supremum over $({\gamma},{\mu}) \in \mathcal{A}^{\Theta}$ on both sides yields $$\sup_{({\gamma},{\mu}) \in \mathcal{A}^{\Theta}}% E_{P^0}[\varphi((B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n}),B^{t,{\gamma},{\mu}}_{t+s})]=\sup_{({\gamma},{\mu}) \in \mathcal{A}^{\Theta}}E_{P^0}[(\sup_{(\overline{\gamma},\overline{\mu}) \in \mathcal{A}^{\Theta}}% E_{P^0}[\varphi(x,B^{t,\overline{\gamma},\overline{\mu}}_{t+s})])_{x=(B^{0,\gamma,\mu}_{t_1},\cdots,B^{0,\gamma,\mu}_{t_n})}], $$ that is, $$ \hat{\mathbb{E}}[\varphi((B_{t_1},\cdots,B_{t_n}),B_{t+s}-B_t)]= \hat{\mathbb{E}}[\hat{\mathbb{E}}[\varphi(x,B_{t+s}-B_t)]_{x=(B_{t_1},\cdots,B_{t_n})}]. $$ The proof is complete. \end{proof} Note that when $\Theta$ contains only a single point $(\gamma,\mu)$, $B$ is the classical linear Brownian motion with $B_1\sim N(\mu,\gamma\gamma^T)$. So the generalized $G$-Brownian motion can be regarded as a Brownian motion with mean and covariance uncertainty described by $\Theta$. Remark that when $G=G(A)$, the generalized $G$-Brownian motion reduces to the symmetric $G$-Brownian motion, which has only volatility uncertainty. \begin{remark}\upshape{In this article, in order to cover a more general class of PDEs, we use the generalized $G$-Brownian motion. Most existing results for symmetric $G$-Brownian motion still hold for generalized $G$-Brownian motion with similar proofs, so we will state them directly, except that we need to clarify some basic properties of generalized $G$-Brownian motion and the construction of the related stochastic calculus, which are more involved.
} \end{remark} Property $(2)$ in Proposition \ref{properties of generalized G-BM} allows us to define a time-consistent conditional $G$-expectation in the following way: for $X=\varphi(B_{t_{1}},B_{t_{2} }-B_{t_{1}},\cdots,B_{t_{n}}-B_{t_{n-1}})$, the conditional expectation at ${t_{j}}$ is defined by \begin{align} \hat{\mathbb{E}}_{t_j}[X] :=\psi(B_{t_{1}},B_{t_{2} }-B_{t_{1}},\cdots,B_{t_{j}}-B_{t_{j-1}}),\nonumber \end{align} where \[ \psi(x_{1},\cdots,x_{j})=\hat{\mathbb{E}}[\varphi(x_{1},\cdots ,x_{j},B_{t_{j+1}}-B_{t_{j}},\cdots,B_{t_{n}}-B_{t_{n-1}})]. \] The conditional $G$-expectation $\hat{\mathbb{E}}_t[\cdot]$ can be extended continuously to $L_C^1(\Omega_d)$ and preserves most properties of the linear expectation except linearity, see \cite{P7}. In the remainder of this section, we shall study the stochastic calculus with respect to $B$. We set $G_1(A):=\sup_{(\gamma,\mu)\in \Theta}\frac{1}{2}\langle A,\gamma\gamma^{T}\rangle=G(A,0)$ and $g(p):=\sup_{(\gamma,\mu)\in \Theta}\langle p,\mu\rangle=G(0,p).$ Then consider the two sets $$\Gamma:=\{\gamma\in \mathbb{R}^{d\times d}: \frac{1}{2}\langle A,\gamma\gamma^{T}\rangle\leq G_1(A), \ \text{for each}\ A\in \mathbb{S}% ({d}) \},\ \ \ \ \Sigma:=\{\mu\in \mathbb{R}^d:\langle p,\mu\rangle\leq g(p),\ \text{for each}\ p\in \mathbb{R}^d\}.$$ It is obvious that $\Theta\subset \Gamma\times \Sigma$. For any $\gamma\in \Gamma$ and $\mu\in \Sigma$, we have $ 0\leq \gamma\gamma^T\leq \overline{\sigma}^2 I_{d\times d }$ and $ |\mu|\leq \beta $ with $\overline{\sigma}^2:=\sup_{\gamma\in \Gamma}\lambda^{max}[\gamma\gamma^T] $ and $\beta:=\sup_{\mu\in \Sigma}|\mu|,$ where $\lambda^{max}[\gamma\gamma^T]$ is the maximal eigenvalue of $\gamma\gamma^T$. The proof of the following lemma can be found in \cite{STZ} (see also \cite{HP1}). \begin{lemma}\label{STZ lemma for generalized G} For any $P\in\mathcal{P}$, we have \begin{equation}E_P[\xi|\mathcal{F}_{t}]\leq \mathbb{\hat{E}}_t[\xi],\ \ \ \ P\text{-a.s.},\ \text{for each}\ \xi\in L_C^1(\Omega_d). \end{equation} \end{lemma} Next we give the semimartingale decomposition of the generalized $G$-Brownian motion, which is crucial for our main results. The proof will be given in the Appendix. \begin{theorem}\label{generalized G-BM decomposition} For any $P\in \mathcal{P}$, $B_t$ is a $d$-dimensional semimartingale with decomposition $B_t=M^P_t+A^P_t$ such that $A^P_t$ and $\langle M^P\rangle^P_t$ are absolutely continuous with respect to $t$ and $$ \frac{dA^P_t}{dt}\in\Sigma \ \ \text{and}\ \ \frac{d\langle M^P\rangle^P_t}{dt}\in \Gamma\Gamma^T:=\{\gamma\gamma^T:\gamma\in\Gamma \},\ \ \ \ \ {a.e.\ t},\ P\text{-a.s.} $$ Here we denote the quadratic variation of a martingale under $P$ by $\langle \cdot\rangle^P$. \end{theorem} Using the above theorem, we can now define the stochastic integral with respect to the generalized $G$-Brownian motion. For each $p\geq1$ and $0<T<\infty$, set \begin{align*} M_C^{p,0}(0,T):=& \{\eta:=\eta_t(\omega)=\xi_0I_{\{0\}}+\sum_{j=0}^{N-1}\xi_j(\omega)I_{(t_j,t_{j+1}]}(t),\ \text{for some}\ N\in\mathbb{N}, \\ & 0=t_0\leq t_1\leq \cdots\leq t_N\leq T, \xi_j\in L_{C}^p(\Omega_d)\cap \mathcal{F}_{t_j},j=0,1,\cdots,N-1\},\\ M_b^{p,0}(0,T):=& \{\eta:=\eta_t(\omega)=\xi_0I_{\{0\}}+\sum_{j=0}^{N-1}\xi_j(\omega)I_{(t_j,t_{j+1}]}(t),\ \text{for some}\ N\in\mathbb{N}, \\ & 0=t_0\leq t_1\leq \cdots\leq t_N\leq T, \xi_j\in L_{b}^p(\Omega_d)\cap \mathcal{F}_{t_j},j=0,1,\cdots,N-1\}.
\end{align*} For each $\eta\in M_b^{p,0}(0,T)$, set $\|\eta\|_{M_b^{p}(0,T)}:=(\hat {\mathbb{E}}[\int_{0}^T|\eta_{t}|^{p}dt])^{\frac{1}{p}} $ and denote by $M_C^{p}(0,T)$ ($M_b^{p}(0,T)$, resp.) the completion of $M_C^{p,0}(0,T)$ ($M_b^{p,0}(0,T)$, resp.) under the norm $\|\cdot \|_{M_b^{p}(0,T)}$. Then for each $\eta\in M_b^{2,0}(0,T;\mathbb{R}^d)$, we define the stochastic integral with respect to $B_t$ as \[ \int_{0}^{T} \langle \eta_{t},dB_t\rangle:=\sum_{j=0}^{N-1}\langle\xi_{{j}} ,B_{t_{j+1}}-B_{t_{j} }\rangle, \] which is a linear mapping from $M_b^{2,0}(0,T;\mathbb{R}^d)$ to ${L}_b^{2}(\Omega_d)$. Moreover, we have the following estimates. \begin{proposition}\label{Bcontrol}For each $\eta\in M_b^{2,0}(0,T;\mathbb{R}^d)$ and $Y\in\mathcal{L}(\Omega_d,\mathcal{P})$, we have \begin{equation} \hat{\mathbb{E}}[-\beta\int_{0}^{T} |\eta_{t}|d{t}+Y]\leq \hat{\mathbb{E}}[\int_{0}^{T} \langle \eta_{t},dB_t\rangle+Y]\leq \hat{\mathbb{E}}[\beta\int_{0}^{T} |\eta_{t}|d{t}+Y],\end{equation} \begin{equation}\label{87676535467} \hat{\mathbb{E}}[(\int_{0}^{T} \langle\eta_{t},d B_t\rangle)^2] \leq 2(\overline{\sigma}^2+\beta^2T)\hat{\mathbb{E}}[\int_0^T|\eta_t|^2dt]. \end{equation} \end{proposition} \begin{proof} We only prove (\ref{87676535467}), since the proof of the first estimate is similar. For any $P\in \mathcal{P}$, by Theorem \ref{generalized G-BM decomposition}, we have \begin{equation}\label{5678934224} \begin{split} E_P[(\int_{0}^{T} \langle\eta_{t},dB_t\rangle)^2]&\leq 2E_P[(\int_{0}^{T} \langle\eta_{t},dM_t^P\rangle)^2]+2E_P[(\int_{0}^{T}\langle\eta_{t},dA^P_t\rangle)^2]\\ &\leq 2E_P[\int_{0}^{T} \langle\eta_{t}\eta_{t}^T,d\langle M^P\rangle^P_t\rangle]+2E_P[(\int_{0}^{T}|\langle\eta_{t},dA^P_t\rangle|)^2]\\ &\leq 2\overline{\sigma}^2E_P[\int_{0}^{T} |\eta_{t}|^2dt]+2\beta^2T E_P[\int_{0}^{T} |\eta_{t}|^2dt].\end{split} \end{equation} Taking the supremum over $P\in\mathcal{P}$ on both sides, we get the desired result. \end{proof} It is worth mentioning that $\int_0^T \langle\eta_t ,d B_t\rangle$ is defined q.s., and under each $P$, it coincides with the classical stochastic integral with respect to the semimartingale $B_t$. By the above proposition, the stochastic integral can be extended continuously to $M_b^{2}(0,T;\mathbb{R}^d)$ in the classical It\^{o} way. Remark that when $\eta\in M_C^2(0,T;\mathbb{R}^d)$, we have $\int_0^T \langle\eta_t ,d B_t\rangle\in L_C^2(\Omega_d)$. Moreover, we can also define the stochastic integral on an optional time interval as in \cite{LP}. A mapping $\tau: \Omega_d\rightarrow [0, \infty)$ is called a stopping time if $\{\tau\leq t\}\in\mathcal{F}_t$ and an optional time if $\{\tau< t\}\in\mathcal{F}_t$ for each $t \geq 0$. \begin{lemma}\label{stop integral lemma of LP} For each optional time $\tau$ and $\eta\in M_b^2(0,T;\mathbb{R}^d)$, we have $ \eta I_{[0,\tau]}\in M_b^2(0,T;\mathbb{R}^d) $ and \begin{equation}\label{221111122} \int_{0}^{\tau\wedge t}\langle \eta_s,dB_s\rangle=\int_{0}^{t}\langle I_{[0,\tau]}\eta_s,dB_s\rangle. \end{equation} \end{lemma} \begin{proof} The proof is similar to that of \cite{LP}. \end{proof} \begin{remark} \upshape {Note that optional times may not possess enough continuity in $\omega$, so in general we cannot expect $ \eta I_{[0,\tau]}\in M_C^2(0,T;\mathbb{R}^d)$ for $\eta\in M_C^2(0,T;\mathbb{R}^d)$, see \cite{HWZ}.} \end{remark} The quadratic variation process of $B$ is defined by \begin{equation} \langle B\rangle_t:=B_tB_t^T-\int_0^tB_sdB^T_s-\int_0^tdB_sB^T_s.
\end{equation} For any $P\in\mathcal{P}$, we have $$ \langle B\rangle_t=\langle B\rangle^P_t=\langle M^P\rangle^P_t,\ \ \ \ P\text{-a.s.} $$ Hence, $$\frac{d\langle B\rangle_t}{dt}\in \Gamma\Gamma^T, \ \ \ \ \text{q.s.} $$ Thus we can define the stochastic integral $\int_0^T\langle \eta_t, d\langle B\rangle_t\rangle $ for $\eta\in M_b^1(0,T;\mathbb{S}(d))$ similarly to the one for $dB_t$, and the following property holds: $$\hat{\mathbb{E}}[|\int_0^T\langle \eta_t, d\langle B\rangle_t\rangle|]\leq \overline{\sigma}^2\sqrt{d} \hat{\mathbb{E}}[\int_0^T|\eta_t| dt].$$ \begin{definition} A process $(M_{t})_{t\geq0}$ is called a {$G$-martingale} if for each $t\in \lbrack0,\infty)$, $M_{t}\in L_{C}^{1}(\Omega_d)\cap \mathcal{F}_{t}$ and for each $s\in \lbrack0,t]$, we have $ \hat{\mathbb{E}}_s[M_{t}]=M_{s}. $ \end{definition} \begin{lemma}\label{G martingale lemma of Peng} For each $A\in \mathbb{S}(d),p \in \mathbb{R}^d$ and $t\geq 0$, \begin{equation} \mathbb{\hat{E}}[\frac12\langle A,\langle B\rangle_t\rangle+\langle p,B_t\rangle]=G(A,p)t. \end{equation} \end{lemma} \begin{proof}By a direct calculation, we have \begin{equation}\label{3333333} \mathbb{\hat{E}}[\frac12\langle A,\langle B\rangle_t\rangle+\langle p,B_t\rangle]=\sup_{(\gamma,\mu) \in \mathcal{A}^{\Theta}}E_{P^0}[\frac12\int_0^t\langle A,\gamma_s\gamma_s^T\rangle ds+\int_0^t\langle p,\mu_s\rangle ds]\leq G(A,p)t \end{equation} by the definition of $G$ and the fact that $(\gamma_s,\mu_s)$ takes values in $\Theta$. On the other hand, choosing $(\gamma_1,\mu_1)\in \Theta$ so that $G(A,p)=\frac{1}{2}\langle A,\gamma_1{\gamma_1}^T\rangle+\langle p,\mu_1\rangle$ and taking the constant controls $\gamma_s\equiv\gamma_1,\ \mu_s\equiv\mu_1$, we get equality in (\ref{3333333}), which completes the proof. \end{proof} \begin{proposition}\label{martingale proposition} Let $\eta\in M_{C} ^{1}(0,T;\mathbb{S}(d))$, $\zeta\in M_{C} ^{2}(0,T;\mathbb{R}^d)$. Then \[ M_{t}:=\int_{0}^{t}\langle \eta_s,d\langle B\rangle_{s}\rangle +2\int_{0}^{t}\langle \zeta_s,d B_{s}\rangle-2\int_{0}^{t}G(\eta_s,\zeta_s)ds \] is a $G$-martingale on $[0,T]$. \end{proposition} \begin{proof} By a standard approximation argument, the proof follows from Lemma \ref{G martingale lemma of Peng} and the properties of $\mathbb{\hat{E}}_t$. \end{proof} Now we are ready to state the SDEs driven by the generalized $G$-Brownian motion: \begin{equation} \label{SDE} dX^{x}_{t}=b(X_{t}^{x})dt+\sum_{i,j=1}^dh_{ij}(X_{t}^{x})d\langle B^i,B^j\rangle_t+\sum_{j=1}^{d}\sigma_j(X_{t}^{x})dB^j_t,\ X_{0}^{x}=x; \ \ \ t\geq 0, \end{equation} where $x\in \mathbb{R}^n$ and $b,h_{ij}=h_{ji},\sigma_j:\mathbb{R}^n\rightarrow \mathbb{R}^n$ are given Lipschitz functions. Denote $\sigma:=[\sigma_1,\cdots,\sigma_d]$. From Proposition \ref{Bcontrol} and the contraction mapping method as in \cite{P7}, we can obtain that the $G$-SDE \eqref{SDE} has a unique solution $X\in M^2_C(0,T).$ Moreover, we have the following It\^{o} formula for the $G$-SDE \eqref{SDE}. \begin{theorem}\label{Ito formula}Let $f$ be in $C^2(\mathbb{R}^n)$ such that all of its second-order partial derivatives satisfy a polynomial growth condition. Then \begin{align} f(X^x_t)-f(x)=&\int_{0}^t\langle Df(X^x_s),b(X^x_s)\rangle ds+\int^t_0\langle Df(X^x_s),\sigma(X_{s}^{x}) dB_s\rangle +\sum_{i,j=1}^d\int_{0}^t\langle Df(X^x_s),h_{ij}(X^x_s)d\langle B^i,B^j\rangle_s\rangle\nonumber \\ & +\frac12\int_{0}^t\langle \sigma(X^x_s)^TD^2f(X^x_s)\sigma(X^x_s),d\langle B\rangle_s\rangle.
\end{align} \end{theorem} Finally, we shall investigate the Markov property for the $G$-SDEs (\ref{SDE}). Let $\tau$ be an optional time satisfying: \begin{description} \item [(H)] $c(\{\tau>T\})\rightarrow 0$, as $ T\rightarrow \infty$. \end{description} For each $p\geq1$, we set $${L}_{C}^{0,p,\tau+}(\Omega_d)=\{X=\sum_{i=1}^n\xi_iI_{A_i}: \ n\in\mathbb{N},\ \{A_i\}_{i=1}^n\text{ is an}\ \mathcal{F}_{\tau+}\text{-partition of}\ \Omega_d,\ \xi_i\in L_C^p(\Omega_d),\ i=1,\cdots,n\}$$ and denote by ${L}_{C}^{p,\tau+}(\Omega)$ the completion of ${L}_{C}^{0,p,\tau+}(\Omega)$ under the norm $||\cdot||_p$. We also define $$L^{1,\tau+,*}_C(\Omega):=\{X:\text{there exists}\ X_n\in L^{1,\tau+}_C(\Omega_d)\ \text{such that}\ X_n\uparrow X \ q.s.\}.$$ By a similar analysis as in \cite{HJL}, the conditional expectation $\hat{\mathbb{E}}_{\tau+}$ is well defined on $L^{1,\tau+,*}_C(\Omega_d)$ and can preserve most properties of linear conditional expectation except the linearity. The conditional expectation $\hat{\mathbb{E}}_{\tau}$ for a stopping time $\tau$ satisfying (H) is defined similarly on $L^{1,\tau,*}_C(\Omega_d)$, where $L^{1,\tau,*}_C(\Omega_d)$ is defined analogously to $L^{1,\tau+,*}_C(\Omega)$ with $\mathcal{F}_{\tau+}$ replaced by $\mathcal{F}_{\tau}$. Then we have \begin{theorem}\label{extended BM strongmarkov1} Let $Y$ be lower semi-continuous on $\Omega_n$ and bounded from below. Then $Y(X^x_{\tau+\cdot}) \in L^{1,\tau+,*}_C(\Omega_d)$ and \begin{equation} \hat{\mathbb{E}}_{\tau+}[ Y(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ Y(X^y_{\cdot})]_{y=X^x_{\tau}}. \end{equation} Moreover, if $Y\in C_b(\Omega_n)$, then \begin{equation}\label{a3 belong lemma} Y(X^x_{\tau+\cdot})\in L^{1,\tau+}_C(\Omega_d),\ \ \ \ \text{and} \ \ \ \ Y(X^x_{\tau+\cdot})\in L^{1,\tau}_C(\Omega_d) \ \text{if furthermore $\tau$ is a stopping time.} \end{equation} \end{theorem} \begin{proof} The proof is similar to Lemma 4.1, Theorem 4.2, 4.4 and Corollary 4.8 in \cite{HJL} line by line. \end{proof} \section{Exit times for $G$-SDEs} In this section, we shall give a detailed study of the exit times for $G$-SDEs (\ref{SDE}) from a bounded open set. For symbol simplicity, we only consider the case where $h_{ij}\equiv 0$ and the results still hold for the general case. 
From now on we always assume that $G$ satisfies the uniformly elliptic condition, i.e., there exist three constants $0<\underline{\sigma}^2\leq \overline{\sigma}^2<\infty$ and $\beta\geq 0$ such that, for each $ A_1\geq A_2\in\mathbb{S}(d)$ and $p_1,p_2\in \mathbb{R}^d,$ \begin{equation}\label{uniformly elliptic condition} \frac{\underline{\sigma}^2}{2} tr(A_1-A_2)-\beta|p_1-p_2|\leq G(A_1,p_1)-G(A_2,p_2)\leq\frac{\overline{\sigma}^2}{2} tr(A_1-A_2)+\beta|p_1-p_2|. \end{equation} In fact, we can characterize the uniform ellipticity of $G$ in terms of $\Theta$: $G$ is uniformly elliptic with parameters $(\underline{\sigma}^2,\overline{\sigma}^2,\beta)$ iff $$\underline{\sigma}^2I_{d\times d}\leq \gamma\gamma^T\leq\overline{\sigma}^2I_{d\times d}\ \text{and} \ |\mu|\leq \beta\ \text{for each}\ (\gamma,\mu)\in\Theta.$$ Then it holds that $$g(p)\leq \beta|p|,\ \frac{\underline{\sigma}^2}{2} tr(A)\leq G_1(A) \leq\frac{\overline{\sigma}^2}{2} tr(A)\ \text{for}\ A\geq 0\ \text{and}\ \underline{\sigma}^2I_{d\times d}\leq \frac{d\langle B\rangle_t}{dt}\leq\overline{\sigma}^2I_{d\times d},\ \text{q.s.}$$ In the following, we also assume that $Q$ is a bounded open set in $\mathbb{R}^n$ and that $\sigma$ is non-degenerate, i.e., there exists a constant $\lambda>0$ such that \[ \lambda I_{n\times n}\leq \sigma(y)\sigma(y)^T, \ \text{for all} \ y\in \overline{Q}. \] We will always use $C_f$ ($L_f$, resp.) to denote the bound (the Lipschitz constant, resp.) of a function $f$ on $\overline{Q}$. Then we get that \begin{equation}\label{myq2} \lambda I_{n\times n}\leq \sigma(y)\sigma(y)^T\leq C_\sigma^2 I_{n\times n} \ \ \ \ \text{for all} \ y\in \overline{Q}. \end{equation} For each set $D\subset \mathbb{R}^n$ and any $x\in\mathbb{R}^n$, we define the exit time of $X^x$ from $D$ by $$ {\tau}_D^x(\omega):=\inf\{t\geq 0:X^x_t(\omega)\in D^c\},\ \text{for}\ \omega\in \Omega_d. $$ Now we shall study the properties of ${\tau}_{\overline{Q}}^x$ and ${\tau}_Q^x$. \begin{lemma}\label{stopping time lemma} There exists a constant $C>0$ depending only on $\underline{\sigma}^2,\lambda,\beta,C_b,C_\sigma$ and the diameter of ${Q}$ such that for all $x\in \overline{Q}$, \begin{equation} \hat{\mathbb{E}}[{{\tau}_{\overline{Q}}^x}]\leq C. \end{equation} \end{lemma} \begin{proof} Without loss of generality, we can assume $0\in Q$. Let $h(y):=Ae^{\alpha y_1}$ on $\overline{Q}$ and take $A,\alpha\geq 0$ large enough such that $\frac{h}{2}(\underline{\sigma}^2\lambda\alpha^2-2\alpha C_b-2\beta\alpha C_\sigma)\geq 1$ on $\overline{Q}$. By It\^{o}'s formula (extending $h$ to $\mathbb{R}^n$ smoothly if necessary), we have \begin{align*} h(X^x_{{{\tau}_{\overline{Q}}^x}\wedge t})-h(x)=&\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha h( X^{x}_s)\langle \sigma^1(X^x_s),dB_s\rangle+\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha h( X^{x}_s)b^1(X^x_s)ds\\ &+\frac12\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha^2h( X^{x}_s)\langle\sigma^1(X^x_s)^T\sigma^1(X^x_s),d\langle B\rangle_s\rangle\\ \geq & \int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha h(X^x_s)\langle\sigma^1( X^{x}_s),dB_s\rangle-\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha C_bh( X^{x}_s)ds+ \frac12\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\alpha^2\underline{\sigma}^2\lambda h( X^{x}_s)ds, \end{align*} where $\sigma^1$ and $b^1$ denote the first row of $\sigma$ and the first component of $b$, respectively, and we have used the matrix inequality $\langle A,D\rangle\geq \langle B,D\rangle $ for $A\geq B, D\geq 0$ in the last inequality.
With the help of Proposition \ref{Bcontrol}, taking expectations on both sides gives that \begin{align*} 2C_h\geq \hat{\mathbb{E}}[\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}\frac{h( X^{x}_s)}{2}(\underline{\sigma}^2\lambda\alpha^2-2\alpha C_b-2\beta\alpha C_\sigma)ds] \geq \hat{\mathbb{E}}[{{\tau}_{\overline{Q}}^x}\wedge t]. \end{align*} Letting $t\rightarrow\infty$, we get the desired result. \end{proof} \begin{lemma}\label{square stopping time lemma} There exists a constant $C>0$ depending only on $\underline{\sigma}^2,\lambda,\beta,C_b,C_\sigma$ and the diameter of ${Q}$ such that for all $x\in \overline{Q}$, \begin{equation} \hat{\mathbb{E}}[({{\tau}_{\overline{Q}}^x})^2]\leq C. \end{equation} \end{lemma} \begin{proof} Without loss of generality, we can assume $0\in Q$. Consider $th(y)$, where $h$, with the constants $A$ and $\alpha$, is as in the proof of Lemma \ref{stopping time lemma}. By It\^{o}'s formula, we have \begin{align*} ({{\tau}_{\overline{Q}}^x}\wedge t)h(X^x_{{{\tau}_{\overline{Q}}^x}\wedge t}) =&\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}h( X^{x}_s)ds+\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha h( X^{x}_s)\langle\sigma^1(X^x_s),dB_s\rangle+\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha h( X^{x}_s)b^1(X^x_s)ds\\ &+\frac12\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha^2h( X^{x}_s)\langle\sigma^1(X^x_s)^T\sigma^1(X^x_s),d\langle B\rangle_s\rangle\\ \geq &\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha h( X^{x}_s)\langle\sigma^1(X^x_s),dB_s\rangle-\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha C_b h( X^{x}_s)ds+\frac12\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}s\alpha^2 \underline{\sigma}^2\lambda h( X^{x}_s)ds. \end{align*} Taking expectations on both sides, we get that \begin{align*} C_h\hat{\mathbb{E}}[{{\tau}_{\overline{Q}}^x}\wedge t]\geq \hat{\mathbb{E}}[({{\tau}_{\overline{Q}}^x}\wedge t)h(X^x_{{{\tau}_{\overline{Q}}^x}\wedge t})]\geq \hat{\mathbb{E}}[\int_0^{{{\tau}_{\overline{Q}}^x}\wedge t}sds]=\frac{1}{2}\hat{\mathbb{E}}[({{{\tau}_{\overline{Q}}^x}\wedge t})^2]. \end{align*} Letting $t\rightarrow \infty$, we obtain that $$ \hat{\mathbb{E}}[({{{\tau}_{\overline{Q}}^x}})^2]\leq 2C_h\hat{\mathbb{E}}[{{\tau}_{\overline{Q}}^x}], $$ which together with Lemma \ref{stopping time lemma} implies the desired result. \end{proof} \begin{remark}\upshape{By Theorem \ref{LG characteriazation theorem}, we know that ${\tau}^{x}_{\overline{Q}} \in L_b^1(\Omega_d)$ for any $x\in\mathbb{R}^n$, since the case $x\in \overline{Q}^c$ is trivial. } \end{remark} In order to state the main result, we need the following additional condition on the bounded open set $Q$. \begin{itemize} \item[$\cdot$] An open set $O$ is said to satisfy the exterior ball condition if for all $x\in \partial O$, there exists an open ball $U(z,r)$ such that $U(z,r)\subset O^c$ and $x\in \partial U(z,r)$. \end{itemize} In the rest of this paper, we always assume that $Q$ satisfies the exterior ball condition. The following lemma tells us that a $G$-SDE originating at a boundary point with an exterior ball will exit $\overline{Q}$ immediately. \begin{lemma}\label{stopping time lemma 2} For each $x\in \partial Q$, we have $\tau^x_{\overline{Q}}=0$ q.s., i.e., quasi-surely, for each $\varepsilon>0$ there exists $t\in(0,\varepsilon]$ such that $X^x_{t}\in \overline{Q}^c$. \end{lemma} \begin{proof} Assume $U(z,r)$ is the exterior ball of $Q$ at $x$. We are going to prove the conclusion by a technique from Lions and Menaldi \cite{LM}. We set $h(y):=e^{-k|y-z|^2}$, where the constant $k$ will be determined in the sequel.
Then we have \begin{align*} &D_yh(y)=-2k(y-z)e^{-k|y-z|^2},\\ &D^2_{yy}h(y)=(4k^2(y_i-z_i)(y_j-z_j)-2k\delta_{ij})e^{-k|y-z|^2} =(4k^2(y_i-z_i)(y_j-z_j))e^{-k|y-z|^2}-(2k\delta_{ij})e^{-k|y-z|^2}. \end{align*} Note that the matrices $(4k^2(y_i-z_i)(y_j-z_j))e^{-k|y-z|^2}=4k^2(y-z)(y-z)^Te^{-k|y-z|^2}$ and $(2k\delta_{ij})e^{-k|y-z|^2}$ are nonnegative definite. Choosing $k$ large enough, we can find some constant $\mu>0$ so that for all $y\in \overline{Q}$, \begin{align*} &\langle\sigma(y)^TD^2_{yy}h(y)\sigma(y),\frac{d\langle B\rangle_t}{dt}\rangle+2\langle D_{y}h(y),b(y)\rangle-2\beta|\sigma^T(y)||D_{y}h(y)| \\ & =\langle\sigma(y)^T(4k^2(y_i-z_i)(y_j-z_j))\sigma(y),\frac{d\langle B\rangle_t}{dt}\rangle e^{-k|y-z|^2}-\langle\sigma(y)^T(2k\delta_{ij})\sigma(y),\frac{d\langle B\rangle_t}{dt}\rangle e^{-k|y-z|^2}\\ &\ \ \ \ -4k\langle(y-z),b(y) \rangle e^{-k|y-z|^2}-2\beta|\sigma^T(y)||D_{y}h(y)|\\ & \geq (4\underline{\sigma}^2\lambda k^2|y-z|^2-4k(C_{b}+\beta C_{\sigma} )|y-z|-2k\overline{\sigma}^2C^2_{\sigma} )e^{-k|y-z|^2}\geq \mu. \end{align*} Then applying It\^{o}'s formula, we derive that \begin{align*} h(X^x_{\tau^x_{\overline{Q}}\wedge t})-h(x) &=\int_{0}^{\tau^x_{\overline{Q}}\wedge t} \langle D_yh(X^x_s),\sigma(X^x_s)dB_s\rangle+\int_{0}^{\tau^x_{\overline{Q}}\wedge t} \langle D_yh(X^x_s),b(X^x_s)\rangle ds\\ &\ \ \ +\frac12\int_{0}^{\tau^x_{\overline{Q}}\wedge t}\langle\sigma(X^x_s)^TD_{yy}^2h(X^x_s)\sigma(X^x_s),d\langle B\rangle_s\rangle. \end{align*} Taking expectations on both sides and using Proposition \ref{Bcontrol}, we conclude that $$\frac{\mu}{2}\hat{\mathbb{E}}[{\tau^x_{\overline{Q}}\wedge t}]\leq \hat{\mathbb{E}}[h(X^x_{\tau^x_{\overline{Q}}\wedge t})-h(x)]\leq 0,$$ since $h(y)-h(x)\leq 0$ for $y\in U(z,r)^c$. Therefore, it holds that $$\hat{\mathbb{E}}[{\tau^x_{\overline{Q}}}\wedge t]\leq 0.$$ Letting $t\rightarrow \infty$, we obtain $\hat{\mathbb{E}}[\tau^x_{\overline{Q}}]\leq 0$, from which we get that $\tau^x_{\overline{Q}}=0$ q.s. The proof is complete. \end{proof} Lemma \ref{stopping time lemma 2} indicates that ${\tau}_Q^x= {\tau}_{\overline{Q}}^x$ for boundary points of $Q$. In the following, we shall show that this remains true for interior points of $Q$ as well. \begin{theorem}\label{exit times equal lemma}For each $x\in \mathbb{R}^n$, we have $${\tau}_Q^x= {\tau}_{\overline{Q}}^x, \ \text{q.s.}$$ \end{theorem} In order to prove Theorem \ref{exit times equal lemma}, we will study the continuity of ${\tau}_Q^x$ in $\omega$. For this purpose, we shall consider the image space $\Omega_n$ of the $G$-SDE (\ref{SDE}). Denote by $B'$ the canonical process on $\Omega_n$. For each subset $D$ of $\mathbb{R}^n$, we define on $\Omega_n$ the exit time of $B'$ by $$ {\tau}_D^{x,1}(\omega):=\inf\{t\geq 0:x+{\omega_t}\in D^c\},\ \text{for}\ \omega\in \Omega_n. $$ Then we have that ${\tau}_D^{x}(\omega)={\tau}_D^{0,1}(X^x_{\cdot}(\omega))$. We need the following lemmas to complete the proof of Theorem \ref{exit times equal lemma}. \begin{lemma}\label{semi-continuity of exit time} On $\Omega_n$, ${\tau}_Q^{x,1}$ is lower semi-continuous and ${\tau}_{\overline{Q}}^{x,1}$ is upper semi-continuous. \end{lemma} \begin{proof} We only prove that ${\tau}_{\overline{Q}}^{x,1}$ is upper semi-continuous, since the proof of the other part is similar. For any given $\omega\in\Omega_n$, set $t_0:={\tau}_{\overline{Q}}^{x,1}(\omega)$. It suffices to consider the case where $t_0<\infty$. Then we can find an arbitrarily small $\varepsilon>0$ such that $x+B'_{t_0+\varepsilon}(\omega)\in\overline{Q}^c$.
Since $\overline{Q}^c$ is open, there exists an open ball $U(x+B'_{t_0+\varepsilon}(\omega),r)$ with center $x+B'_{t_0+\varepsilon}(\omega)$ and radius $r$ such that $U(x+B'_{t_0+\varepsilon}(\omega),r)\subset \overline{Q}^c$. For each ${\omega}'$ whose distance from $\omega$ is small enough, we have $x+B'_{t_0+\varepsilon}({\omega}')\in U(x+B'_{t_0+\varepsilon}(\omega),r)\subset\overline{Q}^c$. That is, ${\tau}_{\overline{Q}}^{x,1}({\omega}')\leq t_0+\varepsilon$. This completes the proof. \end{proof} By Kolmogorov's criterion, $(X^x_t)_{t\geq 0}$ induces a tight family of probability measures $\mathcal{P}\circ(X_{\cdot}^x)^{-1}:=\{P\circ (X_{\cdot}^x)^{-1}: P\in\mathcal{P}\}$ on $\Omega_n$. We denote the induced upper capacity by $c^x_2:=c_{\mathcal{P}\circ(X_{\cdot}^x)^{-1}}=\sup_{P\in\mathcal{P}} P\circ (X_{\cdot}^x)^{-1} $ and the induced upper expectation by $\hat{\mathbb{E}}^x_2:=\hat{\mathbb{E}}_{\mathcal{P}\circ(X_{\cdot}^x)^{-1}}=\sup_{P\in\mathcal{P}}E_{P\circ (X_{\cdot}^x)^{-1}}$. More generally, for a set $A\subset \mathbb{R}^n$, we define $\mathcal{P}^A_2:=\cup_{x\in A}\mathcal{P}\circ(X_{\cdot}^x)^{-1}$, and $ \hat{\mathbb{E}}^A_2:=\hat{\mathbb{E}}_{\mathcal{P}^A_2}=\sup_{P\in\mathcal{P}^A_2}E_{P}$ as well as $ c^A_2:=c_{\mathcal{P}^A_2}=\sup_{P\in\mathcal{P}^A_2}{P}$. \begin{lemma}\label{SDE continuity wrt y} Assume $(y_k)_{k\geq 1}$ is a sequence in $\mathbb{R}^n$ such that $|y_k-y|\rightarrow 0$ for some $y$. Then for each $\varphi\in C_b(\Omega_n)$, we have $$\mathbb{\hat{E}}[|\varphi(X^y_{\cdot})-\varphi(X^{y_k}_{\cdot})|]\rightarrow 0.$$ \end{lemma} \begin{proof} By Lemma 3.1 in Chap. VI of \cite{P7}, we can choose a sequence $\varphi_m\in C_b(\Omega_n)$ such that $|\varphi_{m}|\leq C_\varphi$, $|\varphi_{m}(\omega)-\varphi_{m}({\omega}')|\leq m||\omega-{\omega}'||_{C[0,m]}$ and $\varphi_{m}\uparrow \varphi$ as $m\rightarrow \infty$. We pick a compact set $K\subset \mathbb{R}^n$ such that $y_k,y\in K$ for each $k\geq 1$; then the family $ \mathcal{P}^{K}_2 $ is tight by Kolmogorov's criterion. Then for any fixed $\varepsilon>0$, there is a compact set $\widetilde{K}\subset\Omega_n$ such that $c^{K}_2(\widetilde{K}^c)\leq\varepsilon$, which implies $c^z_2(\widetilde{K}^c)\leq\varepsilon$ uniformly for $z\in K$. By Dini's theorem, $\varphi_m\uparrow \varphi$ uniformly on $\widetilde{K}$. So we can take $m$ large enough such that $0\leq \varphi-\varphi_m\leq \varepsilon$ on $\widetilde{K}$. Then by the basic estimate for $G$-SDEs, we obtain some constant $C\geq 0$ such that \begin{align*} \mathbb{\hat{E}}[|\varphi(X^y_{\cdot})-\varphi(X^{y_k}_{\cdot})|] &\leq \mathbb{\hat{E}}^y_2[|\varphi-\varphi_m|]+\mathbb{\hat{E}}[|\varphi_m(X^y_\cdot)-\varphi_m(X^{y_k}_\cdot)|]+\mathbb{\hat{E}}^{y_k}_2[|\varphi-\varphi_m|]\\ &\leq \mathbb{\hat{E}}^y_2[|\varphi-\varphi_m|I_{\widetilde{K}}]+mC|y-y_k|+\mathbb{\hat{E}}^{y_k}_2[|\varphi-\varphi_m|I_{\widetilde{K}}]+2C_\varphi c^{y}_2({\widetilde{K}}^c)+2C_\varphi c^{y_k}_2({\widetilde{K}}^c)\\ &\leq 2\varepsilon +mC|y-y_k|+4C_\varphi\varepsilon. \end{align*} Letting $k\rightarrow \infty$, we obtain that $$ \limsup\limits_{k\rightarrow\infty}\mathbb{\hat{E}}[|\varphi(X^y_{\cdot})-\varphi(X^{y_k}_{\cdot})|]\leq 2\varepsilon +4C_\varphi\varepsilon. $$ Since $\varepsilon$ can be arbitrarily small, we obtain the desired result. \end{proof} \begin{lemma}\label{SDE belong to G space} Assume $\varphi\in C_b(\Omega_n)$.
Then it holds that $\varphi(X^x_\cdot)\in L_C^1(\Omega_d).$ \end{lemma} \begin{proof} This follows from (\ref{a3 belong lemma}) with the stopping time $\tau\equiv0$ in Theorem \ref{extended BM strongmarkov1}. \end{proof} Now we are in a position to give the proof of Theorem \ref{exit times equal lemma}. \begin{proof}[The proof of Theorem \ref{exit times equal lemma}] The case $x\in \overline{Q}^c$ is trivial and the case $x\in \partial{Q}$ follows from Lemma \ref{stopping time lemma 2}. So we only need to consider the case $x\in Q$. It suffices to prove that $\mathbb{\hat{E}}[({\tau}_{\overline{Q}}^x-{\tau}_Q^x)\wedge t]=0$ for each $t>0$. Denote $\delta_t=({\tau}_{\overline{Q}}^{0,1}-{\tau}_Q^{0,1})\wedge t$; then $({\tau}_{\overline{Q}}^x-{\tau}_Q^x)\wedge t=\delta_t(X^x_\cdot)=\delta_t(X^x_{{\tau}_Q^x+\cdot})$ by the definitions. Since $\delta_t$ is bounded and upper semi-continuous on $\Omega_n$, we can find a sequence of continuous functions $(f_m)_{m\geq 1}$ on $\Omega_n$ such that $0\leq f_m \leq 2t$ and $f_m\downarrow \delta_t$. Then it follows from Theorem \ref{extended BM strongmarkov1} that, \begin{align*} \mathbb{\hat{E}}[({\tau}_{\overline{Q}}^x-{\tau}_Q^x)\wedge t]=\mathbb{\hat{E}}[\delta_t(X^x_{{\tau}_Q^x+\cdot})] \leq \mathbb{\hat{E}}[f_m(X^x_{{\tau}_Q^x+\cdot})]=\mathbb{\hat{E}}[\mathbb{\hat{E}}[f_m(X^y_{\cdot})]_{y=X^x_{{\tau}_Q^x}}], \ \ \ \text{for all}\ m\geq 1. \end{align*} Denote $\varphi_m(y)=\mathbb{\hat{E}}[f_m(X^y_{\cdot})]$ for $y\in \mathbb{R}^n$. Recalling Lemma \ref{SDE belong to G space}, we have $f_m(X^y_{\cdot})\in L^1_C(\Omega_d)$. Then Proposition \ref{downward convergence proposition} and Lemma \ref{stopping time lemma 2} imply that for each $y\in\partial Q$, \[ \varphi_m(y)\downarrow \mathbb{\hat{E}}[\delta_t(X^y_{\cdot})]=0, \ \text{as}\ m\rightarrow\infty. \] Since each $\varphi_m$ is continuous on $\partial Q$ by Lemma \ref{SDE continuity wrt y}, we derive that $\varphi_m(y)\downarrow 0$ uniformly on $\partial{Q}$ by Dini's theorem. Consequently, we deduce that $$ \mathbb{\hat{E}}[\mathbb{\hat{E}}[f_m(X^y_{\cdot})]_{y=X^x_{{\tau}_Q^x}}]=\mathbb{\hat{E}}[\varphi_m({X^x_{{\tau}_Q^x}})]\downarrow 0, \ \text{as}\ m\rightarrow\infty, $$ which implies the desired result. \end{proof} Now we are going to show that the exit times are quasi-continuous. \begin{lemma}\label{induced probability measures weakly compact} If $K$ is a compact set in $ \mathbb{R}^n$, then the set $\mathcal{P}^K_2$ is weakly compact on $\Omega_n$. \end{lemma} \begin{proof} Let $(P_k\circ(X_{\cdot}^{x_k})^{-1})_{k\geq 1}$ be any sequence in $\mathcal{P}^K_2$. Since $K$ is compact, we can find a subsequence $(x_{k_m})$ such that $|x_{k_m}-x|\rightarrow 0$ for some $x\in K$. Since $\mathcal{P}$ is weakly compact, there is a further subsequence $(P_{k_{m_l}})\subset \mathcal{P}$ such that $P_{k_{m_l}}$ converges weakly to some $P\in \mathcal{P}$. For any $\varphi\in C_b(\Omega_n)$, note that $\varphi(X_{\cdot}^{x})\in L^1_C(\Omega_d)$.
Then, in view of Lemma \ref{SDE continuity wrt y} and Lemma 29 in \cite{DHP}, we get that \begin{align*} &\lim\limits_{l\rightarrow\infty}|E_{P_{{k_{m_l}}}\circ(X_{\cdot}^{x_{{k_{m_l}}}})^{-1}}[\varphi]-E_{{P}\circ(X_{\cdot}^{x})^{-1}}[\varphi]|\\ &\leq \lim\limits_{l\rightarrow\infty}|E_{P_{{k_{m_l}}}\circ(X_{\cdot}^{x_{{k_{m_l}}}})^{-1}}[\varphi]-E_{P_{{k_{m_l}}}\circ(X_{\cdot}^{x})^{-1}}[\varphi]| +\lim\limits_{l\rightarrow\infty}|E_{P_{{k_{m_l}}}\circ(X_{\cdot}^{x})^{-1}}[\varphi]-E_{{P}\circ(X_{\cdot}^{x})^{-1}}[\varphi]|\\ &\leq \lim\limits_{l\rightarrow\infty}|E_{P_{{k_{m_l}}}}[\varphi(X_{\cdot}^{x_{{k_{m_l}}}})]-E_{P_{{k_{m_l}}}}[\varphi(X_{\cdot}^{x})]| +\lim\limits_{l\rightarrow\infty}|E_{P_{{k_{m_l}}}}[\varphi(X_{\cdot}^{x})]-E_{P}[\varphi(X_{\cdot}^{x})]|\\ &\leq \lim\limits_{l\rightarrow\infty}\mathbb{\hat{E}}[|\varphi(X_{\cdot}^{x_{{k_{m_l}}}})-\varphi(X_{\cdot}^{x})|] =0, \end{align*} which ends the proof. \end{proof} \begin{theorem}\label{exit time quasi continuity lemma} Let $K$ be a bounded set in $ \mathbb{R}^n$. Then ${\tau}_Q^{0,1}$ and ${\tau}_{\overline{Q}}^{0,1}$ both belong to $L_C^1(\Omega_n,\mathcal{P}^K_2)$. \end{theorem} \begin{proof} We only need to prove the case where $K$ is a compact set, since a bounded set is contained in some compact set. Let $\Gamma=\{{\tau}_Q^{0,1}={\tau}_{\overline{Q}}^{0,1}\}$. Then $c^K_2(\Gamma^c)=\sup_{x\in K}c^x_2(\Gamma^c)=\sup_{x\in K}c(\{{\tau}_Q^{x}<{\tau}_{\overline{Q}}^{x}\})=0$ by Theorem \ref{exit times equal lemma}. Moreover, we can write this polar set as $$\Gamma^c=\{{\tau}_Q^{0,1}<{\tau}_{\overline{Q}}^{0,1}\}=\bigcup_{s<r;s,r\in \mathbb{Q}}\{{\tau}_Q^{0,1}\leq s\}\cap\{{\tau}_{\overline{Q}}^{0,1}\geq r\}.$$ By the semi-continuity of ${\tau}_Q^{0,1}$ and ${\tau}_{\overline{Q}}^{0,1}$, we conclude that $\{{\tau}_Q^{0,1}\leq s\}\cap\{{\tau}_{\overline{Q}}^{0,1}\geq r\}$ is closed. Note that $\mathcal{P}^K_2$ is weakly compact by Lemma \ref{induced probability measures weakly compact}. Then, according to Proposition \ref{downward convergence proposition}, for any $\varepsilon>0$, there exists an open set $O\supset{\Gamma}^c$ such that $c^K_2(O)<\frac\varepsilon 2$. By Lemma \ref{stopping time lemma}, we can take $k$ large enough such that $ c^K_2({\tau}_Q^{0,1}>k)\leq \frac\varepsilon 2. $ Set $F=O^c\cap \{{\tau}_Q^{0,1}\leq k \}$. It is obvious that $c^K_2(F^c)\leq \varepsilon $ and that ${\tau}_Q^{0,1}$ and ${\tau}_{\overline{Q}}^{0,1}$ coincide and are continuous on $F$. Recalling Lemma \ref{square stopping time lemma}, we conclude that $$ \mathbb{\hat{E}}^K_2[{{\tau}_{{Q}}^{0,1}}I_{\{{{\tau}_{{Q}}^{0,1}}>N\}}]\leq\mathbb{\hat{E}}^K_2[{{\tau}_{\overline{Q}}^{0,1}}I_{\{{{\tau}_{\overline{Q}}^{0,1}}>N\}}]\leq \frac{\mathbb{\hat{E}}^K_2[|{{\tau}_{\overline{Q}}^{0,1}}|^2]}{N}=\frac{\sup_{x\in K}\mathbb{\hat{E}}[|{{\tau}_{\overline{Q}}^{x}}|^2]}{N}\rightarrow 0,\ \text{as}\ N\rightarrow \infty,$$ which together with the characterization of $L_C^1(\Omega_n,\mathcal{P}^K_2)$ (Theorem \ref{LG characteriazation theorem}) implies the desired result. \end{proof} Finally, we study the continuity property of ${\tau}^{x}_Q$ with respect to $x$. For each $\varepsilon>0$, we denote $Q_\varepsilon:=\{x\in Q:dist(x,\partial Q)>\varepsilon\}$ and $Q_{-\varepsilon}:=\{x\in \mathbb{R}^n:dist(x,Q)<\varepsilon\}$. Then the exterior ball condition is preserved under the following approximation from inside. \begin{lemma} For any $\varepsilon>0$, $Q_\varepsilon$ also satisfies the exterior ball condition. \end{lemma} \begin{proof} Let $x$ be in $\partial Q_\varepsilon$.
Then there exists a point $x'\in \partial Q$ such that $d(x,x')=\varepsilon$. Assume that $U(y,r)$ is the exterior ball of $Q$ at $x'$. We claim that $U(y+(x-x'),r)=U(y,r)+(x-x')$ is the exterior ball of $Q_\varepsilon$ at $x$. Indeed, for any $z\in Q_\varepsilon$, we have $z+(x'-x)\in Q$ and then $$ d(z,y+x-x')=d(z+(x'-x),y)>r. $$ The proof is complete. \end{proof} For any fixed $T>0$, by a standard argument we can find some constant $C_T$ depending on $T$ such that $$ \hat{\mathbb{E}}[\sup_{0\leq t\leq T}|X^x_t-X^y_t|^{n+1}]\leq C_T|x-y|^{n+1}. $$ It follows from Kolmogorov's criterion for continuity that for any fixed $\alpha\in (0,\frac{1}{n+1})$, \begin{equation}\label{887268439202} \hat{\mathbb{E}}[\eta_T^{n+1}]<\infty,\ \ \ \text{where}\ \eta_T:=\sup_{x\neq y}\frac{\sup_{0\leq t\leq T}|X^x_t-X^y_t|}{|x-y|^\alpha}. \end{equation} From this and Lemma \ref{semi-continuity of exit time}, it is easy to prove that, q.s., ${\tau}^{x}_Q$ and $ {\tau}^{x}_{\overline{Q}}$ are lower and upper semi-continuous with respect to $x$, respectively. Then Theorem \ref{exit times equal lemma} implies that \begin{equation}\label{21312423423} {\tau}^{x_k}_Q\rightarrow {\tau}^{x}_Q, \ \ \ \ \text{q.s.} \end{equation} whenever $|x_k- x|\rightarrow 0$. Moreover, we have \begin{lemma}\label{tau continuous lemma} Assume $|x_k- x|\rightarrow 0$. Then \begin{equation} \hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}] \rightarrow 0, \ \text{as}\ k\rightarrow \infty. \end{equation} \end{lemma} \begin{proof} For any $L>0,T>0$ and $\varepsilon>0$, let $\alpha$ and $\eta_T$ be defined as in (\ref{887268439202}). We consider the set ${Q}_{-L{\varepsilon}^\alpha}=\{v\in\mathbb{R}^n:dist(v,Q)<L{\varepsilon}^\alpha\}$. For any $y$ such that $|x-y|\leq \varepsilon$, on $\{\eta_T\leq L\}\cap \{{\tau}^{y}_Q\leq T \}$ we have that \[ \sup_{0\leq t\leq {\tau}^{y}_Q}|X^y_{t}-X^x_{t}|\leq L\varepsilon^{\alpha}, \] which implies that ${\tau}^{y}_Q\leq {\tau}^{x}_{{Q}_{-L{\varepsilon}^\alpha}}$. Similarly, for $Q_{L{\varepsilon}^\alpha}=\{v\in Q:dist(v,\partial Q)>L{\varepsilon}^\alpha\}$, we have ${\tau}^{x}_{{Q}_{L{\varepsilon}^\alpha}}\leq{\tau}^{y}_Q$ on $\{\eta_T\leq L\}\cap \{{\tau}^{y}_Q\leq T \}$. For each $Q_{-L{\varepsilon}^\alpha}$, take a bounded open set $\widetilde{Q}_{-L{\varepsilon}^\alpha}$ with smooth boundary such that $Q_{-L{\varepsilon}^\alpha}\subset \widetilde{Q}_{-L{\varepsilon}^\alpha}$ and $\widetilde{Q}_{-L{\varepsilon}^\alpha}\downarrow \overline{Q}$ as $\varepsilon\rightarrow 0$. It follows from Theorem \ref{exit time quasi continuity lemma} that ${\tau}^{0,1}_{\widetilde{Q}_{-L{\varepsilon}^\alpha}}-{\tau}^{0,1}_{Q_{L{\varepsilon}^\alpha}}\in L_C^1(\Omega_n,\mathcal{P}\circ (X^x_{\cdot})^{-1})$, since ${\widetilde{Q}_{-L{\varepsilon}^\alpha}}$ and ${Q_{L{\varepsilon}^\alpha}}$ both satisfy the exterior ball condition.
Then we get that \begin{equation} \label{4354536576867} \begin{split} &\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}]\\ &\leq \hat{\mathbb{E}}[({{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q})I_{{\{ {\tau}^{x_k}_{{Q}}\leq T\}}}I_{\{\eta_T\leq L\}}]+\hat{\mathbb{E}}[({{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q})I_{{\{ {\tau}^{x_k}_{{Q}}\leq T\}}}I_{\{\eta_T> L\}}]\\ &\ \ \ \ +\hat{\mathbb{E}}[({{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q})I_{{\{ {\tau}^{x_k}_{{Q}}> T\}}}]\\ &\leq \hat{\mathbb{E}}[({{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q})I_{{\{ {\tau}^{x_k}_{{Q}}\leq T\}}}I_{\{\eta_T\leq L\}}]+\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}I_{\{\eta_T> L\}}]+\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}I_{{\{ {\tau}^{x_k}_{{Q}}> T\}}}]\\ &=:I_1+I_2+I_3. \end{split} \end{equation} For $I_1$, we take $k$ large enough such that $|x_k-x|\leq \varepsilon$. Then for any $T$ and $L$, it follows that \begin{align*} I_1\leq \hat{\mathbb{E}}[({\tau}^{x}_{\widetilde{Q}_{-L{\varepsilon}^\alpha}}-{\tau}^{x}_{Q_{L{\varepsilon}^\alpha}})I_{{\{ {\tau}^{x_k}_{{Q}}\leq T\}}}I_{\{\eta_T\leq L\}}] \leq \hat{\mathbb{E}}[{\tau}^{x}_{\widetilde{Q}_{-L{\varepsilon}^\alpha}}-{\tau}^{x}_{Q_{L{\varepsilon}^\alpha}}] = \hat{\mathbb{E}}^x_2[{\tau}^{0,1}_{\widetilde{Q}_{-L{\varepsilon}^\alpha}}-{\tau}^{0,1}_{Q_{L{\varepsilon}^\alpha}}], \end{align*} which indicates that for each $\varepsilon>0$, \[ \limsup_{k\rightarrow\infty}I_1\leq \hat{\mathbb{E}}^x_2[{\tau}^{0,1}_{\widetilde{Q}_{-L{\varepsilon}^\alpha}}-{\tau}^{0,1}_{Q_{L{\varepsilon}^\alpha}}]. \] Sending $\varepsilon\rightarrow 0$ and using Proposition \ref{downward convergence proposition} and Theorem \ref{exit times equal lemma}, we get that \[ \limsup_{k\rightarrow\infty}I_1\leq \hat{\mathbb{E}}^x_2[{\tau}^{0,1}_{{{\overline{Q}}}}-{\tau}^{0,1}_{Q}]=\hat{\mathbb{E}}[{\tau}^{x}_{{{\overline{Q}}}}-{\tau}^{x}_{Q}]= 0. \] For any $\delta>0$, by Lemma \ref{square stopping time lemma} and Proposition 19 in \cite{DHP}, we can first take $T$ large enough such that $I_3\leq \delta$ and then take $L$ large enough such that $I_2\leq \delta$ for each $k$. Now letting $k\rightarrow \infty$ in (\ref{4354536576867}), we get that $$ \limsup_{k\rightarrow\infty}\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}]\leq 2\delta. $$ Sending $\delta\rightarrow 0$, we get the desired result. \end{proof} \section{Application to probabilistic representations for PDEs} This section is devoted to studying the relationship between SDEs driven by generalized $G$-Brownian motion and fully nonlinear elliptic equations. In fact, with the help of the results of the previous sections, we shall introduce a stochastic representation for a class of fully nonlinear elliptic equations with Dirichlet boundary. The following results are important for the subsequent discussion. First, we shall extend Theorem \ref{extended BM strongmarkov1} to a more general setting. \begin{theorem}\label{extended strong markov theorem0} Let $\tau$ be an optional time satisfying assumption (H). Then for each $Y\in {L}_C^1(\Omega_n,\mathcal{P}\circ (X^x_{\tau+\cdot})^{-1})$, \begin{equation} Y(X^x_{\tau+\cdot})\in L^{1,\tau+}_C(\Omega_d)\ \ \ \ \text{and}\ \ \ \ \hat{\mathbb{E}}_{\tau+}[Y(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[ Y(X^y_{\cdot})]_{y=X^x_{\tau}}.
\end{equation} Moreover, if $\tau$ is also a stopping time, we have $ Y(X^x_{\tau+\cdot})\in L^{1,\tau}_C(\Omega_d)$. \end{theorem} \begin{proof} The proof shall be divided into the following three steps. {\it 1 Bounded case.} Suppose $Y$ is bounded by some constant $C_Y$. Then by Theorem \ref{LG characteriazation theorem}, for any $\varepsilon>0$ we can pick a closed set $D$ in $\Omega_n$ such that $c(O)\leq \varepsilon$ and $Y|_{D}$ is continuous, where $O:=(X^x_{\tau+\cdot})^{-1}(D^c)$. By the Tietze extension theorem, there is a continuous function $\widetilde{Y}$ on $\Omega_n$ such that $\widetilde{Y}=Y$ on $D$ and $|\widetilde{Y}|\leq C_Y$. Recalling Theorem \ref{extended BM strongmarkov1}, we get that $$\hat{\mathbb{E}}_{\tau+}[\widetilde{Y}(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[\widetilde{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}.$$ For the left side, it holds that \begin{align*} \hat{\mathbb{E}}[|\widetilde{Y}(X^x_{\tau+\cdot})-Y(X^x_{\tau+\cdot})|] \leq \hat{\mathbb{E}}[|\widetilde{Y}-{Y}|(X^x_{\tau+\cdot}) I_{O^c}]+\hat{\mathbb{E}}[|\widetilde{Y}-{Y}|(X^x_{\tau+\cdot}) I_{O}] \leq 2C_Yc(O)\leq 2C_Y\varepsilon. \end{align*} For the right side, since $|Y-\widetilde{Y}|\leq 2C_YI_{D^c}$ and ${D^c}$ is open in $\Omega_n$, applying Theorem \ref{extended BM strongmarkov1} yields \begin{align*} \hat{\mathbb{E}}[|\hat{\mathbb{E}}[\widetilde{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}|] \leq 2C_Y\hat{\mathbb{E}}[\hat{\mathbb{E}}[ I_{{D^c}}(X^y_{\cdot})]_{y=X^x_{\tau}}] =2C_Y\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[ I_{{D^c}}(X^x_{\tau+\cdot})]] =2C_Y\hat{\mathbb{E}}[ I_{{D^c}}(X^x_{\tau+\cdot})], \end{align*} which implies that \begin{align*} \hat{\mathbb{E}}[|\hat{\mathbb{E}}[\widetilde{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}|]\leq 2C_Y\hat{\mathbb{E}}[ I_{{D^c}}(X^x_{\tau+\cdot})I_{{O^c}}]+2C_Y\hat{\mathbb{E}}[ I_{{D^c}}(X^x_{\tau+\cdot})I_{{O}}]\leq 2C_Y\varepsilon. \end{align*} Since $\varepsilon$ can be arbitrarily small, it follows that $Y(X^x_{\tau+\cdot})\in L^{1,\tau+}_C(\Omega_d)$ and $$\hat{\mathbb{E}}_{\tau+}[{Y}(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}.$$ {\it 2 Unbounded case.} Define $Y_N=(Y\wedge N)\vee(-N)$ for each $N\geq 1$. By Step 1, we have \begin{equation}\label{234353546545} \hat{\mathbb{E}}_{\tau+}[{Y_N}(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[{Y_N} (X^y_{\cdot})]_{y=X^x_{\tau}}. \end{equation} For the left side, we have that \begin{align*} \hat{\mathbb{E}}[|{Y_N}(X^x_{\tau+\cdot})-{Y}(X^x_{\tau+\cdot})|] =\hat{\mathbb{E}}[|{Y_N}-Y|(X^x_{\tau+\cdot})] \leq \hat{\mathbb{E}}[(|{Y}|I_{|Y|>N})(X^x_{\tau+\cdot})] \rightarrow 0,\ \text{as}\ N\rightarrow \infty, \end{align*} which indicates that $Y(X^x_{\tau+\cdot})\in L^{1,\tau+}_C(\Omega_d)$. For the right side, it holds that \begin{align*} \hat{\mathbb{E}}[|\hat{\mathbb{E}}[{Y_N} (X^y_{\cdot})]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}|] \leq \hat{\mathbb{E}}[\hat{\mathbb{E}}[(|{Y}|I_{|Y|>N})(X^y_{\cdot})]_{y=X^x_{\tau}}]. \end{align*} We claim that for each $N\geq 1$ \begin{align}\label{myq3} \hat{\mathbb{E}}[(|{Y}|I_{|Y|>N})(X^y_{\cdot})]_{y=X^x_{\tau}}=\hat{\mathbb{E}}_{\tau+}[(|{Y}|I_{|Y|>N})(X^x_{\tau+\cdot})], \end{align} which will be proved in Step 3.
Thus, we derive that \[ \hat{\mathbb{E}}[|\hat{\mathbb{E}}[{Y_N} (X^y_{\cdot})]_{y=X^x_{\tau}}-\hat{\mathbb{E}}[{Y} (X^y_{\cdot})]_{y=X^x_{\tau}}|]\leq \hat{\mathbb{E}}[(|{Y}|I_{|Y|>N})(X^x_{\tau+\cdot})]\rightarrow 0,\ \text{as}\ N\rightarrow \infty. \] Consequently, letting $N\rightarrow\infty$ in (\ref{234353546545}) yields the desired result. {\it 3 The proof of equation \eqref{myq3}.} For the indicator function $I_{\{|y|>N\}}$, we can choose a sequence $\varphi_k\in C_b(\mathbb{R})$ such that $\varphi_k\uparrow I_{\{|y|>N\}}$. Define $\bar{Y}^k=(|Y|\wedge k)\varphi_k(Y)$ and it is obvious that $\bar{Y}^k\uparrow |{Y}|I_{\{|Y|>N\}}$. Then Step 1, Proposition \ref{downward convergence proposition} and Proposition 3.25 (iv) in \cite{HJL} yield (\ref{myq3}). The proof is complete. \end{proof} \begin{corollary}\label{belong to LC1 corollary} If $Y\in {L}_C^1(\Omega_n,\mathcal{P}\circ (X^x_{\cdot})^{-1})$, then \begin{equation}\label{98876444556} Y(X^x_{\cdot})\in L^{1}_C(\Omega_d).\end{equation} In particular, $\tau_Q^x,\tau_{\overline{Q}}^x\in L^{1}_C(\Omega_d).$ \end{corollary} \begin{proof} Taking $\tau\equiv0$ in Theorem \ref{extended strong markov theorem0}, we get (\ref{98876444556}). From Theorem \ref{exit time quasi continuity lemma}, we have ${\tau}_Q^{0,1}, {\tau}_{\overline{Q}}^{0,1}\in L_C^1(\Omega_n,\mathcal{P}\circ (X^x_{\cdot})^{-1})$, which ends the proof. \end{proof} \begin{proposition}\label{tauQ continuous lemma} Let $\tau$ be an optional time such that $ \tau\leq {\tau}_{Q}^{x}$, q.s. Then \begin{equation}\label{9897687643435} {{\tau}_{Q}^{0,1}}={{\tau}_{\overline{Q}}^{0,1}},\ \ \ \ \mathcal{P}\circ (X^x_{\tau+\cdot})^{-1}\text{-q.s.} \end{equation} Moreover, ${{\tau}_{Q}^{0,1}}$ and ${{\tau}_{\overline{Q}}^{0,1}}$ both belong to the space ${L}_C^1(\Omega_n, \mathcal{P}\circ (X^x_{\tau+\cdot})^{-1})$. \end{proposition} \begin{proof} Employing the notation from the proof of Theorem \ref{exit times equal lemma}, we can get that \begin{equation}\label{992987755576} \hat{\mathbb{E}}[\delta_t(X^x_{\tau+\cdot})]\leq \hat{\mathbb{E}}[f_m(X^x_{\tau+\cdot})]=\mathbb{\hat{E}}[\mathbb{\hat{E}}[f_m(X^y_{\cdot})]_{y=X^x_{{\tau}}}]. \end{equation} Note that $\hat{\mathbb{E}}[\delta_t(X^y_{\cdot})]=0$ for each $y\in \overline{Q}$. Repeating the analysis in the proof of Theorem \ref{exit times equal lemma}, we obtain that the right side of (\ref{992987755576}) converges to $0$, which indicates that equation (\ref{9897687643435}) holds. Recalling Theorem \ref{exit time quasi continuity lemma}, for any fixed $\varepsilon>0$, there exists an open set $O\subset \Omega_n$ such that $c^{\overline{Q}}_2(O)\leq \varepsilon$ and ${{\tau}_{Q}^{0,1}},{{\tau}_{\overline{Q}}^{0,1}}$ are continuous on $O^c$. Note that $c^{y}_2(O)\leq c^{\overline{Q}}_2(O)\leq \varepsilon$ for each $y\in \overline{Q}$. Then using Theorem \ref{extended BM strongmarkov1}, we have that $$c((X^x_{\tau+\cdot})^{-1}(O))=\hat{\mathbb{E}}[I_{O}(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[\hat{\mathbb{E}}[I_{O}(X^y_{\cdot})]_{y=X^x_{\tau}}]\leq \varepsilon,$$ which together with ${\tau}_{Q}^{0,1}(X^x_{\tau+\cdot})\leq {\tau}_{\overline{Q}}^{0,1}(X^x_{\tau+\cdot})\leq {\tau}^x_{\overline{Q}}$ implies that ${{\tau}_{Q}^{0,1}},{{\tau}_{\overline{Q}}^{0,1}}\in {L}_C^1(\Omega_n, \mathcal{P}\circ (X^x_{\tau+\cdot})^{-1}).$ \end{proof} The following theorem plays a key role in proving the probabilistic representations. \begin{theorem}\label{brownian motion DDP for bounded domain} Assume $\varphi\in C(\partial Q)$ and $f\in C(\overline{Q})$.
Let $u(x):=\hat{\mathbb{E}}[\varphi(X^x_{{{\tau}_{Q}^x}})-\int_0^{{{{\tau}_{Q}^x}}}f(X^x_s)ds]$. Then for any optional time $\tau\leq {{\tau}_{Q}^x}$ q.s., we have \begin{equation}\label{333333} u(x)=\hat{\mathbb{E}}[u(X^x_{\tau})-\int_0^{\tau}f(X^x_s)ds]. \end{equation} \end{theorem} \begin{proof} By Proposition \ref{tauQ continuous lemma}, for any $\varepsilon>0$ we can choose a closed set $D\subset \Omega_n$ such that $c((X^x_{\tau+\cdot})^{-1}(D^c))\leq \varepsilon$ and ${{\tau}_{Q}^{0,1}}$ is continuous on $D$. Then $Y:=\varphi(B'_{{{\tau}_{Q}^{0,1}}})-\int_{0}^{{{{\tau}_{Q}^{0,1}}}}f(B'_s)ds$ is also continuous on $D$, where $B'$ is the canonical process on $\Omega_n$. Hence, $Y$ is $\mathcal{P}\circ (X^x_{\tau+\cdot})^{-1}$-quasi-continuous on $\Omega_n$. For each $k\geq 1$, set $Y_k:=\varphi(B'_{{{\tau}_{Q}^{0,1}}})-\int_{0}^{{{{\tau}_{Q}^{0,1}}}\wedge k}f(B'_s)ds$, which is also $\mathcal{P}\circ (X^x_{\tau+\cdot})^{-1}$-quasi-continuous on $\Omega_n$. Thus $Y_k$ belongs to ${L}_C^{1}(\Omega_n,\mathcal{P}\circ (X^x_{\tau+\cdot})^{-1})$ by Theorem \ref{LG characteriazation theorem}. Note that for each $k\geq 1$ \begin{align*} \hat{\mathbb{E}}[|Y-Y_k|(X^x_{\tau+\cdot})]=\hat{\mathbb{E}}[|\int_{{{{\tau}_{Q}^{0,1}}}\wedge k}^{{{{\tau}_{Q}^{0,1}}}}f(B'_s)ds|(X^x_{\tau+\cdot})] =\hat{\mathbb{E}}[|\int_{({{{\tau}_{Q}^{x}}}-{\tau})\wedge k}^{{{{\tau}_{Q}^{x}}}-{{\tau}}}f(X^x_{{\tau}+s})ds|] \leq C_f \hat{\mathbb{E}}[{{{{\tau}_{Q}^{x}}}-{{\tau}}}-({{{{\tau}_{Q}^{x}}}-{{\tau}}})\wedge k]. \end{align*} Then by Lemma \ref{square stopping time lemma}, we have that \begin{align*} \hat{\mathbb{E}}[|Y-Y_k|(X^x_{\tau+\cdot})]&\leq C_f \hat{\mathbb{E}}[({{{{\tau}_{Q}^{x}}}-{{\tau}}}- k)I_{\{{{{{\tau}_{Q}^{x}}}-{{\tau}}}>k \}}] \leq C_f {\hat{\mathbb{E}}[{\tau}_{Q}^{x}I_{\{{\tau}_{Q}^{x}>k \}}]} \leq C_f \frac{\hat{\mathbb{E}}[({\tau}_{Q}^{x})^2]}{k} \rightarrow 0, \ \text{as $k\rightarrow\infty$}. \end{align*} This implies that $Y\in {L}_C^{1}(\Omega_n,\mathcal{P}\circ (X^x_{\tau+\cdot})^{-1})$. Now applying Theorem \ref{extended strong markov theorem0}, we obtain that \begin{equation*} \begin{split} \hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{{{\tau}_{Q}^x}})-\int_{{{\tau}}}^{{{{\tau}_{Q}^x}}}f(X^x_s)ds] =\hat{\mathbb{E}}_{\tau+}[Y(X^x_{\tau+\cdot})] =\hat{\mathbb{E}}[Y(X^y_{\cdot})]_{y=X^x_{\tau}}=\hat{\mathbb{E}}[\varphi(X^y_{{{\tau}_{Q}^y}})-\int_{{{0}}}^{{{{\tau}_{Q}^y}}}f(X^y_s)ds]_{y=X^x_{\tau}} =u(X^x_{\tau}). \end{split} \end{equation*} Therefore, we derive that \begin{equation*} \begin{split} u(x) =\hat{\mathbb{E}}[\hat{\mathbb{E}}_{\tau+}[\varphi(X^x_{{{\tau}_{Q}^x}})-\int_{{{\tau}}}^{{{{\tau}_{Q}^x}}}f(X^x_s)ds]-\int_0^{{{\tau}}}f(X^x_s)ds] =\hat{\mathbb{E}}[u(X^x_{\tau})-\int_0^{{{\tau}}}f(X^x_s)ds], \end{split} \end{equation*} which ends the proof. \end{proof} Now we are ready to state our main result of this section, concerning a probabilistic representation for the viscosity solutions to fully nonlinear PDEs. For the definition and properties of viscosity solutions, we refer the reader to \cite{CC,CIL,IL}. \begin{theorem}\label{viscosity solution theorem} Assume that $\varphi\in C(\partial Q)$ and $f\in C(\overline{Q})$.
Then $u(x):=\hat{\mathbb{E}}[\varphi(X^x_{{\tau}^{x}_Q})-\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds]$ is the $C(\overline{Q})$-continuous viscosity solution of \begin{equation}\label{G elliptic PDE} \begin{cases} G(\sigma(x)^TD^2u(x)\sigma(x)+H(Du(x)),\sigma(x)^TDu(x))+ \langle b(x),Du(x)\rangle=f(x),\ x\in Q,\\ u(x)=\varphi(x),\ x\in \partial Q, \end{cases} \end{equation} where $H_{ij}(Du):=2\langle Du,h_{ij}\rangle$, $1\leq i,j\leq d$. \end{theorem} \begin{proof} The uniqueness of viscosity solutions can be found in \cite{CIL}. The proof shall be divided into two steps. {\it 1 The continuity.} We first consider the case that $\varphi\in C_{b.Lip}(\partial Q)$ and $f\in C_{b.Lip}(\overline{Q})$. Assume $x_k\rightarrow x$ in $\overline{Q}$. By the sub-linearity of $\hat{\mathbb{E}}$, we have \begin{equation}\label{eq. 21} \begin{split} |u(x)-u(x_k)| \leq \hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q})|] +\hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q}f(X^{x}_{s})ds-\int_0^{{\tau}^{x_k}_Q}f(X^{x_k}_{s})ds|]. \end{split} \end{equation} Thus we only need to prove that the two terms on the right side of equation (\ref{eq. 21}) converge to $0$ as $k\rightarrow \infty$. For each $T>0$ and $\varepsilon>0$, we can decompose the first term into three parts as follows: \begin{align}\label{myq4} \begin{split} &\hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q})|] \\&\leq \hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q}) |I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|< \varepsilon\}}I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}\leq T\}}]\\ &\ \ +\hat{\mathbb{E}}[|\varphi(X^x_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q}) |I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|< \varepsilon\}}I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}> T\}}] +\hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q}) |I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|\geq \varepsilon\}}]\\ &\leq \hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{\tau^{x_k}_Q}) |I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|< \varepsilon\}}I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}\leq T\}}] +2C_\varphi\hat{\mathbb{E}}[I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}> T\}}]+2C_\varphi\hat{\mathbb{E}}[I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|\geq \varepsilon\}}]\\ &=:I_1^{k,\varepsilon,T}+I_2^{k,T}+I_3^{k,\varepsilon}. \end{split} \end{align} Now we shall deal with the three parts separately. For $I_1^{k,\varepsilon,T}$, by a direct calculation we have that \begin{align*} I_1^{k,\varepsilon,T} &\leq \hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x}_{{\tau}^{x_k}_Q}) |I_{\{|{\tau}^{x_k}_Q(\omega)-{\tau}^{x}_Q(\omega)|< \varepsilon\}}I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}\leq T\}}] \\ &\ \ \ \ \ \ \ \ +\hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x_k}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q}) |I_{\{|{\tau}^{x}_Q(\omega)-{\tau}^{x_k}_Q(\omega)|< \varepsilon\}}I_{\{{\tau}^{x}_{{Q}}\vee {\tau}^{x_k}_{{Q}}\leq T\}}]\\ &\leq L_{\varphi}\hat{\mathbb{E}}[\sup_{\substack{t,s\in [0,T]\\0\leq |t-s|\leq\varepsilon }}|X^{x}_{t}-X^{x}_{s}|]+L_{\varphi}\hat{\mathbb{E}}[\sup_{ t\in [0,T]}|X^{x}_{t}-X^{x_k}_{t} |]. \end{align*} For each integer $\rho\geq 1$, denote $t^{\rho}_i=\frac{i}{\rho}T$, $i=0, \ldots, \rho$.
Then one can easily check that \[ \sup_{\substack{t,s\in [0,T]\\0\leq |t-s|\leq\varepsilon }}| X^{x}_t- X^{x}_s|\leq 3\sup_{i}\sup_{s\in[t^{\rho}_i,t^{\rho}_{i+1}]}| X^{x}_{t^{\rho}_i}- X^{x}_s|, \] whenever $\varepsilon\leq \frac{T}{\rho}.$ Thus by a standard argument we can find some generic constant $C_{T}>0$ (which may vary from line to line) independent of $k,\varepsilon$ so that, for each $\varepsilon\leq \frac{T}{\rho}$, \begin{align*} \hat{\mathbb{E}}[\sup_{\substack{t,s\in [0,T]\\0\leq |t-s|\leq\varepsilon }}| X^{x}_t- X^{x}_s|^{4}] \leq 3^4\sum\limits_{i=0}^{\rho-1}\hat{\mathbb{E}}[\sup_{s\in[t^{\rho}_i,t^{\rho}_{i+1}]}| X^{x}_{t^{\rho}_i}- X^{x}_s|^{4}]\leq \frac{C_T}{\rho}. \end{align*} Moreover, it holds that \[ \hat{\mathbb{E}}[\sup_{ t\in [0,T]}|X^{x}_{t}-X^{x_k}_{t} |]\leq C_T|x-x_k|. \] Consequently, we obtain that for each $\rho$ \[ I_1^{k,\varepsilon,T}\leq C_TL_{\varphi}(\frac{1}{\rho^{\frac{1}{4}}}+|x-x_k|), \ \text{for each}\ \varepsilon\leq \frac{T}{\rho}, \] which indicates that $\limsup\limits_{k\rightarrow\infty,\,\varepsilon\rightarrow 0}I_1^{k,\varepsilon,T}=0$ for each $T>0$. \\ For $I_2^{k,T}$, it follows from Lemma \ref{stopping time lemma} and Markov's inequality that \[ I_2^{k,T}\leq 2C_{\varphi}\hat{\mathbb{E}}[I_{\{{\tau}^{x}_{{Q}}+ {\tau}^{x_k}_{{Q}}> T\}}]\leq 2C_{\varphi}\{ \hat{\mathbb{E}}[I_{\{{\tau}^{x}_{{Q}}> \frac{T}{2}\}}]+\hat{\mathbb{E}}[I_{\{{\tau}^{x_k}_{{Q}}> \frac{T}{2}\}}]\} \leq \frac{8CC_{\varphi}}{T}, \ \text{for each} \ T>0.\] For $I_3^{k,\varepsilon}$, it follows from Lemma \ref{tau continuous lemma} that $ \limsup\limits_{k\rightarrow\infty}I_3^{k,\varepsilon}=0 $ for each $\varepsilon>0.$ By the above analysis, letting $k\rightarrow\infty$, $\varepsilon\rightarrow 0$ and then sending $T\rightarrow\infty$ in equation \eqref{myq4} yields that \[ \limsup\limits_{k\rightarrow\infty}\hat{\mathbb{E}}[|\varphi(X^{x}_{{\tau}^{x}_Q})-\varphi(X^{x_k}_{{\tau}^{x_k}_Q})|]=0. \] Now we consider the second term in equation (\ref{eq. 21}). Since $f$ is bounded and Lipschitz continuous on $\overline{Q}$, we have \begin{align*} &\hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds-\int_0^{{\tau}^{x_k}_Q}f(X^{x_k}_{s})ds|]\\ &\leq \hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}(f(X^{x}_{s})-f(X^{x_k}_{s}))ds|]+ 2C_f\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}]\\ &\leq \hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}(f(X^{x}_{s})-f(X^{x_k}_{s}))ds|I_{\{{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}\leq T\}}]+2C_f\hat{\mathbb{E}}[{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}I_{\{{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}> T\}}]+ 2C_f\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}]\\ &\leq L_f T\hat{\mathbb{E}}[\sup_{ t\in [0,T]}|X^{x}_{t}-X^{x_k}_{t} |]+2C_f\hat{\mathbb{E}}[{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}I_{\{{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}> T\}}]+ 2C_f\hat{\mathbb{E}}[{{\tau}^{x}_Q\vee {\tau}^{x_k}_Q}-{{\tau}^{x}_Q\wedge {\tau}^{x_k}_Q}]. \end{align*} For any $\delta>0$, by first taking $T$ large enough that the second term is smaller than $2C_f\delta$ and then letting $k\rightarrow\infty$, we deduce that $$ \limsup\limits_{k\rightarrow\infty}\hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds-\int_0^{{\tau}^{x_k}_Q}f(X^{x_k}_{s})ds|]\leq 2C_f\delta,$$ which implies $$ \limsup\limits_{k\rightarrow\infty}\hat{\mathbb{E}}[|\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds-\int_0^{{\tau}^{x_k}_Q}f(X^{x_k}_{s})ds|]=0.$$ Therefore, we obtain the continuity of $u$ on $\overline{Q}$.
For the general case that $\varphi\in C(\partial Q)$ and $f\in C(\overline{Q})$, we can find a sequence of bounded and Lipschitz functions $\varphi_n\in C(\partial Q)$ and $f_n\in C(\overline{Q})$ such that $\varphi_n$ and $f_n$ converge uniformly to $\varphi$ and $f$, respectively. Then the corresponding functions $u_n$ converge to $u$ uniformly on $\overline{Q}$, which implies the desired result. {\it 2 Viscosity solution property.} We only prove the viscosity sub-solution case, since the other case can be proved in a similar way. Assume that $u$ does not satisfy the viscosity sub-solution property. Then there exists a test function $\phi\in C^2(\overline{Q})$ such that $\phi\geq u$ on $Q$, $\phi(x_0)=u(x_0)$ for some point $x_0\in Q$ and $$ G(\sigma(x_0)^TD^2\phi(x_0)\sigma(x_0)+H(D\phi(x_0)),\sigma(x_0)^TD\phi(x_0))+ \langle b(x_0),D\phi(x_0)\rangle<f(x_0). $$ By continuity, we can find an open ball $U(x_0,{\delta_0})\subset Q$ for some $\delta_0>0$ such that $$G(\sigma(x)^TD^2\phi(x)\sigma(x)+H(D\phi(x)),\sigma(x)^TD\phi(x))+ \langle b(x),D\phi(x)\rangle <f(x), \ \text{for all} \ x\in U(x_0,{\delta_0}).$$ Moreover, ${\tau}^{x_0}_{U(x_0,{\delta_0})}>0$ q.s. Set $\Upsilon(x):=G(\sigma(x)^TD^2\phi(x)\sigma(x)+H(D\phi(x)),\sigma(x)^TD\phi(x))$. Applying It\^{o}'s formula to $\phi$, we have \begin{align*} &\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t})-\phi(x_0)-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\langle D\phi(X^{x_0}_s), b(X^{x_0}_s)\rangle ds\\ &=\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\langle D\phi(X^{x_0}_s),\sigma(X^{x_0}_s) dB_s\rangle+\frac12\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\langle \sigma^T(X^{x_0}_s)D^2\phi(X^{x_0}_s)\sigma(X^{x_0}_s)+H(D\phi(X^{x_0}_s)), d\langle B\rangle_s\rangle\\ &=\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\Upsilon(X^{x_0}_s)ds+M_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}, \end{align*} where $M$ is a $G$-martingale (Proposition \ref{martingale proposition}) and is given by \begin{align*} M_t=\int_0^{ t}\langle D\phi(X^{x_0}_s),\sigma(X^{x_0}_s) dB_s\rangle+\frac12\int_0^{ t}\langle\sigma^T(X^{x_0}_s)D^2\phi(X^{x_0}_s)\sigma(X^{x_0}_s)+H(D\phi(X^{x_0}_s)), d\langle B\rangle_s\rangle -\int_0^{t}\Upsilon(X^{x_0}_s)ds. \end{align*} That is, \begin{align*} M_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}+\phi(x_0)=\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\langle D\phi(X^{x_0}_s), b(X^{x_0}_s)\rangle ds-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\Upsilon(X^{x_0}_s)ds. \end{align*} Taking expectations on both sides and then using the optional sampling theorem for $G$-martingales (see Theorem 48 in \cite{HP1}), we get that \begin{equation*} \begin{split} \phi(x_0)=\hat{\mathbb{E}}[\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}(\Upsilon(X^{x_0}_s)+\langle D\phi(X^{x_0}_s), b(X^{x_0}_s)\rangle) ds].
\end{split} \end{equation*} Recalling Lemma \ref{square stopping time lemma}, we have that \begin{align*} \hat{\mathbb{E}}[|\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t})-\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}})|]&\leq 2C_{\phi}\hat{\mathbb{E}}[I_{\{{\tau}^{x_0}_{U(x_0,{\delta_0})}\geq t\}}]\rightarrow 0, \ \text{as}\ t\rightarrow\infty,\\ \hat{\mathbb{E}}[|\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}\wedge t}\psi(X^{x_0}_s)ds-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}}\psi(X^{x_0}_s)ds|]&\leq 2C_{\psi}\hat{\mathbb{E}}[{\tau}^{x_0}_{U(x_0,{\delta_0})}I_{\{{\tau}^{x_0}_{U(x_0,{\delta_0})}\geq t\}}]\rightarrow 0,\ \text{as}\ t\rightarrow\infty, \end{align*} for $\psi:=\Upsilon+\langle D\phi, b\rangle.$ Therefore, it follows that \begin{equation*}\label{98767833333575} \begin{split} \phi(x_0)&=\hat{\mathbb{E}}[\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}}(\Upsilon(X^{x_0}_s)+\langle D\phi(X^{x_0}_s), b(X^{x_0}_s)\rangle) ds]\\ &=\hat{\mathbb{E}}^{x_0}_2[\phi(B'_{{\tau}^{0,1}_{U(x_0,{\delta_0})}}) -\int_0^{{\tau}^{0,1}_{U(x_0,{\delta_0})}}(\Upsilon(B'_s)+\langle D\phi(B'_s),b(B'_s)\rangle) ds]. \end{split} \end{equation*} Note that ${{\tau}^{0,1}_{U(x_0,{\delta_0})}}\in L_C^1(\Omega_n,\mathcal{P}\circ (X_\cdot^{x_0})^{-1})$ by Theorem \ref{exit time quasi continuity lemma}. Then a similar analysis as in the first part of the proof of Theorem \ref{brownian motion DDP for bounded domain} gives $\phi(B'_{{\tau}^{0,1}_{U(x_0,{\delta_0})}}) -\int_0^{{\tau}^{0,1}_{U(x_0,{\delta_0})}}\widetilde{\psi}(B'_s)ds\in L_C^1(\Omega_n,\mathcal{P}\circ (X_\cdot^{x_0})^{-1})$ for each $\widetilde{\psi}\in C(\overline{Q})$. Thus, in view of Remark \ref{remark on tightness guarantee maximum and on closure} and the fact that $\Upsilon+\langle D\phi, b\rangle< f$ on $U(x_0,{\delta_0})$, we conclude that \begin{equation*} \begin{split} \phi(x_0)&=\max_{P\in \mathcal{P}}E_{P\circ (X^{x_0}_\cdot)^{-1}}[\phi(B'_{{\tau}^{0,1}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{0,1}_{U(x_0,{\delta_0})}}(\Upsilon(B'_s)+\langle D\phi(B'_s),b(B'_s)\rangle) ds]\\ &>\max_{P\in \mathcal{P}}E_{P\circ (X^{x_0}_\cdot)^{-1}}[\phi(B'_{{\tau}^{0,1}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{0,1}_{U(x_0,{\delta_0})}}f(B'_s)ds]\\ &=\hat{\mathbb{E}}[\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}}f(X^{x_0}_s)ds]. \end{split} \end{equation*} However, by Theorem \ref{brownian motion DDP for bounded domain}, we get that $$ u(x_0)=\hat{\mathbb{E}}[u(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}}f(X^{x_0}_s)ds]\leq \hat{\mathbb{E}}[\phi(X^{x_0}_{{\tau}^{x_0}_{U(x_0,{\delta_0})}})-\int_0^{{\tau}^{x_0}_{U(x_0,{\delta_0})}}f(X^{x_0}_s)ds]<\phi(x_0)=u(x_0), $$ which is a contradiction. The proof is complete. \end{proof} \begin{corollary}\label{supremum realization} Assume that $\varphi\in C(\partial Q)$ and $f\in C(\overline{Q})$. For $u$ defined as in the above theorem, we have \begin{equation}u(x)=\max_{P\in\mathcal{P}}E_P[\varphi(X^x_{{\tau}^{x}_Q})-\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds].\end{equation} \end{corollary} \begin{proof} According to the proof of Theorem \ref{brownian motion DDP for bounded domain}, we have $\varphi(B'_{{\tau}^{0,1}_{Q}}) -\int_0^{{\tau}^{0,1}_{Q}}f(B'_s)ds\in L_C^1(\Omega_n,\mathcal{P}\circ (X_\cdot^{x})^{-1})$.
Then Corollary \ref{belong to LC1 corollary} implies $\varphi(X^x_{{\tau}^{x}_Q})-\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds\in L_C^1(\Omega_d)$, and the desired result now follows from Remark \ref{remark on tightness guarantee maximum and on closure}. \end{proof} The following result is a direct consequence of Theorem \ref{viscosity solution theorem}. \begin{corollary}\label{viscosity solution theorem2} Assume that $\varphi$ and $f$ satisfy the same conditions as above. Then $u(x):=-\hat{\mathbb{E}}[-\varphi(X^x_{{\tau}^{x}_Q})+\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds]$ is the $C(\overline{Q})$-continuous viscosity solution of \begin{equation}\label{-G} \begin{cases} -G(-\sigma(x)^TD^2u(x)\sigma(x)-H(Du(x)),-\sigma(x)^TDu(x))+ \langle b(x),Du(x)\rangle=f(x),\ x\in Q,\\ u(x)=\varphi(x),\ x\in \partial Q. \end{cases} \end{equation} \end{corollary} \begin{proof} The proof is immediate from Theorem \ref{viscosity solution theorem} by taking $\tilde{\varphi}:=-\varphi$, $\tilde{f}:=-f$ and $\tilde{u}(x):=-u(x)$. \end{proof}
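To relate the above representation to the classical theory, we close this section with a brief sanity check; the special case below (a single probability measure and $h_{ij}\equiv 0$) is chosen purely for illustration and is not used elsewhere. \begin{remark} Suppose that $\mathcal{P}=\{P\}$ is a singleton under which $B$ is a standard $d$-dimensional Brownian motion, so that $G(A,p)=\frac{1}{2}\mathrm{tr}[A]$, and take $h_{ij}\equiv 0$, whence $H\equiv 0$. Then equation \eqref{G elliptic PDE} reduces to the linear Dirichlet problem $$ \frac{1}{2}\mathrm{tr}[\sigma(x)^TD^2u(x)\sigma(x)]+\langle b(x),Du(x)\rangle=f(x)\ \ \text{in}\ Q,\ \ \ \ u=\varphi\ \ \text{on}\ \partial Q, $$ and Theorem \ref{viscosity solution theorem} recovers the classical Feynman--Kac representation $u(x)=E_P[\varphi(X^x_{{\tau}^{x}_Q})-\int_0^{{\tau}^{x}_Q}f(X^x_{s})ds]$ for SDEs with Dirichlet boundary. \end{remark}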
\section*{Introduction} We say that a system of equations with variables taking values in a set $S$ is partition regular over $S$ if for every finite partition of $S$ one cell of the partition contains a solution to the system. Many famous results in Ramsey theory (including Schur's theorem and van der Waerden's theorem) can be stated as saying that a certain system of equations is partition regular. The problem of whether a given system of equations is partition regular or not has been widely studied (see, e.g., \cite{BHL2013, BDHL, Deuber1973, Deuber1975, DeuberHindman1987, Moreira2017, Rado1933, Rado1943}). The first general result which concerns partition regularity of a system of linear equations with integer coefficients over the set of positive integers is due to Rado \cite{Rado1933, Rado1943}. For a single equation, it says that an equation $$a_1 x_1 + a_2 x_2 +\dots +a_n x_n=0$$ with nonzero integer coefficients is partition regular if and only if $\sum_{i\in I} a_i =0$ for some nonempty $I \subset \{1,\dots,n\}$. In general, Rado's theorem states that a system of linear equations of the form $\mathbf{A} \mathbf{x} = 0,$ where $\mathbf{A}$ is a matrix with integer entries, is partition regular if and only if the matrix $\mathbf{A}$ satisfies the so-called columns condition, stated below for an arbitrary domain. \begin{definition}\label{def:cc} Let $\mathbf{A}$ be a $k\times l$ matrix with entries in a domain $R$ with fraction field $K$. Denote the columns of $\mathbf{A}$ by $ \mathbf{c}_1,\dots, \mathbf{c}_l \in R^k$. We say that $\mathbf{A}$ satisfies the \emph{columns condition} if there exists an integer $m\geq 0$ and a partition of the set of columns $\{1,\dots, l\}=I_0 \cup I_1 \cup \dots \cup I_m$ such that $\sum_{i \in I_0} \mathbf{c}_i =0$ and such that for $t\in\{1,\dots,m\}$ the vector $\sum_{i \in I_t} \mathbf{c}_i$ lies in the $K$-vector space generated by the columns $\mathbf{c}_j$ with $j\in I_0\cup\dots\cup I_{t-1}$. \end{definition} Several authors have studied partition regularity in more general contexts. Our study is inspired by a paper of Bergelson, Deuber, Hindman, and Lefmann \cite{BDHL}, where the authors studied equations with coefficients in arbitrary (commutative) rings. To this end, they generalised the columns condition. The following property is called the columns condition in \cite{BDHL}, but in order to distinguish it from the simpler condition considered above, we will refer to it as the generalised columns condition. \begin{definition}\label{def:gcc} Let $\mathbf{A}$ be a $k\times l$ matrix with entries in a ring $R$. Denote the columns of $\mathbf{A}$ by $ \mathbf{c}_1,\dots, \mathbf{c}_l \in R^k$. We say that $\mathbf{A}$ satisfies the \emph{generalised columns condition} if there exists an integer $m\geq 0$, a partition of the set of columns $\{1,\dots, l\}=I_0 \cup I_1 \cup \dots \cup I_m$, and elements $d_0,d_1,\dots,d_{m} \in R\setminus \{0\}$ such that the following conditions hold: \begin{enumerate} \item $d_0\cdot \sum_{i \in I_0} \mathbf{c}_i =0$. \item For $t\in\{1,\dots,m\}$ the vector $d_t \cdot \sum_{i \in I_t} \mathbf{c}_i$ lies in the $R$-module generated by the columns $\mathbf{c}_j$ with $j\in I_0\cup\dots\cup I_{t-1}$. \item \label{jestjuzbardzopozno} If $m>0$, then for each $n\geq 0$ the ideal $ d_0 (d_1\cdots d_{m})^n R$ is infinite.\end{enumerate} \end{definition} The generalised columns condition is easily seen to be equivalent to the columns condition when $R$ is an infinite domain.
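Before going further, we illustrate the columns condition with a classical example; the two equations below are standard and are included only to fix ideas. \begin{example} Over $R=\Z$ (so that $K=\Q$), the equation $x_1+x_2-x_3=0$ underlying Schur's theorem corresponds to the $1\times 3$ matrix $\mathbf{A}=(1,1,-1)$ with columns $\mathbf{c}_1=\mathbf{c}_2=(1)$ and $\mathbf{c}_3=(-1)$. Taking $m=1$, $I_0=\{1,3\}$, and $I_1=\{2\}$, we have $\mathbf{c}_1+\mathbf{c}_3=0$ and $\mathbf{c}_2\in \Q\,\mathbf{c}_1$, so $\mathbf{A}$ satisfies the columns condition (Definition \ref{def:cc}). By contrast, for the equation $x_1+x_2-3x_3=0$ with matrix $(1,1,-3)$, the possible sums of nonempty sets of columns are $1$, $-3$, $2$, $-2$, and $-1$, none of which is zero; hence the columns condition fails, and Rado's theorem shows that this equation is not partition regular over the positive integers. \end{example}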
When considering partition regularity of systems of linear equations, it is convenient to exclude the trivial solution consisting only of zeros. In \cite[Theorem 2.4]{BDHL} it is shown that if a matrix over an arbitrary ring satisfies the generalised columns condition, then the system of equations $\mathbf{A}\mathbf{x}=0$ is partition regular over $R$. However, this condition is in general \emph{not necessary}, e.g., when $R=\prod_{i=1}^{\infty} {\Z}/4{\Z}$. Our aim is to find a condition that is both \emph{necessary and sufficient}. The main idea of the paper is to consider partition regularity for modules rather than just for rings. Let $R$ be a ring, let $\mathbf{A}$ be a matrix with entries in $R$, and let $M$ be an $R$-module. It is meaningful to ask whether the system of equations $\mathbf{A}\mathbf{m}=0$ is \emph{partition regular} over $M$ in the sense that for every finite partition of $M$ one cell of the partition contains a (nontrivial) solution to the system $\mathbf{A}\mathbf{m}=0$. The special case of abelian groups (corresponding to the choice $R=\Z$) was previously studied by Deuber \cite{Deuber1975}. Studying partition regularity for modules rather than just for rings gives us extra technical flexibility and allows us to use notions and methods from commutative algebra. This enables us to generalise the results in \cite{BDHL}. In particular, we solve completely the problem of whether the system $\mathbf{A}\mathbf{x}=0$ is partition regular over a ring $R$ if $R$ is either noetherian or a domain. In order to state the first main result, we recall the notion of an associated prime. A prime ideal $\mathfrak{p}$ of a ring $R$ is an \emph{associated prime} of an $R$-module $M$ if there exists an element $m\in M$ with $\mathrm{ann}(m)=\mathfrak{p}$, where $\mathrm{ann}(m)=\{r\in R\mid rm=0\}$. A finitely generated module over a noetherian ring has only finitely many associated primes. The following result reduces the study of partition regularity for finitely generated modules over noetherian rings to the study of partition regularity over noetherian domains. \begin{introtheorem} Let $M$ be a finitely generated module over a noetherian ring $R$ and let $\mathbf{A}$ be a matrix with entries in $R$. Then the system $\mathbf{A}\mathbf{m}=0$ is partition regular over $M$ if and only if there exists an associated prime $\mathfrak{p}$ of $M$ such that the system $\mathbf{A}\mathbf{x}=0$ is partition regular over $R/\mathfrak{p}$. \end{introtheorem} In the case when $R$ is an (infinite) domain, the statement of the result simplifies considerably. \begin{introtheorem}\label{mainthmB} Let $R$ be an infinite domain and let $\mathbf{A}$ be a matrix with entries in $R$. Then the system $\mathbf{A}\mathbf{x}=0$ is partition regular over $R$ if and only if $\mathbf{A}$ satisfies the columns condition. \end{introtheorem} This result has been proved by Rado when $R$ is a subring of the complex numbers, and the case when $R$ is of characteristic zero can be obtained from it by a compactness argument (using, e.g., Lemma \ref{auxlemma:domains}). Thus the main interest of Theorem B is when the ring $R$ has positive characteristic. The advantage of our method is that it provides a uniform approach which works regardless of the characteristic. The crucial argument is to show that a matrix that does not satisfy the columns condition is not partition regular. 
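The following simple example, included only as an illustration of Theorem B, shows how the characteristic of the domain enters the picture. \begin{example} Let $\mathbf{A}=(1,1)$, corresponding to the single equation $x_1+x_2=0$. Over $R=\Z$ the sums of nonempty sets of columns are $1$, $1$, and $2$, so the columns condition fails; indeed, colouring the integers according to their sign shows directly that $\mathbf{A}$ is not partition regular over $\Z$, since any nontrivial solution satisfies $x_1=-x_2\neq 0$. Over the infinite domain $R=\F_2[t]$ of characteristic $2$, however, the two columns sum to zero, so the columns condition holds with $m=0$ and $I_0=\{1,2\}$, and Theorem B confirms that $\mathbf{A}$ is partition regular over $\F_2[t]$; indeed, $x_1=x_2=x$ is a solution for every $x$. \end{example}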
To prove that a matrix which does not satisfy the columns condition is not partition regular, Rado introduced the notion of $c_p$ colourings, defined as follows: For a prime number $p$, the colouring $c_p$ assigns to a positive integer $n$ the least nonzero digit of $n$ in base $p$. Rado proved that if a system of equations $\mathbf{A}\mathbf{x}=0$ with integer coefficients is partition regular over the set of positive integers with respect to all the colourings $c_p$, then the matrix $\mathbf{A}$ satisfies the columns condition. In order to prove Theorem B, we first reduce the problem to the case when $R$ is a finitely generated $\Z$-algebra. We then construct a family of colourings $c_{\mathfrak m}$, where the role of a prime $p$ in Rado's argument is played by an arbitrary maximal ideal $\mathfrak m$ of $R$ such that the local ring $R_{\mathfrak m}$ is regular. We prove that if a system of equations $\mathbf{A}\mathbf{x}=0$ is partition regular over $R$ with respect to all the colourings $c_{\mathfrak m}$, then the matrix $\mathbf{A}$ satisfies the columns condition. In order to study partition regularity over more general rings, in \cite{BDHL} the following definition was introduced. A ring $R$ is called a \emph{Rado ring} if the generalised columns condition is equivalent to partition regularity for all matrices $\mathbf{A}$ with entries in $R$. It follows from Theorem B that any domain is a Rado ring. Non-examples of Rado rings have been scarce. In fact, the only previously known example of a non-Rado ring comes from \cite[Theorem 3.5]{BDHL}, where it is shown that the ring $R=\prod_{i=1}^{\infty} {\Z}/n{\Z}$ is a Rado ring if and only if $n$ is squarefree. This example is a bit unsatisfactory since the ring in question is not noetherian. In Theorem \ref{thm:Radonoeth} we classify all noetherian Rado rings and in particular prove that all reduced (i.e., without nonzero nilpotents) noetherian rings are Rado. We also show that the ring $R=(\Z/p^2\Z)[X]$ is not a Rado ring. We also study partition regularity of nonhomogeneous equations over arbitrary modules. In this case a system of equations $\mathbf{Am}=\mathbf{b}$ with $ 0\neq \mathbf{b}\in M^k$ is called partition regular over $M$ if for any finite colouring of $M$ some colour class contains a solution $\mathbf{m}$. One way for a nonhomogeneous equation to be partition regular is to admit a constant solution $\mathbf m = (m,\dots,m)$ with all the coordinates equal. In \cite{Rado1933}, Rado showed that this is the only possibility if $R=M=\Z$. In Theorem \ref{lem:nonhomogenous-one_eq} we rather easily extend this result to the case of an arbitrary module $M$ and a single equation $a_1 m_1 + \dots + a_l m_l = b$ using a colouring result of Straus \cite{Straus}. For systems of equations, we can only prove such a result under certain quite weak assumptions. We state below a slightly simplified version of the result. \begin{introtheorem}\label{mainthmC} Let $R$ be a ring and let $M$ be an $R$-module. Let $\mathbf{A}$ be a $k\times l$ matrix with entries in $R$ and let $\mathbf{b}\in M^k$ be nonzero. Assume that one of the following assumptions holds: \begin{enumerate} \item[(a)] $k=1$; or \item[(b)] $R$ is a domain and $M$ is a torsion-free module; or \item[(c)] $R$ is a Dedekind domain; or \item[(d)] $R$ is a reduced noetherian ring and $M=R$.\end{enumerate} Then the system $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$ if and only if it has a constant solution in $M$. \end{introtheorem} We do not know if the assumptions of Theorem C are necessary.
In fact, we do not know any examples of modules over which nonhomogeneous equations would be partition regular without admitting constant solutions. It might be argued that the definition of partition regularity for nonhomogeneous equations is rather artificial, and that we should insist that the monochromatic solution to the equation $\mathbf{Am}=\mathbf{b}$ be nonconstant. Note, however, that if $\mathbf{Am}=\mathbf{b}$ admits a constant solution, the set of solutions of $\mathbf{Am}=\mathbf{b}$ is simply a translate of the set of solutions of the homogeneous equation $\mathbf{Am}=0$. Thus the question of existence of a nonconstant monochromatic solution of $\mathbf{Am}=\mathbf{b}$ (or even a solution with all the variables different) is reduced to the corresponding problem for homogeneous equations. While we do not study these questions in this paper, we refer the interested reader to \cite{HL06} for the case when $M=\Z$ or $M=\Q$. We briefly discuss the contents of the paper. In Section \ref{sec:bn} we introduce some basic properties of partition regularity over modules. In particular, we show that partition regularity behaves well with respect to short exact sequences of modules, which allows us to perform d\'evissage arguments. Several properties here generalise those proved by Deuber for abelian groups \cite{Deuber1975}. In Section \ref{sec:mod}, we apply these methods to finitely generated modules over noetherian rings and prove Theorem A. In Section \ref{sec:dom}, we introduce $\mathfrak m$-colourings (defined on fields that are finitely generated over $\F_p$ or $\Q$) and use them to prove Theorem B. The aim of Section \ref{sec:Rado} is twofold. We first classify noetherian Rado rings, and then characterise partition regularity over the infinite product ring $\prod_{i\in I} \Z/n\Z$. Using module-theoretic techniques we are able to generalise some results and answer some questions from \cite{BDHL}. Finally, in Section \ref{sec:nonhom} we study nonhomogeneous equations. We use here a classical method to deduce the existence of a constant solution of a system of equations from the existence of such a solution for linear combinations of individual equations. For general modules this method does not always work, and we introduce certain modules that measure the obstruction to its applicability. We then show that this obstruction vanishes in the cases considered in Theorem C. We hope that the paper will also be of interest to readers with little or no background in commutative algebra. For this reason, we have tried to recall all the notions and to include precise references for the results that we need. Our general reference in commutative algebra is the book of Eisenbud \cite{book:Eisenbud}. \subsection*{Notations} All rings are assumed to be commutative and with a unit. By $\N=\{0,1,\dots\}$ we denote the set of natural numbers, and by $\F_p$ the finite field with $p$ elements. We denote by $R^*$ the group of invertible elements of a ring $R$. Given a quotient map $R\to R/I$, we denote the image of $x\in R$ in $R/I$ by $\bar{x}$ (the choice of $I$ will always be clear from the context). We use boldface letters to denote matrices and vectors. For a module $M$, we (somewhat unusually) regard elements of $M^k$ as $k\times 1$ matrices with entries in $M$. We denote the transpose of a matrix $\mathbf A$ by $\mathbf A^{\intercal}$. \section{Basic notions}\label{sec:bn} Let $R$ be a ring, $\mathbf{A}$ a $k\times l$ matrix with entries in $R$, $M$ an $R$-module, and $r\geq 1$ an integer.
\begin{definition} We say that $\mathbf{A}$ is partition regular over $M$ for $r$ colours if for every colouring $\chi \colon M \to \{1,\dots, r\}$ of $M$ with $r$ colours there exists a nontrivial monochromatic solution of the equation $\mathbf{A}\mathbf{m} =0$ with $\mathbf{m}=(m_1,\dots, m_l)^\intercal \in M^l$, i.e., a solution with $$\chi(m_1)=\dots=\chi(m_l) \quad \text{ and } \quad \mathbf{m} \neq 0.$$ We say that $\mathbf{A}$ is partition regular over $M$ if $\mathbf{A}$ is partition regular over $M$ for any (finite) number of colours. \end{definition} We begin by developing some basic properties of these notions that will often be used in later sections. For the rest of this section we will assume that $R$ is a ring and $\mathbf{A}$ is a matrix with entries in $R$. Let $M$ be an $R$-module and let $N$ be its submodule. Since every colouring of $M$ induces a colouring of $N$, we see that if $\mathbf{A}$ is partition regular over $N$ for $r$ colours, then it is also partition regular over $M$ for $r$ colours. We will use this fact repeatedly without explicitly referring to it. Partition regularity is preserved by homomorphisms, in the following sense: let $\varphi \colon R \to S$ be a ring homomorphism and let $M$ be an $S$-module. The module $M$ can be regarded as an $R$-module via restriction of scalars (with multiplication by $r\in R$ given by $rm=\varphi(r)m$). We denote this $R$-module by $\varphi^* M$. \begin{lemma} Let $\varphi \colon R \to S$ be a ring homomorphism, $M$ an $S$-module, $\mathbf{A}$ a matrix with entries in $R$, and $r\geq 1$ an integer. Let $\varphi_*\mathbf{A}$ be the image of $\mathbf{A}$ under $\varphi$. Then $\varphi_* \mathbf{A}$ is partition regular over $M$ for $r$ colours if and only if $\mathbf{A}$ is partition regular over $\varphi^* M$ for $r$ colours. \end{lemma} \begin{proof} Obvious. \end{proof} The next result is a variant of the usual finiteness property of partition regularity. The proof uses a rather standard compactness argument. \begin{proposition}\label{prop:compactness} Let $M$ be an $R$-module, $\mathbf{A}$ a matrix with entries in $R$, and $r\geq 1$ an integer. If $\mathbf{A}$ is partition regular over $M$ for $r$ colours, then there exists a finite subset $F$ of $M$ such that for every colouring of $F$ with $r$ colours there exists a nontrivial monochromatic vector $\mathbf{m}$ with entries in $F$ such that $\mathbf{A}\mathbf{m} =0$. \end{proposition} \begin{proof} Let $C=\{1,\dots, r\}^M$ be the space of all colourings of $M$ with $r$ colours considered as a topological space with the product topology, using the discrete topology on the set $\{1,\dots, r\}$. For a finite set $F\subset M$, denote by $C_F$ the set of all colourings in $C$ that do not admit nontrivial monochromatic solutions to the equation $\mathbf{A}\mathbf{m} =0$ with entries in $F$. We will prove that $C_F=\emptyset$ for some $F$. Suppose the contrary. The sets $C_F$ are then closed and nonempty, and the family $\{C_F\}$ is closed under finite intersections. By compactness of $C$, the set $\bigcap C_F$ is nonempty, the intersection being taken over all the finite subsets of $M$. Any element of $\bigcap C_F$ is a colouring of $M$ that does not admit any nontrivial monochromatic solution to the equation $\mathbf{A}\mathbf{m} =0$ with entries in any finite set $F\subset M$, and hence none with entries in $M$ at all. This gives a contradiction. \end{proof} We will mainly use Proposition \ref{prop:compactness} in the following form.
\begin{corollary}\label{cor:compactness} Let $M$ be an $R$-module, $\mathbf{A}$ a matrix with entries in $R$, and $r\geq 1$ an integer. \begin{enumerate} \item\label{cor:compactness1} If $\mathbf{A}$ is partition regular over $M$ for $r$ colours, then it is partition regular for $r$ colours over some finitely generated submodule of $M$. \item \label{cor:compactness2} If $\mathbf{A}$ is partition regular over $M$, then it is partition regular over some countably generated submodule of $M$.\end{enumerate}\end{corollary} \begin{proof} For the proof of \eqref{cor:compactness1}, take the submodule generated by a finite set $F$ given by Proposition \ref{prop:compactness}. Property \eqref{cor:compactness2} follows from \eqref{cor:compactness1}. \end{proof} \begin{proposition}\label{prop:PR_under_localisation} Let $M$ be an $R$-module, $\mathbf{A}$ a matrix with entries in $R$, and $r\geq 1$ an integer. \begin{enumerate} \item\label{prop:PR_under_localisation1} Let $S$ be a multiplicative subset of $R$. Assume that $S$ does not contain zero divisors on $M$. Then $\mathbf{A}$ is partition regular over $M$ for $r$ colours if and only if it is partition regular over $S^{-1}M$ for $r$ colours. \item\label{prop:PR_under_localisation2} Assume that $R$ is a domain with fraction field $K$. Then $\mathbf{A}$ is partition regular over $R$ for $r$ colours if and only if it is partition regular over $K$ for $r$ colours. \end{enumerate} \end{proposition} \begin{proof} For the proof of \eqref{prop:PR_under_localisation1}, assume that $\mathbf{A}$ is partition regular over $M$ for $r$ colours. Since $S$ does not contain zero divisors on $M$, the canonical map $M\rightarrow S^{-1}M$ is injective and $\mathbf{A}$ is partition regular over $S^{-1}M$ for $r$ colours. For the opposite implication, assume that $\mathbf{A}$ is partition regular over $S^{-1}M$ for $r$ colours. By Corollary \ref{cor:compactness} there exists a finitely generated $R$-submodule $N$ of $S^{-1}M$ such that $\mathbf{A}$ is partition regular over $N$ for $r$ colours. Choosing a finite set $\{m_1/s_1,\dots, m_t/s_t\}$ of generators of $N$, we see that $N$ is isomorphic with a submodule of $M$ via the map $ n\mapsto s_1\cdots s_t n$. Hence $\mathbf{A}$ is partition regular over $M$ for $r$ colours. Property \eqref{prop:PR_under_localisation2} follows immediately from \eqref{prop:PR_under_localisation1}. \end{proof} We end this section with a property that allows us to perform d\'evissage arguments for partition regularity. \begin{proposition}\label{prop:PR_of_quotients} Let $M$ be an $R$-module, $N$ its submodule, $\mathbf{A}$ a matrix with entries in $R$, and $r,s\geq 1$ integers. \begin{enumerate} \item\label{prop:PR_of_quotients1} If $\mathbf{A}$ is partition regular over $M$ for $r+s$ colours, then either $\mathbf{A}$ is partition regular over $N$ for $r$ colours or $\mathbf{A}$ is partition regular over $M/N$ for $s$ colours. \item\label{prop:PR_of_quotients2} If $M=\bigoplus_{i=1}^t M_i$ is a direct sum of finitely many $R$-modules $M_i$, then $\mathbf{A}$ is partition regular over $M$ if and only if $\mathbf{A}$ is partition regular over some $M_i$. \end{enumerate} \end{proposition} \begin{proof} For the proof of \eqref{prop:PR_of_quotients1}, suppose that there exist a colouring $\chi_N \colon N \to \{1,\dots,r\}$ of $N$ and a colouring $\chi_{M/N} \colon M/N \to \{1,\dots,s\}$ of $M/N$, both not admitting any nontrivial monochromatic solutions to the equation $\mathbf{A}\mathbf{m}=0$ in $N$ (resp., in $M/N$). 
Denote by $\bar{m}$ the image of $m\in M$ in $M/N$. Consider the colouring $\chi \colon M \to \{1,\dots, r+s\}$ given by $$ \chi(m)=\begin{cases} \chi_N(m) \quad \text{if} \quad m \in N,\\ r+\chi_{M/N}(\bar{m}) \quad \text{if} \quad m \notin N. \end{cases}$$ It is then easy to see that the colouring $\chi$ does not admit any nontrivial monochromatic solutions to the equation $\mathbf{A}\mathbf{m}=0$ in $M$. Property \eqref{prop:PR_of_quotients2} follows immediately from \eqref{prop:PR_of_quotients1}. \end{proof} \section{Partition regularity over modules}\label{sec:mod} In this section we characterise partition regularity for finitely generated modules over noetherian rings. We use the notion of an associated prime. We recall that a prime ideal $\mathfrak{p}$ of $R$ is an associated prime of an $R$-module $M$ if there exists an injective $R$-module homomorphism $R/{\mathfrak p} \hookrightarrow M$; equivalently, there exists $m\in M$ with $\mathfrak{p}=\mathrm{ann}(m)$. If $M$ is a finitely generated module over a noetherian ring $R$, then the set $\Ass\, M$ of associated prime ideals of $M$ is finite (see \cite[Theorem 3.10]{book:Eisenbud}). We say that a submodule $N$ of $M$ is $\mathfrak{p}$-primary if $\Ass\, M{/}N=\{\mathfrak{p}\}$. \begin{theorem}\label{mainthm:modules} Let $M$ be a finitely generated module over a noetherian ring $R$ and let $\mathbf{A}$ be a matrix with entries in $R$. The following conditions are equivalent: \begin{enumerate} \item The matrix $\mathbf A$ is partition regular over $M$. \item There exists an associated prime $\mathfrak{p}$ of $M$ such that $\mathbf{A}$ is partition regular over $R/\mathfrak{p}$. \end{enumerate} \end{theorem} \begin{proof} If $\mathfrak{p}$ is an associated prime of $M$ such that $\mathbf A$ is partition regular over $R/\mathfrak{p}$, then $R/\mathfrak{p}$ embeds into $M$ and hence $\mathbf A$ is partition regular over $M$. Assume now that $\mathbf A$ is partition regular over $M$ and let $\mathfrak{p}_1,\dots, \mathfrak{p}_t$ be the associated primes of $M$. By primary decomposition (see \cite[Theorem 3.10]{book:Eisenbud}), there exist $\mathfrak{p}_i$-primary submodules $Q_i$ of $M$ such that $\bigcap_{i=1}^{t} Q_{i}=0 $. Hence $M$ embeds via the diagonal embedding into $\bigoplus_{i=1}^t M/Q_i$ and by Proposition \ref{prop:PR_of_quotients}.\eqref{prop:PR_of_quotients2}, $\mathbf A$ is partition regular over $M/Q_i$ for some $i\in \{1,\dots, t\}$. All zero divisors of the $R$-module $M/Q_i$ are in $\mathfrak{p}_{i}$ (see \cite[Theorem 3.1]{book:Eisenbud}), and hence by Proposition \ref{prop:PR_under_localisation}.\eqref{prop:PR_under_localisation1}, $\mathbf A$ is partition regular over the localised module $(M/Q_i)_{\mathfrak{p}_{i}}$. Since $\mathfrak{m}=\mathfrak{p}_{i}R_{\mathfrak{p}_{i}}$ is the only associated prime of $(M/Q_i)_{\mathfrak{p}_{i}}$, some power $\mathfrak{m}^h$ of $\mathfrak{m}$ annihilates $(M/Q_i)_{\mathfrak{p}_{i}}$ (see \cite[Proposition 3.9]{book:Eisenbud}) and \[ 0=\mathfrak{m}^h(M/Q_i)_{\mathfrak{p}_{i}}\subset \mathfrak{m}^{h-1}(M/Q_i)_{\mathfrak{p}_{i}} \subset \dots \subset \mathfrak{m}(M/Q_i)_{\mathfrak{p}_{i}} \subset (M/Q_i)_{\mathfrak{p}_{i}} \] is a finite filtration of $(M/Q_i)_{\mathfrak{p}_{i}}$. 
Every quotient $\mathfrak{m}^j(M/Q_i)_{\mathfrak{p}_{i}}/\mathfrak{m}^{j+1}(M/Q_i)_{\mathfrak{p}_{i}}$ is a finite-dimensional vector space over the field $R_{\mathfrak{p}_{i}}/\mathfrak{p}_{i}R_{\mathfrak{p}_{i}}$ and the above filtration can be refined so that all the quotients are isomorphic with the residue field $R_{\mathfrak{p}_{i}}/\mathfrak{p}_{i}R_{\mathfrak{p}_{i}}$. By repeated use of Proposition \ref{prop:PR_of_quotients}, we get that $\mathbf A$ is partition regular over $R_{\mathfrak{p}_{i}}/\mathfrak{p}_{i}R_{\mathfrak{p}_{i}}$. Since $R_{\mathfrak{p}_{i}}/\mathfrak{p}_{i}R_{\mathfrak{p}_{i}}$ is the fraction field of $R/\mathfrak{p}_i$, it follows from Proposition \ref{prop:PR_under_localisation}.\eqref{prop:PR_under_localisation2} that $\mathbf A$ is partition regular over $R/\mathfrak{p}_i$. \end{proof} \section{Partition regularity over integral domains and $\mathfrak m$-colourings}\label{sec:dom} The aim of this section is to study partition regularity over integral domains $R$. In this case the columns condition (Definition \ref{def:cc}) and the generalised columns condition (Definition \ref{def:gcc}) coincide as long as the integral domain $R$ is infinite. If $R$ is finite (meaning that $R$ is a finite field), the generalised columns condition is more restrictive and says that the sum of all the columns is zero. We begin with a simple lemma saying that the columns condition does not depend on the base ring, in the following sense. \begin{lemma}\label{lem:cc_in_fields} Let $R\subset S$ be domains and let $\mathbf{A}$ be a matrix with entries in $R$. Then $\mathbf{A}$ satisfies the columns condition as a matrix with entries in $R$ if and only if it satisfies the columns condition as a matrix with entries in $S$.\end{lemma} \begin{proof} It is immediate that if the columns condition holds for $R$, then it also holds for $S$, and that the converse holds if $S$ is the fraction field of $R$. Thus we may assume that $R$ and $S$ are both fields and that the columns condition holds over $S$. In this case, the columns condition means that a certain system of linear equations with coefficients in $R$ has a nontrivial solution in $S$. It then follows from basic linear algebra that this system also has a nontrivial solution in $R$. \end{proof} We will generalise the construction of the colourings $c_p$ that play a crucial role in the proof of Rado's theorem. Let $p$ be a prime number. Recall that the colouring $c_p \colon \Z \to \{0,\dots,p-1\}$ is given by the formula $c_p(n) = j$ if $n$ is of the form $n=p^k (pm+j)$ for some integers $k\geq 0$ and $m\in \Z$, where $j\in \{1,\dots,p-1\}$; we also set $c_p(0)=0$. We recall that for a local noetherian ring $S$ with maximal ideal $\mathfrak m$, Krull's theorem states that $\mathfrak m$ cannot be generated by fewer than $t=\dim S$ elements (see \cite[Corollary 10.7]{book:Eisenbud}); $S$ is called a regular local ring if $\mathfrak m$ can be generated by exactly $t$ elements. If $S$ is regular and $t=1$, then $S$ is a discrete valuation ring and any element $\pi$ that generates $\mathfrak m$ induces a $\pi$-adic discrete valuation $v \colon K\to \Z$ on the fraction field $K$ of $S$ (see \cite[11.1]{book:Eisenbud}). Every regular local ring is a domain (see \cite[Corollary 10.14]{book:Eisenbud}). Let $R$ be a finitely generated $\Z$-algebra. Let $\mathfrak m$ be a maximal ideal of $R$ such that $R_{\mathfrak m}$ is a regular local ring.
We will now construct a finite colouring of the fraction field $K$ of $R_{\mathfrak m}.$ Choose generators $\pi_1,\dots,\pi_t$ of $\mathfrak m R_{\mathfrak m}$ with $t=\dim R_\mathfrak m$. Let $$S_i = R_\mathfrak m/(\pi_1,\dots,\pi_i)R_\mathfrak m\quad \text{ for } i\in\{0,\dots,t\}.$$ The rings $S_i$ are regular local rings and hence are domains. Let $K_i$ denote the fraction field of $S_i$ (note that $K=K_0$). We have $S_t \cong R_{\mathfrak m}/\mathfrak{m}R_{\mathfrak{m}} \cong R/{\mathfrak{m}}$. Since $\Z$ is a Jacobson ring (i.e., every prime ideal is an intersection of maximal ideals), we conclude from a general form of the Nullstellensatz (see \cite[Theorem 4.19]{book:Eisenbud}) that $R/{\mathfrak m}$ is a finite field. For $i\in\{0,\dots,t-1\}$, the element $\pi_{i+1}$ is a prime element of $S_i$, and hence the ring $(S_i)_{(\pi_{i+1})}$ is a discrete valuation ring with fraction field $K_i$. Consider the induced $\pi_{i+1}$-adic discrete valuation $v_{i+1} \colon K_i \to \Z$. Every nonzero element $z\in K_i$ can be written as $$z = \pi_{i+1}^{v_{i+1}(z)} z' \quad \text{ for some } z'\in ((S_i)_{(\pi_{i+1})})^*.$$ Note that the residue field of $(S_i)_{(\pi_{i+1})}$ is $$(S_i)_{(\pi_{i+1})}/ \pi_{i+1} (S_i)_{(\pi_{i+1})} \cong K_{i+1}.$$ We now construct a colouring $c_{\mathfrak m} \colon K \to R/{\mathfrak m}$. Let $x\in K$. If $x=0$, we put $c_{\mathfrak m} (x)=0$. If $x\neq 0$, we put $x_0 = x$ and we construct inductively the elements $x_1,\dots,x_t$ with $x_i \in K_{i}$ such that for $i\in\{0,\dots,t-1\}$ the element $x_{i+1} \in K_{i+1}^*$ is the image of $x_i \pi_{i+1}^{-v_{i+1}(x_i)}$ in the residue field of $(S_i)_{(\pi_{i+1})}$ under the isomorphism $(S_i)_{(\pi_{i+1})}/ \pi_{i+1} (S_i)_{(\pi_{i+1})} \cong K_{i+1}$. The element $x_t$ is a nonzero element of $K_t=R/{\mathfrak m}$. We put $c_{\mathfrak m} (x)=x_t$. Note that the definition of the colouring $c_{\mathfrak m}$ depends not only on $\mathfrak m$, but also on the choice of generators $\pi_1,\dots,\pi_t$ of $\mathfrak m R_{\mathfrak m}$. By abuse of terminology, we refer to any such colouring as an $\mathfrak m$-\emph{colouring}. \begin{remark} We briefly present an alternative description of the colouring $c_{\mathfrak m}$. Any nonzero element $x\in K$ can be (non-uniquely) written in the form $$x = \pi_1^{a_1+1} y_1 + \pi_1^{a_1} \pi_2^{a_2+1} y_2 +\dots + \pi_1^{a_1}\cdots \pi_{t-2}^{a_{t-2}} \pi_{t-1}^{a_{t-1}+1} y_{t-1}+ \pi_1^{a_1}\cdots \pi_{t-1}^{a_{t-1}} \pi_t^{a_t} y_{t}$$ with $a_1,\dots,a_t \in \Z$, $y_i \in (R_{\mathfrak m})_{(\pi_1,\dots,\pi_i)}$ for $i\in\{1,\dots,t-1\}$, and $y_t\in R_{\mathfrak m}^*$. (The existence of such a representation is proved by induction on $t$.) Let $z$ denote the image of $y_t$ in $R_{\mathfrak m}/{\mathfrak m}R_{\mathfrak m} \cong R/{\mathfrak m}$. Then $c_{\mathfrak m}(x)=z$. \end{remark} \begin{example}\mbox{ } \begin{enumerate} \item Let $R=\Z$ and let $p$ be a prime number. The ring $R_{(p)}$ is a regular local ring (actually, a discrete valuation ring), and we recover Rado's colouring $c_p$ as an example of an $\mathfrak m$-colouring for $\mathfrak m =(p)$. \item Let $R=\Z[x,y]$, let $p$ be a prime, and let $\mathfrak m = (p,x,y)$. For $f\in R$, write $$f=\sum_{(i,j)\in \N^2} f_{ij} x^i y^j.$$ The $\mathfrak m$-colouring associated to the choice of generators $\pi_1=x, \pi_2=y, \pi_3 =p$ is given by $$c_{\mathfrak m}(f) = c_p(f_{i_0 j_0}),$$ where $(i_0,j_0)$ is the lexicographically smallest element of $\N^2$ with $f_{i_0 j_0} \neq 0$.
\item In the previous example, take instead $\pi_1=p, \pi_2=x, \pi_3 =y$. Then $$c_{\mathfrak m}(f) = c_p(f_{i_1 j_1}),$$ where $(i_1,j_1)$ is the lexicographically smallest element of $\N^2$ among those $(i,j)$ for which the coefficient $f_{ij}$ has minimal $p$-adic valuation. \end{enumerate} \end{example} \begin{lemma}\label{oneequationdom} Let $R$ be a finitely generated $\Z$-algebra, $\mathfrak m$ a maximal ideal of $R$ such that $R_{\mathfrak m}$ is a regular local ring, and $K$ the fraction field of $R_{\mathfrak m}$. Let $a_1,\dots,a_l \in R$. If the equation $\sum_{i=1}^l a_i m_i = 0$ has a nontrivial monochromatic solution $(m_1,\dots,m_l)^{\intercal}\in K^l$ with respect to an ${\mathfrak m}$-colouring, then there exists a nonempty $I\subset \{1,\dots,l\}$ such that $$\sum_{i\in I} a_i \in \mathfrak m.$$\end{lemma} \begin{proof} We will prove the claim by induction on $t=\dim R_\mathfrak m$. If $t=0$, then by the Nullstellensatz $K\cong R/{\mathfrak m}$ is a finite field, and the fact that $(m_1,\dots,m_l)$ is monochromatic means that $m_1,\dots, m_l$ are all equal and nonzero. It follows that $\sum_{i=1}^l a_i=0$ in $K$, i.e., $\sum_{i=1}^l a_i\in \mathfrak m$. If $t>0$, write $\mathfrak{m}R_{\mathfrak m} = (\pi_1,\dots,\pi_t) R_{\mathfrak m}$ with $t=\dim R_\mathfrak m$ and $\pi_1,\dots,\pi_t\in \mathfrak m$, let $c_{\mathfrak m}$ be the associated ${\mathfrak m}$-colouring, and as before denote the $\pi_1$-adic valuation on $K$ by $v_1$. Let $S=R/(\pi_1)$ and $\mathfrak n = \mathfrak{m}/(\pi_1)$. For $x\in (R_{\mathfrak m})_{(\pi_1)}$, denote by $\bar{x}$ the image of $x$ in the fraction field of $S_{\mathfrak n}$ by the quotient map. Then $\mathfrak n$ is a maximal ideal of $S$, $\mathfrak{n}S_{\mathfrak n}=(\bar{\pi}_2,\dots,\bar{\pi}_t)$, and $S_{\mathfrak n} \cong R_{\mathfrak m}/(\pi_1)$ is a regular local ring. Directly from the definition, we see that if $c_{\mathfrak n}$ is the $\mathfrak n$-colouring (with the choice of generators $\bar{\pi}_2,\dots,\bar{\pi}_t$ of $\mathfrak{n}S_{\mathfrak n}$), then for $x\in K^*$ we have $$c_{\mathfrak{m}}(x)=c_{\mathfrak n}(\overline{\pi_1^{-v_1(x)} x}).$$ Let now $(m_1,\dots,m_l)^{\intercal}\in K^l$ be a nontrivial monochromatic solution of the equation $\sum_{i=1}^l a_i m_i = 0$ with respect to the colouring $c_{\mathfrak m}$. Then all $m_i$ are nonzero (since $c_{\mathfrak m}(m_i)=0$ only for $m_i=0$). Put $\nu =\min_{1\leq i \leq l} v_1(m_i)$ and let $$J=\{i\in\{1,\dots,l\}\mid v_1(m_i) = \nu\}.$$ Multiplying the equation $\sum_{i=1}^l a_i m_i = 0$ by $\pi_1^{-\nu}$ and passing to the fraction field of $S_{\mathfrak n}$, we get $$\sum_{i\in J} \bar{a}_i \overline{\pi_1^{-\nu} m_i}=0.$$ Furthermore, we have $c_{\mathfrak n}(\overline{\pi_1^{-\nu} m_i})=c_{\mathfrak{m}}(m_i)$ for $i\in J$, and hence the elements $\overline{\pi_1^{-\nu} m_i}$ are monochromatic for $i\in J$. By the induction hypothesis, there exists a nonempty subset $I\subset J$ such that $\sum_{i\in I} \bar{a}_i$ lies in $\mathfrak{n}$. Hence $\sum_{i\in I} a_i$ lies in $\mathfrak{m}$. \end{proof} In order to proceed, we need the following fundamental fact. \begin{lemma}\label{regularlocus} Let $R$ be a domain that is a finitely generated $\Z$-algebra. Then there exists a maximal ideal $\mathfrak m$ of $R$ such that $R_{\mathfrak m}$ is a regular local ring.\end{lemma} This is a well-known result that is usually proven in the much more general context of excellent rings, introduced by Grothendieck. The ring $\Z$ is an example of an excellent ring, as is any Dedekind domain of characteristic zero.
For the proof of Lemma \ref{regularlocus}, see, e.g., \cite[Corollaire 6.12.6]{bookEGAIV.2} or \cite[(32.B)]{bookMatsumuraCA}. \begin{lemma}\label{auxlemma:domains} Let $R$ be a domain with fraction field $K$ and let $\mathbf{A}$ be a matrix with entries in $R$. Let $R'$ be a subring of $K$ containing all the entries of $\mathbf{A}$ and let $K'$ denote its fraction field. If $\mathbf{A}$ is partition regular over $R$, then it is also partition regular over $K'(t)$. \end{lemma} \begin{proof} Since $\mathbf{A}$ is partition regular over $R$, it is also partition regular over $K$. We may regard $K$ as a $K'$-vector space. Fix a number of colours $r$. By Corollary \ref{cor:compactness}.\eqref{cor:compactness1}, there exists a finite-dimensional $K'$-vector space $V$ such that $\mathbf A$ is partition regular over $V$ for $r$ colours. Since $V$ is isomorphic to a~$K'$-vector subspace of $K'(t)$, we conclude that $\mathbf A$ is partition regular over $K'(t)$ for $r$ colours. This gives the claim since the number of colours $r$ was arbitrary. \end{proof} \begin{theorem}\label{mainthm:domains} Let $R$ be an infinite domain and let $\mathbf{A}$ be a $k\times l$ matrix with entries in $R$. The following conditions are equivalent: \begin{enumerate} \item The matrix $\mathbf{A}$ is partition regular over $R$. \item The matrix $\mathbf{A}$ satisfies the columns condition.\end{enumerate} \end{theorem} \begin{proof} The fact that matrices satisfying the columns condition over an infinite domain are partition regular follows from \cite[Theorem 2.4]{BDHL}. For the opposite implication, assume that $\mathbf{A}$ is partition regular over $R$. We will prove that $\mathbf{A}$ satisfies the columns condition. Let $K$ be the fraction field of $R$. For two vectors $\mathbf{v},\mathbf{w} \in R^k$, we denote their standard inner product by $(\mathbf{v},\mathbf{w})$. Let $ \mathbf{c}_1,\dots, \mathbf{c}_l \in R^k$ denote the columns of $\mathbf{A}$. Consider the set $$S=\{ J \subset \{1,\dots,l\} \mid \sum_{j\in J} \mathbf{c}_j \neq \mathbf{0}\}.$$ We claim that we can find a vector $\mathbf{v} \in R^k$ such that for all $J\in S$ we have $$(\sum_{j\in J} \mathbf{c}_j , \mathbf{v}) \neq 0.$$ In fact, for all $J\in S$ the set of vectors in $K^k$ orthogonal to $\sum_{j\in J} \mathbf{c}_j $ is a proper vector subspace of $K^k$. Since a vector space over an infinite field is not a finite union of its proper subspaces, we can find a vector in $K^k$ that is not orthogonal to $\sum_{j\in J} \mathbf{c}_j$ for any $J\in S$. Multiplying this vector by an appropriate element of $R$, we obtain a vector in $R^k$ that has the desired property. For $I\subset \{1,\dots,l\}$, let $V_I$ be the $K$-vector subspace of $K^k$ spanned by $\mathbf{c}_i$ with $i\in I$ and let $$ S_I=\{ J \subset \{1,\dots,l\} \mid J\cap I =\emptyset \mbox{ and } \sum_{j\in J} \mathbf{c}_j \not\in V_I\}.$$ A similar argument shows that there exists a vector $\mathbf{v}_I \in R^k$ (which depends on $I$, but not on $J$) such that \begin{equation}\label{eqn:S_Iset} (\mathbf{c}_i, \mathbf{v}_I)=0 \mbox{ for all } i\in I \qquad \mbox{ and }\qquad (\sum_{j\in J} \mathbf{c}_j,\mathbf{v}_I)\neq 0 \mbox{ for all } J \in S_I.\end{equation} Let $R'$ be the subring of $K$ generated by all the entries of $\mathbf{A}$, $\mathbf v$, and $\mathbf{v}_I$ for $I\subset \{1,\dots, l\} $, as well as the inverses of the elements $(\sum_{j\in J} \mathbf{c}_j,\mathbf{v})$ for $J\in S$ and $(\sum_{j\in J} \mathbf{c}_j,\mathbf{v}_I)$ for $I\subset \{1,\dots,l\}$ and $J\in S_I$.
Denote the fraction field of $R'$ by $K'$. We will now prove that the matrix $\mathbf{A}$ satisfies the columns condition. Consider the polynomial ring $R''=R'[t]$ in one variable over $R'$. The ring $R''$ is a domain that is finitely generated as a $\Z$-algebra. By Lemma \ref{regularlocus}, there exists a maximal ideal $\mathfrak{m}$ of $R''$ such that $R''_{\mathfrak m}$ is a regular local ring. Let $c_{\mathfrak m}$ be an $\mathfrak m$-colouring of the fraction field $K''=K'(t)$ of $R''_{\mathfrak m}$. By Lemma \ref{auxlemma:domains}, $\mathbf{A}$ is partition regular over $K''$ and so the equation $\mathbf{A} \mathbf{m}=0$ has a nontrivial monochromatic solution $\mathbf{m} \in (K'')^l$. We first claim that there exists a nonempty $I_0\subset \{1,\dots, l\}$ such that $\sum_{i\in I_0} \mathbf{c}_i = 0$. Taking the inner product of $\mathbf{A} \mathbf{m}=0$ with the vector $\mathbf{v}$, we get $$\sum_{i=1}^l (\mathbf{c}_i,\mathbf{v})m_{i}=0.$$ By Lemma \ref{oneequationdom}, there exists a nonempty subset $I_0 \subset \{1,\dots,l\}$ such that $\sum_{i\in I_0} (\mathbf{c}_i,\mathbf{v}) \in \mathfrak m$. This means that $$\sum_{i\in I_0} \mathbf{c}_i = 0,$$ since otherwise we would have $I_0\in S$, and hence $(\sum_{i\in I_0} \mathbf{c}_i,\mathbf{v})$ would be invertible in $R' \subset R''$, contradicting the fact that it lies in the proper ideal $\mathfrak m$. We will now construct inductively nonempty subsets $I_1,\dots,I_m$ such that $\{1,\dots,l\}=I_0\cup\dots\cup I_m$ and for $s\in\{1,\dots,m\}$ we have $$I_s \subset \{1,\dots,l\} \setminus (I_0\cup\dots\cup I_{s-1}) \qquad \text{ and } \qquad \sum_{i\in I_s} \mathbf{c}_i\in V_{s-1},$$ where $V_{s-1}$ denotes the $K$-vector space spanned by the columns $\mathbf{c}_i$ with $i\in I_0\cup \dots\cup I_{s-1}$ (we write $s$ for the index to avoid a clash with the polynomial variable $t$). Assume that the subsets $I_1,\dots,I_{s-1}$ have already been constructed, but $I_0\cup\dots\cup I_{s-1} \varsubsetneq \{1,\dots,l\}$. We will construct the set $I_s$. Let $\mathbf{v}_{s-1}=\mathbf{v}_{I_0\cup\dots\cup I_{s-1}}$ be the vector considered in \eqref{eqn:S_Iset}. Taking the inner product of $\mathbf{A} \mathbf{m}=0$ with the vector $\mathbf{v}_{s-1}$, we get $$\sum_{i=1}^l (\mathbf{c}_i,\mathbf{v}_{s-1})m_{i}=0.$$ Since $(\mathbf{c}_i,\mathbf{v}_{s-1})=0$ for all $i\in I_0\cup\dots\cup I_{s-1}$, using once more Lemma \ref{oneequationdom} we get that there exists a nonempty subset $I_s \subset \{1,\dots,l\}\setminus (I_0\cup\dots\cup I_{s-1})$ such that $\sum_{i\in I_s} (\mathbf{c}_i,\mathbf{v}_{s-1}) \in \mathfrak m$. This means that $\sum_{i\in I_s} (\mathbf{c}_i,\mathbf{v}_{s-1})$ is not invertible in $R' \subset R''$, and hence $$\sum_{i\in I_s} \mathbf{c}_i \in V_{s-1}.$$ This ends the inductive construction and shows that $\mathbf A$ satisfies the columns condition with the corresponding partition $\{1,\dots,l\}=I_{0}\cup\dots\cup I_m$. \end{proof} \begin{remark} In \cite{HLS2003} it was pointed out that while in the classical version of Rado's Theorem there are several known proofs of the claim that matrices satisfying the columns condition are partition regular, there is essentially only one known proof of the opposite implication, and it uses colourings $c_p$. As a corollary of the proof, one obtains the slightly unusual statement that a matrix with integer entries is partition regular over $\Z$ if and only if it is partition regular with respect to all the colourings $c_p$.
The proof of Theorem \ref{mainthm:domains} establishes the following generalisation: A matrix with entries in a domain $R$ that is a finitely generated $\Z$-algebra is partition regular over $R$ if and only if it is partition regular with respect to all the $\mathfrak m$-colourings of $R$. \end{remark} \section{Rado rings}\label{sec:Rado} In this section we study Rado rings. We recall that each matrix $\mathbf{A}$ with entries in a ring $R$ that satisfies the generalised columns condition is partition regular over $R$. A ring $R$ is called a Rado ring if the converse holds for all matrices $\mathbf{A}$ with entries in $R$. If $R$ is an infinite domain, the columns condition and the generalised columns condition coincide. If $R$ is a finite field, the generalised columns condition over $R$ is stronger. Nevertheless, in both cases we obtain the following result. \begin{corollary}\label{cor:domainsareRado} Every domain is a Rado ring. \end{corollary} \begin{proof} If $R$ is infinite, this follows from Theorem \ref{mainthm:domains}. If $R$ is finite, we may give each element of $R$ a different colour; we then easily see that a matrix $\mathbf{A}$ with entries in $R$ is partition regular over $R$ if and only if the sum of all the columns is zero, which in this case is equivalent to the generalised columns condition.\end{proof} In \cite{BDHL}, the only given example of a non-Rado ring was the infinite product ring $R=\prod_{i=1}^{\infty} \Z{/}n\Z$ for a non-squarefree integer $n$. This example is somewhat unsatisfactory, since the ring in question is not noetherian. In the next subsection we will classify noetherian rings that are Rado. As a by-product, we obtain many examples of noetherian non-Rado rings. \subsection*{Noetherian Rado rings} We begin with the following lemma that will be used to show that certain rings are not Rado. \begin{lemma}\label{lemexplmat} Let $R$ be a ring and let $b$ be an element of $R$. Consider the $3\times 3$ matrix \[ \mathbf{B}=\left( \begin{array}{ccc} 1 & 1 & -1 \\ 0 & b & 0 \\ 0 & 0 & b \end{array} \right). \] \begin{enumerate} \item\label{lemexplmat1} The matrix $\mathbf{B}$ is partition regular over $R$ if and only if $\ann(b)$ is infinite. \item\label{lemexplmat2} The matrix $\mathbf{B}$ satisfies the generalised columns condition over $R$ if and only if there exists $d\in R$ such that $db=0$ and $d^n R$ is infinite for all $n\geq 0$.\end{enumerate} \end{lemma} \begin{proof} We see from the form of the matrix that $\mathbf{B}$ is partition regular over $R$ if and only if the equation $x+y-z=0$ is partition regular over $\ann(b)$. By the main result of \cite{Deuber1975} this is equivalent to the fact that $\ann(b)$ is infinite. For the convenience of the reader, we give a sketch of an alternative direct proof. If $\ann(b)$ is finite, the equation is clearly not partition regular. If $\ann(b)$ contains (as an abelian group) elements of arbitrarily high order, then the equation is partition regular by the finite form of Schur's Theorem. Otherwise, if $\ann(b)$ is infinite, but all the elements have bounded order, then $\ann(b)$ contains for some prime $p$ a subgroup $V$ that is an $\F_p$-vector space of countably infinite dimension. Since the coefficients of the equation $x+y-z=0$ are in $\F_p$, the group $V$ can also be identified with $\F_p[X]$ regarded as an $\F_p[X]$-module. The conclusion follows from Theorem \ref{mainthm:domains}.
For the proof of \eqref{lemexplmat2}, suppose first that there exists $d\in R$ such that $db=0$ and $d^n R$ is infinite for all $n\geq 0$. Denote the columns of $\mathbf B$ by $\mathbf c_1$, $\mathbf c_2$, $\mathbf c_3$. We claim that (using the notation of Definition \ref{def:gcc}) the matrix $\mathbf{B}$ satisfies the generalised columns condition with $m=1$, $I_0=\{2,3\}$, $I_1=\{1\}$, and $d_0=d_1=d$. In fact, we only need to note that $d(\mathbf c_2 +\mathbf c_3)=0$ and $d\mathbf c_1 = d\mathbf c_2 \in R\mathbf c_2 + R\mathbf c_3$. Conversely, suppose that $\mathbf B$ satisfies the generalised columns condition with some choice of $m$, partition $\{1,2,3\}=I_0\cup \dots \cup I_m$, and elements $d_0,\dots,d_m$ satisfying the conditions of Definition \ref{def:gcc}. By looking at the top entry, we see that $I_0$ has exactly two elements (otherwise, the top entry in $\sum_{i\in I_0} \mathbf c_i$ would be a unit, and hence could not be annihilated by $d_0 \neq 0$). Hence $m=1$. The column $\mathbf c_j$ with $j\in I_1$ satisfies $d_1\mathbf c_j \in \sum_{i\in I_0} R \mathbf c_i$. Considering the three possible choices of $I_0$, we easily compute that $d_1 b=0$. (For example, if $I_0=\{2,3\}$, then \[ d_1 \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) = a_1\left( \begin{array}{c} 1 \\ b \\ 0 \end{array} \right) + a_2\left( \begin{array}{c} -1 \\ 0 \\ b \end{array} \right) \] for some $a_1, a_2\in R$. Then $d_1=a_1-a_2$, $a_1 b = a_2 b=0$, and hence $d_1 b=(a_1-a_2)b=0$. The reasoning is analogous in the remaining two cases.) By the generalised columns condition, $d_0d_1^n R$ is infinite for all $n\geq 0$. Thus $d=d_1$ satisfies the conditions in \eqref{lemexplmat2}. \end{proof} \begin{theorem}\label{thm:Radonoeth} Let $R$ be a noetherian ring. The following conditions are equivalent: \begin{enumerate} \item\label{thm:Radonoeth1} $R$ is a Rado ring. \item\label{thm:Radonoeth2} For every $\mathfrak p\in \Ass\, R$ either $R{/}{\mathfrak p}$ is a finite field or the ring $R_{\mathfrak p}$ is a field.\end{enumerate}\end{theorem} \begin{proof} Assume first that $R$ is a Rado ring and suppose that there exists a prime ideal $\mathfrak p\in{\Ass}\, {R}$ such that $R{/}\mathfrak p$ is infinite and $R_{\mathfrak p}$ is not a field. The latter means that $\mathfrak{p}R_{\mathfrak p} \neq 0$. The ideal $\mathfrak p$ might or might not be a minimal prime ideal; let $Q$ denote the set of minimal prime ideals of $R$ other than $\mathfrak p$. Since $R$ is noetherian, $Q$ is finite. Let $I=\{x\in \mathfrak p \mid x/1 = 0 \text{ in } R_{\mathfrak p}\}.$ Since $\mathfrak p R_{\mathfrak p} \neq 0$, we have $I\varsubsetneq \mathfrak p$. By prime avoidance (see \cite[Lemma 3.3]{book:Eisenbud}), there exists $b\in \mathfrak p$ such that $b\notin I$ and $b\notin \mathfrak q$ for all $\mathfrak q \in Q$. (We use here a variant of prime avoidance that allows for one ideal not to be prime.) We will prove that for this choice of $b$, the matrix $\mathbf B$ considered in Lemma \ref{lemexplmat} is partition regular over $R$, but does not satisfy the generalised columns condition, which contradicts the fact that $R$ is a Rado ring. Since $\mathfrak p\in \Ass\, R$, there exists $c \in R$ such that $\mathfrak p=\ann(c)$. Note that $\ann(b)$ contains $Rc$ which as an $R$-module is isomorphic to $R{/}\mathfrak p$. Hence $\ann(b)$ is infinite, and thus by Lemma \ref{lemexplmat}.\eqref{lemexplmat1} the matrix $\mathbf B$ is partition regular over $R$. Suppose now that $d\in R$ is such that $db=0$.
Since $b\notin \mathfrak q$ for all $\mathfrak q \in Q$, we have $d \in \mathfrak q$ for all $\mathfrak q \in Q$. Furthermore, in the ring $R_{\mathfrak p}$ we have $db/1 = 0$ and $b/1\neq 0$, so $d/1$ is a zero divisor, hence a non-unit, in the local ring $R_{\mathfrak p}$, and therefore $d\in \mathfrak p$. Thus, $d$ is contained in all the minimal prime ideals of $R$, and hence is nilpotent (see \cite[Corollary 2.12]{book:Eisenbud}). By Lemma \ref{lemexplmat}.\eqref{lemexplmat2}, $\mathbf B$ does not satisfy the generalised columns condition. This ends the proof of the implication $ \eqref{thm:Radonoeth1} \Rightarrow \eqref{thm:Radonoeth2}$. For the proof of the opposite implication, assume that for every $\mathfrak p\in \Ass\, R$ either $R{/}{\mathfrak p}$ is a finite field or the ring $R_{\mathfrak p}$ is a field, and choose a matrix $\mathbf A$ with entries in $R$ that is partition regular over $R$. We need to prove that $\mathbf A$ satisfies the generalised columns condition. By Theorem \ref{mainthm:modules}, there exists $\mathfrak p \in \Ass\, R$ such that $\mathbf A$ is partition regular over $R{/}\mathfrak p$. Write $\mathfrak p =\ann(d)$ for $d\in R$. Denote the columns of $\mathbf A$ by $\mathbf c_1, \dots, \mathbf c_l$. We need to consider two cases. \begin{description} \item[\normalfont\emph{Case 1}] $R{/}{\mathfrak p}$ is a finite field. Partition regularity of $\mathbf A$ over the finite field $R{/}\mathfrak p$ is only possible if $\mathbf c_1 +\dots + \mathbf c_l =0$ in $R{/}\mathfrak p$, and hence $d( \mathbf c_1 +\dots+ \mathbf c_l)=0$ in $R$. Hence $\mathbf A$ satisfies the generalised columns condition over $R$. \item[\normalfont\emph{Case 2}] $R_{\mathfrak p}$ is a field and $R{/}{\mathfrak p}$ is infinite. By Theorem \ref{mainthm:domains}, since $\mathbf A$ is partition regular over $R{/}\mathfrak p$, there exists an integer $m\geq 0$, a partition $\{1,\dots,l\}=I_0\cup \dots\cup I_m$ and elements $d'_1,\dots,d'_m \in R \setminus \mathfrak p$ such that $\sum_{i \in I_0} \mathbf{c}_i \in \mathfrak p R^k$ and $$ d'_t \sum_{i \in I_t} \mathbf{c}_i \in \sum_{j\in I_0\cup\dots\cup I_{t-1}}\!\!\!\!\!\!\!\! R \mathbf{c}_j + \mathfrak p R^k \quad \text{ for } \quad t\in\{1,\dots,m\}.$$ Put $d_0=d$ and $d_t=dd'_t$ for $t\in\{1,\dots,m\}$. Since $\mathfrak p=\ann(d)$, in order to prove that $\mathbf A$ satisfies the generalised columns condition with the choice of $m$, partition $\{1,\dots,l\}=I_0\cup \dots\cup I_m$, and elements $d_0,\dots, d_m$, it is enough to note that $Rd_0(d_1\cdots d_m)^n$ is infinite for all $n\geq 0$. We claim that $d\notin \mathfrak p$. Indeed, since $R_{\mathfrak p}$ is a field, we would otherwise have $d/1 =0$ in $R_{\mathfrak p}$, i.e., $sd=0$ for some $s\notin \mathfrak p$, which contradicts $\ann(d)=\mathfrak{p}$. Thus $d_t\notin \mathfrak p$ for $t\in\{0,\dots,m\}$, which implies that $\ann(d_0(d_1\cdots d_m)^n)\subset \mathfrak p$ for all $n\geq 0$. Thus $d_0(d_1\cdots d_m)^n R$ surjects onto $R{/}\mathfrak p$, and hence is infinite. This ends the proof.\qedhere \end{description}\end{proof} \begin{corollary} Every reduced noetherian ring is Rado.\end{corollary} \begin{proof} If $R$ is a reduced noetherian ring, then $\Ass \,R$ consists exactly of the minimal prime ideals of $R$ (this follows from \cite[Corollary 2.12 and Theorem 3.10]{book:Eisenbud}). Thus, for every $\mathfrak p\in \Ass\, R$, the ring $R_{\mathfrak p}$ is a reduced local artinian ring, hence a field. The claim follows from Theorem \ref{thm:Radonoeth}. \end{proof} We can now give an example of a noetherian ring that is not a Rado ring. \begin{example} Let $p$ be a prime number.
The ring $R=(\Z/p^2\Z)[X]$ is not a Rado ring. \end{example} \begin{proof} The only associated prime ideal of $R$ is $\mathfrak p =pR$. The quotient $R{/}\mathfrak p\cong \F_p[X]$ is infinite, while $R_{\mathfrak p}$ is not a field (the image of $p$ in $R_{\mathfrak p}$ is a nonzero nilpotent), so the claim follows from Theorem \ref{thm:Radonoeth}.\end{proof} \subsection*{Partition regularity over the ring $R=\prod_{i\in I} \Z/n\Z$} In \cite{BDHL} the authors considered the problem of partition regularity of linear equations over the product ring $R=\prod_{i=1}^{\infty} \Z/n\Z$. In particular, they showed that $R$ is not a Rado ring if and only if the ring $\Z/n\Z$ contains a nonzero nilpotent element. They also characterised partition regularity of single equations over the ring $R=\prod_{i=1}^{\infty} \Z/4\Z$. We will give a general characterisation of partition regularity for matrices over the ring $R=\prod_{i\in I} \Z/n\Z$. We first treat the case when $n=p$ is a prime, and then deduce from it the general case. \begin{proposition}\label{thm:characterisation_of_products} Let $I$ be any set, $n$ a positive integer, and $R=\prod_{i\in I} \Z{/}n\Z$. Let $\mathbf{A}$ be a $k\times l$ matrix with entries in $R$ and write $\mathbf A=\prod_{i\in I} \mathbf{A}_i$ with matrices $\mathbf{A}_i$ having entries in $\Z/n\Z$. The following conditions are equivalent: \begin{enumerate} \item The matrix $\mathbf{A}$ is partition regular over $R$. \item There is a prime $p$ dividing $n$ such that either for some $i\in I$ the matrix $\mathbf{A}_i \bmod p$ satisfies the generalised columns condition or for infinitely many $i\in I$ the matrix $\mathbf{A}_i \bmod p$ satisfies the columns condition. \end{enumerate} \end{proposition} \begin{proof} We begin by treating the case when $n=p$ is a prime number. Each matrix $\mathbf{A}_i$ lies in the set $\mathrm{M}_{k\times l} (\F_p)$ of $k\times l$ matrices over $\F_p$, and hence may take only finitely many possible values. For $\mathbf B \in \mathrm{M}_{k\times l} (\F_p)$, let $I_\mathbf{B} \subset I$ denote the set of $i\in I$ such that $\mathbf{A}_i=\mathbf{B}$. We decompose the ring $R$ as a finite product $ R = \prod_{\mathbf{B}\in \mathrm{M}_{k\times l} (\F_p)} R_\mathbf{B}$ of rings $R_\mathbf{B}=\prod_{i\in I_\mathbf{B}} \F_p$. We see from Proposition \ref{prop:PR_of_quotients}.\eqref{prop:PR_of_quotients2} that $\mathbf{A}$ is partition regular over $R$ if and only if $\mathbf{B}$ (regarded as a matrix with the same entries on each coordinate) is partition regular over $R_{\mathbf B}$ for some $\mathbf B \in \mathrm{M}_{k\times l} (\F_p)$. Since $\mathbf B$ has entries in $\F_p \subset R_\mathbf{B}$, we may forget about the ring structure on $R_\mathbf{B}$, and regard it instead as an $\F_p$-vector space. If $R_\mathbf{B}$ is finite, then $\mathbf{B}$ is partition regular over $R_\mathbf{B}$ if and only if it satisfies the generalised columns condition (i.e., its columns add up to zero). Now assume that $R_\mathbf{B}$ is infinite. We will show that $\mathbf{B}$ is partition regular over $R_\mathbf{B}$ if and only if it satisfies the columns condition over $\F_p$. By Corollary \ref{cor:compactness}.\eqref{cor:compactness2}, partition regularity over $R_\mathbf{B}$ is equivalent to partition regularity over an $\F_p$-vector space of countably infinite dimension that can be chosen to be the ring of polynomials $\F_p[X]$. Regarding now $\mathbf{B}$ as a matrix with coefficients in $\F_p[X]$, we conclude from Theorem \ref{mainthm:domains} that $\mathbf{B}$ is partition regular over $\F_p[X]$ if and only if it satisfies the columns condition over $\F_p[X]$.
By Lemma \ref{lem:cc_in_fields} this is equivalent to $\mathbf{B}$ satisfying the columns condition over $\F_p$. This ends the proof in the case when $n=p$ is a prime number. In the general case, consider the prime decomposition $n=p_1^{\alpha_{1}}\cdots p_t^{\alpha_{t}}$ of $n$, and write $R$ as $$R= \prod_{i\in I} \Z/p_1^{\alpha_{1}}\Z \times \cdots \times\prod_{i\in I} \Z/p_t^{\alpha_{t}}\Z.$$ By Proposition \ref{prop:PR_of_quotients}.\eqref{prop:PR_of_quotients2} the matrix $\mathbf{A}$ is partition regular over $R$ if and only if $\mathbf{A}$ is partition regular over the ring $\prod_{i\in I} \Z/p_j^{\alpha_{j}}\Z$ for some $j\in \{1,\dots,t\}$. Consider the filtration \[ 0\subset p_j^{\alpha_{j}-1}\Z/p_j^{\alpha_{j}}\Z\subset \dots \subset p_j\Z/p_j^{\alpha_{j}}\Z\subset \Z/p_j^{\alpha_{j}}\Z \] of $\Z/p_j^{\alpha_{j}}\Z$ with quotients isomorphic to $\F_{p_{j}}$. Using Proposition \ref{prop:PR_of_quotients}.\eqref{prop:PR_of_quotients1}, we reduce the problem to partition regularity over the ring $\prod_{i\in I} \F_{p_{j}}$. This ends the proof. \end{proof} As an easy corollary of Proposition \ref{thm:characterisation_of_products}, we can prove that for single equations over the ring $ \prod_{i\in I} \Z/n\Z$ partition regularity is equivalent to the generalised columns condition. This was already proven in \cite{BDHL} for $n=4$ and $I$ countable. \begin{corollary}\label{cor:answers_to_natural_questions_act1} Let $n$ be an integer, $R=\prod_{i\in I} \Z/n\Z$, and $a_1,\dots, a_l\in R$. The following conditions are equivalent: \begin{enumerate} \item\label{cor:answers_to_natural_questions_act11} The equation $a_1x_1+\dots + a_lx_l=0$ is partition regular over $R$. \item\label{cor:answers_to_natural_questions_act12} The matrix $(a_1,\dots,a_l)$ satisfies the generalised columns condition.\end{enumerate} \end{corollary} \begin{proof} It is sufficient to prove that \eqref{cor:answers_to_natural_questions_act11} implies \eqref{cor:answers_to_natural_questions_act12}. Let $\mathbf{A}=\prod_{i\in I} \mathbf{A}_i$ denote the $1\times l$ matrix $\mathbf{A}=(a_1,\dots,a_l)$ and assume that $\mathbf{A}$ is partition regular over $R$. By Proposition \ref{thm:characterisation_of_products} there exists a prime number $p$ dividing $n$ such that one of the following two cases holds. \begin{description} \item[\normalfont\emph{Case 1}] $\mathbf{A}_i \bmod p$ satisfies the generalised columns condition for some $i\in I$. In this case the $i$-th coordinate of $\sum_{j=1}^l a_j$ is divisible by $p$, and hence $\sum_{j=1}^l a_j$ is a zero divisor in $R$. Then $\mathbf A$ satisfies the generalised columns condition with $m=0$. \item[\normalfont\emph{Case 2}] There exists a matrix $\mathbf{B}$ such that $\mathbf{A}_i=\mathbf{B}$ for $i\in I_\mathbf{B}$ with $I_\mathbf{B}\subset I$ infinite and $\mathbf{B} \bmod p$ satisfies the columns condition. Write the matrix $\mathbf{B}$ as $(b_1,\dots,b_l)$ with $b_j \in \Z/n\Z$. We may assume that $\mathbf{B} \bmod p$ does not satisfy the generalised columns condition (otherwise, we are in Case 1). Thus there exists $\emptyset\varsubsetneq J \varsubsetneq \{1,\dots,l\}$ such that $\sum_{j\in J} b_j \bmod p =0$ and $b_{j_0}\bmod p \neq 0$ for some $j_0\in J$. Write $n=p^{\alpha}n'$ with $p{\nmid} n'$ and let $e=(e_i)_{i\in I}\in R$ be the element with $e_i=1$ if $i\in I_\mathbf{B}$ and $e_i=0$ otherwise.
Then it is easy to see that $\mathbf A$ satisfies the generalised columns condition with $m=1$, $I_0=J$, $I_1=\{1,\dots,l\}\setminus J$, $d_0=\frac{n}{p} e$ and $d_1=n'e$.\qedhere \end{description} \end{proof} In \cite{BDHL} the authors showed that the product $\prod_{i=1}^{\infty} \Z/np^2\Z$ is not Rado by constructing a $(p+1)\times (p+1)$ matrix that is partition regular but does not satisfy the generalised columns condition. They further asked if the minimal number of rows of such a matrix is $p+1$ \cite[p.\ 83]{BDHL}. By Corollary \ref{cor:answers_to_natural_questions_act1} we see that we cannot find such a matrix with only one row. We will show that the minimal number of rows is always at most three, and is two if $p\geq 5$. \begin{corollary}\label{cor:answers_to_natural_questions_act2} Let $n$ be a positive integer, $p$ a prime number, and $I$ an infinite set. Let $R=\prod_{i\in I}\Z/np^2\Z$. There exists a $3\times 3$ matrix $\mathbf{B}$ with entries in $R$ which is partition regular over $R$, but does not satisfy the generalised columns condition. \end{corollary} \begin{proof} This follows immediately from Lemma \ref{lemexplmat} by taking the matrix $\mathbf B$ corresponding to $b=p$ (identified with the element $(p)_{i\in I}$ with $p$ on each coordinate). Indeed, $\ann(p)=\prod_{i\in I} np\,(\Z/np^2\Z)$ is infinite, while any $d\in R$ with $dp=0$ satisfies $d^2=0$, so that $d^2R$ is finite.\qedhere \end{proof} \begin{corollary}\label{cor:answers_to_natural_questions_act3} Let $n$ be a positive integer, $p\geq 5$ a prime number, and $I$ an infinite set. Let $R=\prod_{i\in I}\Z/np^2\Z$. There exists a $2\times 3$ matrix $\mathbf{B}$ with entries in $R$ which is partition regular over $R$, but does not satisfy the generalised columns condition. \end{corollary} \begin{proof} The proof is analogous to the proof of Lemma \ref{lemexplmat}, using instead the $2\times 3$ matrix \[ \mathbf{B}=\left( \begin{array}{ccc} 1 & p-1 & 2 \\ 0 & 0 & p \end{array} \right). \qedhere \] \end{proof} It seems that for $p\in\{2,3\}$ there does not exist a matrix with two rows that is partition regular over $R$ but does not satisfy the generalised columns condition. A proof of this fact would however involve a lengthy case-by-case analysis and we will not attempt it. \section{Nonhomogeneous equations}\label{sec:nonhom} In this section we investigate the problem of partition regularity of nonhomogeneous equations over arbitrary modules. Let $R$ be a ring and let $M$ be an $R$-module. Let $\mathbf{A}$ be a $k\times l$ matrix with entries in $R$ and let $\mathbf{b}\in M^k$ be a vector. We say that the equation $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$ if for any finite colouring of $M$ there exists a solution $\mathbf{m}=(m_1,\dots,m_l)^{\intercal}\in M^l$ of this equation with $m_1,\dots, m_l$ monochromatic and not all $m_i$ zero. (The latter condition is automatic if $\mathbf{b}\neq 0$.) A complete characterisation of partition regularity of nonhomogeneous equations over the ring of integers was given by Rado \cite{Rado1933}. It states that a nonhomogeneous equation $\mathbf{Am}=\mathbf{b}$ with $\mathbf{A}, \mathbf{b}$ having integer entries and $\mathbf{b}\neq 0$ is partition regular over the integers if and only if it has a \emph{constant solution}, i.e., if there exists a vector $\mathbf m =(m,\dots,m)^{\intercal} \in \Z^l$ with all entries equal and such that $\mathbf{Am}=\mathbf{b}$. In \cite[Theorem 4.2]{BDHL}, this characterisation was extended to a rather restricted class of domains (more precisely, to integral domains with at least one nonzero nonunit such that $R/mR$ is finite for each $m\in R\setminus \{0\}$).
In this section we will generalise this result to a much wider class of rings. We will also study the problem more generally for modules. We first study the case of a single equation. Here, we replace the use of \cite[Lemma 4.1]{BDHL} (which is only proved under the above restrictive assumptions) with Theorem \ref{lem:nonhomogenous-one_eq} below, which is obtained using a result of Straus \cite{Straus}. We then use the case of a single equation to describe partition regularity for systems of equations. The idea to derive the general case from the case of a single equation is standard and due to Rado \cite{Rado1943}. It allows us to immediately obtain the desired result if $R$ is a domain. In general, this approach might not work, and we quantify the obstructions by introducing certain modules $H_R(I,M)$ (see Definition \ref{defHmodules}). We then study conditions under which this obstruction vanishes. We recall the theorem of Straus. \begin{theorem}[\cite{Straus}]\label{lemmaStraus} Let $G$ be an abelian group, let $f_1,\dots,f_l \colon G\to G$ be any mappings (not necessarily homomorphisms), and let $b\in G$ be a nonzero element. Then there exists a finite colouring $\chi$ of $G$ such that the nonhomogeneous equation $$\sum_{i=1}^l\left(f_i(x_i)-f_i(y_i)\right) = b$$ has no solutions $x_i, y_i$ with $\chi(x_i) = \chi(y_i)$ for $i = 1,\dots, l$.\end{theorem} A characterisation of partition regularity of a single equation can be easily derived from this result. \begin{theorem}\label{lem:nonhomogenous-one_eq} Let $R$ be a ring and let $M$ be an $R$-module. Let $a_1,\dots, a_l\in R$, let $b\in M$ be nonzero, and write $a=\sum_{i=1}^l a_i$. The following conditions are equivalent: \begin{enumerate} \item\label{lem:nonhomogenous-one_eq1} The equation $\sum_{i=1}^l a_i m_i=b$ is partition regular over $M$. \item\label{lem:nonhomogenous-one_eq2} The equation $\sum_{i=1}^l a_i m_i=b$ has a constant solution in $M$. \item\label{lem:nonhomogenous-one_eq3} $b\in aM$.\end{enumerate} \end{theorem} \begin{proof} The equivalence of \eqref{lem:nonhomogenous-one_eq2} and \eqref{lem:nonhomogenous-one_eq3} is trivial, and so is the fact that both these conditions imply \eqref{lem:nonhomogenous-one_eq1}. We will now prove that \eqref{lem:nonhomogenous-one_eq1} implies \eqref{lem:nonhomogenous-one_eq3}. Suppose for the sake of contradiction that the equation $\sum_{i=1}^l a_i m_i=b$ is partition regular over $M$, but $b\not\in aM$. Passing to the quotient module $M/aM$ over the ring $R/aR$, we get that the equation $\sum_{i=1}^l \bar{a}_im_i=\bar{b}$ is partition regular over $M/aM$ and $\bar{b}\neq 0$. Consider the maps $f_i\colon M/aM \to M/aM$ given by $f_i( m)=a_i m$. Applying Theorem \ref{lemmaStraus}, we obtain a colouring $\chi$ of $M/aM$ such that the equation $$\sum_{i=1}^l\left(f_i(x_i)-f_i(y_i)\right) = \bar{b}$$ has no solutions $x_i, y_i \in M/aM$ with $\chi(x_i) = \chi(y_i)$ for $i \in \{ 1,\dots, l\}$. Since the equation $\sum_{i=1}^l \bar{a}_im_i=\bar{b}$ is partition regular over $M/aM$, it has a solution $(m_1,\dots, m_l)$ that is monochromatic with respect to $\chi$. Since $\bar{a}=\sum_{i=1}^l\bar{a}_i=0$, we can rewrite this as $$\sum_{i=1}^l(\bar{a}_i m_i-\bar{a}_i m_1)=\bar{b},$$ which contradicts Straus' result. \end{proof} In order to study partition regularity for more general nonhomogeneous equations, we need to consider module homomorphisms with very special properties. \begin{definition}\label{defHmodules} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ an $R$-module.
Denote $$Z_R(I,M)=\{\varphi\in \Hom_R(I,M) \mid \varphi(t)\in tM \text{ for all } t\in I\}.$$ This is an $R$-submodule of the module $\Hom_R(I,M)$ of all homomorphisms from $I$ to $M$. We call a homomorphism $\varphi \in Z_R(I,M)$ \emph{principal} if there exists $m\in M$ such that $\varphi(t)=tm$ for all $t\in I$. We denote the submodule of principal homomorphisms by $B_R(I,M)$ and the quotient module by $H_R(I,M)=Z_R(I,M)/B_R(I,M)$.\end{definition} The construction of $H_R(I,M)$ is functorial in $M$, in the sense that a homomorphism $f\colon M\to N$ induces by composition with $f$ a homomorphism $H_R(I,M) \to H_R(I,N)$. If $R$ is noetherian and $M$ is a finitely generated $R$-module, then the modules $H_R(I,M)$ are finitely generated, being subquotients of $\Hom_R(I,M)$. Our aim is to find sufficient conditions for the module $H_R(I,M)$ to vanish, or---equivalently---for every homomorphism $\varphi\in Z_R(I,M)$ to be principal. This will allow us to conclude that certain systems of equations are not partition regular. \begin{theorem}\label{thmnonhomsyst} Let $R$ be a ring and let $M$ be an $R$-module. Let $\mathbf{A}=(a_{ij})$ be a $k\times l$ matrix with entries in $R$ and let $\mathbf{b}\in M^k$ be nonzero. Write $a_i =\sum_{j=1}^l a_{ij}$ and denote by $I$ the ideal $I=(a_1,\dots,a_k)R$. Assume that $H_R(I,M)=0$. The following conditions are equivalent: \begin{enumerate} \item\label{thmnonhomsyst1} The equation $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$. \item\label{thmnonhomsyst2} The equation $\mathbf{Am}=\mathbf{b}$ has a constant solution in $M$. \end{enumerate}\end{theorem}\begin{proof} It is obvious that \eqref{thmnonhomsyst2} implies \eqref{thmnonhomsyst1}. For the opposite implication, assume that $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$. For any vector $\mathbf{r}=(r_1,\dots,r_k)^{\intercal}\in R^k$, the single equation $\mathbf{r}^{\intercal}\mathbf{Am}=\mathbf{r}^{\intercal}\mathbf{b}$ obtained by taking a linear combination of rows of $\mathbf{Am}$ with coefficients from $\mathbf{r}$ is still partition regular. Applying to this equation Theorem \ref{lem:nonhomogenous-one_eq}, we conclude that \begin{equation}\label{eqnsysinsub} \mathbf{r}^{\intercal}\mathbf{b}=\sum_{i=1}^k r_i b_i \in \left(\sum_{i=1}^k r_i a_i\right)M.\end{equation} (Theorem \ref{lem:nonhomogenous-one_eq} can only be applied if $\mathbf{r}^{\intercal}\mathbf{b}\neq 0$, but the conclusion is obvious otherwise.) Define a map $\varphi\colon I \to M$ by putting $\varphi(\sum_{i=1}^k r_i a_i) = \sum_{i=1}^k r_i b_i$. This is well-defined, since if $\sum_{i=1}^k r_i a_i=\sum_{i=1}^k r_i' a_i$ for some $ r_i, r_i'\in R$, then applying \eqref{eqnsysinsub} to $\mathbf{r}=(r_1-r_1',\dots,r_k-r_k')^{\intercal}$, we get $\sum_{i=1}^k r_i b_i=\sum_{i=1}^k r_i' b_i$. Applying \eqref{eqnsysinsub} again, we get that $\varphi \in Z_R(I,M)$. Since $H_R(I,M)=0$, there exists $m\in M$ with $\varphi(t)=tm$ for all $t\in I$. Taking $t\in\{a_1,\dots,a_k\}$ shows that the constant vector $\mathbf{m}=(m,\dots,m)^{\intercal}\in M^l$ is a solution of $\mathbf{Am}=\mathbf{b}$.\end{proof} We begin our study of the modules $H_R(I,M)$ with a proposition that gathers a few of their simple properties. \begin{proposition}\label{prop:Hmodbasic} Let $R$ be a ring, $I$ an ideal of $R$, and let $M$ and $\{M_{\lambda}\}_{\lambda \in \Lambda}$ be $R$-modules. \begin{enumerate} \item \label{prop:Hmodbasicprod}$H_R(I,\prod_{\lambda \in \Lambda} M_\lambda) = \prod_{\lambda \in \Lambda} H_R(I,M_{\lambda})$.
\item \label{prop:Hmodbasicsum} Assume that $I$ is finitely generated. Then $H_R(I,\bigoplus_{\lambda \in \Lambda} M_\lambda) = \bigoplus_{\lambda \in \Lambda} H_R(I,M_{\lambda})$. \item \label{prop:Hmodbasicann} If $J=\mathrm{ann}(M)$, then $H_{R}(I,M)=H_{R/J}((I+J)/J,M)$. \item \label{prop:Hmodbasicmult}Let $S\subset R$ be a multiplicative set. If $I$ is finitely generated, then $S^{-1} H_R(I,M)$ is a submodule of $ H_{S^{-1}R}(S^{-1}I,S^{-1}M).$\end{enumerate} \end{proposition} \begin{proof} \eqref{prop:Hmodbasicprod}, \eqref{prop:Hmodbasicsum} Immediate. \eqref{prop:Hmodbasicann} It follows from the definition of $Z_R(I,M)$ that every $\varphi\in Z_R(I,M)$ factors through $I/(I\cap J) \cong (I+J)/J$. The equality follows easily from this. \eqref{prop:Hmodbasicmult} Any homomorphism $\varphi\in\Hom_R(I,M) $ induces by localisation a homomorphism $\varphi_S\in \Hom_{S^{-1}R}(S^{-1}I,S^{-1}M)$, giving rise to a map $$S^{-1} \Hom_R(I,M) \to \Hom_{S^{-1}R}(S^{-1}I,S^{-1}M).$$ This maps $S^{-1} Z_R(I,M)$ to $ Z_{S^{-1}R}(S^{-1}I,S^{-1}M)$ and $S^{-1} B_R(I,M)$ to $ B_{S^{-1}R}(S^{-1}I,S^{-1}M)$, thus inducing a map $\Phi\colon S^{-1} H_R(I,M) \to H_{S^{-1}R}(S^{-1}I,S^{-1}M).$ To prove that $\Phi$ is injective, we need to show that if the image of some $\varphi \in Z_R(I,M)$ is a principal homomorphism $\varphi_S \in B_{S^{-1}R}(S^{-1}I,S^{-1}M)$, then there exists some $s\in S$ such that $s\varphi$ is a principal homomorphism. Suppose that $\varphi$ is as above. Since $\varphi_S$ is principal, there exists $m/s \in S^{-1}M$ such that $\varphi(t)/1=tm/s$ in $S^{-1}M$ for all $t\in I$. This means that for all $t\in I$ there exists $s_t\in S$ such that $s_t(s\varphi(t)-tm)=0$. Since the ideal $I$ is finitely generated, say by $t_1,\dots,t_k$, we can assume that $s_t$ is independent of $t$ by replacing $s_t$ with $s'=s_{t_1}\cdots s_{t_k}$. This shows that $s'(s\varphi(t)-tm)=0$ for all $t\in I$ and hence $s's\varphi$ is a principal homomorphism, finishing the proof. \end{proof} The following proposition gives some sufficient conditions for the modules $H_R(I,M)$ to vanish. \begin{proposition}\label{prop:Hmodules} Let $R$ be a ring, $I$ an ideal of $R$, and $M$ an $R$-module. \begin{enumerate} \item\label{prop:Hmodulesprin} If $I$ is principal, then $H_R(I,M)=0$. \item \label{prop:Hmodulesdom} If $R$ is a domain and $M$ is a torsion-free module, then $H_R(I,M)=0$. \item \label{prop:HmodulesDed} If $R$ is a Dedekind domain, then $H_R(I,M)=0$. \item \label{prop:Hmodulesred} If $R$ is a reduced ring with finitely many minimal prime ideals (in particular, any reduced noetherian ring), then $H_R(I,R)=0$. \end{enumerate}\end{proposition} \begin{proof} \eqref{prop:Hmodulesprin} Assume that $I=sR$ is a principal ideal and $\varphi\in Z_R(I,M)$. Write $\varphi(s)=sm$ with $m\in M$. Since $\varphi$ is an $R$-module homomorphism, it is clear that $\varphi(rs)=rsm$ for all $r\in R$ and hence $\varphi$ is principal. \eqref{prop:Hmodulesdom} Assume that $R$ is a domain. The claim is clear if $I=0$, so assume that $I\neq 0$. Choose some nonzero $s\in I$ and write $\varphi(s)=sm$ for some $m\in M$. For every $t\in I$ we have $$s\varphi(t)=\varphi(st)=t\varphi(s)=stm.$$ Since $M$ is torsion-free, we get $\varphi(t)=tm$. Therefore $\varphi$ is principal. \eqref{prop:HmodulesDed} If $R$ is a discrete valuation ring, the claim follows from \eqref{prop:Hmodulesprin} as every ideal of $R$ is principal.
In general, the proof follows from a standard localisation argument using Proposition \ref{prop:Hmodbasic}.\eqref{prop:Hmodbasicmult}; indeed, for every maximal ideal $\mathfrak p$ of $R$, $R_{\mathfrak p}$ is a discrete valuation ring, so we get $H_R(I,M)_{\mathfrak p}\subset H_{R_{\mathfrak p}}(I_{\mathfrak p},M_{\mathfrak p})=0$, and hence $H_R(I,M)=0$ by \cite[Lemma 2.8]{book:Eisenbud}. \eqref{prop:Hmodulesred} Choose a map $\varphi \in Z_R(I,R)$. We will show that $\varphi$ is principal. Let $\mathfrak p_1, \dots, \mathfrak p_n$ be the minimal prime ideals of $R$. By Proposition \ref{prop:Hmodules}.\eqref{prop:Hmodulesdom} and Proposition \ref{prop:Hmodbasic}.\eqref{prop:Hmodbasicann}, the modules $H_R(I,R/\mathfrak p_i)=H_{R{/}\mathfrak p_i}((I+\mathfrak p_i){/}\mathfrak p_i,R/\mathfrak p_i)$ vanish for all $i$. Composing $\varphi$ with the canonical projections $R\to R/\mathfrak{p_i}$ and using the fact that $H_R(I,R/\mathfrak p_i)=0$, we obtain elements $r_i\in R$ such that \begin{equation}\label{eq:Hmodred} \varphi(t)-tr_i \in \mathfrak p_i \quad \quad \text{ for all } t\in I.\end{equation} After renumbering the prime ideals $\mathfrak p_1, \dots, \mathfrak p_n$, we may assume that $I$ is contained precisely in $\mathfrak p_1, \dots, \mathfrak p_m$ for some $0\leq m\leq n$. By prime avoidance (see \cite[Lemma 3.3]{book:Eisenbud}), there exists an element $t_0 \in I \setminus \bigcup_{i=m+1}^n \mathfrak p_i$. Since $\varphi\in Z_R(I,R)$, there exists $r\in R$ such that $\varphi(t_0)=t_0r$. Applying \eqref{eq:Hmodred} for $t=t_0$, we get $t_0r-t_0r_i \in \mathfrak p_i$ for all $i$, and hence $r-r_i \in \mathfrak p_i$ for $ m+1\leq i \leq n$. We claim that $\varphi(t)=tr$ for all $t\in I$. Since the ring $R$ is reduced, $\mathfrak p_1 \cap \dots \cap \mathfrak p_n = \mathrm{nil}(R) =0$ (see \cite[Corollary 2.12]{book:Eisenbud}), and hence it is enough to check that $\varphi(t)-tr \in \mathfrak p_i$ for all $i$. This is true for $i \leq m$ since in this case both $\varphi(t)$ and $tr$ lie in $\mathfrak p_i$, and for $i \geq m+1$ since in this case both $\varphi(t)-tr_i$ and $r-r_i$ lie in $\mathfrak p_i$, so that $\varphi(t)-tr=(\varphi(t)-tr_i)-t(r-r_i)\in \mathfrak p_i$. \end{proof} In the remaining part of this section we give examples in which the modules $H_R(I,M)$ do not vanish. Such examples are rather easy to construct if $M$ is not assumed to be finitely generated, but are more involved otherwise. We give three examples of triples $(R,I,M)$ for which $H_R(I,M)\neq 0$: \begin{enumerate} \item[(a)] when $R$ is a noetherian domain and $M$ is a torsion and not finitely generated $R$-module; \item[(b)] when $R$ is a local artinian ring and $M=R$; \item[(c)] when $R$ is a noetherian domain and $M$ is a finitely generated (torsion) $R$-module.\end{enumerate} \begin{example} Let $k$ be a field, $R=k[X,Y]$, $I=(X,Y)$, and $M=k(X,Y)/k[X,Y]$. Then $H_R(I,M)\neq 0$. \end{example} \begin{proof} For $f\in k(X,Y)$, we denote by $\overline{f}$ its image under the quotient map $k(X,Y)\to k(X,Y)/k[X,Y]$. Consider the unique $R$-linear map $\varphi\colon I \to M$ such that $\varphi(X)=\overline{Y^{-1}}, \varphi(Y)=\overline{YX^{-1}}$. Since $tM=M$ for all nonzero $t\in R$, it is clear that $\varphi \in Z_R(I,M)$. We claim that $\varphi \not\in B_R(I,M)$. Indeed, suppose otherwise and write $\varphi(t)=t\overline{f}$ for some $f\in k(X,Y)$. Comparing the values of $\varphi(X)$ and $\varphi(Y)$, we get $Xf-Y^{-1}=g$ and $Yf-YX^{-1}=h$ for some $g,h \in k[X,Y]$. Thus $Yg-Xh=Y-1$, which gives a contradiction, since the left-hand side lies in the ideal $(X,Y)$ while $Y-1$ does not.
\end{proof} \begin{example}\label{examplemod1} Let $k$ be a field and consider the ring \begin{equation}\label{ringcount} R=k[X,Y,Z,W]/\left((X,Y,Z,W)^3+(Z^2, ZW, W^2, YW, YZ-XW)\right).\end{equation} We denote by $x,y,z,w\in R$ the images of $X,Y,Z,W$ in $R$. Let $I=(x,y)R$. Then $H_R(I,R)\neq 0$. \end{example} \begin{proof} The ring $R$ is a $10$-dimensional $k$-algebra with basis $1, x, y, z, w, x^2, xy, y^2, xz, xw$. The ideal $I$ is a vector subspace with basis $\mathcal B$ given by $x, y, x^2, xy, y^2, xz, xw$. Define the map $\varphi\in \Hom_R(I,R)$ on the basis $\mathcal B$ by mapping $x$ to $\varphi(x)=xz$ and mapping the remaining elements of $\mathcal B$ to $0$. It is easy to check that this is an $R$-module homomorphism. We claim that $\varphi\in Z_R(I,R)$. Any $t\in I$ can be written as $t =\lambda x + s$ with $\lambda \in k$ and $s\in J=(y, x^2, xz, xw)R$. We claim that $\varphi(t)\in tR$. This is clear if $\lambda = 0$, since then $\varphi(t)=0$. On the other hand if $\lambda \neq 0$, we may rewrite $t$ in the form $t=ux+vy$ for a unit $u\in R^*$ and $v\in R$. Then $$\varphi(t)=\lambda x z = (ux+vy)(z-u^{-1}vw) $$ lies in the ideal $tR$. We now claim that $\varphi \notin B_R(I,R)$. Suppose the contrary. Then there exists $r\in R$ such that $\varphi(t)=tr$ for all $t\in I$. Putting $t=x$ and $t=y$ gives $xr=xz$ and $yr=0$. Writing $r$ in the basis of $R$ and using these equalities gives a contradiction.\end{proof} \begin{example}\label{examplemod2} Let $R=k[X,Y,Z,W]$, $I=(X,Y)R$, and let $M$ denote the quotient ring considered in \eqref{ringcount}, now regarded as an $R$-module. Then $H_R(I,M)\neq 0$.\end{example} \begin{proof} Follows from Example \ref{examplemod1} and Proposition \ref{prop:Hmodbasic}.\eqref{prop:Hmodbasicann}.\end{proof} \begin{theorem}\label{corprinhom} Let $R$ be a ring and let $M$ be an $R$-module. Assume that one of the following assumptions holds: \begin{enumerate} \item[(a)] $R$ is a domain and $M$ is a torsion-free module; or \item[(b)] $R$ is a Dedekind domain; or \item[(c)] $R$ is a reduced ring with finitely many minimal prime ideals and $M=R$.\end{enumerate} Let $\mathbf{A}$ be a $k\times l$ matrix with entries in $R$ and let $\mathbf{b}\in M^k$ be nonzero. The following conditions are equivalent: \begin{enumerate} \item\label{corprinhom1} The equation $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$. \item\label{corprinhom2} The equation $\mathbf{Am}=\mathbf{b}$ has a constant solution in $M$.\end{enumerate}\end{theorem} \begin{proof} Follows immediately from Theorem \ref{thmnonhomsyst} and Proposition \ref{prop:Hmodules}.\end{proof} Note that vanishing of the modules $H_R(I,M)$ is only needed for the particular method of the proof, and not necessarily for the claim of Theorem \ref{corprinhom}. For example, if in Example \ref{examplemod1} the field $k$ (and hence the ring $R$) is finite, then the claim of Theorem \ref{corprinhom} certainly holds (since we may give each element of $R$ a different colour), even though $H_R(I,M)$ does not vanish. We do not know any example of a ring $R$ and an $R$-module $M$ for which the conclusion of Theorem \ref{corprinhom} fails. \begin{question} Does there exist a ring $R$, an $R$-module $M$, a $k\times l$ matrix $\mathbf{A}$ with entries in $R$, and a nonzero $\mathbf{b} \in M^k$ such that the equation $\mathbf{Am}=\mathbf{b}$ is partition regular over $M$ even though it does not have any constant solutions in $M$? Can one choose $M=R$? \end{question} \bibliographystyle{amsplain}
\section{Introduction} During the last few decades, several countries across the world have faced the ageing population problem. It is well known that the ageing of the population threatens the sustainability of Pay-As-You-Go pension systems, which essentially rely on a sufficiently high ratio of workers to pensioners. To tackle this issue, many governments have introduced reforms that led to deep changes in their pension systems.\\ In particular, in Italy, the \textit{Dini reform} (introduced in 1995) instituted a completely different pension system for the new classes of workers. Before the reform, the public pension was provided through a defined benefit salary-related system. Namely, the pension provided was simply a service-based percentage of the last salary of the worker. After the reform, the public pension has been provided through a contribution-based system, where the worker himself contributes to build up his pension.\\ This remarkable change generated two different classes of workers, the pre-reform workers and the post-reform workers, and led to a pension gap between their corresponding pension rates and replacement ratios.\\ The aim of this paper is to illustrate a possible way to fill this pension gap by investing optimally in a defined contribution (DC) pension fund during the working life. In order to achieve this goal, a stochastic optimal control problem with suitable annual targets is solved. We consider two different salary growths to represent two different classes of workers: a linear salary growth (blue-collar workers) and an exponential salary growth (white-collar workers).\\ A numerical section illustrates the practical application of the model. Our results are in line with previous results on the comparison between the old pre-reform Italian pension and the new post-reform one, see \citeasnoun{borella-codamoscarola-gde} and \citeasnoun{borella-codamoscarola-jpef}. We find that the gap between the salary-related pension and the contribution-based pension is larger for workers with a dynamic career than for workers with a stagnant career. A slow salary increase associated with late retirement can produce a new pension that is almost equal to (or even exceeds) the old pension. As expected, the gap is easier to cover in the case of late retirement, and vice versa. Interestingly, the gap increases when the rate of growth of the salary increases.\\ The remainder of the paper is organised as follows. In Section 2 we introduce the milestones of the Italian pension system and the consequences of the Dini reform. In Section 3 we build the model and the corresponding stochastic optimal control problem. In Section 4 we derive the closed-form solutions to the problem for the two different salary growths considered. In Section 5 we carry out some simulations in order to test the model and show the behaviour of the optimal investment strategy and the optimal fund growth in a base case scenario. In Section 6 we perform a sensitivity analysis of the pension distribution with respect to the retirement age. In Section 7 we investigate the break-even points that lead the ``new'' pension to equal the ``old'' pension. Section 8 concludes. \section{The Italian pension provision} The Italian pension system has been modified through a series of legislative measures taken by different governments during the 90s.
We only hint at the \emph{Dini reform}, which is useful in order to understand the following model.\\ The Dini reform (law 335, 1995) changed the system for the calculation of the pension from a salary-based system to a contribution-related system. The workers shifted from one system to the other depending on the contributions paid by the end of 1995. Therefore, three different situations were created: \begin{enumerate} \item The workers with at least eighteen years of contributions on 31/12/1995 remained under the salary-related system and therefore they were not touched by the reform. \item The workers with less than eighteen years of contributions on 31/12/1995 were subjected to a mixed method. \item The workers who were first employed after 31/12/1995 are subjected to the contribution-based system. \end{enumerate} In this paper, we compare the method for the calculation of the public pension before the Dini reform with the ``new'' method for the calculation of the public pension in Italy after the reform.\\ The formula for the pension rate $P_o$ before the Dini reform was \begin{equation} \label{oldpension} P_o = 0.02\cdot T\cdot S(T), \end{equation} where $S(T)$ is the final salary and $T$ indicates the number of past working years. In the following, we shall call the pension rate \eqref{oldpension} the ``old pension''. The old pension is a percentage of the product of the last salary by the years of service, and the related net replacement ratio --- which is the ratio between the first pension rate received after retirement and the last salary received before retirement --- is given by $$ \Pi_o = \frac{P_o}{S(T)} = 0.02\cdot T. $$ In contrast, the new formula for the pension rate $P_n$ is described by \begin{equation} \label{newpension} P_n=\beta\cdot c\cdot \sum_{t=0}^{T-1}S(t)(1+w)^{T-t}, \end{equation} where \begin{itemize} \item $\beta$ is the conversion coefficient between a lump sum and the annuity rate, and its choice should reflect actuarial fairness. If it does, then $\beta=1/\ddot{a}_x$, where $\ddot{a}_x$ is the single premium of a lifetime annuity issued to a policyholder aged $x$, i.e., \begin{equation}\label{annuity} \ddot{a}_x= \sum_{n=1}^{\omega-x} \, _np_x \cdot v^n, \end{equation} where $\omega$ is the extreme age, $v=1/(1+r)$ is the annual discount factor and $_np_x$ is the survival probability from age $x$ to age $x+n$. \item $c$ is the contribution percentage for the calculation of the pension rate (assumed to be constant during the whole working life and, for employees, set by law at 33\%). \item $T$ indicates the number of past working years. \item $w$ is the mean real GDP growth rate. \end{itemize} In the following, we shall call the pension rate \eqref{newpension} the ``new pension''. Compared to the old pension, the new pension is given by a more involved formula and depends not only on the entire salary history but also on several other parameters. Its related net replacement ratio is $$ \Pi_n = \frac{P_n}{S(T)}. $$ \section{The optimization problem} The main idea of this paper is the following.\\ \indent We assume that the worker was first employed after 31/12/1995 and will receive the new pension \eqref{newpension} from the first pillar (i.e., the public pension), but he wants to integrate it with additional income from the second pillar (i.e., the private pension funds) to obtain a pension rate that is as close as possible to the one that he would have obtained with the old pension rule \eqref{oldpension}.
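To make the comparison concrete before setting up the control problem, the following Python sketch evaluates \eqref{oldpension}, \eqref{newpension} and \eqref{annuity}. It is only an illustration: the flat one-year survival probability and all numerical inputs are made-up assumptions for the demo, not values used or calibrated in this paper.

\begin{verbatim}
# Illustrative sketch of the pension formulas; every numerical input
# (survival model, salary path, rates) is an assumption for the demo.

def annuity_price(x, r, omega=110, p_one_year=0.99):
    """Single premium of the lifetime annuity: sum over n >= 1 of
    nPx * v^n, with a toy constant one-year survival probability."""
    v = 1.0 / (1.0 + r)
    price, surv = 0.0, 1.0
    for n in range(1, omega - x + 1):
        surv *= p_one_year            # builds nPx year by year
        price += surv * v ** n
    return price

def old_pension(T, S):
    """P_o = 0.02 * T * S(T): the salary-related rule."""
    return 0.02 * T * S[T]

def new_pension(T, S, c, w, beta):
    """P_n = beta * c * sum_{t=0}^{T-1} S(t) * (1 + w)^(T - t)."""
    return beta * c * sum(S[t] * (1.0 + w) ** (T - t) for t in range(T))

T, age, r, c, w = 40, 65, 0.02, 0.33, 0.015
S = [30000.0 * (1.0 + 0.02 * t) for t in range(T + 1)]  # toy linear salary
beta = 1.0 / annuity_price(age, r)  # actuarially fair conversion coefficient
P_o, P_n = old_pension(T, S), new_pension(T, S, c, w, beta)
print(P_o, P_n)                     # pension rates
print(P_o / S[T], P_n / S[T])       # net replacement ratios
\end{verbatim}

With these toy inputs the qualitative point is already visible: $\Pi_o=0.02\cdot T$ is pinned at $80\%$ for $T=40$, whereas $\Pi_n$ depends on the whole salary path, on $w$ and on $\ddot{a}_x$, which is precisely the source of the pension gap studied below.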
Since the Dini reform, and apart from a few exceptions accessible only to the self-employed (not considered in this paper), pension funds in Italy are defined contribution (DC) and not defined benefit (DB). This means that the contribution to be paid into the fund is fixed a priori in the scheme's rules and the benefit obtained at retirement depends on the investment performance of the fund in the accumulation period.\\ \indent We assume that at time 0 the worker joins a DC pension scheme and has control over the investment strategy to be adopted on the time horizon $[0,T]$, where $T$ is the retirement time. The financial market consists of two assets, a riskless asset $B=\{B(t)\}_{t\geq 0}$ and a risky asset $Z=\{Z(t)\}_{t\geq 0}$, whose dynamics are described by \begin{equation} \label{risklessasset} dB(t) = rB(t)dt, \end{equation} \begin{equation} \label{riskyasset} dZ(t) = \mu Z(t)dt + \sigma Z(t)dW(t), \end{equation} where $r$ is a constant rate of interest and $\{W(t)\}_{t\geq 0}$ is a standard Brownian motion defined and adapted on a complete filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$. We assume that the contribution $c(t)$ paid into the fund at time $t$ is a fixed proportion of the salary of the member $$c(t)=kS(t), \quad t\in[0,T],$$ where $k\in(0,1)$ and $S(t)$ is the salary of the member at time $t$. Finally, the proportion of the portfolio invested in the risky asset at time $t\in[0,T]$ is $y(t)$. Hence, the dynamics of the wealth are described by the following SDE \begin{equation} \label{fundSDE} \left\{ \begin{array}{ll} dX(t) = \{[(\mu-r)y(t)+r]X(t)+c(t)\}dt+\sigma y(t)X(t)dW(t)\\ X(0)=x_0 \end{array} \right. \end{equation} where $x_0 \geq 0$ is the initial wealth paid into the fund (it can also be a transfer value from another pension fund). Because the aim of the worker is to reach a pension rate that is as close as possible to that of the salary-related method, we assume that there exist annual targets $\{F(t)\}_{t=0,1,2,\dots,T}$ that he wants to achieve, and that his preferences are described by the loss suffered when the targets are not met. Thus, we introduce the following quadratic loss (or disutility) function $$L(t,X(t)) = (F(t)-X(t))^2, \quad t\in[0,T].$$ \begin{rem} The use of a quadratic loss is very common in the context of pension funds. Some examples are \citeasnoun{boulier-trussant-florens}, \citeasnoun{boulier-michel-wisnia}, \citeasnoun{cairns}, \citeasnoun{gerrard-haberman-vigna}, \citeasnoun{gerrard-haberman-vigna06}, \citeasnoun{gerrard-hoejgaard-vigna}. Moreover, \citeasnoun{vigna-qf} and \citeasnoun{menoncin-vigna} deeply analyse and discuss the link between the ``utility-based'' approach and the ``target-based'' approach. From a theoretical point of view, the quadratic loss function also penalizes deviations above the target, and this can be considered a drawback of the model. However, the choice of trying to achieve a target and no more than this naturally limits the overall risk of the portfolio: once the desired target is reached, there is no reason for further exposure to risk and therefore the surplus becomes undesirable. This is also in accordance with the fact that the mean-variance approach to portfolio selection has been shown to be equivalent to the minimization of a quadratic loss function: see the seminal papers by \citeasnoun{zhou-li} and \citeasnoun{li-ng}, and, in the context of DC pension schemes, \citeasnoun{vigna-qf}.
The idea that people act by following subjective targets is also accepted in the decision theory literature. For instance, \citeasnoun{kahneman-tversky} support the use of targets in the cost function, and \citeasnoun{bordley-licalzi} investigate and support the target-based approach in decision making under uncertainty.\end{rem} We now need to define the targets. Recalling that the worker's goal is to reach the pre-Dini-reform old pension, we set as final target $F(T)$ the amount that the retiree aged $x$ should pay to an insurance company in order to fill the gap between the previous pension rate and the current one, i.e., \begin{equation} \label{finaltarget} F(T) = (P_o-P_n)\ddot{a}_x , \end{equation} where $\ddot{a}_x$ is the price of the annuity given by \eqref{annuity} to a retiree aged $x$, $P_o$ is given by $(\ref{oldpension})$, and $P_n$ is the continuous formulation of equation $(\ref{newpension})$, namely $$P_n=\beta c \int_0^T S(t)e^{w(T-t)}dt.$$ The interim targets $F(t)$ for $t\in [0,T)$ are set to be the compounded value of the fund plus contributions using the interest rate $r^*$ that matches a continuity condition between interim targets and final target, i.e., \begin{equation}\label{compoundedtargets} F(t)=x_0e^{r^*t}+\int_0^t c(s)e^{r^*(t-s)}ds, \end{equation} with $r^*$ such that\footnote{We will approximate the value $r^*$ with the Newton--Raphson algorithm; a numerical sketch is given at the end of Section \ref{sec:solution-with-salaries}.} \begin{equation} \label{continuitycond} \lim_{t\rightarrow T^-} F(t)=F(T). \end{equation} The worker's goal is to minimize the conditional expected losses that can be experienced by the fund until retirement \begin{equation} \mathbb{E}_{0,x_0}\left[\int_0^T e^{-\rho s}L(s,X(s))ds+e^{-\rho T}L(T,X(T))\right],\end{equation} where $\rho>0$ is the (subjective) intertemporal discount rate.\\ As usual in optimization problems in DC pension schemes, the contribution rate is \emph{not} a control variable, and the only control variable for the worker is the share of portfolio $y(t)$ to be invested into the risky asset at time $t\in[0,T]$. To formulate the optimization problem, we define the performance criterion at time $t$ with wealth $x$, i.e., \begin{equation} \label{Jform} J_{t,x}(y(\cdot))=\mathbb{E}_{t,x}\left[\int_t^T e^{-\rho s}L(s,X(s))ds+e^{-\rho T}L(T,X(T))\right] \end{equation} and the admissible strategies. \begin{definition} An investment strategy $y(\cdot)$ is said to be \textbf{admissible} if $y(\cdot)\in L^2_\mathcal{F}(0,T;\mathbb{R})$. \end{definition} \noindent The minimization problem, then, becomes \begin{equation}\label{min-prob}\mbox{Minimize } J_{0,x_0}(y(\cdot))\end{equation} over the set of admissible strategies. \section{Solution} In order to solve the optimization problem \eqref{min-prob}, the value function is defined as \begin{equation} \label{Vform} V(t,x):=\inf_{y(\cdot)} J_{t,x}(y(\cdot)) \quad \forall(t,x)\in U=[0,T]\times(-\infty,+\infty). \end{equation} \begin{rem} In this work, we neither set boundaries on the values that the fund $X(\cdot)$ can assume, nor set boundaries on the share $y(t)$ to be invested in the risky asset. The existence of a finite lower bound on $X(t)$ would be desirable, as would proper bounds on the investment strategy. The former would be intended to protect the retiree from outliving his assets and not being able to buy a minimum level of pension at time $T$, the latter to comply with the usual prohibition of short-selling and borrowing.
\section{Solution} In order to solve the optimization problem \eqref{min-prob}, the value function is defined as \begin{equation} \label{Vform} V(t,x):=\inf_{y(\cdot)} J_{t,x}(y(\cdot)) \quad \forall(t,x)\in U=[0,T]\times(-\infty,+\infty). \end{equation} \begin{rem} In this work, we set boundaries neither on the values that the fund $X(\cdot)$ can assume, nor on the share $y(t)$ to be invested in the risky asset. The existence of a finite lower bound on $X(t)$ would be desirable, as would proper boundaries on the investment strategy. The former would protect the retiree from outliving his assets and not being able to buy a minimum level of pension at time $T$; the latter would comply with the usual prohibition of short-selling and borrowing. However, adding restrictions on the state variable and the control variable means adding boundary conditions to the problem, and this makes it extremely hard (and often impossible) to solve analytically. Among the few works that treat optimization problems with restrictions in DC pension schemes, see \citeasnoun{digiacinto-gozzi-federico-08} and \citeasnoun{digiacinto-federico-gozzi-vigna-ejor}.\end{rem} We write the HJB equation \begin{equation} \label{HJBmodel} \inf_{y\in\mathbb{R}}[e^{-\rho t}L(t,x)+\mathcal{L}^yV(t,x)]=0, \quad \forall \; (t,x)\in U \end{equation} $$V(T,x)=e^{-\rho T}L(T,x), \quad \forall \; x\in\mathbb{R},$$ where $$\mathcal{L}^yf(t,x)=\frac{\partial}{\partial t}f(t,x)+b(t,x,y)\frac{\partial}{\partial x}f(t,x)+\frac 1 2 \sigma^2(t,x,y)\frac{\partial^2}{\partial x^2}f(t,x)$$ is the infinitesimal operator, and the functions $b(\cdot)$ and $\sigma(\cdot)$ are the drift and the diffusion terms of the process $X=\{X(t)\}_{t\geq 0}$ defined by $(\ref{fundSDE})$.\\ Substituting into $(\ref{HJBmodel})$, we obtain, $\forall(t,x)\in U$, \begin{equation} \label{explicitHJB} \inf_{y\in\mathbb{R}}\left\{e^{-\rho t}(F(t)-x)^2+\frac{\partial V}{\partial t}+[x(y(\mu-r)+r)+c(t)]\frac{\partial V}{\partial x} +\frac 1 2 x^2y^2\sigma^2\frac{\partial^2 V}{\partial x^2}\right\}=0, \end{equation} with the boundary condition \begin{equation} \label{boundarycond} V(T,x)=e^{-\rho T}L(T,x). \end{equation} To ease the notation, let us define \begin{equation} \label{psiform} \psi(t,x,y):=e^{-\rho t}(F(t)-x)^2+\frac{\partial V}{\partial t}+[x(y(\mu-r)+r)+c(t)]\frac{\partial V}{\partial x} +\frac 1 2 x^2y^2\sigma^2\frac{\partial^2 V}{\partial x^2}. \end{equation} Thus, equation $(\ref{explicitHJB})$ becomes \begin{equation} \label{psiequation} \inf_{y\in\mathbb{R}}\psi(t,x,y)=0 \quad \Rightarrow \quad \psi(t,x,y^*)=0. \end{equation} The first and second order conditions are \begin{equation} \label{firstorder} \psi_y(t,x,y^*)=0, \end{equation} \begin{equation} \label{secondorder} \psi_{yy}(t,x,y^*)>0. \end{equation} Therefore, $(\ref{firstorder})$ becomes $$x(\mu-r)\frac{\partial V}{\partial x}+x^2y^*\sigma^2\frac{\partial^2 V}{\partial x^2}=0,$$ so that \begin{equation} \label{opimtcontrol} y^*=-\frac{\mu-r}{\sigma}\frac{1}{x\sigma}\frac{V_x}{V_{xx}}. \end{equation} Moreover, condition $(\ref{secondorder})$ is satisfied if and only if \begin{equation} \label{necessarycondition} x^2\sigma^2\frac{\partial^2 V}{\partial x^2}>0 \quad \Leftrightarrow \quad \frac{\partial^2 V}{\partial x^2}>0. \end{equation} We will show later that this condition is indeed satisfied, so that the solution is a minimum.\\ By substituting $(\ref{opimtcontrol})$ into $(\ref{psiequation})$, we obtain the non-linear PDE \begin{equation} \label{finalHJB} e^{-\rho t}(F(t)-x)^2+V_t+[rx+c(t)]V_x-\frac 1 2 \left(\frac{\mu-r}{\sigma}\right)^2\frac{V_x^2}{V_{xx}}=0. \end{equation} We guess a solution of the form \begin{equation} \label{Vguess} V(t,x)=e^{-\rho t}[\alpha(t)x^2+\beta(t)x+\gamma(t)]. \end{equation} From the boundary condition $(\ref{boundarycond})$, we obtain $$e^{-\rho T}(F(T)-x)^2=e^{-\rho T}[\alpha(T)x^2+\beta(T)x+\gamma(T)], \quad \forall x\in(-\infty,+\infty),$$ so that \begin{equation} \label{boundaryconds} \alpha(T) = 1, \quad \beta(T)=-2F(T), \quad \gamma(T)=[F(T)]^2. \end{equation}
The partial derivatives of $V$ are $$V_t(t,x) = -\rho e^{-\rho t}[\alpha(t)x^2+\beta(t)x+\gamma(t)]+e^{-\rho t}[\alpha '(t)x^2+\beta '(t)x+\gamma '(t)],$$ $$V_x(t,x) = e^{-\rho t}[2\alpha(t)x+\beta(t)], \quad V_{xx}(t,x)=2e^{-\rho t}\alpha(t),$$ and substituting them into $(\ref{opimtcontrol})$, we derive the \emph{optimal investment strategy} at time $t$ with wealth $x$, i.e., \begin{equation} \label{optinvestment} y^*(t,x) = -\frac{\mu-r}{\sigma}\frac{1}{x\sigma}\left(x+\frac{\beta(t)}{2\alpha(t)}\right). \end{equation} Substituting the partial derivatives of $V(\cdot,\cdot)$ into $(\ref{finalHJB})$, we have \begin{align} \label{finalHJB2} &[1-\rho\alpha(t)+\alpha '(t)+2r\alpha(t)-\lambda^2\alpha(t)]x^2+[-2F(t)-\rho\beta(t)+\beta '(t)+r\beta(t)+\nonumber\\ &+2\alpha(t)c(t)-\lambda^2\beta(t)]x+\left[F(t)^2-\rho\gamma(t)+\gamma '(t)+c(t)\beta(t)-\lambda^2\frac{\beta(t)^2}{4\alpha(t)}\right]=0, \end{align} where $\lambda:=(\mu-r)/\sigma$ is the \emph{Sharpe ratio} of the risky asset.\\ Since $(\ref{finalHJB2})$ must hold $\forall(t,x)\in U$, we derive the following system of ODE's \begin{equation} \label{systemODE} \left\{ \begin{array}{lll} \alpha '(t)=[\rho+\lambda^2-2r]\alpha(t)-1=a\alpha(t)-1\\ \beta '(t)= [\rho+\lambda^2-r]\beta(t)+2F(t)-2c(t)\alpha(t)=\tilde{a}\beta(t)+2F(t)-2c(t)\alpha(t)\\ \gamma '(t)= \rho\gamma(t)-F(t)^2-c(t)\beta(t)+\lambda^2\frac{\beta(t)^2}{4\alpha(t)} \end{array} \right. \end{equation} where we have defined $a:=\rho+\lambda^2-2r$ and $\tilde{a}:=a+r$, with the boundary conditions $(\ref{boundaryconds})$. \subsection{Solution of the problem with two different salary evolutions}\label{sec:solution-with-salaries} Two different salaries are compared: a linear salary \begin{equation} \label{lsalary} S_l(t) = S_0(1+g_l t), \quad t\in[0,T], \end{equation} and an exponential salary \begin{equation} \label{esalary} S_e(t) = S_0e^{g_e t}, \quad t\in[0,T], \end{equation} where $S_0$ is the initial salary and $g_i$ ($i=l,e$) is the mean real salary increase. The contribution in the two cases is $$c_i(t)=k_i S_i(t), \quad t\in[0,T], \; i=l,e.$$ \begin{rem} We have selected two simple models for the salary growth for analytical tractability and with the aim of providing closed-form solutions for the optimal investment strategy.\footnote{ For a more accurate and realistic model of salary growth in the Italian context we refer to the micro-simulation model developed by \citeasnoun{borella-codamoscarola-gde} and \citeasnoun{borella-codamoscarola-jpef}.} The two different salary growths may represent two different categories of workers, the exponential growth being associated with white-collar workers with dynamic salary increases, the linear growth with blue-collar workers with smoother salary increases. The distinct $k$'s reflect the assumption that the savings capacity of white-collar workers is higher than that of blue-collar workers. \end{rem}
Therefore, there are also two different families of targets.\\ For the linear salary case (for notational convenience, in the following we will write $g$ and $k$ in place of $g_l$ and $k_l$): \begin{align} \label{linintertarg} F_l(t) &= x_0e^{r^*t}+\int_0^t kS_0(1+gs)e^{r^*(t-s)}ds \\ \nonumber &=\left[x_0+\frac{kS_0}{r^*}+\frac{kgS_0}{(r^*)^2}\right]e^{r^*t}- \frac{kgS_0}{r^*}t-\frac{kS_0}{r^*} \left[1+\frac{g}{r^*}\right], \end{align} \begin{equation} \label{linfintarg} F_l(T)=(\Pi_o-\Pi_n^l)S_l(T)\ddot{a}_x, \end{equation} where $\Pi_n^l$ is the net replacement ratio for the new public pension with linear salary.\\ For the exponential salary case (for notational convenience, in the following we will write $g$ and $k$ in place of $g_e$ and $k_e$): \begin{equation} \label{expintertarg} F_e(t) = x_0e^{r^*t}+\int_0^t kS_0e^{gs}e^{r^*(t-s)}ds = \left[x_0-\frac{kS_0}{g-r^*}\right]e^{r^*t}+\frac{kS_0}{g-r^*}e^{gt}. \end{equation} \begin{equation} \label{expfintarg} F_e(T)=(\Pi_o-\Pi_n^e)S_e(T)\ddot{a}_x, \end{equation} where $\Pi_n^e$ is the net replacement ratio for the new public pension with exponential salary.\\ Solving the system $(\ref{systemODE})$ in both cases, we find the following solutions \begin{equation} \label{alphaform} \alpha(t)=\left(1-\frac{1}{a}\right)e^{-a(T-t)}+\frac{1}{a}, \end{equation} \begin{align*} \beta_l(t) &= -2F_l(T)e^{-\tilde{a}(T-t)}+\frac{k_4}{r}\left(1+\frac{g}{r}\right)e^{(r-\tilde{a})(T-t)} +\frac{1}{\tilde{a}}\left(k_5+\frac{k_1g}{a\tilde{a}}+\frac{k_2g}{\tilde{a}}\right)+\\ &+\frac{k_4g}{r}te^{(r-\tilde{a})(T-t)}+\frac{g}{\tilde{a}}\left(\frac{k_1}{a}+k_2\right)t+ \frac{k_3}{r^*-\tilde{a}}\left[e^{r^*t}-e^{\tilde{a}t+(r^*-\tilde{a})T}\right]+\\ &-\left[\frac{k_4}{r}+\frac{k_5}{\tilde{a}}+\frac{k_4g}{r^2}+\frac{k_1g}{a(\tilde{a})^2}+ \frac{k_2g}{(\tilde{a})^2}+\left(\frac{k_4g}{r}+\frac{k_1g}{a\tilde{a}}+ \frac{k_2g}{\tilde{a}}\right)T\right]e^{-\tilde{a}(T-t)}, \end{align*} \begin{align*} \beta_e(t) = &- 2F_e(T)e^{-\tilde{a}(T-t)} -\frac{k_4}{g-r}e^{(g+a)t-aT}- \frac{\tilde{k_4}}{g-\tilde{a}}e^{gt}+\\ &+\left(\frac{k_4}{g-r}+\frac{\tilde{k_4}}{g-\tilde{a}}\right)e^{\tilde{a}t+(g-\tilde{a})T}- \frac{\tilde{k_3}}{r^*-\tilde{a}}\left[e^{\tilde{a}t+(r^*-\tilde{a})T}-e^{r^*t}\right], \end{align*} \begin{align*} \gamma_i(t) = &+ e^{-\rho(T-t)}\int_t^T\left[F_i(s)^2+c_i(s)\beta_i(s)- \lambda^2\frac{\beta_i(s)^2}{4\alpha(s)}\right]e^{\rho(T-s)}ds+\\ &+F_i(T)^2e^{-\rho(T-t)} \quad \qquad i=l,e, \end{align*} where \begin{align*} \tilde{a}&=a+r, \quad k_1=2kS_0, \quad k_2=\frac{2kS_0}{r^*}, \quad k_3=2x_0+k_2+\frac{2kgS_0}{(r^*)^2}, \quad k_4=k_1-\frac{k_1}{a},\\ k_5&=\frac{k_1}{a}+k_2\left(1+\frac{g}{r^*}\right), \quad \tilde{k_2}=\frac{2kS_0}{g-r^*}, \quad \tilde{k_3}=2x_0-\tilde{k_2}, \quad \tilde{k_4}=\frac{k_1}{a}-\tilde{k_2}. \end{align*} Substituting these solutions into $(\ref{optinvestment})$, we obtain the two optimal investment strategies $$y_l^*(t) \quad \mbox{and} \quad y_e^*(t), \quad t\in[0,T].$$ Finally, we observe that condition $(\ref{necessarycondition})$ is indeed satisfied. We have $$V_{xx}=2e^{-\rho t}\alpha(t)=2e^{-\rho t}\left(e^{-a(T-t)}+a^{-1}(1-e^{-a(T-t)})\right).$$ If $a>0$, both terms in the bracket are clearly positive. If $a<0$, the bracket is again positive, because $a^{-1}<0$ and $1-e^{-a(T-t)}<0$, so that the second term is positive as well.
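As a quick sanity check on the closed-form solution (a minimal sketch, not part of the derivation; it uses the base-case parameter values of the next section), one can integrate the first equation of \eqref{systemODE} backwards from $\alpha(T)=1$ and compare the result with \eqref{alphaform}:
\begin{verbatim}
import numpy as np

rho, r, mu, sigma, T = 0.03, 0.015, 0.06, 0.12, 35.0
lam = (mu - r) / sigma                 # Sharpe ratio
a = rho + lam**2 - 2*r

def alpha_closed(t):
    return (1.0 - 1.0/a)*np.exp(-a*(T - t)) + 1.0/a

# RK4 integration of alpha'(t) = a*alpha(t) - 1 from t = T down to t = 0
f = lambda al: a*al - 1.0
n, alpha = 35000, 1.0                  # terminal condition alpha(T) = 1
dt = T / n
for _ in range(n):
    k1 = f(alpha); k2 = f(alpha - 0.5*dt*k1)
    k3 = f(alpha - 0.5*dt*k2); k4 = f(alpha - dt*k3)
    alpha -= dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
print(alpha, alpha_closed(0.0))        # the two values should agree closely
\end{verbatim}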
\section{Simulations} We have carried out several numerical simulations in order to investigate some quantities of interest to the pension fund member when the model is implemented in practice. In particular, we have investigated to what extent the gap between the old pension and the new pension is filled. We have first considered a base case, and then performed some sensitivity analysis. \subsection{Base case: Assumptions}\label{sec:base-case-assumptions} The assumptions for the base case are the following: \begin{itemize} \item the initial fund is $X(0)=1$; \item the public pension contribution is $c=33\%$; \item the mean GDP growth rate is $w=1.5\%$; \item the riskless interest rate is $r=1.5\%$; \item the drift of the risky asset is $\mu=6\%$; \item the diffusion of the risky asset is $\sigma=12\%$; \item the intertemporal discount factor is $\rho=3\%$; \item the annuity value is calculated with the Italian projected mortality table IP55 (for males born between 1948 and 1960); it is $\ddot{a}_{65}(1.5\%)=17.875$; therefore, the conversion factor from lump sum to annuity is $\beta=1/\ddot{a}_{65}(1.5\%)=0.056$; \item the age at which the member joins the scheme is $30$; \item the time horizon is $T=35$, meaning that the retirement age is $x_T=65$; \item the initial salary is $S(0)=1$. \end{itemize} We have considered two different salary growths, with different values for the annual salary increase $g$ and the annual contribution rate $k$: \begin{itemize} \item exponential salary: $S_{e}(\cdot)$ as in formula $(\ref{esalary})$ with $g_{e}=6\%$ and salary contribution percentage $k_{e}=10\%$; \item linear salary: $S_{l}(\cdot)$ as in formula $(\ref{lsalary})$ with $g_{l}=8\%$ and salary contribution percentage $k_{l}=4\%$. \end{itemize} In the simulations, we have discretized the Brownian motion with a discretization step equal to two weeks ($\Delta t=1/26$), and we have simulated its behaviour over time in 1000 different scenarios. At each time point, we have not applied the optimal unconstrained investment strategy $y^*(t)$, because short-selling and borrowing are likely to be forbidden. We have, instead, implemented the ``sub-optimal'' constrained investment strategies, which are constrained to stay between 0 and 1. In particular, the sub-optimal $y^{so}(t)$ is defined as \begin{equation*} y^{so}(t)=\left\{\begin{array}{ll} 0 & \mbox{for } y^*(t)<0 \\ y^*(t) & \mbox{for } y^*(t)\in[0,1]\\ 1 & \mbox{for } y^*(t)>1 \end{array} \right.\end{equation*} where $y^*(t)$ is the optimal investment strategy.\footnote{In the following figures we have denoted the ``sub-optimal'' constrained investment strategy by $y^*(\cdot)$ and the ``sub-optimal'' constrained fund growth by $X^*(\cdot)$ for the sake of simplicity.}\\ The adoption of constrained investment strategies has the desirable consequence that the fund does not fall below $0$. Constrained suboptimal strategies of this type are not new in the literature, and they are good approximations of the optimal investment strategies. They were applied, e.g., by \citeasnoun{gerrard-haberman-vigna06} and \citeasnoun{vigna-qf} in the context of DC pension schemes with a constant interest rate, and they proved to be satisfactory: with respect to the unrestricted case, the effect on the final results turned out to be negligible and the controls turned out to be more stable over time. In each scenario of market returns, the sub-optimal value $y^{so}(t)$, $t\in[0,T]$, has been calculated and then adopted for the fund growth, as in the sketch below.\\
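The following minimal sketch reproduces this Monte Carlo procedure (illustrative only; \texttt{y\_star} stands for any implementation of the unconstrained strategy \eqref{optinvestment}, assumed vectorized in the wealth argument, and the exponential salary is used):
\begin{verbatim}
import numpy as np

def run_scenarios(y_star, n_sc=1000, T=35.0, dt=1/26, x0=1.0,
                  r=0.015, mu=0.06, sigma=0.12, k=0.1, S0=1.0, g=0.06,
                  seed=1):
    # Simulate n_sc paths of the fund under the constrained ("sub-optimal")
    # strategy y_so = clip(y_star, 0, 1).
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    t = np.arange(n + 1) * dt
    X = np.full((n_sc, n + 1), float(x0))
    for i in range(n):
        c = k * S0 * np.exp(g * t[i])
        y_so = np.clip(y_star(t[i], X[:, i]), 0.0, 1.0)
        dW = np.sqrt(dt) * rng.standard_normal(n_sc)
        X[:, i + 1] = (X[:, i]
                       + (((mu - r)*y_so + r)*X[:, i] + c)*dt
                       + sigma*y_so*X[:, i]*dW)
    return t, X
\end{verbatim}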
\subsection{Base case: Results}\label{sec:base-case-results} Regarding the investment strategy adopted and the evolution of the fund over time in the base case, we present the following results: \begin{itemize} \item Table 1 reports, for both cases of exponential and linear growth, the old pension $P_o$, the new pension $P_n$, the old net replacement ratio $\Pi_o$, the new net replacement ratio $\Pi_n$, the rate of growth of the targets $r^*$ and the last salary $S(T)$; \item the two graphs of Figure \ref{fig-y-new-exp} report some percentiles (graph on the left) and mean and standard deviation (graph on the right) of the distribution, over the 1000 scenarios, of the investment strategy $y^{so}(t)$ for $t\in[0,T]$, for the exponential salary growth; \item the two graphs of Figure \ref{fig-X-new-exp} report some percentiles (graph on the left) and mean and standard deviation (graph on the right) of the distribution, over the 1000 scenarios, of the fund $X^{so}(t)$ for $t\in[0,T]$, for the exponential salary growth; \item the two graphs of Figure \ref{fig-y-new-lin} report some percentiles (graph on the left) and mean and standard deviation (graph on the right) of the distribution, over the 1000 scenarios, of the investment strategy $y^{so}(t)$ for $t\in[0,T]$, for the linear salary growth; \item the two graphs of Figure \ref{fig-X-new-lin} report some percentiles (graph on the left) and mean and standard deviation (graph on the right) of the distribution, over the 1000 scenarios, of the fund $X^{so}(t)$ for $t\in[0,T]$, for the linear salary growth. \end{itemize} \begin{center} \begin{tabular}{lllllll} \hline & $\mathbf{P_o}$ & $\mathbf{P_n}$ & $\mathbf{\Pi_o}$ & $\mathbf{\Pi_n}$ & $\mathbf{r^*}$ & $\mathbf{S(T)}$\\ \hline $\mathbf{S_e}$ & 5.716 & 2.657 & 0.7 & 0.325 & 0.078&8.166\\ \hline $\mathbf{S_l}$ & 2.66 & 1.936 & 0.7 & 0.509 & 0.049&3.8\\ \hline \end{tabular}\vspace{0.3cm}\\ Table 1. Base case: old and new pensions, replacement ratios, target growth rate and final salary for the two salary profiles.
\end{center} \begin{figure}[H] \centering \includegraphics[scale=0.4]{y__new_values_exp.eps} \caption{Optimal investment strategy, exponential salary growth} \label{fig-y-new-exp} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{X__new_values_exp.eps} \caption{Fund growth, exponential salary growth} \label{fig-X-new-exp} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{y__new_values_lin.eps} \caption{Optimal investment strategy, linear salary growth} \label{fig-y-new-lin} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{X__new_values_lin.eps} \caption{Fund growth, linear salary growth} \label{fig-X-new-lin} \end{figure} We observe the following: \begin{enumerate} \item as expected, with exponential salary growth the final salary is significantly larger (more than double) than with linear salary growth; therefore, although the old replacement ratio (which does not depend on the salary growth) is the same ($70\%$), the old pension is much larger with a dynamic career than with a smooth one; \item the investment strategy for the exponential growth is remarkably riskier than that for the linear growth: for the exponential growth, in almost $75\%$ of the cases the portfolio is entirely invested in the risky asset for all $t$, while for the linear growth all percentiles of $y^{so}(t)$ (apart from the $95$th one) decrease gradually from 1 to 0 over time; \item the larger riskiness of the strategy for the exponential growth is due to the larger gap between the old and the new pension: the old replacement ratio is $70\%$ in both cases, but the new replacement ratio (which does depend on the salary growth) is $32\%$ for the exponential growth and $51\%$ for the linear growth; the larger gap to fill for the exponential increase entails riskier strategies, and the smaller gap to fill for the linear increase entails less risky strategies; these results seem to suggest that the new reform affects workers with a dynamic career to a larger extent than workers with a smooth career; \item with both salary increases, on average the investment in the risky asset decreases over time and approaches 0 as retirement approaches; this result is in line with previous results on optimal investment strategies for DC pension schemes (see e.g. \citeasnoun{haberman-vigna}) and is consistent with the lifestyle strategy (see \citeasnoun{cairns}), an investment strategy widely adopted in DC pension funds in the UK. \end{enumerate} Finally, because the aim of this work is to reduce the gap between the old and the new pension, it is fundamental to investigate to what extent the gap is reduced. If the worker joins the pension fund for $T$ years, then at retirement he will receive the new pension $P_n$ plus the additional pension $P_{add}$ provided by the pension fund. Therefore, his total pension will be $P_{tot}$, given by \[ P_{tot}=P_n+P_{add}\] where \[ P_{add}=\frac{X^{so}(T)}{\ddot{a}_{65}(1.5\%)}. \]
In order to compare the old pension with the total pension, \begin{itemize} \item Figure \ref{fig-pens-new-exp} reports, in the case of exponential salary growth, the distribution, over the 1000 scenarios, of the final pension $P_{tot}$ that the retiree will receive; the old pension $P_o$ is also indicated, as a benchmark; \item Figure \ref{fig-pens-new-lin} reports, in the case of linear salary growth, the distribution, over the 1000 scenarios, of the final pension $P_{tot}$ that the retiree will receive; the old pension $P_o$ is also indicated, as a benchmark. \end{itemize} \begin{figure}[H] \centering \includegraphics[scale=0.4]{Pens_exp_hist_65.eps} \caption{Distribution of final pension, exponential salary growth} \label{fig-pens-new-exp} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.4]{Pens_lin_hist_65.eps} \caption{Distribution of final pension, linear salary growth} \label{fig-pens-new-lin} \end{figure} We notice that \begin{itemize} \item in the case of exponential salary growth, the final pension is distributed rather uniformly between 3.3 and 5.6 (the old pension being 5.7), while for the linear salary growth there is a large concentration of the final pension immediately to the left of the target, between 2.5 and the target 2.66; \item what is observed above is due to the fact that it is easier to reach the target in the case of linear increase than in the case of exponential increase, because (as observed in point 3. above) the gap between the old and the new pension is larger with the exponential increase than with the linear increase; \item the fact that it is relatively easy to approach the target with the linear increase is consistent with the rate of increase of the annual targets, $r^*=4.86\%$ (see Table 1), which lies between the return on the riskless asset ($1.5\%$) and the expected return on the risky asset ($6\%$); by contrast, with exponential salary increase the rate of increase of the annual targets is $r^*=7.76\%$, which is larger than the expected return on the risky asset. \end{itemize} \section{Changing the retirement age} In Section \ref{sec:base-case-results} we have set the retirement age equal to 65. It is clear that the results strongly depend on the retirement age. In this section we consider different retirement ages, namely 60, 63, 65, 67 and 70, and investigate how the distribution of the final pension $P_{tot}$ changes accordingly. Table 2 reports, for each retirement age considered $x_T=x$, the annuity value $\ddot{a}_{x}$, the conversion factor from lump sum into pension $\beta_x$, and, for both cases of exponential and linear growth, the old pension $P_o$, the new pension $P_n$, the old net replacement ratio $\Pi_o$ and the new net replacement ratio $\Pi_n$. \begin{center} \begin{tabular}{l|l||ll|llll||llll} \hline $\mathbf{x_T}$ & $\mathbf{T}$ & $\mathbf{\ddot{a}_{x}}$ & $\boldsymbol{\beta_x}$ & $\mathbf{P_o^e}$ & $\mathbf{P_n^e}$ & $\mathbf{\Pi_o^e}$ & $\mathbf{\Pi_n^e}$ & $\mathbf{P_o^l}$ & $\mathbf{P_n^l}$ & $\mathbf{\Pi_o^l}$ & $\mathbf{\Pi_n^l}$\\ \hline 60 & 30 & 20.95 & 0.048 & 3.63 & 1.57 & 0.6 & 0.26 & 2.04 & 1.26 & 0.6&0.37\\ \hline 63 & 33 & 19.11 & 0.052 & 4.78 & 2.15 & 0.66 & 0.3 & 2.4 & 1.63 & 0.66&0.45\\ \hline 65 & 35 & 17.88 & 0.056 & 5.72 & 2.66 & 0.7 & 0.33 & 2.66 & 1.94 & 0.7&0.51\\ \hline 67 & 37 & 16.64 & 0.06 & 6.81 & 3.29 & 0.74 & 0.36 & 2.93 & 2.3 & 0.74&0.58\\ \hline 70 & 40 & 14.81 & 0.068 & 8.82 & 4.56 & 0.8 & 0.41 & 3.36 & 2.98 & 0.8&0.71\\ \hline \end{tabular}\vspace{0.3cm}\\ Table 2. Annuity values, conversion factors, old and new pensions and replacement ratios for the different retirement ages. \end{center}
The graphs in Figure \ref{fig:age} report the pension distribution for the different retirement ages considered in the case of exponential salary growth, while the graphs in Figure \ref{fig:age-lin} report those related to the linear salary increase. \begin{figure}[H] \centering \includegraphics[width=.36\columnwidth]{Pens_hist_60.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_63.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_65.eps} \includegraphics[width=.36\columnwidth]{Pens_hist_67.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_70.eps} \caption{Pension distribution with different retirement ages (exponential growth)} \label{fig:age} \end{figure} \begin{figure}[H] \centering \includegraphics[width=.36\columnwidth]{Pens_hist_60_lin.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_63_lin.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_65_lin.eps} \includegraphics[width=.36\columnwidth]{Pens_hist_67_lin.eps}\includegraphics[width=.36\columnwidth]{Pens_hist_70_lin.eps} \caption{Pension distribution with different retirement ages (linear growth)} \label{fig:age-lin} \end{figure} We notice the following: \begin{itemize} \item The comparison between exponential and linear salary increase confirms, at all ages, what was already observed in Section \ref{sec:base-case-results}: the distribution of the final pension is more spread out in the area to the left of the old pension in the case of exponential salary increase, while it is more peaked immediately to the left of the old pension in the case of linear increase, showing the larger chance of approaching the target in the linear case. \item With both salary increases, we observe that it is easier to reach the target with an older retirement age: a higher retirement age means a lower gap between the old and the total pension. This is intuitive and expected, and is due to different reasons: (i) because of actuarial fairness principles, the higher the retirement age, the lower the price of the lifetime annuity; (ii) a higher retirement age also means that the fund grows for a longer time, meaning a higher lump sum to be converted into pension. These two factors imply that the higher the retirement age, the higher the final pension, everything else being equal. \item In the extreme case of linear salary increase and retirement age equal to 70, the difference between the old and the new pension is so small that investing in the riskless asset for the entire working life (40 years) is sufficient to cover the gap, and the final pension $P_{tot}=3.464$ turns out to be higher than the old pension $P_o=3.36$ in $100\%$ of the cases. \end{itemize} \section{Break even points} In the previous sections we have seen that, expectedly, with both salary growths the old pension is larger than the new pension. This result heavily depends on the choice of the parameters and may no longer hold if some parameters change. We have therefore calculated the values of some key parameters that would equate the old pension to the new pension, leaving the values of the remaining parameters equal to those of the base case. In particular, we have calculated the break even points for the parameters $w$, $\beta$ and $g$.
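Numerically, each break even point is a root of the map sending the parameter to $P_o-P_n$. The following minimal sketch locates the break even point in the mean GDP growth rate $w$ for the exponential base case (assuming, consistently with Table 1, $P_o=\Pi_o S(T)$ and the continuous formulation of $P_n$ introduced earlier):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

beta, c, T, g, S0, Pi_o = 1/17.875, 0.33, 35.0, 0.06, 1.0, 0.7
s = np.linspace(0.0, T, 20001)

def gap(w):                       # P_o - P_n as a function of w
    P_n = beta*c*np.trapz(S0*np.exp(g*s)*np.exp(w*(T - s)), s)
    return Pi_o*S0*np.exp(g*T) - P_n

w_be = brentq(gap, 0.0, 0.15)     # approximately 0.065, cf. the text below
\end{verbatim}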
Figure 9 reports the six plots of the quantity $P_o-P_n$ (the difference between the old and the new pension) as a function of $\beta$, $g$ and $w$, in both cases of exponential and linear salary. In particular, figures \ref{bep-exp-beta}, \ref{bep-exp-g} and \ref{bep-exp-w} report the exponential salary case, while figures \ref{bep-lin-beta}, \ref{bep-lin-g} and \ref{bep-lin-w} report the linear salary case. The red point is the base case; the green point on the $x$-axis is the break even point that equates the old pension to the new pension. \begin{figure}[H]\label{fig:break-even-points} \centering \subfloat[][Break even point for $\beta$ (exp salary)] {\includegraphics[width=.42\columnwidth]{break_even_exp_beta.eps}\label{bep-exp-beta}} \qquad \qquad \subfloat[][Break even point for $\beta$ (lin salary)] {\includegraphics[width=.42\columnwidth]{break_even_lin_beta.eps}\label{bep-lin-beta}} \\ \subfloat[][Break even point for $w$ (exp salary)] {\includegraphics[width=.42\columnwidth]{break_even_exp_w.eps}\label{bep-exp-w}} \qquad \qquad \subfloat[][Break even point for $w$ (lin salary)] {\includegraphics[width=.42\columnwidth]{break_even_lin_w.eps}\label{bep-lin-w}} \\ \subfloat[][Break even point for $g$ (exp salary)] {\includegraphics[width=.42\columnwidth]{break_even_exp_g.eps}\label{bep-exp-g}} \qquad \qquad \subfloat[][Break even point for $g$ (lin salary)] {\includegraphics[width=.42\columnwidth]{break_even_lin_g.eps}\label{bep-lin-g}} \caption{Break even points w.r.t. $\beta$, $g$ and $w$ for exponential and linear salary.} \end{figure} We notice the following: \begin{itemize} \item the difference between the old and the new pension decreases with $\beta$, i.e., it increases with the price of the annuity $1/\beta$. This is obvious, because the old pension is not affected by the price of the annuity, while the new pension increases with $\beta$; therefore, the higher $\beta$, the higher $P_n$ and the lower $P_o-P_n$. With exponential increase, the old and the new pension are equal when $\beta=0.12$, which corresponds to a price of the unitary annuity of approximately 8.33, against the base value of 17.875; with linear increase, the old and the new pension are equal when approximately $\beta=0.078$, which corresponds to a price of the unitary annuity of 12.82, against the base value of 17.875; \item the difference between the old and the new pension decreases with $w$. This is again obvious, because the old pension is not affected by the mean GDP growth rate $w$, while the new pension increases with it; therefore, the larger $w$, the larger the new pension and the lower the gap between the old and the new pension. With exponential increase, the old and the new pension are equal for a mean GDP growth rate of approximately $w=6.5\%$; with linear increase, they are equal for a mean GDP growth rate of approximately $w=3.5\%$; \item the difference between the old and the new pension increases with $g$. This result is interesting, because both the old pension and the new pension are positively correlated with the salary growth $g$, but to a different extent: the old pension is affected by it only via the final salary that is used to calculate the pension income, while the new pension is affected by it via the yearly contributions that are paid into the fund and accumulated until retirement. Figures \ref{bep-exp-g} and \ref{bep-lin-g} seem to suggest that the impact of $g$ on the old pension is larger than that on the new pension, leading to a larger gap when $g$ increases; \item the break even point for the salary increase rate is about $g_e=1\%$ for exponential salary increase, and about $g_l=1.5\%$ for linear increase.
This result indicates that with a sufficiently small salary increase rate the old pension and the new pension coincide; for salary increase rates smaller than the break even point, the new pension is even larger than the old one. This is consistent with what was observed in point 3. in Section \ref{sec:base-case-results}: the effect of the pension reform is more considerable for workers with a dynamic career than for workers with a smooth career. \end{itemize} \section{Conclusions} In this paper, we have tackled the issue of the gap between an ``old'' pre-reform salary-related pension and a ``new'' post-reform contribution-based pension. We have investigated to what extent the gap can be reduced by adding to the state pension another pension provided by a DC pension scheme. We have used stochastic optimal control and a target-based approach to find the optimal investment strategy suitable to cover the gap between the salary-related and the contribution-based pension. The numerical simulations suggest that the gap between the salary-related pension and the contribution-based pension is larger for workers with a dynamic career than for workers with a smooth career, meaning more difficulties for the first class of workers to fill the gap than for the second class, even assuming a higher savings capacity. Intuitively, the gap is easier to cover in the case of late retirement, and vice versa. This result is consistent with the results in \citeasnoun{borella-codamoscarola-jpef}. A slow salary increase associated with a late retirement age can produce a new pension that is almost equal to (or even exceeds) the old pension. Expectedly, the gap reduces when the mean GDP growth rate increases and when the price of the annuity decreases. Interestingly, the gap increases when the rate of increase of the salary increases. \bibliographystyle{dcu}
{ "timestamp": "2018-04-17T02:10:22", "yymm": "1804", "arxiv_id": "1804.05354", "language": "en", "url": "https://arxiv.org/abs/1804.05354" }
\section{Introduction} This paper is devoted to the study of differential forms on Lie groupoids with coefficients. While multiplicative differential forms with values in representations of Lie groupoids have been treated in the literature, see e.g. \cite{CSS}, here we consider the broader context of forms with values in $\VB$-groupoids. Recall that $\VB$-groupoids are, roughly, vector bundles in the category of Lie groupoids. They naturally extend the notion of Lie-groupoid representations by encoding the information of representations {\em up to homotopy} \cite{Gra-Met1} (see also \cite{Arias-Crai1}), with the tangent bundle playing the role of the adjoint representation. The infinitesimal counterparts of $\VB$-groupoids are known as $\VB$-algebroids. In this paper, we introduce the notion of multiplicative differential form with values in a $\VB$-groupoid and establish two main results: first, we provide a purely infinitesimal description of these objects, extending the works in \cite{AC, BC, CSS}; second, we describe a cohomology theory for such differential forms, which we prove to be Morita invariant. Multiplicative differential forms (with trivial coefficients) on Lie groupoids appear in various contexts, often in connection with the (Lie-theoretic) integration of geometric structures: e.g. in symplectic and pre-symplectic groupoids \cite{Wein, bcwz}, which are the global objects integrating Poisson and Dirac structures, respectively. The Lie theory of multiplicative differential forms with trivial coefficients is by now completely understood \cite{AC, BC}. More recently, differential forms with coefficients in a representation and their infinitesimal versions, known as Spencer operators, were studied in \cite{CSS} in an effort to understand the work of Cartan on Lie pseudogroups from a global perspective (see also \cite{CO}). One important application of \cite{CSS} is providing the infinitesimal characterization of certain multiplicative distributions on Lie groupoids, a relevant topic due to its relation with the quantization of Poisson manifolds \cite{Haw} and the Lie theory of Dirac structures \cite{Jotz-Ortiz, Jotz}. There is, however, an additional requirement the distributions studied in \cite{CSS} must satisfy: the tangent space of the manifold of units of the Lie groupoid must be contained in the distribution. By allowing more general $\VB$-groupoids as coefficients, we are able to drop this extra condition and develop a Lie theory for general multiplicative distributions. There is yet another important feature of differential forms on Lie groupoids: their multiplicativity is a cocycle condition on a differential complex, known as the Bott-Shulman complex. Its cohomology is a Morita invariant of the Lie groupoid related to the de Rham cohomology of the associated classifying space \cite{Behrend}. Here, we introduce a differential complex for forms with coefficients generalizing the Bott-Shulman complex and prove the Morita invariance of its cohomology. In view of the recent work on the Morita invariance of $\VB$-groupoids \cite{Hoyo-Ort}, it is to be expected that this complex will be a useful tool in understanding connections on vector bundles over differentiable stacks. \smallskip \paragraph{\bf Statement of results.} To explain our main results it is necessary to recall some facts regarding $\VB$-groupoids. These are presented in greater detail in Section 2. Let $\G \toto M$ be a Lie groupoid and $\V \toto E$ be a $\VB$-groupoid over $\G$.
We denote by $C \to M$ the \textit{core bundle of $\V$}; it is defined as the kernel of the source map (seen as a vector bundle morphism) $\widetilde{\sour}: \V|_M \to E$. The target map $\widetilde{\tar}: \V|_M \to E$ induces a vector bundle morphism $\partial: C \to E$ known as the \textit{core anchor}. The Lie algebroid of $\G$ and the $\VB$-algebroid of $\V$ are denoted by $A \to M$ and $\v \to E$, respectively. Recall that $\v$ is also a double vector bundle, where the second vector bundle structure $\v \to A$ comes from applying the Lie functor to the structure maps of the vector bundle $\V \to \G$. The sections of $\v \to E$ which are vector bundle morphisms from $E \to M$ to $\v \to A$ are known as \textit{linear sections}, and we denote them by $\Gamma_{lin}(E,\v)$. The projection of a linear section onto the section of $A$ it covers is denoted by $\pr: \Gamma_{lin}(E,\v) \to \Gamma(A)$. A fundamental result regarding linear sections is that the right-invariant vector fields of $\V$ coming from $\Gamma_{lin}(E,\v)$ are linear \cite[Prop.~3.9]{Esp-Tor-Vit} (see also Proposition \ref{lemma:right_linear} below). So, there is a derivation $\Delta_\eta: \Gamma(\V) \to \Gamma(\V)$ and a Lie derivative operator $L_{\Delta_\eta}: \Omega(\G, \V) \to \Omega(\G, \V)$ corresponding to each element $\eta \in \Gamma_{lin}(E,\v)$. We review basic facts about linear vector fields and derivations on vector bundles in the Appendix. Recently, in \cite{Esp-Tor-Vit}, derivations of $\VB$-groupoids coming from multiplicative linear vector fields on $\V$ were studied. It is important to stress that their derivations are not related to ours. We are now able to state our main result in the Lie theory of differential forms. First, an element $\vartheta \in \Omega^q(\G, \V)$ is said to be \textit{multiplicative} if it defines a $\VB$-groupoid morphism when seen as a map $T\G \oplus \dots \oplus T\G \to \V$. \begin{theorem}\label{thm:intro1} If $\G \toto M$ is a source 1-connected groupoid, then there is a natural 1-1 correspondence between multiplicative forms $\vartheta \in \Omega^q(\G, \V)$ and triples $(D,l,\theta)$, where $l: A \to \wedge^{q-1} T^*M \otimes C$ is a vector bundle map, $\theta \in \Omega^q(M, E)$ and $D: \Gamma_{lin}(E,\v) \to \Omega^q(M, C)$ satisfies \begin{equation}\label{comp_eq} D(\B \Phi) = - \Phi \circ \theta, \,\, D(f \eta) = f D(\eta) + df \wedge l(\pr(\eta)), \end{equation} where $f \in C^{\infty}(M)$, $\eta \in \Gamma_{lin}(E,\v)$, and $\B \Phi \in \Gamma_{lin}(E, \v)$ is the linear section covering the zero section of $A$ corresponding to a vector bundle morphism $\Phi: E \to C$. Also, the following equations hold, for $\eta_1, \, \eta_2 \in \Gamma_{lin}(E,\v)$, $\alpha, \, \beta \in \Gamma(A)$: \begin{align*} \tag{IM1} D([\eta_1, \eta_2]) & = L_{\nabla_{\eta_1}} D(\eta_2) - L_{\nabla_{\eta_2}} D(\eta_1)\\ \tag{IM2}l([\pr(\eta), \beta]) & = L_{\nabla_{\eta}} l(\beta) - i_{\rho(\beta)} D(\eta)\\ \tag{IM3}i_{\rho(\alpha)}l(\beta) & = - i_{\rho(\beta)} l(\alpha)\\ \tag{IM4}L_{\nabla_\eta} \theta & = \partial(D(\eta))\\ \tag{IM5}i_{\rho(\alpha)} \theta & = \partial(l(\alpha)), \end{align*} where $\nabla$ is the fat representation of $\Gamma_{lin}(E,\v)$ on $\partial: C \to E$ and $\rho: A \to TM$ is the anchor of $A$. The triple $(D,l, \theta)$ is obtained from $\vartheta$ by the formulas: $$ D(\eta)= L_{\Delta_\eta}(\vartheta)|_M, \,\, l(\alpha) = i_{\overrightarrow{\alpha}} \vartheta|_M, \,\, \theta = \vartheta|_M. $$
\end{theorem} We shall refer to the set of equations (IM1)-(IM5) as the \textit{IM equations} (where IM stands for ``infinitesimally multiplicative'') and to a triple $(D,l,\theta)$ satisfying them together with \eqref{comp_eq} as an \textit{IM $q$-form on $A$ with values in $\v$}. In Section 3, we show how Theorem \ref{thm:intro1} recovers previous infinitesimal-global results in the literature regarding differential forms on Lie groupoids. In this section, we also show how multiplicative forms with values in $\VB$-groupoids and IM forms with values in $\VB$-algebroids give rise to a notion of multiplicative forms and IM forms with values in representations up to homotopy. Some of the equations appearing in this context have also appeared in \cite{Wald} in connection with higher gauge theory. A distribution $\H \subset T\G$ on the Lie groupoid $\G$ is called \textit{multiplicative} if it is a subgroupoid of the tangent groupoid $T\G$. These distributions can be studied via Theorem \ref{thm:intro1} by considering the quotient projection $\vartheta: T\G \to T\G/\H$ as a 1-form with values in a $\VB$-groupoid. In Section 6, we show how the resulting IM 1-form on $A$ with values in $\mathrm{Lie}(T\G/\H)$ can be refined to give an infinitesimal description of multiplicative distributions using the notion of \textit{IM distributions}. The infinitesimal-global correspondence between multiplicative and IM distributions (Theorem \ref{thm:IM_dist}) recovers the result of \cite{CSS} characterizing multiplicative distributions when the base manifold of $\H$ is $TM$. It is important to mention that the set of equations satisfied by IM distributions does not depend on the choice of connections on $A$. This improves the characterization of $\VB$-subalgebroids of $TA$ appearing in \cite{Dru-Jotz-Ort}. Our approach to proving Theorem \ref{thm:intro1} is built on ideas developed in \cite{BC, Bur-Drum}. Namely, the multiplicativity of $\vartheta \in \Omega^q(\G,\V)$ can be characterized by a cocycle equation for the corresponding function on the big groupoid $$ \bG = T\G \oplus \dots \oplus T\G \oplus \V^* $$ and the corresponding IM $q$-form on $A$ with values in $\v$ can be obtained by taking Lie derivatives along right-invariant vector fields of $\bG$ coming from linear sections and from other special sections of its Lie algebroid, known as \textit{core} sections. This is done in Section 5. The multiplicative forms with values in a $\VB$-groupoid are also 1-cocycles in a complex $C^{\bullet,q}(\V)$ defined as follows: \begin{equation*} \begin{aligned} C^{p,q}(\V) & = \{\vartheta \in \Omega^q(B_p \G, \pr_1^*\V) \,\,|\,\, \widetilde{\sour} \circ \vartheta = \partial_0^*\theta, \, \, \text{ for some } \theta \in \Omega^q(B_{p-1}\G, t^*E)\}, \end{aligned} \end{equation*} where $B_p\G$ is the space of composable $p$-arrows, $\pr_1: (g_1, \dots, g_p) \mapsto g_1$ is the projection onto the first arrow, $t:(g_1,\dots, g_{p-1}) \mapsto \tar(g_1)$ is induced by the target map, and $\partial_0: (g_1, \dots, g_p) \mapsto (g_2, \dots, g_p)$ is the 0th face map. Note that $C^{\bullet,0}(\V) = C_{\VB}^\bullet(\V)$, the $\VB$-groupoid complex introduced in \cite{Gra-Met1} to calculate the cohomology of $\G$ with values in 2-term representations up to homotopy. Also, the differential $\delta: C^{\bullet,q}(\V) \to C^{\bullet+1, q}(\V)$ (see \eqref{differential} for the explicit formula) is a natural extension of the differential on $C^{\bullet}_{\VB}(\V)$.
Our second main result is the following: \begin{theorem}\label{thm:intro2} The cohomology of the complex $C^{\bullet,q}(\V)$ is a Morita invariant of $\G$. \end{theorem} The construction of the differential complex and some particular examples are presented in Section 4. The proof of Theorem \ref{thm:intro2} is contained in Section 5.4 and also explores the idea of seeing differential forms as functions on a bigger groupoid. In fact, we prove that $C^{\bullet, q}(\V)$ can be embedded as a subcomplex of the differentiable cochain complex of $\bG$ and use this fact to prove the result. As corollaries of Theorem \ref{thm:intro2} we recover the Morita invariance of $\VB$-groupoid cohomology obtained in \cite{Hoyo-Ort} and the Morita invariance of \v{C}ech cohomology of $\G$ with values in the sheaf of $q$-differential forms $\Omega^q$ obtained in \cite{Behrend}. \subsection*{Acknowledgments} This work is based on the second author's doctoral thesis \cite{Egea} carried out at IMPA under the supervision of Henrique Bursztyn and supported by a doctoral scholarship from CNPq. During the final stages of the paper, the second author held a postdoctoral position at ICMC-USP supported by a fellowship from CAPES. The authors would like to thank Henrique Bursztyn for suggesting the problem as well as for many conversations which helped to improve this work. \tableofcontents \section{VB-groupoids and VB-algebroids} In this preliminary section, we recall some properties of $\VB$-groupoids and $\VB$-algebroids, focusing on the study of the right-invariant vector fields on $\VB$-groupoids coming from core and linear sections of the respective $\VB$-algebroids. It will be particularly important to understand how this relationship behaves under dualization. Our main reference here is \cite{Bur-Cab-Hoy} (see also \cite{Mckz2}). \subsection{Definitions} For a vector bundle $E \to M$, we shall refer to the multiplication by non-negative scalars $\h: \mathbb{R}_{\geq 0}\times E \Arrow E$, $\h_\lambda(e)=\lambda e$, as the \textit{homogeneous structure} on $E$. A \textit{$\VB$-groupoid} $\V \toto E$ over $\G \toto M$ is a vector bundle $p_\V: \V \to \G$ such that the homogeneous structure on $\V$ defines Lie groupoid morphisms for each $\lambda$. We shall denote the source and target maps of the groupoid $\G \toto M$ by $\sour, \tar$ and, for $\V \toto E$, by $\widetilde{\sour}, \widetilde{\tar}$. As usual, we shall denote the multiplication on $\G$ by concatenation and on $\V$ by $\bullet$, denote the inversion using the $(\cdot)^{-1}$ notation, and treat the unit maps as inclusions $M \subset \G$, $E \subset \V$. It is important to emphasize that the compatibility between the multiplication and the linear structure on $\V$ can also be presented as the \textit{interchange law}: if $(v_1, v_2), (w_1,w_2) \in \V_{g_1} \times \V_{g_2}$ are composable pairs of vectors, then $(v_1+w_1, v_2+w_2)$ is composable and \begin{equation} (v_1 + w_1)\bullet (v_2 + w_2) = v_1 \bullet v_2 + w_1\bullet w_2. \end{equation} The \textit{core bundle} of $\V$ is the vector bundle $C \to M$ defined as $C:=\ker(\widetilde{\sour})|_M $. There is a short exact sequence of vector bundles over $\G$ \begin{equation}\label{core_ses} 0 \to \tar^*C \to \V \to \sour^*E \to 0 \end{equation} called the \textit{core exact sequence}. The inclusion $\tar^*C \to \V$ defines a map $c \mapsto c_R$ on the level of sections, given as $ c_R(g) = c(\tar(g))\bullet 0_g. $
The target map $\widetilde{\tar}$ restricted to $C$ defines a vector bundle map $\partial:= \widetilde{\tar}|_C: C \to E$ called the \textit{core anchor}. It will be important to introduce left-core sections $c_L \in \Gamma(\V)$ as well: they are defined by $$ c_L(g) = - 0_g \bullet c(\sour(g))^{-1} = 0_g \bullet (c- \partial(c))(\sour(g)), \,\,\, c \in \Gamma(C). $$ \begin{example}\em For a Lie groupoid $\G \toto M$, its tangent bundle is an example of a $\VB$-groupoid, $T\G \toto TM$. The core bundle is the Lie algebroid of $\G$, $A \to M$, with core anchor given by the anchor of the Lie algebroid, $\rho: A \to TM$. Also, for a section $\alpha \in \Gamma(A)$, $\alpha_R$ (resp.\ $\alpha_L$) $\in \mathfrak{X}(\G)$ is the right-invariant (resp.\ left-invariant) vector field, which we denote here by $\overrightarrow{\alpha}$ (resp.\ $\overleftarrow{\alpha}$). \end{example} \begin{example}\label{semi-direct}\em Let $\mathcal{E}=C[1]\oplus E$ \footnote{$C$ has degree $-1$ and $E$ has degree $0$.} be a graded vector bundle carrying a representation up to homotopy (ruth) of $\G \toto M$. We denote by $\partial: C\to E$ the 2-term complex, by $\Psi_g$ the quasi-action on the complex, $$ \begin{CD} C_{\sour(g)} @> \Psi_g >> C_{\tar(g)}\\ @V \partial VV @VV \partial V \\ E_{\sour(g)} @> \Psi_g>>E_{\tar(g)}, \end{CD} $$ and by $\Omega_{g_1, g_2}: E_{\sour(g_2)} \to C_{\tar(g_1)}$ the curvature term associated with the representation (see \cite{Arias-Crai1, Gra-Met1}). \textit{The semidirect product of $\G$ with the ruth} is the vector bundle $\V = \tar^*C \oplus \sour^*E \to \G$ endowed with the $\VB$-groupoid structure $\V \toto E$ given by: \begin{align}\label{Str} \begin{split} \widetilde{\sour}(g, c, e) & = e, \,\, \widetilde{\tar}(g, c, e) = g \cdot e + \partial(c)\\ (g_1, c_1, e_1) \bullet (g_2, c_2, e_2) & = (g_1g_2, c_1 + g_1 \cdot c_2 - \Omega_{(g_1,g_2)}(e_2), e_2)\\ (g,c,e)^{-1} & = (g^{-1}, \,\,\Omega_{(g^{-1}, g)}(e) - g^{-1} \cdot c, \partial(c) + g \cdot e). \end{split} \end{align} \end{example} \begin{example}\em A \textbf{double vector bundle} is a pair of vector bundles $\v \to E$ and $A \to M$ such that $\v$ is a $\VB$-groupoid over $A$ \footnote{A vector bundle can be seen as a Lie groupoid where $\sour=\tar=$ the bundle projection and the multiplication is the fiberwise sum.}. In this case, one can prove that $\v \to A$ is also a $\VB$-groupoid over $E \to M$. \end{example} \paragraph{\bf Differentiation.} A \textit{$\VB$-algebroid} is a pair of Lie algebroids $\v \to E$, $A \to M$ such that $p_\v: \v \to A$ is a vector bundle and the homogeneous structure on $\v$ defines Lie algebroid morphisms for each $\lambda$. In particular, a $\VB$-algebroid is a double vector bundle. Let $C_\v=\ker(\pi) \cap \ker(p_\v)$ be its core bundle, where $\pi: \v \to E$ denotes the bundle projection, and let $\partial_\v: C_\v \to E$ be the restriction of the anchor map $\rho_\v: \v \to TE$ to the core bundle. There is an inclusion $\B: \Gamma(C_\v) \hookrightarrow \Gamma(E,\v)$ defined as follows: \begin{equation}\label{B_inj} \mathcal{B}c (e) = 0_e +_A c(x), \,\, e \in E_x, \end{equation} where $+_A$ is the sum on the vector bundle $\v \to A$. We shall refer to such sections as \textit{core sections}. The Lie functor applied to a $\VB$-groupoid gives rise to a $\VB$-algebroid \cite{Bur-Cab-Hoy}. In the next Proposition, we investigate how core sections integrate to $\VB$-groupoids. \begin{proposition}\label{right_core} If $\v = \mathrm{Lie}(\V)$, then $$ C_\v = C, \,\,\partial_\v = \partial. $$
Moreover, for $c \in \Gamma(C)$, the right-invariant vector field $\overrightarrow{\mathcal{B}c} \in \mathfrak{X}(\V)$ is the vertical lift \footnote{On a vector bundle $E \to M$, any section $u \in \Gamma(E)$ gives rise to a vector field $u^\uparrow \in \frakx(E)$ called \textit{the vertical lift of $u$}, whose flow is given by $e \mapsto e + \epsilon u(x)$, for $e \in E_x$.} of $c_R \in \Gamma(\V)$: $$ \overrightarrow{\mathcal{B}c} = c_R^\uparrow. $$ In particular, $[\B c_1, \B c_2] = 0$, for $c_1, c_2 \in \Gamma(C)$. \end{proposition} \begin{proof} The first statement follows from expressing $T\widetilde{\sour}, \, Tp_\V$ and $T\widetilde{\tar}$ under the decomposition $T_{0_x}\V = \V_x \oplus T_x \G = \V_x \oplus T_xM \oplus A_x$ and $T_{0_x}E = E_x \oplus T_xM$, for $x \in M, \, 0_x \in E_x$. A section $c: M \to C$ induces a flow of bisections of $\V$, $b_{\epsilon}: E \to \V$, by $$ b_{\epsilon}(e) = e+ \epsilon \,c(x), \,\,\, \text{ for } e \in E_x. $$ Upon differentiation, one obtains $\B c (e)= \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} b_{\epsilon}(e)$. So, the flow of $\overrightarrow{\mathcal{B}c}$ is $\Fl^\epsilon_{\overrightarrow{\B c}}(v) = b_\epsilon(\widetilde{\tar}(v)) \bullet v$. But, for $v \in \V_g$, using the interchange law, \begin{align*} b_\epsilon(\widetilde{\tar}(v)) \bullet v & = (\widetilde{\tar}(v)+ \epsilon \,c(\tar(g)))\bullet (v + 0_{g}) = v + \epsilon \, c(\tar(g))\bullet 0_g \\ & = v + \epsilon \, c_R(g). \end{align*} The result now follows from differentiation. Finally, the statement regarding the Lie bracket of $\B c_1, \, \B c_2$ follows from the fact that $[\mu_1^\uparrow, \mu_2^\uparrow] = 0$, for any sections $\mu_1, \mu_2: \G \to \V$. \end{proof} In the following, we shall denote the inclusion $C_x \hookrightarrow \v_{0_x}$ by $c \mapsto \overline{c}$: $$ \overline{c} = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0}(\epsilon c) \in \v \subset T_{0_x}\V, \,\,\, x \in M, \, 0_x \in E. $$ \begin{remark}\em \label{Bphi} One can extend the construction of core sections $\mathcal{B}c$ to any fiber-preserving map $\Phi: E \to C$. Indeed, $ \mathcal{B}\Phi (e) = 0_e +_A \overline{\Phi}(e) $ defines a section of $\v \to E$ coming from differentiating the flow of bisections $b_\epsilon: E \to \V$, $b_\epsilon(e) = e+ \epsilon \, \Phi(e)$. In case $\Phi$ is a vector bundle morphism, it is straightforward to check that $ \overrightarrow{\mathcal{B}\Phi} = W_{\Phi_R}, $ the linear vector field on $\V$ associated with the endomorphism $\Phi_R: \V \to \V$, $\Phi_R(v)= \Phi(\widetilde{\tar}(v))\bullet 0_g$, for $v \in \V_g$. \end{remark} \medskip \subsection{Fat groupoid and fat algebroid.} The theory of derivations and linear vector fields is recalled in the Appendix for the convenience of the reader. \subsubsection{Definitions and the fat representations} Following \cite{Gra-Met1}, we define the \textit{fat category} of $\V$ as the category with space of objects $M$ and space of arrows $\mathcal{F}(\V)$, the space of pointwise splittings of the core exact sequence \eqref{core_ses}. More precisely, an element of $\mathcal{F}(\V)$ is a pair $(g,\b)$, where $ \b: E_{\sour(g)} \to \V_g $ is a linear map satisfying $\widetilde{\sour}\circ \b = \mathrm{id}$.
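As a brief illustration (an aside anticipating the jet groupoid example below): for the tangent $\VB$-groupoid $\V = T\G$, such a pair consists of a linear map $$ \b: T_{\sour(g)}M \to T_g\G, \quad T\sour \circ \b = \mathrm{id}, $$ and the pairs for which $T\tar \circ \b$ is invertible are precisely the $1$-jets at $\sour(g)$ of local bisections of $\G$ through $g$, in accordance with the identification $\F_{\rm inv}(T\G)=J^1\G$ recalled in Section 2.2.2.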
There is a Lie category structure \footnote{A Lie category is defined exactly as a Lie groupoid, except for the existence of inverses.} on $\mathcal{F}(\V) \toto M$ with source and target maps induced by the source and target of $\G$, and multiplication given by: $$ (g_1, \b_1)(g_2, \b_2) = (g_1 g_2, \b_1 \cdot \b_2), \,\, (\b_1 \cdot \b_2)(e) = \b_1(\widetilde{\tar}(\b_2(e)))\bullet \b_2(e). $$ There is a natural Lie category representation of $\mathcal{F}(\V) \toto M$ on the vector bundle $\V|_M \to M$ given as follows: $$ \widehat{\Psi}_{(g,\b)}(v) = \b(\widetilde{\tar}(v)) \bullet v \bullet \b(\widetilde{\sour}(v))^{-1}, \,\, v \in \V_{\sour(g)}. $$ The representation $\widehat{\Psi}$ preserves the decomposition $\V|_M = E \oplus C$, inducing a representation $\Psi$ on the core anchor complex $\partial: C \to E$ given by \begin{equation}\label{fat_rep2} \Psi_{(g,\b)}(e) = \widetilde{\tar}(\b(e)), \;\; \Psi_{(g,\b)}(c) = \b(\partial(c)) \bullet c \bullet 0_{g^{-1}}. \end{equation} The invertible elements $(g,\b)$ of $\F(\V)$ are those which satisfy the additional condition that $\Psi_{(g,\b)}: E_{\sour(g)} \to E_{\tar(g)}$ is an isomorphism; they define a Lie groupoid $\mathcal{F}_{\rm inv}(\V) \toto M$ called \textit{the fat groupoid}. The map $\b^{-1}: E_{\tar(g)} \to \V_{g^{-1}}$ is given by $$ \b^{-1}(e) = \b(\Psi_{\b}^{-1}(e))^{-1}. $$ Note that the bisections of $\mathcal{F}_{\rm inv}(\V) \toto M$ correspond bijectively to the bisections of $\V \toto E$ satisfying the additional property of being a vector bundle morphism from $E$ to $\V$. The projection $\mathcal{F}(\V) \to \G$ defines an affine bundle whose fiber over $g \in \G$ is modeled on ${\rm Hom}(E_{\sour(g)}, C_{\tar(g)})$. Also, \eqref{fat_rep2} induces a representation of the groupoid of invertible elements $\F_{\rm inv}(\V) \toto M$ on $\partial: C \to E$ called \textit{the fat representation}. \begin{example}\em For a representation of $\G \toto M$ on a vector bundle $C \to M$, consider the semidirect product $\tar^*C \toto M$ (see Example \ref{semi-direct}). In this case, $E=0$ and one can check that $$ \F(\tar^*C) = \F_{\rm inv}(\tar^*C) \cong \G. $$ Also, the fat representation on $C$ is isomorphic to the representation of $\G$ on $C$. \end{example} Let us now study the infinitesimal picture. Let $\pi: \v \to E$ be a $\VB$-algebroid over $A \to M$ and recall that there is an underlying double vector bundle structure on $\v$. As such, one can define $\F(\v)$ and $\F_{\rm inv}(\v)$ as above. The fact that $\widetilde{\sour}= \widetilde{\tar} = \pi$ in this case implies that $\F(\v) = \F_{\rm inv}(\v)$. Also, $\F(\v)$ is a vector bundle which fits into an exact sequence \begin{equation}\label{linear_ses} 0 \to \Hom(E,C) \to \F(\v) \stackrel{\pr} \to A \to 0, \end{equation} where the inclusion $\Hom(E,C) \to \F(\v)$ (at the level of sections) is exactly $\Phi \mapsto \mathcal{B}\Phi$ (see Remark \ref{Bphi}). In the following, we shall denote $\Gamma(\F(\v))$ by $\Gamma_{lin}(E,\v)$ and refer to these sections as \textit{the linear sections of $\v$}. It is important to recall that $\{\B c, \eta\}$, for $c \in \Gamma(C)$ and $\eta \in \Gamma_{lin}(E,\v)$, generate $\Gamma(E, \v)$ as a $C^\infty(E)$-module (see \cite[Prop.~2.2]{Mac-doubles}). \begin{proposition}\label{lemma:right_linear} If $\v = \mathrm{Lie}(\V)$, then $\mathcal{F}(\v)$ is the Lie algebroid of $\mathcal{F}_{\rm inv}(\V)$.
Moreover, given a section $\eta \in \Gamma_{lin}(E,\v)$, the corresponding right-invariant vector field $\overrightarrow{\eta} \in \mathfrak{X}(\V)$ is a linear vector field. \end{proposition} \begin{proof} A path on the source fiber of $\mathcal{F}_{\rm inv}(\V)$ starting at the identity section, $\b_{\epsilon}: E_x \to \V_{g_\epsilon}$, $\sour(g_\epsilon) = x$, defines upon differentiation a map $\mathfrak{b}: E_x \to \v_a$, where $a = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} g_\epsilon$. The linearity of $\mathfrak{b}$ follows from differentiating $ \b_{\epsilon} \circ \h_{\lambda} = \h^\V_{\lambda} \circ \b_{\epsilon} $ and the characterization of linear maps as those which commute with the homogeneous structures \cite{Gra-Rot}. This proves that the Lie algebroid of $\F_{\rm inv}(\V)$ is contained in $\F(\v)$. Conversely, consider a linear map $\mathfrak{b}: E_x \to \v_a$ satisfying $\pi \circ \mathfrak{b} = {\rm id}$ and let $\eta: E \to \v$ be a linear section of $\v \to E$ such that $\eta|_{E_x} = \mathfrak{b}$. Define \begin{equation}\label{bis_flow} \b_\epsilon = \Fl^\epsilon_{\overrightarrow{\eta}}|_{E_x}: E_x \to \V_{\Fl^{\epsilon}_{\rho(\alpha)}(x)}, \end{equation} where $\alpha = \pr(\eta) \in \Gamma(A)$. As $\left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \b_\epsilon = \mathfrak{b}$, the result will follow if we prove that $\b_\epsilon$ is linear for every $\epsilon$. It is straightforward to check that $\h^\V_{1/\lambda} \circ \Fl^\epsilon_{\overrightarrow{\eta}} \circ \h_{\lambda}$ is also the flow of a right-invariant vector field, for all $\lambda > 0$. Also, $$ \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \h^\V_{1/\lambda} \circ \Fl^\epsilon_{\overrightarrow{\eta}} \circ \h_{\lambda}(e) = \h^\v_{1/\lambda} \circ \eta \circ \h_{\lambda}(e) = \eta(e), \,\,\forall \, e \in E, $$ by the linearity of $\eta$. By uniqueness of integration, it follows that $\Fl^\epsilon_{\overrightarrow{\eta}} \circ \h_{\lambda} = \h^\V_\lambda \circ \Fl_{\overrightarrow{\eta}}^\epsilon$, which implies the linearity of both $\b_\epsilon$ and $\overrightarrow{\eta}$. This concludes the proof. \end{proof} We shall refer to $\F(\v)$ as \textit{the fat algebroid}. The next Proposition investigates how the fat representation \eqref{fat_rep2} differentiates to give a representation of $\F(\v)$ on $\partial: C \to E$. \begin{proposition}\label{prop:fat_rep} Let $\v = \mathrm{Lie}(\V)$ and let $\nabla$ be the $\F(\v)$-connection on $\partial: C \to E$ obtained by differentiation of \eqref{fat_rep2}. Then, \begin{equation}\label{fat_rep1} \mathcal{B} (\nabla_\eta \,c) = [\eta, \mathcal{B}c],\, \,\, \nabla_{\eta} \,u = \Delta_{\rho_\v(\eta)}^\top(u), \end{equation} where $\eta \in \Gamma_{lin}(E,\v),\, u \in \Gamma(E), \, c \in \Gamma(C)$ and $\Delta_{\rho_\v(\eta)}^\top: \Gamma(E) \to \Gamma(E)$ is the adjoint derivation associated with the linear vector field $\rho_\v(\eta) \in \frakx(E)$. \end{proposition} \begin{proof} It is straightforward to check the equality restricted to $E$. Given $\eta \in \Gamma_{lin}(E,\v)$, the flow of the right-invariant (linear) vector field $\overrightarrow{\eta} \in \mathfrak{X}(\V)$, $\Fl_{\overrightarrow{\eta}}^\epsilon: \V \to \V$, preserves the subbundle $\tar^*C \hookrightarrow \V$.
Indeed, \begin{equation}\label{eq:pull_back} \Fl_{\overrightarrow{\eta}}^\epsilon(c \bullet 0_g) = b_\epsilon(\partial c) \bullet c \bullet 0_{g} = (b_\epsilon(\partial c) \bullet c \bullet 0_{g_{\epsilon}^{-1}}) \bullet 0_{g_\epsilon g}, \end{equation} where $b_\epsilon: E_{\tar(g)} \to \V_{g_\epsilon}$ is the element of $\F_{\rm inv}(\V)$ given by \eqref{bis_flow}. Note that we can rephrase \eqref{eq:pull_back} as follows: if $F: \tar^*C \to C$ is the pull-back map, then $F \circ \Fl_{\overrightarrow{\eta}}^\epsilon = \Psi_{(g_\epsilon, b_\epsilon)}\circ F$. This implies that $\Delta^\top_{\overrightarrow{\eta}}(c_R) = (\nabla_\eta c)_R$, for $c \in \Gamma(C)$. But, from Proposition \ref{right_core} and \eqref{der_adjoint}, $$ \overrightarrow{\mathcal{B}(\nabla_\eta c)} = (\nabla_\eta c)_R^\uparrow = (\Delta^\top_{\overrightarrow{\eta}}(c_R))^\uparrow = [\overrightarrow{\eta}, c_R^\uparrow] = [\overrightarrow{\eta}, \overrightarrow{\mathcal{B}c}] = \overrightarrow{[\eta,\mathcal{B}c]}. $$ This concludes the proof. \end{proof} \begin{remark}\em We would like to emphasize that it follows from Propositions \ref{right_core}, \ref{lemma:right_linear} and \ref{prop:fat_rep} that \begin{align} \label{VB1} [\Gamma_{\rm lin}(E, \v), \Gamma_{\rm lin}(E, \v)] & \subset \Gamma_{\rm lin}(E, \v)\\ \label{VB2} [\Gamma_{\rm lin}(E, \v), \Gamma(C)] & \subset \Gamma(C)\\ \label{VB3} [\Gamma(C), \Gamma(C)] & = \{0\}, \end{align} when $\v = \mathrm{Lie}(\V)$. These relations can be used as axioms to define a $\VB$-algebroid structure on a double vector bundle, as was done in \cite{Gra-Met2}. From this point of view, it follows directly from the definition that formulas \eqref{fat_rep1} extend to arbitrary $\VB$-algebroids, giving a representation on $\partial: C \to E$. It is important to point out that the equivalence between the definition of $\VB$-algebroids we use here and the one given in \cite{Gra-Met2} was proved in \cite{Bur-Cab-Hoy}. \end{remark} \subsubsection{Example: jet groupoid and jet algebroid.} The jet groupoid $J^1\G$ is the Lie groupoid of 1-jets of bisections of $\G$. Its Lie algebroid is the jet algebroid $J^1A$, the space of 1-jets of sections of $A$. It is straightforward to see that $$ J^1\G = \F_{\rm inv}(T\G), \,\,J^1A = \F(TA). $$ The fat representation \eqref{fat_rep2} of $J^1\G$ on the anchor complex $\rho: A \to TM$ is called the adjoint representation \cite{CSS, Ev-Lu-We}. The jet prolongation $\Gamma(A) \ni \alpha \mapsto j^1 \alpha \in \Gamma(J^1A)$ splits the short exact sequence \eqref{linear_ses} at the level of sections, giving rise to the so-called \textit{classical Spencer operator} $D^{\rm clas}: \Gamma(J^1A) \to \Omega^1(M,A)$. More concretely, for $\eta \in \Gamma(J^1A)$, $\alpha= \pr(\eta)$: \begin{equation}\label{Spencer_decomp} \eta = j^1\alpha - \B D^{\rm clas}(\eta), \end{equation} where $\B D^{\rm clas}(\eta) \in \Gamma(J^1A)$ is the section coming from $D^{\rm clas}(\eta): TM \to A$ (see Remark \ref{Bphi}). The fat representation of $J^1A$ on $\rho: A \to TM$ is given by: \begin{equation}\label{jet_representation} \nabla_{\eta} \,\beta = [\alpha, \beta] + D^{\rm clas}_{\rho(\beta)}(\eta), \,\,\, \nabla_{\eta}\, X = [\rho(\alpha), X] + \rho(D^{\rm clas}_X(\eta)), \end{equation} for $ \alpha = \pr(\eta), \, \beta \in \Gamma(A),\, X \in \frakx(M)$. From Proposition \ref{lemma:right_linear}, there is a linear vector field (hence a derivation) on $T\G$ associated to any section of $J^1A$. The following proposition gives a characterization of these derivations.
\begin{proposition} For $\eta \in \Gamma(J^1A)$, the corresponding derivation $\Delta_\eta: \Gamma(T\G) \to \Gamma(T\G)$ is given by \begin{equation}\label{right_inv_der} \Delta_\eta(U)(g) = [\overrightarrow{\pr(\eta)}, U](g) + D^{\rm clas}_{T\tar(U(g))}(\eta) \bullet 0_g, \end{equation} for $U \in \Gamma(T\G)$. \end{proposition} \begin{proof} Let $\alpha: M \to A$ be a section of $A$. The linear section of $TA \to TM$ corresponding to $j^1\alpha$ is the differential of $\alpha$ as a map, $T\alpha: TM \to TA$. As such, it is well-known that the right-invariant vector field on $T\G$ corresponding to $T\alpha$ is the tangent lift of $\overrightarrow{\alpha}$ \cite[Eq.~(45), \S 9.7]{Mckz2}. Hence, it follows from Example \ref{ex:tan_lift} that \begin{equation}\label{jet_deriv} \Delta_{j^1\alpha}^\top = [\overrightarrow{\alpha}, \cdot]. \end{equation} Now, decompose a section $\eta \in \Gamma(J^1A)$ as $\eta = j^1\alpha - \B D^{\rm clas}(\eta)$ and the result will follow from Remark \ref{Bphi} applied to the morphism $D^{\rm clas}(\eta): TM \to A$; the change in sign is the result of taking adjoints (see Remark \ref{der_linear}). \end{proof} \subsection{Dualization.} For a $\VB$-groupoid $\V \toto E$, its dual has a $\VB$-groupoid structure $\V^* \toto C^*$. We shall prove here that the fat groupoids of $\V$ and $\V^*$ are isomorphic under an adjoint operation. The associated Lie algebroid morphism recovers the correspondence introduced by Mackenzie \cite{Mac-doubles} between linear sections of a double vector bundle and its dual. This will allow us to describe the right-invariant vector fields of $\V^*$ coming from linear sections. Let us briefly recall the $\VB$-groupoid structure of $\V^* \toto C^*$. The source and target maps are determined by $$ \<\widetilde{\sour}(\psi), c(\sour(g))\> = \<\psi, c_L(g)\>, \,\,\, \<\widetilde{\tar}(\psi), c(\tar(g))\> = \<\psi, c_R(g)\>, \,\, c \in \Gamma(C), \, \psi \in \V^*_g. $$ The multiplication is determined by \begin{equation}\label{dual_mult} \<\psi_1 \bullet \psi_2, v_1 \bullet v_2 \> = \<\psi_1, v_1\> + \<\psi_2, v_2\>, \,\,\, \psi_i \in \V_{g_i}^*, v_i \in \V_{g_i}, \, i=1,2. \end{equation} Using the decomposition $\V|_M = E\oplus C$, one has that the unit map $C^* \hookrightarrow \V^*|_M$ is the identification $C^* = \mathrm{Ann}(E)$. It is straightforward to check that $E^*$ can be identified with the core of $\V^* \toto C^*$ as follows: given $v \in \V_x, x \in M$, an element $\varphi \in E^*_x$ defines a covector in the core of $\V^*$ by $$ \<\varphi, v\> = \<\varphi, \partial(c)+e\>, \text{ where } v = c+e \in C_x\oplus E_x. $$ Note that the core anchor for $\V^* \toto C^*$ is $\partial^*:E^* \to C^*$. \begin{example}\em The dual $\VB$-groupoid corresponding to $T\G \toto TM$ is the cotangent Lie groupoid $T^*\G \toto A^*$. Also, the dual $\VB$-groupoid corresponding to the semi-direct product $\tar^*C \toto M$ is the (right) action groupoid $C^*\rtimes \G \toto C^*$. \end{example} \begin{example}\em For a double vector bundle, $$ \begin{CD} \v @>>>A\\ @VVV @VVV\\ E @>>> M \end{CD} $$ one can dualize both the horizontal and the vertical $\VB$-groupoid structure, giving rise to two different $\VB$-groupoids which are also double vector bundles: $$ \begin{CD} \v_A^* @>>> A\\ @VVV @VVV\\ C^* @>>> M \end{CD} \hspace{15pt} \text{ and } \hspace{15pt} \begin{CD} \v_E^* @>>> C^*\\ @VVV @VVV\\ E @>>> M.
\end{CD} $$ There is a non-degenerate pairing $\dbrac{\cdot, \cdot}: \v_E^* \times_{C^*} \v_A^* \to \R$ given by \begin{equation}\label{double_brac} \dbrac{\gamma, \phi} = \<\gamma, d\>_E - \<\phi,d\>_A, \end{equation} where $d \in \v$ is any element such that $(\gamma,d,\phi) \in \v_E^*\times_E \v \times_A \v_A^*$. We refer to \cite[Chapter~9]{Mckz2} for details. \end{example} The fat groupoids associated to $\V$ and $\V^*$ are isomorphic via an adjoint operation $\mathcal{A}: \F_{\rm inv}(\V) \to \F_{\rm inv}(\V^*)$. Given an invertible element $b: E_{\sour(g)} \to \V_g$, define $b^\top: C^*_{\sour(g)} \to \V^*_g$ as follows: for $v \in \V_g$, \begin{equation} \<b^{\top}(\mu), v\> = \<\mu, \, \Psi_{b^{-1}}(c)\>, \,\,\, \text{ where } c = (v-b(\widetilde{\sour}(v)))\bullet 0_{g^{-1}} \in C_{\tar(g)}. \end{equation} Using the definitions of the structure maps $\widetilde{\sour}, \widetilde{\tar}: \V^* \to C^*$, one can prove that $$ \widetilde{\sour}(b^\top(\mu)) = \mu, \,\,\, \widetilde{\tar}(b^\top(\mu)) =\Psi_{b^{-1}}^*(\mu). $$ Hence, $b^\top \in \F_{\rm inv}(\V^*)$ and we define $\mathcal{A}(b) = b^\top$. We list here some properties of $\mathcal{A}$, which the reader can check. \begin{enumerate} \item $\<b^\top(\mu),b(e)\> =0$, for all $(e, \mu) \in E \times_M C^*$; \item $\Psi_{b^\top} = \Psi_{b^{-1}}^*$ acting on $\partial^*: E^* \to C^*$; \item $1_\V^\top = 1_{\V^*}$; \item $(b_1 \cdot b_2)^\top = b_1^\top \cdot b_2^\top$; \item $(b^\top)^\top = b$. \end{enumerate} In particular, $\mathcal{A}$ is a Lie groupoid morphism. We denote by $\mathfrak{a}=\mathrm{Lie}(\mathcal{A}): \F(\v) \to \F(\mathrm{Lie}(\V^*))$ its derivative, where $\mathrm{Lie}(\V^*) \to C^*$ is the $\VB$-algebroid of $\V^*$. \begin{lemma}\label{lem:adj} Let $\eta \in \Gamma_{lin}(E,\v)$. One has that $$ b_\epsilon = \Fl_{\overrightarrow{\eta}}^\epsilon|_{E_x}: E_x \to \V_{\Fl^\epsilon_{\overrightarrow{\alpha}}(x)} \Leftrightarrow b_\epsilon^\top = \Fl_{\overrightarrow{\eta}^\top}^\epsilon|_{C_x^*}: C_x^* \to \V^*_{\Fl^\epsilon_{\overrightarrow{\alpha}}(x)}, $$ where $\overrightarrow{\eta}^\top \in \mathfrak{X}(\V^*)$ is the adjoint linear vector field (see Appendix \ref{app:linear}). In particular, $$ \overrightarrow{\mathfrak{a}(\eta)} = \overrightarrow{\eta}^\top. $$ \end{lemma} \begin{proof} First note that property (1) of $\mathcal{A}$ completely determines $b^\top$ (i.e., if $\<\xi, b(e)\>=0$ for every $e \in E_x$, then $\xi = b^\top(\mu)$, where $\mu = \widetilde{\sour}(\xi) \in C^*_x$). Now, using \eqref{eq:adj_flow}, for every $(e, \mu) \in E\times_M C^*$, $$ \<\Fl_{\overrightarrow{\eta}^\top}^\epsilon(\mu), \Fl_{\overrightarrow{\eta}}^\epsilon(e)\> = \<\mu, e\> = 0. $$ This proves the result. \end{proof} We shall now give a detailed description of $\mathfrak{a}$. First of all, recall that there is an isomorphism of double vector bundles, \begin{equation}\label{dual_identification} i_\V : \mathrm{Lie}(\V^*) \stackrel{\sim}\longrightarrow \v^*_A, \end{equation} defined as follows (see \cite[Thm.~11.5.12]{Mckz2}): $$ \<i_\V(\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\psi_\epsilon), \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} v_\epsilon\>_A = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \<\psi_\epsilon, v_\epsilon\>, $$ for a path $(\psi_\epsilon, v_\epsilon) \in \V^* \times_\G \V$ on the source fiber starting at the identity section. From now on, we shall always make the identification $\v_A^* =\mathrm{Lie}(\V^*)$ without further notice.
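\begin{example}\em To keep a concrete instance in mind: for $\V = T\G$ one has $\v = TA$, $\V^* = T^*\G \toto A^*$ and $\v_A^* = T^*A$, so that \eqref{dual_identification} becomes an isomorphism $\mathrm{Lie}(T^*\G) \cong T^*A$ of double vector bundles. Up to conventions, this recovers the well-known identification of the Lie algebroid of the cotangent groupoid with the cotangent double of $A$ (see \cite{Mckz-Xu}). \end{example}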
It is well-known (see \cite{Mac-doubles}) that there exists a bijection between the linear sections of a double vector bundle $D$ and its dual $D_A^*$: \begin{equation}\label{adjoint_cor} \begin{array}{rcl} \Gamma_{lin}(E, D) & \stackrel{\sim}\longrightarrow & \Gamma_{lin}(C^*, D_A^*)\\ \eta & \longmapsto & \eta^\top. \end{array} \end{equation} Let us briefly recall this correspondence. Given a linear section $\eta \in \Gamma_{lin}(E,D)$, the function $\ell_\eta \in C^{\infty}(D_E^*)$ $$ \ell_\eta(\gamma)= \<\gamma, \eta(e)\>_E, \,\,\, \text{ for } \begin{minipage}[h]{30pt} \xy {\ar@{|->}_{}(0,25)*++{\gamma};(10,25)*++{\mu}}\\ {\ar@{|->}_{}(0,25)*++{};(0,15)*++{e}}\\ {\ar@{|->}_{}(0,15)*++{};(10,15)*++{x}}\\ {\ar@{|->}_{}(10,25)*++{};(10,15)*++{}}\\ \endxy \end{minipage} \in D_E^*, $$ is fiberwise linear not only with respect to the projection $D_E^* \to E$ but also with respect to $D_E^* \to C^*$. So, there exists a section $\eta^\top \in \Gamma_{lin}(C^*, D_A^*)$ such that $\ell_\eta(\gamma) = \dbrac{\gamma, \eta^\top(\mu)}$, for the bracket $\dbrac{\cdot, \cdot}$ of \eqref{double_brac}, where $\mu \in C^*$ denotes the projection of $\gamma$. Now, one can see from the definition of $\dbrac{\cdot,\cdot}$ that $\eta^\top$ is completely determined by the condition \begin{equation}\label{eq:dual} \<\eta^\top(\mu), \eta(e)\>_A =0, \,\,\, \forall \, (e,\mu) \in E \times_M C^*. \end{equation} \begin{proposition}\label{right_adjoint} Let $\v=\mathrm{Lie}(\V)$, for a $\VB$-groupoid $\V \toto E$. One has that \begin{equation} \mathfrak{a}(\eta) = \eta^\top, \,\,\,\eta \in \Gamma_{lin}(E,\v). \end{equation} \end{proposition} \begin{proof} Using Lemma \ref{lem:adj}, it suffices to show that $\overrightarrow{\eta^\top}= \overrightarrow{\eta}^\top$. Let $\brac{\cdot, \cdot}: T(\V) \times_{T\G} T(\V^*) \to \R$ be the derivative of the natural pairing $\<\cdot, \cdot\>: \V \times_\G \V^* \to \R$. For any $(\psi, v) \in \V^*\times_\G \V$, using the right-invariance of the flows and \eqref{dual_mult}, one has that \begin{align*} \brac{ \overrightarrow{\eta^\top}(\psi), \overrightarrow{\eta}(v)} = & \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \< \Fl_{\overrightarrow{\eta^\top}}^\epsilon(\widetilde{\tar}(\psi)) \bullet \psi, \Fl_{\overrightarrow{\eta}}^\epsilon(\widetilde{\tar}(v)) \bullet v\> \\ & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \left(\<\Fl_{\overrightarrow{\eta^\top}}^\epsilon(\widetilde{\tar}(\psi)), \Fl_{\overrightarrow{\eta}}^\epsilon(\widetilde{\tar}(v))\> + \< \psi, v\> \right)\\ & = \<\eta^\top(\widetilde{\tar}(\psi)), \eta(\widetilde{\tar}(v))\>_A = 0. \end{align*} Now, from \eqref{eq:dual_linear}, it follows that $\overrightarrow{\eta^\top}$ must be the adjoint of $\overrightarrow{\eta}$, as we wanted to prove. \end{proof} One can now use $\eta^\top, \B \varphi$ to describe the Lie algebroid structure on $\v_A^* \to C^*$, in the case $\v=\mathrm{Lie}(\V)$. Note that the $\F(\v)$-connection on $\partial^*: E^* \to C^*$ is obtained by differentiation of $(\Psi_{b}^{-1})^*$. So, using that $\mathfrak{a}$ is a Lie algebroid morphism and Proposition \ref{prop:fat_rep}, one gets \begin{align}\label{eq:dual_vb} \begin{split} [\eta_1^\top, \eta_2^\top] & = [\eta_1,\eta_2]^\top, \,\,\, [\eta^\top, \B\varphi] = \B(\nabla_\eta^\top \varphi), \,\,\,[\B\varphi_1, \B \varphi_2]=0\\ \rho_{\v^*_A}(\eta^\top) & = W_{\nabla_\eta}, \,\, \rho_{\v^*_A}(\B\varphi) = \partial^*(\varphi)^\uparrow, \end{split} \end{align} where $\nabla$ is the $\F(\v)$-connection on $\partial: C \to E$ given by \eqref{fat_rep1} and $W_{\nabla_\eta} \in \frakx(C^*)$ is the linear vector field corresponding to $\nabla_\eta: \Gamma(C) \to \Gamma(C)$.
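\begin{remark}\em Since the sections $\eta^\top$ and $\B\varphi$ generate $\Gamma(C^*, \v_A^*)$ as a $C^\infty(C^*)$-module (see \cite[Prop.~2.2]{Mac-doubles}), the relations \eqref{eq:dual_vb}, together with the Leibniz rule, determine the Lie algebroid structure of $\v_A^* \to C^*$ completely. As a simple illustration, let $\v = A \times_M C$ be the $\VB$-algebroid associated to a representation $\nabla$ of $A$ on $C$, so that $E=0$ and no sections of the form $\B\varphi$ occur. Then \eqref{eq:dual_vb} reduces to $[\eta_1^\top, \eta_2^\top] = [\eta_1, \eta_2]^\top$ and $\rho_{\v_A^*}(\eta^\top) = W_{\nabla_\eta}$, and one can check that $\v_A^* \to C^*$ is the action Lie algebroid of the dual representation, in agreement with the fact that the dual of the semi-direct product $\tar^*C \toto M$ is the action groupoid $C^* \rtimes \G \toto C^*$. \end{remark}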
\begin{remark}\em For a general $\VB$-algebroid $\v$ (not necessarily integrable), the existence of a Lie algebroid structure on $\v_A^* \to C^*$ is the result of the double linearity of the Poisson bracket on $\v_E^* \to C^*$ (see \cite[Eqs.~(19)]{Mac-doubles}). Note that Mackenzie in \cite{Mac-doubles} chooses to work with $(\v_E^*)_{C^*}^*$, whereas we work directly with $\v_A^*$ using \eqref{double_brac}. This explains the difference in sign between our expression for $\rho_{\v^*_A}(\B \varphi)$ and his. \end{remark} \section{Multiplicative differential forms on Lie groupoids} In this section, we define our main object of study and state our result regarding its Lie theory. \subsection{Definition and examples} \begin{definition} Let $\V \toto E$ be a $\VB$-groupoid over $\G \toto M$. A differential $k$-form on $\G$ with values in $\V$, $\vartheta \in \Omega^k(\G, \V)$, is said to be multiplicative if the corresponding map $$ \xy {\ar@{->}_{}(0,12)*++{\bigoplus^k T\G}; (0,0)*++{\bigoplus^k TM}} \\ {\ar@{->}_{}(2,12)*++{\vphantom{\bigoplus^k T\G}}; (2,0)*++{\vphantom{\bigoplus^k TM}}}\\ {\ar@{->}_{}(5,12)*++{}; (15,12)*++{}}\\ {\ar@{->}_{}(5,0)*++{}; (15,0)*++{}}\\ {\ar@{->}_{}(16,12)*++{\!\!\!\V}; (16,0)*++{\!\!\!E}}\\ {\ar@{->}_{}(14,12)*++{\vphantom{V}}; (14,0)*++{\vphantom{E}}} \endxy $$ is a Lie groupoid morphism. \end{definition} \begin{remark}\em It is straightforward to check that the base morphism $\oplus^k TM \to E$ is the map associated to a differential form $\theta \in \Omega^k(M,E)$ and \begin{equation} \theta = \vartheta|_M. \end{equation} \end{remark} \begin{example}\em Let $C \to M$ be a representation of $\G \toto M$. A differential form $\vartheta \in \Omega^k(\G, \tar^*C)$ with values in the semi-direct product $\tar^*C \toto M$ is multiplicative if and only if \begin{equation}\label{mult_Ezero} (m^*\vartheta)_{(g_1,g_2)} = pr_1^*\vartheta + g_1 \cdot \, pr_2^*\vartheta, \end{equation} where $pr_1, pr_2: \G_{(2)} \to \G$ are the projections on the first and second components, respectively (in Lemma \ref{lemma:rep_vb} below we prove a generalization of this fact). The Lie theory of these multiplicative forms was studied in \cite{CSS}. Note that by considering $C= M \times \R$ with the trivial representation, one recovers the theory of multiplicative differential forms on $\G$ \cite{BC}. \end{example} There is a lift operation which takes multiplicative forms $\vartheta \in \Omega^k(\G, \V)$ with values in $\V$ to multiplicative forms on the fat groupoid $\F_{\rm inv}(\V)$ with values in the fat representation $C \to M$. Let us be more precise. Given $b: E_{\sour(g)} \to \V_g$, an element of $\F_{\rm inv}(\V)$, and $\xi_1, \dots, \xi_k \in T_{(g,b)}\F_{\rm inv}(\V)$, define $\widehat{\vartheta} \in \Omega^k(\F_{\rm inv}(\V), \tar^*C)$ by \begin{equation}\label{lift_form} \widehat{\vartheta}(\xi_1, \dots, \xi_k) \bullet 0_g = \vartheta(T\pr(\xi_1), \dots, T\pr(\xi_k)) - b(\theta(T\sour(\xi_1), \dots, T\sour(\xi_k))). \end{equation} Let us show that $\widehat{\vartheta}$ is multiplicative. We shall restrict ourselves to the case $k=1$ for simplicity. The general case follows similarly.
Given a composable pair of vectors $(\xi_1, \xi_2)$, where $\xi_i \in T_{(g_i, b_i)}\F_{\rm inv}(\V)$, $i=1,2$, one has that \begin{align*} \widehat{\vartheta}(\xi_1 \bullet \xi_2)\bullet 0_{g_1g_2} = \vartheta(U_1 \bullet U_2) - b_1(\widetilde{\tar}(b_2(\theta(X_2))))\bullet b_2(\theta(X_2)), \end{align*} where $T\pr(\xi_i)=U_i \in T_{g_i}\G$ and $T\sour(U_i)=X_i \in T_{\sour(g_i)}M$, $i=1,2$. Now, using the multiplicativity of $\vartheta$ and the interchange law, it follows that \begin{align*} \widehat{\vartheta}(\xi_1 \bullet \xi_2)\bullet 0_{g_1g_2} & = (\vartheta(U_1) - b_1(\widetilde{\tar}(b_2(\theta(X_2)))))\bullet (\vartheta(U_2)-b_2(\theta(X_2)))\\ & = (\vartheta(U_1) - b_1(\theta(X_1))) \bullet 0_{g_2} + b_1(\partial(\widehat{\vartheta}(\xi_2)))\bullet \widehat{\vartheta}(\xi_2)\bullet 0_{g_1^{-1}} \bullet 0_{g_1g_2}\\ & = (\widehat{\vartheta}(\xi_1) + b_1\cdot \widehat{\vartheta}(\xi_2)) \bullet 0_{g_1g_2}. \end{align*} We have used in the second equality that $\partial(\widehat{\vartheta}(\xi_2)) = \theta(X_1) - \widetilde{\tar}(b_2(\theta(X_2)))$. \begin{example}\em Let $\V = T\G \toto TM$ be the tangent groupoid. Differential forms $\vartheta \in \Omega^k(\G, T\G)$ with values in $T\G$ are also known as vector valued forms. The Lie theory of multiplicative vector valued forms was studied in \cite{Bur-Drum} (see also \cite{Bur-Drum2}). The lift operation $\vartheta \mapsto \widehat{\vartheta}$ takes multiplicative vector valued forms to multiplicative forms on the jet groupoid $J^1\G$ with values in the adjoint representation $\tar^*A$. The lift of $\mathrm{id} \in \Omega^1(\G, T\G)$ is exactly the Cartan 1-form on the jet groupoid $J^1\G$. \end{example} \subsection{Infinitesimal multiplicative forms on Lie algebroids} Let $(A,\rho, [\cdot, \cdot])$ be a Lie algebroid and $\v \to E$ be a $\VB$-algebroid over $A$. \begin{definition} An IM $k$-form on $A$ with values in $\v$ is a triple $(D,l,\theta)$ where $D: \Gamma_{lin}(E,\v) \to \Omega^k(M, C)$, $l: A \to \wedge^{k-1} T^*M \otimes C$ and $\theta \in \Omega^k(M, E)$ satisfying \begin{equation}\label{eq:compatibility} D(\mathcal{B}\Phi) = -\Phi \circ \theta, \,\,\,\, D(f\eta) = fD(\eta) + df \wedge l(\pr(\eta)) \end{equation} and the set of equations known as {\bf IM-equations}: \begin{align} \tag{IM1} D([\eta_1, \eta_2]) & = L_{\nabla_{\eta_1}} D(\eta_2) - L_{\nabla_{\eta_2}} D(\eta_1)\\ \tag{IM2} l([\pr(\eta), \beta]) & = L_{\nabla_{\eta}} l(\beta) - i_{\rho(\beta)} D(\eta)\\ \tag{IM3} i_{\rho(\alpha)}l(\beta) & = - i_{\rho(\beta)} l(\alpha)\\ \tag{IM4} L_{\nabla_\eta} \theta & = \partial(D(\eta))\\ \tag{IM5} i_{\rho(\alpha)} \theta & = \partial(l(\alpha)), \end{align} where $\nabla$ is the fat representation of $\F(\v)$ on $\partial:C \to E$ and $L_{\nabla_\eta}$ is the operator \eqref{der_Lie}. \end{definition} Let us give some examples. \begin{example}[Spencer operators]\em\label{ex:spencer} Let $\v = A \times_M C \to M$ be a $\VB$-algebroid with $E=0$. In this case, the projection $\pr: \Gamma_{lin}(E,\v) \ni \eta \mapsto \alpha \in \Gamma(A)$ gives an isomorphism of Lie algebroids $\F(\v) \cong A$. The fat representation gives a flat $A$-connection $\nabla: \Gamma(A) \times \Gamma(C) \to \Gamma(C)$.
An IM $k$-form with values in $A \times_M C$ is a pair $(D,l)$, with $D: \Gamma(A) \to \Omega^k(M,C)$ and $l: A \to \wedge^{k-1} T^*M \otimes C$, satisfying the compatibility $D(f\alpha) = fD(\alpha) + df \wedge l(\alpha)$ and the equations \begin{align*} D([\alpha_1, \alpha_2]) & = L_{\nabla_{\alpha_1}} D(\alpha_2) - L_{\nabla_{\alpha_2}} D(\alpha_1)\\ l([\alpha_1, \alpha_2]) & = L_{\nabla_{\alpha_1}}l(\alpha_2) - i_{\rho(\alpha_2)} D(\alpha_1)\\ i_{\rho(\alpha_1)}l(\alpha_2) & = - i_{\rho(\alpha_2)}l(\alpha_1). \end{align*} In \cite{CSS}, such pairs are called \textit{$C$-valued Spencer operators on $A$}. \end{example} \begin{remark}\em Note that (IM1), (IM2), (IM3) in the definition of an IM $k$-form on $A$ with values in a general $\VB$-algebroid $\v$ can be interpreted as saying that $(D,l\circ \pr)$ is a $C$-valued Spencer operator on the fat Lie algebroid $\F(\v)$. \end{remark} \begin{example}\em \label{ex:vector_val} An \textit{IM $(1,k)$-form} on $A$ is a triple $(\mathfrak{D}, \mathfrak{l}, \mathfrak{r})$, where $\mathfrak{D}: \Gamma(A) \to \Omega^k(M, A)$, $\mathfrak{l}: A \to \wedge^{k-1} T^*M \otimes A$, $\mathfrak{r}: T^*M \to \wedge^k T^*M$ satisfy the Leibniz condition $$ \mathfrak{D}(f\,\alpha) = f\, \mathfrak{D}(\alpha) + df \wedge \mathfrak{l}(\alpha) - \alpha \wedge \mathfrak{r}(df) $$ and \textit{the IM-equations}: \begin{minipage}[t]{5cm} \begin{align*} \mathfrak{D}([\alpha, \beta]) & = \alpha \cdot \mathfrak{D}(\beta) - \beta \cdot \,\mathfrak{D}(\alpha);\\ \mathfrak{l}([\alpha,\beta]) & = \alpha \cdot \mathfrak{l}(\beta) - i_{\rho(\beta)} \mathfrak{D}(\alpha);\\ \mathfrak{r}(\Lie_{\rho(\alpha)} \omega) & = \alpha \cdot \mathfrak{r}(\omega) - \<\omega, \rho(\mathfrak{D}(\alpha))\>; \end{align*} \end{minipage} \begin{minipage}[t]{5cm} \begin{align*} i_{\rho(\alpha)} \mathfrak{l}(\beta) & = - i_{\rho(\beta)} \mathfrak{l}(\alpha);\\ i_{\rho(\alpha)} \mathfrak{r}(\omega) & = \langle \omega, \rho(\mathfrak{l}(\alpha)) \rangle. \end{align*} \end{minipage} Here, $\,\cdot\,$ is the action of the Lie algebra $\Gamma(A)$ on $\Omega^\bullet(M, A)$ given by $ \alpha \cdot (\omega \otimes \beta) = \Lie_{\rho(\alpha)}\omega \otimes \beta + \omega \otimes [\alpha, \beta]. $ These objects were introduced in \cite{Bur-Drum} as the infinitesimal data associated to multiplicative vector valued forms on Lie groupoids. There is a 1-1 correspondence between IM $(1,k)$-forms on $A$ and IM $k$-forms on $A$ with values in $TA$. Indeed, for a triple $(\mathfrak{D}, \mathfrak{l}, \mathfrak{r})$, define $$ l = \mathfrak{l}, \,\, \theta = \mathfrak{r}^*, \,\, D(\eta) = \mathfrak{D}(\alpha) + D^{\rm clas}(\eta) \circ \theta, $$ where $\alpha = \pr(\eta)$ and $D^{\rm clas}$ is the classical Spencer operator. One can check that $(\mathfrak{D}, \mathfrak{l}, \mathfrak{r})$ is an IM $(1,k)$-form on $A$ if and only if $(D,l, \theta)$ is an IM $k$-form on $A$ with values in $TA$. Note that $\mathfrak{D}(\alpha) = D(j^1\alpha)$. \end{example} \subsection{Infinitesimal-global correspondence} We are now able to state our result regarding the correspondence between multiplicative forms on Lie groupoids and IM forms on Lie algebroids. \begin{theorem}\label{thm:main} For a source 1-connected Lie groupoid $\G \toto M$, there is a 1-1 correspondence between multiplicative forms $\vartheta \in \Omega^k(\G, \V)$ and IM $k$-forms $(D, l, \theta)$ on $A$ with values in $\v=\mathrm{Lie}(\V)$.
The correspondence is given by \begin{equation}\label{eq:inf_comp} D(\eta) = L_{\Delta_\eta}(\vartheta)|_M, \,\,\, l(\alpha) = i_{\overrightarrow{\alpha}} \vartheta|_M, \,\,\, \theta = \vartheta|_M, \end{equation} where $\Delta_{\eta}: \Gamma(\V) \to \Gamma(\V)$ is the (adjoint) derivation corresponding to the linear vector field $\overrightarrow{\eta} \in \frakx(\V)$ and $L_{\Delta_\eta}$ is the operator \eqref{der_Lie}. \end{theorem} We shall postpone the proof of Theorem \ref{thm:main} to \S \ref{proof1} and will focus here on how it recovers some particular results when one restricts the $\VB$-groupoid to special cases. It is especially important to obtain explicit expressions for the derivation $\Delta_\eta$ in these cases. \begin{corollary} Let $\G \toto M$ be a source 1-connected groupoid with a representation on $C \to M$. There is a 1-1 correspondence between differential forms $\vartheta \in \Omega^k(\G, \tar^*C)$ satisfying equation \eqref{mult_Ezero} and $C$-valued Spencer operators $(D, l)$ on $A$. The correspondence is given by \begin{align*} D(\alpha)(X_1,\dots,X_k) & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} (\Fl_{\overrightarrow{\alpha}}^\epsilon(x))^{-1}\cdot \vartheta(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1), \dots, T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_k))\\ l(\alpha) & = i_{\overrightarrow{\alpha}} \vartheta|_M. \end{align*} \end{corollary} \begin{proof} Consider the semi-direct product $\V = \tar^*C \toto M$. One has that $A \times_M C = \mathrm{Lie}(\tar^*C)$ (see Example \ref{ex:spencer}). In this case, the flow of the linear vector field $\overrightarrow{\eta}$ corresponding to a linear section $\eta \cong \alpha \in \Gamma(A)$ is given by: $$ \Fl_{\overrightarrow{\eta}}^\epsilon(g,c) = (\Fl_{\overrightarrow{\alpha}}^\epsilon(g), \Fl_{\overrightarrow{\alpha}}^\epsilon(\tar(g))\cdot c). $$ Hence, for $X_1, \dots, X_k \in T_xM \subset T_x\G$, one has that \begin{align*} (L_{\Delta_\eta}\vartheta)(X_1,\dots, X_k) & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \Fl_{\overrightarrow{\eta}}^{-\epsilon}(\vartheta(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1), \dots, T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_k)))\\ & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\Fl_{\overrightarrow{\alpha}}^{-\epsilon}(\Fl_{\rho(\alpha)}^\epsilon(x)) \cdot \vartheta(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1), \dots, T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_k))\\ & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} (\Fl_{\overrightarrow{\alpha}}^\epsilon(x))^{-1}\cdot \vartheta(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1), \dots, T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_k)). \end{align*} Note that we are identifying the vertical subspaces of both $T_{(x,c)}(\tar^*C)$ and $T_c C$ with $C_x$. \end{proof} \begin{example}\em The IM $1$-form on $A$ with values in $TA$ corresponding to $\mathrm{id}_\G \in \Omega^1(\G, T\G)$ is exactly $(D^{\rm clas}, \mathrm{id}_A, \mathrm{id}_{TM})$. This follows directly from \eqref{right_inv_der}. This correspondence should be seen as an alternative way to express the fact established in \cite{CSS} that $(D^{\rm clas}, \pr)$ is the Spencer operator on $J^1A$ with values in $A$ associated to the Cartan 1-form on the jet groupoid.
\end{example} \begin{corollary}\label{cor:vec_val} On a source 1-connected groupoid $\G \toto M$, there is a 1-1 correspondence between multiplicative vector valued forms $\vartheta \in \Omega^k(\G, T\G)$ and IM $(1,k)$-forms $(\mathfrak{D}, \mathfrak{l}, \mathfrak{r})$ given by $$ \mathfrak{D}(\alpha) = [\overrightarrow{\alpha}, \vartheta]|_M, \,\,\mathfrak{l}(\alpha) = i_{\overrightarrow{\alpha}}\vartheta|_M, \,\,\, \mathfrak{r}(\mu) = \langle\tar^*\mu, \vartheta\rangle|_M, $$ where $[\cdot, \cdot]$ is the Fr\"olicher-Nijenhuis bracket on $\Omega^k(\G, T\G)$. \end{corollary} \begin{proof} The 1-1 correspondence follows directly from Theorem \ref{thm:main} and Example \ref{ex:vector_val}. One only has to prove the formula for $\mathfrak{D}$. Now, if $D: \Gamma(J^1A) \to \Omega^k(M, A)$ is the first component of the IM $k$-form on $A$ with values in $TA$ corresponding to $\vartheta$, then $$ \mathfrak{D}(\alpha) = D(j^1 \alpha) = L_{\Delta_{j^1\alpha}} \vartheta|_M. $$ Using that $\Delta_{j^1\alpha} = \Lie_{\overrightarrow{\alpha}}$ (see \eqref{jet_deriv}), one can check that the operator $L_{\Delta_{j^1\alpha}}$ on $\Omega^\bullet(\G, T\G)$ is exactly $[\overrightarrow{\alpha}, \cdot]$. \end{proof} \subsection{Coefficients in a representation up to homotopy} We show here how multiplicative forms with values in semi-direct products of groupoids with representations up to homotopy (ruth, for short) naturally give rise to a notion of multiplicative forms with values in a ruth. \begin{definition} A multiplicative $k$-form on $\G$ with values in a ruth $\mathcal{E}=C[1]\oplus E$ is a pair $\omega \in \Omega^k(\G, \tar^*C)$, $\theta \in \Omega^k(M, E)$ satisfying \begin{align} \partial\circ\omega_g & = \tar^*\theta - \Psi_g \circ \sour^*\theta \label{mul_forms1}\\ \m^*\omega_{(g_1,g_2)} & = pr_1^* \omega + \Psi_{g_1} \circ pr_2^*\omega - \Omega_{g_1,g_2}\circ s^* \theta, \label{mul_forms2} \end{align} where $s: \G_{(2)} \to M$ is the map $s(g_1,g_2)=\sour(g_2)$ and $(\partial, \Psi, \Omega)$ are the structure operators of the ruth. \end{definition} Let us give some examples. \begin{example}\em Any $\lambda \in \Omega^k(M,C)$ defines a multiplicative form on $\G$ with values in $\E$ as follows: $$ \theta = \partial \circ \lambda, \,\,\, \omega_g = \tar^*\lambda - \Psi_g \circ \sour^*\lambda, \,\,g \in \G. $$ \end{example} \begin{example}\em Let $\Psi$ be a representation of $\G$ on the complex $\partial: C \to E$. It can be considered as a ruth on $\mathcal{E}=C[1]\oplus E$ with $\Omega=0$. In this case, a multiplicative $k$-form with values in $\mathcal{E}$ is a pair $\omega \in \Omega^k(\G, \tar^*C)$, $\theta \in \Omega^k(M, E)$ satisfying: \begin{align}\label{eq:omega_zero} \begin{split} \partial\circ\omega_g & = \tar^*\theta - \Psi_g \circ \sour^*\theta\\ \m^*\omega_{(g_1,g_2)} & = pr_1^* \omega + \Psi_{g_1} \circ pr_2^*\omega. \end{split} \end{align} Note that the case $E=0$ recovers the multiplicative differential forms with values in representations studied in \cite{CSS}. Also, when $E=0$ and $C=M \times \mathbb{R}$ with the trivial representation, one recovers the multiplicative forms studied in \cite{AC, BC}. \end{example} Regarding the last example, we shall point out that equations \eqref{eq:omega_zero} appeared in \cite{Wald} in the context of higher gauge theory, where they were used to define forms on Lie groupoids with values in Lie 2-algebras.
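Let us also sketch why the pair $(\omega, \theta)$ of the first example above is indeed multiplicative. We assume here that the structure operators of the ruth satisfy $\partial \circ \Psi_g = \Psi_g \circ \partial$ and $\Psi_{g_1} \circ \Psi_{g_2} - \Psi_{g_1g_2} = -\,\Omega_{g_1,g_2}\circ \partial$ on $C$ (conventions for the sign of $\Omega$ vary in the literature; we take the one compatible with \eqref{mul_forms2}). Equation \eqref{mul_forms1} follows from $$ \partial \circ \omega_g = \tar^*(\partial\circ\lambda) - \Psi_g \circ \sour^*(\partial\circ\lambda) = \tar^*\theta - \Psi_g\circ\sour^*\theta. $$ For \eqref{mul_forms2}, since $\sour \circ pr_1 = \tar \circ pr_2$ on $\G_{(2)}$, the terms $\Psi_{g_1}\circ pr_1^*\sour^*\lambda$ and $\Psi_{g_1} \circ pr_2^*\tar^*\lambda$ cancel, and one is left with $$ \m^*\omega - pr_1^*\omega - \Psi_{g_1}\circ pr_2^*\omega = (\Psi_{g_1}\circ\Psi_{g_2} - \Psi_{g_1g_2})\circ s^*\lambda = -\,\Omega_{g_1,g_2}\circ s^*\theta. $$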
\begin{example}\em Consider the differentiable cohomologies (see \cite{AC} and references therein) $H^p(\G_{(\bullet)})$ and $H^p(\Omega^k(\G_{(\bullet)}))$ associated to $\G$. An element $\psi \in H^2(\G_{(\bullet)})$ defines a ruth on the complex $0:C\to E$, where $C=E=M\times \mathbb{R}$, the quasi-action is the trivial representation and $$ \Omega_{g_1,g_2} = \psi(g_1,g_2) \in \mathbb{R} \cong \Hom(E_{\sour(g_2)}, C_{\tar(g_1)}). $$ The cup product with $\psi$ defines a map $\ast_{\psi}: H^0(\Omega^k(\G_{(\bullet)})) \to H^2(\Omega^k(\G_{(\bullet)}))$ given by $$ (\ast_\psi \, \theta)_{(g_1,g_2)} = \psi(g_1,g_2) \,s^*\theta. $$ We claim that multiplicative $k$-forms with values in the ruth determined by $\psi$ correspond to elements of $\ker(\ast_{\psi})$. Indeed, $\theta \in \Omega^k(M)$ is a cocycle if and only if it satisfies $\tar^*\theta - \sour^*\theta =0$, and the fact that $[\ast_\psi\,\theta]=0$ in cohomology is equivalent to the existence of $\omega \in \Omega^k(\G)$ such that $\pr_1^*\omega - m^*\omega|_{(g_1,g_2)} + \pr_2^*\omega = \psi(g_1,g_2) \,s^*\theta$. \end{example} \begin{lemma}\label{lemma:rep_vb} Let $\mathcal{E}= C[1]\oplus E$ be a ruth of $\G$ and consider the associated $\VB$-groupoid $\V = \tar^*C \oplus \sour^*E \toto E$. There is a 1-1 correspondence between multiplicative $k$-forms $\theta \in \Omega^k(M, E), \omega \in \Omega^k(\G, \tar^*C)$ with values in $\mathcal{E}$ and multiplicative $k$-forms $\vartheta \in \Omega^k(\G, \tar^*C \oplus \sour^*E)$ with values in $\V$. The correspondence is given by $$ \vartheta(U_1, \dots, U_k) = (\omega(U_1, \dots, U_k), \theta(T\sour(U_1), \dots, T\sour(U_k))). $$ \end{lemma} \begin{proof} It is a straightforward consequence of the formulas \eqref{Str}. More precisely, equation \eqref{mul_forms1} is equivalent to the compatibility of $\vartheta$ with the target map and \eqref{mul_forms2} is equivalent to the compatibility of $\vartheta$ with the multiplication. \end{proof} Let us now recall briefly how any $\VB$-groupoid can be presented as a semi-direct product. Let $\V \toto E$ be a $\VB$-groupoid over $\G \toto M$. A horizontal lift is a splitting $h: \sour^*E \to \V$ of the short exact sequence \eqref{core_ses} such that $h|_M: E \hookrightarrow \V$ is the unit map. Equivalently, $h$ can be seen as a section of the projection $\F(\V) \to \G$ preserving source, target and unit maps. In general, $h: \G \to \F(\V)$ will not preserve multiplication. For a composable pair $(g_1, g_2) \in \G_{(2)}$, there is a curvature term $\Omega_{(g_1,g_2)}: E_{\sour(g_2)} \to C_{\tar(g_1)}$ such that $$ h(g_1 g_2) - h(g_1) \cdot h(g_2) = \Omega_{(g_1,g_2)} \bullet 0_{g_1g_2}. $$ A horizontal lift also induces a quasi-action $\Psi$ of $\G$ on the core complex $\partial: C \to E$: $$ \begin{CD} C_{\sour(g)} @> \Psi: \,c \,\mapsto\, h(g,\partial(c)) \bullet c \bullet 0_{g^{-1}} >> C_{\tar(g)}\\ @V \partial VV @VV \partial V \\ E_{\sour(g)} @>> \Psi: \,e \, \mapsto\, \widetilde{\tar}(h(g,e)) > E_{\tar(g)}. \end{CD} $$ The quasi-action $\Psi$, the curvature $\Omega$ and the core anchor $\partial: C \to E$ define a ruth of $\G$ on $\E=C[1]\oplus E$. Moreover, the map $$ \tar^*C \oplus \sour^*E \to \V, \,\,\, (g, c, e) \mapsto c\bullet 0_g + h(g,e) $$ is an isomorphism of $\VB$-groupoids. This construction establishes an equivalence of categories between 2-term ruths and $\VB$-groupoids \cite{Hoyo-Ort,Gra-Met1}. Let us move to the infinitesimal picture.
Let $\mathcal{E}=C[1]\oplus E$ be a ruth of a Lie algebroid $A \to M$ and denote by $\nabla$ the $A$-connection on the 2-term complex $\partial: C \to E$ and by $K \in \Omega^2(A, \Hom(E,C))$ the curvature term. \begin{definition}\label{def:IMforms} An IM $k$-form on $A$ with values in the representation $\mathcal{E}$ is a triple $(\mathbb{D}, l, \theta)$, where $\mathbb{D}: \Gamma(A) \longrightarrow \Omega^k(M,C)$, $l: A \longrightarrow \wedge^{k-1}T^*M \otimes C$, $\theta \in \Omega^k(M, E)$ are operators satisfying the Leibniz rule \begin{equation}\label{Leibniz_ruth} \mathbb{D}(f\alpha) = f\mathbb{D}(\alpha) + df \wedge l(\alpha), \end{equation} (IM3) and (IM5), together with the additional IM equations \begin{align} \mathbb{D}([\alpha,\beta]) & = L_{\nabla_\alpha} \mathbb{D}(\beta) - L_{\nabla_\beta} \mathbb{D}(\alpha) - K_{\alpha,\beta} \circ \theta\label{Eq:IM1}\\ l([\alpha,\beta]) & = L_{\nabla_\alpha} l(\beta) - i_{\rho(\beta)} \mathbb{D}(\alpha)\label{Eq:IM2}\\ L_{\nabla_\alpha}\theta & = \partial(\mathbb{D}(\alpha))\label{Eq:IM3} \end{align} where $\alpha,\beta \in \Gamma(A)$. \end{definition} There is a close relationship between IM forms with values in a ruth and IM forms with values in $\VB$-algebroids, as we now explain. For a $\VB$-algebroid $\v \to E$, a splitting $\sigma: A \to \F(\v)$ of \eqref{linear_ses} gives rise to a ruth of $A$ on the graded vector bundle $C[1] \oplus E$ (see \cite{Gra-Met2, Arias-Crai2}). The complex is the core complex $\partial: C \to E$, while the $A$-connections on $C$ and $E$ and the curvature $K \in \Omega^2(A, {\rm Hom}(E,C))$ are determined by the corresponding core sections of $\v$ (see Remark \ref{Bphi}): \begin{align*} \B(\nabla_{\alpha} c) & = [\sigma(\alpha), \B c], \,\,\, \nabla_{\alpha}|_E = \Delta_{\rho_{\v}(\sigma(\alpha))}^\top,\\ \B(K(\alpha_1, \alpha_2)) & = \sigma([\alpha_1,\alpha_2]) - [\sigma(\alpha_1), \sigma(\alpha_2)]. \end{align*} \begin{lemma}\label{lemma:IMruth} Let $\sigma: A \to \F(\v)$ be a splitting of \eqref{linear_ses} and consider the corresponding ruth on $\mathcal{E}=C[1]\oplus E$. One has that $(D,l,\theta)$ is an IM $k$-form on $A$ with values in $\v$ if and only if $(D\circ \sigma, l, \theta)$ is an IM $k$-form on $A$ with values in $\mathcal{E}$. \end{lemma} \begin{proof} The equivalence between equations \eqref{Eq:IM2}, \eqref{Eq:IM3} and (IM2), (IM4), respectively, is an immediate consequence of the relationship between the $A$-connection and the $\F(\v)$-connection on $\partial: C \to E$. The equivalence between \eqref{Eq:IM1} and (IM1) also uses the definition of $K$ and properties \eqref{eq:compatibility} of $D$. \end{proof} Given a ruth $(\partial, \Psi, \Omega)$ of $\G \toto M$ on $\E=C[1]\oplus E$, consider the semi-direct product $\V= \tar^*C \oplus \sour^*E \toto E$ and the corresponding $\VB$-algebroid $\v \to E$. We can apply the Lie functor to the natural horizontal lift $h: \G \to \F(\V)$ (since it preserves the unit and source maps) to obtain a splitting $\sigma: A \to \F(\v)$. Define $\mathrm{Lie}(\partial, \Psi, \Omega)$ as the corresponding ruth $(\partial, \nabla, K)$ of $A$ on $\mathcal{E}=C[1]\oplus E$. We refer to \cite[Lemma~4.5]{Bra-Cab-Ort} for explicit formulas for $\nabla$ and $K$. \begin{theorem} Let $\G \toto M$ be a source 1-connected Lie groupoid and $\mathcal{E}=C[1]\oplus E$ be a ruth of $\G$ with structure operators $(\partial, \Psi, \Omega)$.
There is a 1-1 correspondence between multiplicative forms $(\theta, \omega)$ with values in $\E$ and IM $k$-forms $(\mathbb{D},l, \theta)$ on $A$ with values in the ruth $\mathrm{Lie}(\partial, \Psi, \Omega)$. The correspondence is given by \begin{align*} \mathbb{D}(\alpha)(X_1,\dots, X_k) & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \Psi_{(\Fl^\epsilon_{\overrightarrow{\alpha}}(x))^{-1}} (\omega(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1),\dots, T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_k)))\\ l(\alpha) & = i_{\overrightarrow{\alpha}}\omega|_M. \end{align*} \end{theorem} \begin{proof} Let $\V = \tar^*C \oplus \sour^*E \toto E$ be the $\VB$-groupoid corresponding to the ruth on $\E$ and consider its $\VB$-algebroid $\v=\mathrm{Lie}(\V)$. There is a chain of 1-1 correspondences coming from Lemmas \ref{lemma:rep_vb}, \ref{lemma:IMruth} and Theorem \ref{thm:main}: $$ (\omega,\theta) \text{\, mult.} \leftrightarrow \vartheta = (\omega, \sour^*\theta) \in \Omega_{mult}^k(\G, \V) \leftrightarrow (D,l,\theta) \leftrightarrow (\mathbb{D}=D\circ \sigma, l, \theta), $$ where $\sigma: A \to \F(\v)$ is the natural splitting of $\v$. One only has to check the explicit formulas for the correspondence. Since $l(\alpha) = i_{\overrightarrow{\alpha}}\vartheta|_M = (i_{\overrightarrow{\alpha}}\omega|_M, 0)$, the formula for $l$ holds. Also, $ \mathbb{D}(\alpha) = L_{\Delta_{\sigma(\alpha)}} \vartheta|_M. $ Using Proposition \ref{prop:der_eq} and the fact that (see \cite{Bra-Cab-Ort}) $$ \Fl^\epsilon_{\overrightarrow{\sigma(\alpha)}}(g,c,e) = (\Fl^\epsilon_{\overrightarrow{\alpha}}(g), \Psi_{\Fl^\epsilon_{\overrightarrow{\alpha}}(\tar(g))}(c) - \Omega_{(\Fl^\epsilon_{\overrightarrow{\alpha}}(\tar(g)), g)}(e), e), $$ one has that \begin{align*} (L_{\Delta_{\sigma(\alpha)}} \vartheta)(X_1,\dots,X_k) &= \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \Fl_{\overrightarrow{\sigma(\alpha)}}^{-\epsilon}(\vartheta(T\Fl^\epsilon_{\overrightarrow{\alpha}}(X_1), \dots, T\Fl_{\overrightarrow{\alpha}}^\epsilon(X_k)))\\ & = \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} [\Psi_{\Fl_{\overrightarrow{\alpha}}^{\epsilon}(x)^{-1}}(\omega(T\Fl_{\overrightarrow{\alpha}}^\epsilon(X_1), \dots, T\Fl_{\overrightarrow{\alpha}}^\epsilon(X_k)))\\ & \hspace{-50pt} - \underbrace{\Omega_{(\Fl_{\overrightarrow{\alpha}}^{\epsilon}(x)^{-1}, \Fl_{\overrightarrow{\alpha}}^{\epsilon}(x))}(\theta(X_1,\dots, X_k))}_{f(\epsilon)} ], \end{align*} where we have used that $\Fl^{-\epsilon}_{\overrightarrow{\alpha}}(\tar(\Fl_{\overrightarrow{\alpha}}^\epsilon(x))) = \Fl_{\overrightarrow{\alpha}}^{\epsilon}(x)^{-1}$. The result now follows from the fact that $\epsilon=0$ is a zero of order $\geq 2$ of $f(\epsilon)$. \end{proof} \section{$\VB$-groupoid cohomology of differential forms} In this section, we will introduce a cochain complex for which the cocycles in degree 1 are exactly the multiplicative forms with values in $\VB$-groupoids. Also, its cohomology will be a Morita invariant of the groupoid. This indicates that one should consider this complex as the appropriate setting to study connections on vector bundles over stacks. Let $B_\bullet \G$ be the nerve of the groupoid $\G \toto M$.
It is the simplicial manifold whose space of $p$-simplices is $B_p\G = \{(g_1, \dots, g_p) \in \G^p \,\, | \,\, \sour(g_i) = \tar(g_{i+1})\}$ and whose face maps $\partial_i : B_p\G \to B_{p-1}\G$, $i=0,\dots, p$, are defined by $$ \partial_i(g_1, \dots, g_p) = \begin{cases} (g_2, \dots, g_p), & \text{ if } i=0,\\ (g_1, \dots, g_{i-1}, g_{i}g_{i+1}, g_{i+2}, \dots, g_{p}), & \text{ if } 1 \leq i \leq p-1,\\ (g_1, \dots, g_{p-1}), & \text{ if } i=p. \end{cases} $$ The differentiable cochain complex of $\G$ is $C^\bullet(\G) = C^{\infty}(B_{\bullet}\G)$ with the differential $\delta: C^{p-1}(\G) \to C^{p}(\G)$ given by $$ \delta = \sum_{i=0}^{p}(-1)^i \partial^*_i. $$ For a $\VB$-groupoid $\V \toto E$ over $\G$, we define the complex $\C^{p,q}(\V)$ of \textit{differential $\VB$-groupoid forms} as follows: \begin{equation} \begin{aligned} \C^{0,q}(\V) & = \Omega^q(M, C)\\ \C^{p,q}(\V) & = \{\vartheta \in \Omega^q(B_p \G, \pr_1^*\V) \,\,| \,\, \widetilde{\sour} \circ \vartheta = \partial_0^*\theta \,\text{ for some }\, \theta \in \Omega^q(B_{p-1}\G, t^*E)\}, \,\,\, p \geq 1 \end{aligned} \end{equation} where $\pr_1: (g_1, \dots, g_p) \mapsto g_1$ is the projection on the first arrow and $t:(g_1,\dots, g_{p-1}) \mapsto \tar(g_1)$. The differential $\delta:\C^{p,q}(\V) \to \C^{p+1,q}(\V)$ is defined as \begin{equation}\label{differential} \begin{aligned} \delta \vartheta|_g & = - (\tar^*\vartheta) \bullet 0_g - 0_g \bullet (\sour^*\vartheta)^{-1}, \,\, p =0\\ \delta \vartheta & = -\partial_1^*\vartheta \bullet (\partial_0^*\vartheta)^{-1} + \sum_{i=2}^{p+1} (-1)^{i}\partial_i^*\vartheta , \,\, p \geq 1. \end{aligned} \end{equation} It is straightforward to check that $\vartheta \in \Omega^q(\G, \V)$ is multiplicative if and only if $\vartheta \in \C^{1,q}(\V)$ and $\delta \vartheta = 0$. We shall postpone the proof that $(\C^{\bullet, q}(\V), \delta)$ is a differential complex to \S \ref{proofs}. \begin{example}\label{ex:cech}\em For $\V = \G \times \R$ with the trivial representation, one has that $$ \C^{p,q}(\G \times \R) = \Omega^q(B_p\G), \,\,\, \delta = \sum_{i=0}^p (-1)^i \partial_i^*. $$ So, $(\C^{p,q}(\G \times \R), \delta)$ recovers the Bott-Shulman complex of $q$-differential forms on Lie groupoids. Its cohomology is also known as the \textit{\v{C}ech cohomology of $\G$ with values in the sheaf of $q$-differential forms $\Omega^q$} \cite{Behrend}. \end{example} \begin{example}\label{ex:2-term_rep}\em Let $\V = \tar^*C \oplus \sour^*E \toto E$ be the semi-direct product of $\G$ with a ruth $(\partial,\Psi, \Omega)$ and consider $\vartheta \in \Omega^q(B_p\G, \pr_1^*\V)$. It is straightforward to check that $\vartheta \in \C^{p,q}(\V)$ if and only if there exist $\omega \in \Omega^q(B_p\G, \tar^*C)$ and $\theta \in \Omega^q(B_{p-1}\G, t^*E)$ such that $\vartheta = (\omega, \partial_0^*\theta)$. In this case, using \eqref{Str}, one can check that $$ \delta \vartheta|_{(g_1, \dots, g_{p+1})} = -(\Omega_{(g_1,g_2)} \circ \left((\partial_0 \circ \partial_0)^*\theta\right) - \delta_C(\omega), \,\, \partial_0^*(\delta_E(\theta) + \partial \circ \omega)\,), $$ where $$ \delta_C(\omega)|_{(g_1,\dots, g_{p+1})} = \Psi_{g_1}\circ (\partial_0^*\omega) + \sum_{i=1}^{p+1}(-1)^i \partial_i^* \omega $$ and similarly for $\delta_E$. For $q=0$, this is exactly the complex calculating the cohomology of $\G$ with values in the ruth $\E=C[1]\oplus E$, $H^\bullet(\G, \E)$.
It is also important to note that, for $E=0$, the complex reduces to $(\Omega^q(B_\bullet \G, t^*C), \delta_C)$; this complex was introduced in \cite{Cab-Drum} in the context of van Est isomorphisms. \end{example} \begin{example}\em For $q=0$, one has that $\C^{p,0}(\V)= C_{\VB}^p(\V)$, the $\VB$-groupoid complex of $\V$ introduced in \cite{Gra-Met1}. This complex gives an intrinsic model for the cohomology of $\G$ with values in a representation up to homotopy. \end{example} We shall now investigate the invariance of the cohomology $H^\bullet(\C^{\bullet,q}(\V))$ under Morita equivalence. A Lie groupoid morphism $\phi$ from $\G \toto M$ to $\G' \toto M'$ is a Morita map if it is fully faithful (i.e., the source and target maps define a good fibered product of manifolds $\G = \G' \times_{M'\times M'} (M \times M)$) and essentially surjective (the map $(g': y \leftarrow \phi(x), x) \mapsto \tar(g')$ is a surjective submersion from $\G' \times_{M'} M$ to $M'$). We refer to \cite{Hoyo} for details. \begin{theorem}\label{thm:morita} Let $\V \toto E$ be a $\VB$-groupoid over $\G\toto M$ and $\phi: \G' \to \G$ a Morita map. Then the pull-back of differential forms $\phi^*: \C^{p,q}(\V) \to \C^{p,q}(\phi^*\V)$, $$ (\phi^*\vartheta)(\underline{U}_1, \dots, \underline{U}_q) = \vartheta(T\phi(\underline{U}_1), \dots, T\phi(\underline{U}_q)), \,\,\,\,\,\underline{U}_i \in T(B_p\G'), $$ is a quasi-isomorphism. \end{theorem} By applying Theorem \ref{thm:morita} to Examples \ref{ex:cech} and \ref{ex:2-term_rep}, one obtains the following corollaries: \begin{corollary}\cite{Behrend} The \v{C}ech cohomology of $\G$ with values in the sheaf of $q$-differential forms $\Omega^q$ is a Morita invariant of $\G$. \end{corollary} \begin{corollary}\cite{Hoyo-Ort} The cohomology of $\G$ with values in a 2-term representation up to homotopy is a Morita invariant. \end{corollary} We shall postpone the proof of Theorem \ref{thm:morita} to \S 5.4. \section{The proofs}\label{proofs} \subsection{Strategies} Let us begin by explaining the general philosophy which will guide us through the proofs of Theorems \ref{thm:main} and \ref{thm:morita}. The same strategy used in \cite{Bur-Drum} to study multiplicative tensor fields on a Lie groupoid will give us the proper viewpoint to tackle the correspondence between multiplicative differential forms with values in $\VB$-groupoids and IM-forms with values in $\VB$-algebroids. The presence of coefficients encoded in the $\VB$-groupoid will force us to adapt (in a non-trivial way) most of the techniques used there, as one can see by comparing this section with \cite[Section~4]{Bur-Drum}. The key point of the strategy is to treat a differential form $\vartheta \in \Omega^k(\G, \V)$ as a \textit{componentwise linear function} (see the definition in \S \ref{embedding} below) on \begin{equation}\label{big_groupoid} \bG= \underbrace{T\G \times_\G \dots \times_\G T\G}_{k\text{-times}}\times_\G \V^*.
\end{equation} The space $\bG$ is a Lie groupoid and our aim is to establish a dictionary between the multiplicativity properties of $\vartheta$ and those of the corresponding function on $\bG$, as shown below: \begin{tabular}[t]{|c|c|} \hline \textbf{Differential forms} & \textbf{Componentwise linear}\\ \textbf{with values in $\V$}& \textbf{functions on $\bG$}\\ \hline Multiplicativity & Cocycle equation on\\ & the differentiable cochain complex\\ \hline Operations $L_\Delta, i_{\overrightarrow{\alpha}}$, & Lie derivative along special\\ restriction to the units & right-invariant vector fields\\ \hline Infinitesimal components $(D,l,\theta)$ & Infinitesimal cocycle evaluated \\ & on special sections of $\bA=\mathrm{Lie}(\bG)$\\ \hline IM-equations & Infinitesimal cocycle equation on the \\ & Chevalley-Eilenberg complex \\ \hline Complex of differential & Subcomplex of the differentiable\\ $\VB$-groupoid forms & cochain complex\\ \hline \end{tabular} \vspace{10pt} Once this is settled, Theorem \ref{thm:main} will be a simple consequence of the 1-1 correspondence between cocycles on the differentiable cohomology of a (source 1-connected) Lie groupoid and cocycles on the Chevalley-Eilenberg complex of its Lie algebroid. Also, Theorem \ref{thm:morita} will follow from arguments similar to those used in \cite{Hoyo-Ort} to prove the Morita invariance of the $\VB$-groupoid cohomology of $\V$. \subsection{General embedding trick}\label{embedding} In this subsection we recall how tensor fields can be embedded as functions on fiber products. We also extend the formulas of \cite[Section~4.1]{Bur-Drum}, relating Lie derivatives of tensor fields and Lie derivatives of their corresponding componentwise linear functions along lifted vector fields, to a more general setting. These formulas will serve as the basis for most of the calculations done in the proof of Theorem \ref{thm:main}. Given vector bundles $E_i \to M$, denote by $\times_M E_i$ the $k$-fold fiber product $E_1 \times_M \dots \times_M E_k$. Consider the embedding \begin{equation}\label{F_def} \begin{array}{ccc} \c: \,\,\,\Gamma(E_1^* \otimes \dots \otimes E_k^*) & \hookrightarrow & \hspace{-30pt}C^\infty(\times_M E_i)\vspace{10pt}\\ \tau = \mu_1 \otimes \dots \otimes \mu_k & \mapsto & \c_\tau = (\ell_{\mu_1} \circ \pr^1) \cdots (\ell_{\mu_k} \circ \pr^k), \end{array} \end{equation} where $\pr^i: E_1 \times_M \dots \times_M E_k \to E_i$ is the natural projection. We shall refer to the functions in the image as \textit{componentwise linear functions}. When $E_1 = \dots = E_k = E$, we see $\Gamma(\Lambda^k E^*)$ inside $\Gamma(\otimes^k E^*)$ as usual: $$ \mu_1 \wedge \dots \wedge \mu_k \mapsto \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma) \mu_{\sigma(1)} \otimes \dots \otimes \mu_{\sigma(k)}. $$ \begin{remark}{\em The space $C^\infty(\times_M E_i)$ has a $C^\infty(M)$-module structure defined by multiplication of functions (we view $C^\infty(M)$ inside it by pulling back functions along the projection $\times_M E_i \to M$). The map $\c$ is a morphism of $C^\infty(M)$-modules. } \end{remark} There is a notion of tensor product for derivations that we now describe. Let $\Delta_i: \Gamma(E_i^*) \to \Gamma(E_i^*)$, $i=1, \dots, k$, be derivations having the same symbol $\sharp(\Delta_i) = X \in \frakx(M)$.
The \textit{tensor product derivation} is defined as \begin{align*} \otimes_1^k \Delta_i = \Delta_1\otimes \dots \otimes \Delta_k&: \Gamma(E_1^* \otimes \dots \otimes E_k^*) \to \Gamma(E_1^* \otimes \dots \otimes E_k^*)\\ (\otimes_1^k\Delta_i) (\mu_1\otimes \dots \otimes \mu_k) & = \sum_{i=1}^k \mu_1 \otimes \dots \otimes \Delta_i(\mu_i) \otimes \dots \otimes \mu_k. \end{align*} \begin{example}\em Let $\tau \in \Omega^k(M, E)$ and $\Delta: \Gamma(E) \to \Gamma(E)$ be a derivation with symbol equal to $X \in \frakx(M)$. One has that \begin{equation} L_\Delta(\tau) = (\underbrace{\Lie_X \otimes \dots \otimes \Lie_X}_{k\text{-times}} \otimes \Delta) (\tau), \end{equation} where $L_\Delta$ is the operator \eqref{der_Lie}. \end{example} \begin{example}\em\label{tensor_example} Let $\tau \in \Omega^k(M, \Lambda^l A)$, for a Lie algebroid $(A, [\cdot, \cdot], \rho)$. In this case, $$ (\underbrace{\Lie_{\rho(\alpha)} \otimes \dots \otimes \Lie_{\rho(\alpha)}}_{k\text{-times}} \otimes \underbrace{[\alpha, \cdot] \otimes \dots \otimes [\alpha, \cdot]}_{l\text{-times}})(\tau)= \alpha \cdot \tau, $$ where $\cdot$ is the natural action of $\Gamma(A)$ on $\Omega^k(M, \Lambda^lA)$ defined as $$ \alpha \cdot (\xi \otimes \frakx) = (\Lie_{\rho(\alpha)} \xi) \otimes \frakx + \xi \otimes [\alpha, \frakx], $$ for the Schouten-Nijenhuis bracket $[\cdot, \cdot]$ on $\Lambda^\bullet A$. \end{example} Our next result concerns some equivariance properties of the map $\c$. It should be seen as a generalization of \cite[Proposition~4.1]{Bur-Drum} (when applied to Example \ref{tensor_example}). \begin{proposition} Let $\Delta_i: \Gamma(E_i^*) \to \Gamma(E_i^*)$ be derivations having the same symbol, $W_i \in \frakx(E_i)$ the corresponding linear vector fields and $\varphi_j \in \Gamma(E_j)$. One has that \begin{align} \label{der_inv} \Lie_{(W_1, \dots, W_k)} \c_\tau & = \c_{\otimes_1^k\Delta_i \tau} \\ \label{contraction_inv} \Lie_{(0,\dots, \varphi_j^\uparrow, \dots, 0)} \c_\tau & = \c_{\tau(\dots, \varphi_j, \dots)} \circ \gamma_{(j)}, \end{align} where $\gamma_{(j)}: E_1 \times_M \dots \times_M E_k \to E_1 \times_M \dots \times_M \widehat{E_j} \times_M \dots \times_M E_k$ is the projection forgetting the $j$-th component. \end{proposition} \begin{proof} The result follows from \eqref{F_def}, the Leibniz rule for Lie derivatives and $$ \Lie_{W_j} \ell_{\mu_j} = \ell_{\Delta_j(\mu_j)}, \,\,\, \Lie_{\varphi_j^\uparrow} \ell_{\mu_j} = \<\mu_j, \varphi_j\> \circ p_j, $$ where $\mu_j \in \Gamma(E_j^*)$ and $p_j: E_j \to M$ is the projection. \end{proof} \begin{corollary}\label{cor:action} Let $\tau \in \Omega^k(M, E)$ and $\Delta: \Gamma(E) \to \Gamma(E)$ be a derivation with symbol $X \in \frakx(M)$. One has that \begin{align*} \Lie_{(X^{T,k}, W)} \c_\tau & = \c_{L_\Delta \tau}, \\ \Lie_{(Y^{\uparrow, k}_{(j)},0)} \c_\tau & = (-1)^{j-1} \c_{i_Y \tau} \circ \gamma_{(j)}, \,\, j=1, \dots, k,\\ \Lie_{(0, \varphi^\uparrow)} \c_\tau & = \c_{\<\varphi, \tau\>} \circ \gamma_{(k+1)}, \end{align*} where $(X^{T,k}, W) \in \frakx(\times_M^k TM \times_M E^*)$ and $Y^{\uparrow,k}_{(j)} \in \frakx(\times_M^k TM)$ are the vector fields \begin{align*} (X^{T, k}, W)(Z_1, \dots, Z_k,e) & = (X^T(Z_1), \dots, X^T(Z_k), W(e)),\\ Y^{\uparrow,k}_{(j)}(Z_1, \dots, Z_k) & = (0_{Z_1}, \dots,0_{Z_{j-1}}, Y^\uparrow(Z_j), 0_{Z_{j+1}}, \dots, 0_{Z_k}), \end{align*} $(\cdot)^T, (\cdot)^\uparrow$ denote the tangent and vertical lifts, respectively, and $W$ is the linear vector field associated to $\Delta$.
\end{corollary} \subsection{Proof of Theorem \ref{thm:main}}\label{proof1} \subsubsection{From global to infinitesimal} Given a $\VB$-groupoid $\V \toto E$ over $\G \toto M$, there is a natural groupoid $\bG \toto \bM$, where $$ \bG= \times_\G^k T\G \times_\G \V^*, \,\,\, \bM = \times_M^k TM \times_M C^* $$ and the groupoid structure maps are defined as the fiber product of each of the corresponding maps on $T\G$ and $\V^*$. \footnote{Note that we are using the fact that the projection maps $(T\G \toto TM) \to (\G \toto M)$ and $(\V^* \toto C^*) \to (\G \toto M)$ fulfill the conditions for the fiber product to have a Lie groupoid structure. We refer to the appendix of \cite{Bur-Cab-Hoy} for a detailed discussion regarding fiber products of Lie groupoids.} \begin{proposition} $\vartheta \in \Omega^k(\G, \V)$ is multiplicative if and only if $\c_\vartheta$ is a cocycle on the differentiable cohomology of $\bG$. \end{proposition} \begin{proof} It is straightforward to see that $\vartheta$ multiplicative implies that $\c_{\vartheta}$ is a cocycle. Conversely, let us assume that $\c_\vartheta$ is a cocycle. First, note that $\c_\vartheta|_\bM = 0$ implies that $ \vartheta|_M \in \Omega^k(M, E) $ under the decomposition $\V|_M = E\oplus C$. So, define $\theta \in \Omega^k(M, E)$ as $\theta:=\vartheta|_M$. We claim that $\widetilde{\sour} \circ \vartheta = \sour^*\theta$ and $\widetilde{\tar}\circ \vartheta = \tar^*\theta$. Indeed, for $\varphi \in E_{\sour(g)}^*$ and $U_1, \dots, U_k \in T_{g} \G$, it follows from the multiplicativity of both $\c_\vartheta$ and the pairing $\<\cdot, \cdot\>$ that \begin{align*} \< \widetilde{\sour}(\vartheta(U_1, \dots, U_k)), \varphi\> & = \< \widetilde{\sour}(\vartheta(U_1, \dots, U_k)), \varphi - \partial^*(\varphi)\>\\ & = \<\vartheta(U_1, \dots, U_k)\bullet \widetilde{\sour}(\vartheta(U_1, \dots, U_k)), 0_{g} \bullet (\varphi - \partial^*(\varphi)) \>\\ & = \c_{\vartheta}(U_1, \dots, U_k, 0_{g} \bullet (\varphi - \partial^*(\varphi)))\\ & = \c_\vartheta(T\sour(U_1), \dots, T\sour(U_k), \varphi - \partial^*(\varphi))\\ & = \<\theta(T\sour(U_1), \dots, T\sour(U_k)), \varphi\>. \end{align*} A similar argument proves that $\widetilde{\tar}\circ \vartheta = \tar^*\theta$. Finally, to prove compatibility with the multiplication, it follows directly from the cocycle equation for $\c_\vartheta$ that \begin{align*} \<\vartheta(U_1 \bullet V_1, \dots, U_k \bullet V_k), \psi_1 \bullet \psi_2\> = \< \vartheta(U_1, \dots, U_k) \bullet \vartheta(V_1, \dots, V_k), \psi_1 \bullet \psi_2\>, \end{align*} for any $(V_1, \dots, V_k) \in \times^k T_{g_2}\G$ composable with $(U_1, \dots, U_k)$ and $\psi_1 \in \V^*_{g_1}, \psi_2 \in \V^*_{g_2}$ a composable pair. The result now follows from the fact that any $\psi \in \V^*_{g_1g_2}$ can be written as a product $\psi = \psi_1 \bullet \psi_2$. \end{proof} The Lie algebroid of $\bG$ is $\bA \to \bM$, where $$ \bA = \times_A^k TA \times_A \v^*_A, $$ with Lie algebroid structure determined componentwise by $TA \to TM$ and $\v_A^* \to C^*$. Note that we are using identifications \eqref{dual_identification} and $TA \cong \mathrm{Lie}(T\G)$ (see \cite{Mckz-Xu} for a discussion of tangent groupoids and tangent algebroids).
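To fix ideas, consider the simplest instance $k=1$, $\V = T\G$: then $\bG = T\G \times_\G T^*\G$ and a vector valued $1$-form $\vartheta \in \Omega^1(\G, T\G)$ corresponds to the componentwise linear function $$ \c_\vartheta(U, \psi) = \<\psi, \vartheta(U)\>, \,\,\, (U, \psi) \in T_g\G \times T^*_g\G, $$ which is, up to conventions, the viewpoint adopted in \cite{Bur-Drum} for multiplicative tensor fields.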
It follows from the general theory of double vector bundles that $\Gamma(\bM, \bA)$ is generated as a $C^\infty(\bM)$-module (see \cite[Proposition~2.2]{Mac-doubles}) by $\chi_\eta, \B \beta_{(j)}, \B \varphi: \bM \to \bA$ defined as follows: \begin{align}\label{gen_set} \begin{split} \chi_\eta(X_1,\dots, X_k, \mu) & = (T\alpha(X_1), \dots, T\alpha(X_k), \eta^\top(\mu)),\\ \B \beta_{(j)}(X_1, \dots, X_k, \mu) & = (T0(X_1), \dots, \B\beta(X_j), \dots, T0(X_k), 0_\mu),\\ \B\varphi(X_1, \dots, X_k,\mu) & = (T0(X_1), \dots, T0(X_k), \B\varphi(\mu)), \end{split} \end{align} where $\eta \in \Gamma_{lin}(E,\v)$, $\alpha = \pr(\eta), \,\beta \in \Gamma(A)$ and $\varphi \in \Gamma(E^*)$. Using \eqref{eq:dual_vb}, one can see that the Lie algebroid structure of $\bA \to \bM$ is given in terms of the generators as follows (see Corollary \ref{cor:action} for notation): \begin{equation}\label{big_algebroid} \begin{aligned} [\chi_{\eta_1}, \chi_{\eta_2}] = \chi_{[\eta_1, \eta_2]}, & \,\,\,\,\,\, [\chi_\eta, \B \beta_{(j)}] = \B [\alpha, \beta]_{(j)},\\ [\chi_\eta, \B \varphi] = \B(\nabla_\eta^\top \varphi), & \,\,\,\,\,\,\,[\B(\ast), \B(\ast')] = 0,\\ \rho_\bA(\chi_\eta) = (\rho(\alpha)^{T,k}, W_{\nabla_\eta}), & \,\,\, \rho_\bA(\B\beta_{(j)}) = (\rho(\beta)^{\uparrow, k}_{(j)},0), \,\,\, \rho_\bA(\B\varphi) = (0, \partial^*(\varphi)^\uparrow). \end{aligned} \end{equation} \begin{proposition}\label{prop:inf_comp} Let $\vartheta \in \Omega^k(\G, \V)$ be a multiplicative $k$-form. There exist $D: \Gamma_{lin}(E, \v) \to \Omega^k(M, C)$, $l: A \to \wedge^{k-1}T^*M \otimes C$ and $\theta \in \Omega^k(M,E)$ such that $$ D(\mathcal{B}\Phi) = -\Phi \circ \theta, \,\,\,\, D(f\eta) = fD(\eta) + df \wedge l(\pr(\eta)), $$ for $\Phi \in \mathrm{Hom}(E,C)$, $f \in C^\infty(M)$, satisfying \begin{align*} D(\eta) & = L_{\Delta_\eta}(\vartheta)|_M\\ l(\alpha) & = i_{\overrightarrow{\alpha}} \vartheta|_M\\ \theta & = \vartheta|_M. \end{align*} \end{proposition} \begin{proof} Consider the multiplicative function $\c_\vartheta \in C^\infty(\bG)$ and the corresponding Lie algebroid cocycle $\mathrm{Lie}(\c_\vartheta) \in \Gamma(\bA^*)$. By evaluating $\mathrm{Lie}(\c_\vartheta)$ on $\chi \in \Gamma(\bA)$, \begin{equation}\label{eq:inf_global} \<\mathrm{Lie}(\c_\vartheta), \chi\> = \Lie_{\overrightarrow{\chi}}\c_\vartheta|_\bM, \end{equation} with $\chi$ varying in the set of generators \eqref{gen_set}, one obtains maps from the spaces of parameters $\Gamma_{lin}(E,\v)$, $\Gamma(A)$ and $\Gamma(E^*)$ to $C^\infty(\bM)$. We will show that $D$, $l$ and $\theta$ appear as refinements of these maps. First note that it follows from \cite[Eq.~(45), \S 9.7]{Mckz2}, Propositions \ref{right_core} and \ref{right_adjoint} that \begin{align*} \overrightarrow{\chi_\eta} & = (\overrightarrow{\alpha}^{T,k}, \overrightarrow{\eta}^\top), \,\,\,\, \overrightarrow{\B\beta_{(j)}} = (\overrightarrow{\beta}^{\uparrow,k}_{(j)},0), \,\,\,\,\,\, \overrightarrow{\B \varphi} = (0, \varphi_R^\uparrow).
\end{align*} From Corollary \ref{cor:action}, \begin{align*} \Lie_{\overrightarrow{\chi_\eta}}\c_\vartheta = \c_{L_{\Delta_\eta}\vartheta}, \,\,\, \Lie_{\overrightarrow{\B\beta_{(j)}}}\c_\vartheta = (-1)^{j-1} \c_{i_{\beta} \vartheta} \circ \pi_{(j)}, \,\,\,\, \Lie_{\overrightarrow{\B \varphi}} \c_\vartheta = \c_{\<\varphi_R, \vartheta\>}\circ \pi_{(k+1)}, \end{align*} where $\pi_{(j)}: \times_\G^k T\G \times_\G \V^* \to \times_\G^{k-1} T\G \times_\G \V^*$ forgets the $j$-th component, $j=1,\dots, k$, and $\pi_{(k+1)}: \times_\G^k T\G \times_\G \V^* \to \times_\G^k T\G$ forgets the component on $\V^*$. So, by restricting to the units $\bM \subset \bG$ and using \eqref{eq:inf_global}, one sees that there exist $D: \Gamma_{lin}(E, \v) \to \Omega^k(M, C)$, $l_j: \Gamma(A) \to \Omega^{k-1}(M,C), \, j=1,\dots, k,$ and $r:\Gamma(E^*) \to \Omega^k(M)$ satisfying \begin{align}\label{eq:liner_inf} \begin{split} \c_{D(\eta)} & =\<\mathrm{Lie}(\c_\vartheta), \chi_\eta\> = \c_{L_{\Delta_\eta} \vartheta} |_\bM \\ \c_{l_j(\beta)} \circ \gamma_{(j)} & = \<\mathrm{Lie}(\c_{\vartheta}), \B \beta_{(j)}\> = (-1)^{j-1}\c_{i_{\beta} \vartheta} \circ \pi_{(j)}|_\bM\\ \c_{r(\varphi)} \circ \gamma_{(k+1)} & = \<\mathrm{Lie}(\c_{\vartheta}), \B \varphi\> = \c_{\<\varphi_R, \vartheta\>} \circ \pi_{(k+1)}|_\bM. \end{split} \end{align} Since \eqref{F_def} is injective, it follows that $ D(\eta)=L_{\Delta_\eta}(\vartheta)|_M, \,\,\, l_j(\beta) = (-1)^{j-1}i_\beta \vartheta |_M, \,\,\, r(\varphi) = \<\varphi_R, \vartheta\>|_M$. Note that $l_j$ and $r$ are $C^\infty(M)$-linear. Also, if we define $l=l_1$, then $l_j=(-1)^{j-1}l$.\footnote{The skew-symmetry of $\vartheta$ gives the relationship between the $l_j$'s. In general, for a tensor $\tau \in \Gamma(\otimes^k T^*\G \otimes \V)$, one gets $k$ unrelated maps $l_j: A \to \otimes^{k-1} T^*\G \otimes \V$ by contracting $\tau$ with $\overrightarrow{\beta}$ on the $j$-th factor.} Moreover, \begin{align*} \<\varphi_R(g),\vartheta(U_1, \dots, U_k)\> & = \<\varphi(\tar(g))\bullet 0_g, \vartheta(T\tar(U_1)\bullet U_1, \dots, T\tar(U_k) \bullet U_k)\>\\ & = \< \varphi(\tar(g)), \,\theta(T\tar(U_1),\dots, T\tar(U_k))\>. \end{align*} In other words, $\<\varphi_R,\vartheta\> = \tar^*\<\varphi, \theta\>$. So, if one sees $r$ as an element of $\Omega^k(M, E)$, one has that $r=\theta$. The properties of $D$ follow from the following facts: $$ L_{\Delta_{\B \Phi}}(\vartheta) = - \Phi \circ \vartheta, \,\, L_{\Delta_{f\eta}} \vartheta = (\tar^*f) L_{\Delta_{\eta}}(\vartheta) + (\tar^*df)\wedge i_{\overrightarrow{\alpha}}\vartheta. $$ The minus sign on $L_{\Delta_{\B \Phi}}$ comes from the relationship between $\Delta_{\B \Phi}$ and $\Delta_{\B \Phi}^\top$ (see Remark \ref{der_linear}).
\end{proof} \subsubsection{From infinitesimal to global} Given a triple $(D, l, \theta)$ satisfying the compatibilities \eqref{eq:compatibility}, we construct a function $ \Upsilon: \times_A^k TA \times_A \v_A^* \to \R $ as follows: for $$ \xy {\ar@{|->}_{}(0,25)*++{\chi_j \in TA};(22,25)*++{a \in A}}\\ {\ar@{|->}_{}(0,25)*++{};(0,15)*++{}}\\ {\ar@{|->}_{}(0,15)*++{X_j \in TM};(22,15)*++{x \in M}}\\ {\ar@{|->}_{}(22,25)*++{};(22,15)*++{}}\\ \endxy \,\,\,\,\,\,\,\text{ and } \,\,\,\,\,\,\,\, \xy {\ar@{|->}_{}(0,25)*++{\phi \in \v_A^*};(20,25)*++{a \in A}}\\ {\ar@{|->}_{}(0,25)*++{};(0,15)*++{}}\\ {\ar@{|->}_{}(0,15)*++{\mu \in C^*};(20,15)*++{x \in M,}}\\ {\ar@{|->}_{}(20,25)*++{};(20,15)*++{}}\\ \endxy \vspace{25pt} $$ $j=1, \dots, k$, choose $\zeta \in \Gamma_{lin}(E,\v)$ such that $\zeta^\top(\mu)$ projects over $a$ under the projection $\v_A^* \to A$ and let $\xi \in E_x^*$ and $a_j \in A_x$ be the elements determined by $$ \phi = \zeta^\top(\mu) +_{C^*} (0_\mu +_A \overline{\xi}), \,\,\,\,\chi_j = T\alpha(X_j) +_{TM} (T0(X_j) +_A \overline{a_j}), \,\,\, \alpha = \pr(\zeta). $$ Define \begin{align}\label{Flinear_def} \begin{split} \Upsilon(\chi_1, \dots, \chi_k, \phi) & = \<\mu, D(\zeta)(X_1, \dots, X_k)\>\\ & \hspace{-50pt}+ \<\mu,\sum_{j=1}^k (-1)^{j-1} l(a_j)(X_1, \dots, \widehat{X_{j}}, \dots, X_k)\> + \<\xi, \theta(X_1,\dots, X_k)\>. \end{split} \end{align} Note that $\Upsilon$ is skew-symmetric with respect to the $TA$ components. \begin{proposition} $\Upsilon$ is a well-defined componentwise linear function (i.e. there exists $\tau \in \Gamma(\wedge^k T^*A \otimes \v)$ such that $\Upsilon = \c_\tau$). Moreover, $\Upsilon$ is fiberwise linear with respect to the vector bundle structure $\bA \to \bM$. As an element of $\Gamma(\bA^*)$, one has that \begin{equation} \begin{aligned}\label{linear_comp} \<\Upsilon, \chi_\eta\> & = \c_{D(\eta)} , \,\, \<\Upsilon, \B\beta_{(j)}\> = (-1)^{j-1}\c_{l(\beta)}\circ \gamma_{(j)} \\ \<\Upsilon, \B\varphi\> & = \c_{\<\varphi, \theta\>} \circ \gamma_{(k+1)} \end{aligned} \end{equation} as functions on $\bM$. \end{proposition} \begin{proof} Let $\{\alpha_r\}_{r=1, \dots, \mathrm{rank}(A)}$, $\{\varphi_s\}_{s=1, \dots, \mathrm{rank}(E)}$ be local frames of $A$ and $E^*$, respectively. If $\eta_r \in \Gamma_{lin}(E, \v)$ are local sections such that $\pr(\eta_r) = \alpha_r$, then $\{\eta_r^\top, \B \varphi_s\}$ is a local frame for $\v_A^* \to C^*$ (see \cite[Prop.~2.2]{Mac-doubles}). Write $$ \chi_j = t^r\cdot T\alpha_r(X_j) +_{TM} h_{j}^{r}\cdot \B \alpha_r(X_j), \,\,\, \phi = t^r\cdot \eta^\top_r(\mu) +_{C^*} g^s \cdot \B\varphi_s, $$ for $t^r, h_j^r, g^s \in \R$ and $a = t^r \alpha_r(x)$ (here $\cdot$ stands for the multiplication by scalars on $TA \to TM$ and $\v_A^* \to C^*$, respectively). Now, for any linear section $\zeta \in \Gamma_{lin}(E, \v)$, $$ p_*: \zeta^\top(\mu) \mapsto a \Leftrightarrow \zeta^\top = (f^r \circ p_*) \cdot \eta_r^\top, \,\,\,\text{ for } f^r \in C_{loc}^\infty(M) \text{ such that } f^r(x) = t^r, $$ where $p_*: \v_A^* \to A$ is the vector bundle projection. So, $\xi = g^s \varphi_s(x)$ and, using that $$ T(f^r \alpha_r)(X_j) = f^r(x) \cdot T\alpha_r(X_j) +_{TM} (\Lie_{X_j}f^r)(x)\cdot \B\alpha_r(X_j), $$ one gets that $$ a_j = (h_j^r - (\Lie_{X_j}f^r)(x)) \alpha_r(x).
$$ Using the Leibniz rule for $D$, one finally obtains the following local expression for $\Upsilon$: \begin{align*} \Upsilon(\chi_1, \dots, \chi_k, \phi)& = \<\mu, t^rD(\eta_r)(X_1,\dots, X_k)\\ & \hspace{-50pt}+ \sum_{j=1}^k (-1)^{j-1} h^r_j \, l(\alpha_r)(X_1, \dots, \hat{X_j}, \dots, X_k)\> + g^s \<\varphi_s, \theta(X_1,\dots, X_k)\>. \end{align*} The linear dependence of this local expression on $t^r, h_j^r, g^s$ shows that $\Upsilon$ is fiberwise linear on $\bA \to \bM$. At last, the equalities \eqref{linear_comp} follow from the definition of $\Upsilon$ and \begin{equation*} (\chi_1, \dots, \chi_k, \phi) = \begin{cases} \chi_\eta(X_1, \dots, X_k, \mu) \Leftrightarrow a_i=0, \, \xi=0, \text{ for } \zeta =\eta\\ \B\beta_{(j)}(X_1, \dots, X_k, \mu) \Leftrightarrow a_i = \delta_i^j \beta, \,\xi=0, \text{ for } \zeta =0\\ \B \varphi(X_1, \dots, X_k,\mu) \Leftrightarrow a_i=0,\, \xi = \varphi(x), \text{ for } \zeta = 0, \end{cases} \end{equation*} where $\delta^j_i$ is the Kronecker delta. \end{proof} \begin{proposition}\label{prop:IM_eq} The section $\Upsilon \in \Gamma(\bA^*)$ given by \eqref{Flinear_def} is a Lie algebroid cocycle if and only if $(D,l,\theta)$ is an IM $k$-form with values in $\v$. \end{proposition} \begin{proof} The proof follows closely that of Proposition 4.12 in \cite{Bur-Drum}. By definition, $\Upsilon$ is a cocycle if and only if the equation \begin{equation}\label{eq:cocycle} \<\Upsilon,[\chi_1, \chi_2]\> = \Lie_{\rho_\bA(\chi_1)} \< \Upsilon, \chi_2\> - \Lie_{\rho_\bA(\chi_2)} \<\Upsilon, \chi_1\> \end{equation} is fulfilled for any $\chi_1, \chi_2 \in \Gamma(\bA)$. It suffices to consider $\chi_1, \chi_2$ varying over the set of generators \eqref{gen_set}. In each of the six possible cases, one uses equations \eqref{linear_comp} and \eqref{big_algebroid} to show the equivalence of \eqref{eq:cocycle} to the corresponding IM equation. \smallskip \paragraph{\bf Equation (IM1):} Let $\eta_1, \eta_2 \in \Gamma_{lin}(E, \v)$ and consider $\chi_1= \chi_{\eta_1}, \, \chi_2 = \chi_{\eta_2}$. In this case, one can see that \eqref{eq:cocycle} is equivalent to \begin{align*} \c_{D([\eta_1, \eta_2])} & = \Lie_{(\rho(\alpha_1)^{T,k}, W_{\nabla_{\eta_1}})} \c_{D(\eta_2)} - \Lie_{(\rho(\alpha_2)^{T,k}, W_{\nabla_{\eta_2}})} \c_{D(\eta_1)}\\ & = \c_{L_{\nabla_{\eta_1}} D(\eta_2) - L_{\nabla_{\eta_2}} D(\eta_1)}, \end{align*} where the last equality follows from Corollary \ref{cor:action}. \smallskip \paragraph{\bf Equations (IM2) and (IM4):} Let $\eta \in \Gamma_{lin}(E,\v)$ and consider $\chi_1=\chi_{\eta}$, $\chi_2 = \B \beta_{(j)}$. In this case, \eqref{eq:cocycle} reduces to \begin{align*} \<\Upsilon, \B([\alpha, \beta])_{(j)}\> & = (-1)^{j-1} \Lie_{(\rho(\alpha)^{T,k}, W_{\nabla_{\eta}})}(\c_{l(\beta)}\circ \gamma_{(j)}) - \Lie_{(\rho(\beta)^{\uparrow, k}_{(j)},0)} \c_{D(\eta)}\\ & = (-1)^{j-1} (\Lie_{(\rho(\alpha)^{T,k-1}, W_{\nabla_{\eta}})} \c_{l(\beta)}) \circ \gamma_{(j)} - \Lie_{(\rho(\beta)^{\uparrow, k}_{(j)},0)} \c_{D(\eta)}.
\end{align*} Now, using Corollary \ref{cor:action}, one can show that the last equation is equivalent to \begin{align*} (-1)^{j-1}\c_{l([\alpha,\beta])} \circ \gamma_{(j)} = (-1)^{j-1}(\c_{L_{\nabla_\eta} l(\beta) - i_{\rho(\beta)}D(\eta)})\circ \gamma_{(j)}. \end{align*} Similarly, if $\chi_2 = \B \varphi$, for $\varphi \in \Gamma(E^*)$, one obtains that $$ \<\nabla_\eta^\top \varphi, \theta\> = \Lie_{\rho(\alpha)} \<\varphi, \theta\> - \<\varphi, \partial(D(\eta))\> \Leftrightarrow \<\varphi, L_{\nabla_\eta} \theta\> = \< \varphi, \partial(D(\eta))\>. $$ \smallskip \paragraph{\bf Equation (IM3):} It follows exactly as in the proof of Equation (IM4) in \cite{Bur-Drum}. \paragraph{\bf Equation (IM5):} Let now $\chi_1= \B\beta_{(j)}$, $j=1,\dots, k$, and $\chi_2 = \B\varphi$. As $[\B\beta_{(j)},\B\varphi ]=0$, equation \eqref{eq:cocycle} reduces to \begin{align*} 0 & = \Lie_{(\rho(\beta)^{\uparrow,k}_{(j)},0)} (\c_{\<\varphi, \theta\>}\circ \gamma_{(k+1)}) - (-1)^{j-1} \Lie_{(0, \partial^*(\varphi)^\uparrow)} (\c_{l(\beta)}\circ \gamma_{(j)})\\ & = (\Lie_{\rho(\beta)^{\uparrow,k}_{(j)}}\c_{\<\varphi, \theta\>}) \circ \gamma_{(k+1)} - (-1)^{j-1} (\Lie_{(0, \partial^*(\varphi)^\uparrow)} \c_{l(\beta)}) \circ \gamma_{(j)}\\ & = (-1)^{j-1}\c_{\<\varphi, i_{\rho(\beta)}\theta\>} \circ \gamma_{(j)}\circ \gamma_{(k+1)}- (-1)^{j-1}\c_{\<\varphi, \partial(l(\beta))\>}\circ \gamma_{(k)}\circ \gamma_{(j)}. \end{align*} The equation now follows from the fact that $\gamma_{(j)}\circ \gamma_{(k+1)}$ and $\gamma_{(k)}\circ \gamma_{(j)}$ are the same projection from $\times_M^{k} TM \times_M C^*$ onto $\times_M^{k-1} TM$, namely the one which forgets the $j$-th component on $TM$ and the component on $C^*$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] \noindent \paragraph{\bf Differentiation:} Let $\vartheta \in \Omega^k(\G, \V)$ be a multiplicative $k$-form. From Proposition \ref{prop:inf_comp}, there exist $D: \Gamma_{lin}(E,\v) \to \Omega^k(M,C)$, $l: A \to \wedge^{k-1} T^*M \otimes C$ and $\theta \in \Omega^k(M,E)$ satisfying \eqref{eq:inf_comp}. If $\Upsilon \in \Gamma(\bA^*)$ is the element defined by \eqref{Flinear_def} using the triple $(D,l,\theta)$, then $\Upsilon=\mathrm{Lie}(\c_{\vartheta})$. Indeed, the equality follows from comparing the values of $\Upsilon$ and $\mathrm{Lie}(\c_\vartheta)$ on the set of generators of $\Gamma(\bA)$ as given by equations \eqref{eq:liner_inf} and \eqref{linear_comp}, respectively. As $\mathrm{Lie}(\c_\vartheta)$ is a cocycle on $\bA$, it follows from Proposition \ref{prop:IM_eq} that $(D,l,\theta)$ satisfies the IM equations. \smallskip \paragraph{\bf Integration:} Let $(D,l,\theta)$ be an IM $k$-form on $A$ with values in $\v=\mathrm{Lie}(\V)$ and define $\Upsilon$ by \eqref{Flinear_def}. By Proposition \ref{prop:IM_eq}, $\Upsilon \in \Gamma(\bA^*)$ is a Lie algebroid cocycle. As $\G$ is source 1-connected, so is $\bG$.\footnote{We are using the well-known fact that a source fiber of a $\VB$-groupoid is an affine bundle over the corresponding source fiber of the base groupoid (see \cite[Rem.~3.1.1.(a)]{Bur-Cab-Hoy}).} Hence, $\Upsilon$ integrates to a componentwise multiplicative function on $\bG$ which is skew-symmetric on the $T\G$ components (i.e.
there exists $\vartheta \in \Omega^k(\G, \V)$ such that $\Upsilon = \mathrm{Lie}(\c_\vartheta)$).\footnote{We are using \cite[Prop.~A.3]{Bur-Drum} here to ensure that the multiplicative function on $\bG$ is componentwise linear and skew-symmetric.} At last, it follows from comparing \eqref{eq:liner_inf} and \eqref{linear_comp} that $(D,l,\theta)$ satisfies \eqref{eq:inf_comp}. This concludes the proof. \end{proof} We end this subsection with an important remark interpreting the Lie functor at the level of differential forms (instead of functions). \begin{remark}\label{Lie_on_forms}\em Let $\vartheta \in \Omega^k(\G, \V)$ be a multiplicative form and consider the $k$-form $\tau \in \Omega^k(A, \v)$ such that $\Upsilon = \mathrm{Lie}(\c_{\vartheta}) = \c_\tau$; we interpret $\tau$ as $\mathrm{Lie}(\vartheta)$. Now, consider both $A \to M$ and $\v \to E$ as Lie groupoids with respect to their vector bundle structures and let us apply the Lie functor from $A$ to $\mathrm{Lie}(A) = A$ and from $\v$ to $\mathrm{Lie}(\v)=\v$ (with zero bracket and zero anchor). The linearity of $\c_\tau$ (with respect to $\bA \to \bM$) is equivalent to $\tau$ being a multiplicative form on $A$ with values in $\v$. Also, as $\mathrm{Lie}(\c_\tau) = \c_\tau$, one gets from comparing \eqref{eq:liner_inf} and \eqref{Flinear_def} that the infinitesimal components of $\vartheta$ and $\tau$ coincide. In particular, applying Theorem \ref{thm:main} to $\tau$, one gets the infinitesimal formulas for $D, l, \theta$: \begin{align}\label{eq:linear_corresp} D(\eta) = L_{\Delta_{\eta^\uparrow}}(\tau)|_M, \,\, l(\alpha) = i_{\alpha^\uparrow}\tau|_M, \,\, \theta= \tau|_M. \end{align} Here, $M \subset A$ is the zero section, and one has to notice that the vertical lifts $\eta^\uparrow \in \frakx(\v), \alpha^\uparrow \in \frakx(A)$, for $\eta \in \Gamma_{lin}(E,\v)$ and $\alpha \in \Gamma(A)$, are exactly the right-invariant vector fields. \end{remark} \subsection{Proof of Theorem \ref{thm:morita}} We will follow closely the proof of the Morita invariance of the cohomology of 2-term ruths given in \cite{Hoyo-Ort}. The main ingredient of the proof is to embed the differential complex $C^{p,q}(\V)$ in the differentiable cochain complex of the big groupoid $\bG$ (see \eqref{big_groupoid}). We shall refer to \cite{Cab-Drum} for the details in what follows. First, one has that \begin{equation}\label{form_decomp} \begin{aligned} B_p(\times_\G^q T\G) & \cong \times_{B_p\G}^q T(B_p\G)\\ B_p(\times_\G^q T\G \times_\G \V^*) & \cong \times_{B_p\G}^q T(B_p\G) \times_{B_p\G} B_p(\V^*). \end{aligned} \end{equation} The isomorphisms are given by \begin{align*} (\underline{U}_1, \dots, \underline{U}_p) & \mapsto (\mathbb{U}^1, \dots, \mathbb{U}^q)\\ ((\underline{U}_1, \xi_1), \dots, (\underline{U}_p, \xi_p)) & \mapsto ((\mathbb{U}^1, \dots, \mathbb{U}^q), (\xi_1, \dots, \xi_p)), \end{align*} where $\underline{U}_i = (U_i^1, \dots, U_i^q)$, each $U_i^j \in T_{g_i}\G$, for $i=1,\dots, p$, $j = 1, \dots, q$, and $\mathbb{U}^j = (U^j_1, \dots, U^j_p) \in T_{(g_1,\dots, g_p)} B_p\G$. Let us now define $C^{p}_{\rm ext}(\bG)$ as the space of functions $B_p\bG \to \R$ which are multi-linear with respect to the decomposition \eqref{form_decomp} and skew-symmetric on the first $q$ components. From \cite[Prop.~4.7]{Cab-Drum}, we know that $C^{\bullet}_{\rm ext}(\bG)$ is a subcomplex of the differentiable cochain complex of $\bG$ (in fact, there is a chain map $P_{\rm ext}: C^\bullet(\bG) \to C^\bullet_{\rm ext}(\bG)$ which is a projection!).
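For later use (and to fix signs), let us recall the simplicial differential on differentiable cochains, with the usual conventions: for $f \in C^p(\bG)$ and a string of composable arrows $(\gamma_1, \dots, \gamma_{p+1})$ of $\bG$, $$ (\delta f)(\gamma_1, \dots, \gamma_{p+1}) = f(\gamma_2, \dots, \gamma_{p+1}) + \sum_{i=1}^{p}(-1)^i f(\gamma_1, \dots, \gamma_i \bullet \gamma_{i+1}, \dots, \gamma_{p+1}) + (-1)^{p+1} f(\gamma_1, \dots, \gamma_p). $$ This is the formula behind the computation of $\delta \F_\vartheta$ below.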
Now, define $\F: C^{p,q}(\V) \to C^p_{\rm ext}(\bG)$ as follows: for $\vartheta \in C^{p,q}(\V)$, $$ \F_\vartheta((\underline{U}_1, \xi_1), \dots, (\underline{U}_p,\xi_p)) = \<\xi_1, \vartheta(\mathbb{U}^1, \dots, \mathbb{U}^q)\>. $$ \begin{lemma}\label{diff_char} A cochain $f \in C^p_{\rm ext}(\bG)$ belongs to the image of $\F$ if and only if it satisfies \begin{itemize} \item[(i)] $f((\underline{U}_0, 0_{g_0}), (\underline{U}_1, \xi_1),\dots, (\underline{U}_{p-1}, \xi_{p-1})) = 0$; \item[(ii)]$f((\underline{U}_0\cdot \underline{U}_1, 0_{g_0}\cdot \xi_1), (\underline{U}_2, \xi_2),\dots, (\underline{U}_p, \xi_p)) = f((\underline{U}_1, \xi_1),\dots, (\underline{U}_p, \xi_p)) $, \end{itemize} for all $((\underline{U}_0, 0_{g_0}),(\underline{U}_1, \xi_1), \dots, (\underline{U}_p, \xi_p)) \in B_{p+1}\bG$. \end{lemma} \begin{proof} It is straightforward to check that any $f \in \F(C^{p,q}(\V))$ satisfies both (i) and (ii). Conversely, if $f$ satisfies (i), then we can define a fiber-preserving (over $B_p\G$) map $\widehat{f}: B_p(\times_\G^q T\G) \to \pr_1^*\V$ by $$ \<\xi_1, \hat{f}(\underline{U}_1, \dots, \underline{U}_p)\> = f((\underline{U}_1, \xi_1),\dots, (\underline{U}_p, \xi_p)), \,\,\, \xi_1 \in \V^*_{g_1}, $$ where $(\underline{U}_1, \dots, \underline{U}_p) \in B_p(\times_\G^q T\G)$ is in the fiber over $(g_1, \dots, g_p)$ and $(\xi_2, \dots, \xi_p) \in B_{p-1}(\V^*)$ is any element in the fiber over $(g_2, \dots, g_p)$ with $\widetilde{\sour}(\xi_1) = \widetilde{\tar}(\xi_2)$. The multi-linearity and skew-symmetry of $f$ with respect to \eqref{form_decomp}, together with (i), imply that $\hat{f}$ is well-defined and that it can be seen as an element $\vartheta \in \Omega^q(B_p\G, \pr_1^*\V)$ such that $f = \F_{\vartheta}$. At last, one can prove that (ii) implies the existence of $\theta \in \Omega^q(B_{p-1}\G, \tar^*E)$ satisfying $$ \widetilde{\sour} \circ \vartheta = \partial_0^*\theta, $$ following exactly the same argument as in the proof of Lemma 5.4 of \cite{Gra-Met1}. This concludes the proof. \end{proof} One should think of $C^{p,q}(\V)$ inside $C^p_{\rm ext}(\bG)$ in the same way the $\VB$-groupoid cochain complex of $\V$ sits inside the complex of linear cochains (see \cite{Gra-Met1}). \begin{lemma} $(C^{p,q}(\V), \delta)$ is a chain complex and $\F$ is a chain map. \end{lemma} \begin{proof} The image of $\F$ is a cochain subcomplex of $C^\bullet_{\rm ext}(\bG)$, as one can see by noting the following: for $f \in C^{\bullet}_{\rm ext}(\bG)$, \begin{equation}\label{lem:chain_complex} \begin{aligned} \text{if } f \text{ satisfies (i), then } \delta f \text{ satisfies (i)} & \Leftrightarrow f \text{ satisfies (ii)};\\ f \text{ satisfies (ii)} & \Rightarrow \delta f \text{ satisfies (ii)}. \end{aligned} \end{equation} As $\F$ is a monomorphism, one only has to check that the differential induced by $\F$ on $C^{p,q}(\V)$ coincides with \eqref{differential}. For $\vartheta \in C^{p,q}(\V)$, \begin{align*} \delta \F_\vartheta((\underline{U}_1, \xi_1), \dots, (\underline{U}_{p+1},\xi_{p+1})) & = \<\xi_2, (\partial_0^*\vartheta)(\mathbb{U}^1, \dots, \mathbb{U}^q)\>\\ & \hspace{-120pt} - \<\xi_1 \bullet \xi_2, (\partial_1^*\vartheta)(\mathbb{U}^1, \dots, \mathbb{U}^q)\> + \sum_{i=2}^{p+1} (-1)^i \<\xi_1, (\partial_i^*\vartheta)(\mathbb{U}^1, \dots, \mathbb{U}^q)\>. \end{align*} The result now follows from \eqref{dual_mult} and $$ \partial_1^*\vartheta = (\partial_1^*\vartheta)\bullet (\partial_0^*\vartheta)^{-1} \bullet (\partial_0^*\vartheta). $$ \end{proof} \begin{lemma} $\F$ is a quasi-isomorphism.
\end{lemma} The proof follows exactly the same argument as \cite[Lemma~3.1]{Cab-Drum}, with minor modifications (see also \cite[Prop.~4.1]{Hoyo-Ort}). \begin{proof} In the following, we shall identify $C^{p,q}(\V)$ with the image of $\F$ (i.e. the subspace of $C^{p}_{\rm ext}(\bG)$ characterized by Lemma \ref{diff_char}). The result will follow from the following claim: if $f_0 \in C^p_{\rm ext}(\bG)$ satisfies $\delta f_0 \in C^{p,q}(\V)$, then there exists $f_1 \in C^{p-1}_{\rm ext}(\bG)$ such that $f_0 + \delta f_1 \in C^{p,q}(\V)$. To prove the claim, first note that, from \eqref{lem:chain_complex}, it suffices to show that if $\delta f_0$ satisfies (i), then there exists $f_1$ such that $f_0+\delta f_1$ satisfies (i). To do that, we shall use a recursive argument. Let us say that $f \in C^p_{\rm ext}(\bG)$ satisfies \textit{(i) up to $l$} if $$ f((\underline{U}_0, 0_{g_0}), \dots, (\underline{U}_l, 0_{g_l}), (\underline{U}_{l+1}, \xi_{l+1}), \dots, (\underline{U}_{p-1}, \xi_{p-1})) = 0, $$ for every $(\underline{U}_0,\dots, \underline{U}_{p-1}) \in B_p(\times_\G^q T\G)$ and $(\xi_{l+1}, \dots, \xi_{p-1}) \in B_{p-1-l}(\V^*)$ with $\widetilde{\tar}(\xi_{l+1})=0$. It is clear that any $f \in C^p_{\rm ext}(\bG)$ satisfies (i) up to $p$ due to multilinearity. We claim that if $f_0$ is such that $\delta f_0$ satisfies (i) and $f_0$ satisfies (i) up to $l$, then there exists $f_1 \in C^{p-1}_{\rm ext}(\bG)$ such that $f_0+ \delta f_1$ satisfies (i) up to $l-1$. Indeed, define $f_1$ as follows: $$ f_1((\underline{U}_1, \xi_1), \dots, (\underline{U}_{p-1},\xi_{p-1})) = - f_0((\underline{U},\xi), (\underline{U}_1, \xi_1), \dots, (\underline{U}_{p-1}, \xi_{p-1})), $$ where $$ \underline{U}=\underline{U}_{p-1}^{-1} \bullet \dots \bullet \underline{U}_1^{-1}, \,\,\, \xi= h(g_{p-1}^{-1}\dots g_1^{-1}, \widetilde{\tar}(\xi_1)), $$ and $h: \sour^*C^* \to \V^*$ is a horizontal splitting. It is a straightforward computation to check that $f_1$ is indeed multi-linear and skew-symmetric. Also, \begin{align*} (f_0+\delta f_1)((\underline{U}_0, 0_{g_0}), \dots, (\underline{U}_{l-1}, 0_{g_{l-1}}), (\underline{U}_l, \xi_l), \dots, (\underline{U}_{p-1}, \xi_{p-1})) & = \\ & \hspace{-260pt} = (\delta f_0)((\underline{U}_{p-1}^{-1} \bullet \dots \bullet \underline{U}_0^{-1}, 0_{g_{p-1}^{-1}}\dots 0_{g_0^{-1}}), (\underline{U}_0, 0_{g_0}), \dots, (\underline{U}_{p-1}, \xi_{p-1}))\\ & \hspace{-260pt} = 0. \end{align*} The proof now follows from the fact that a cochain satisfies (i) up to $0$ if and only if it satisfies (i). \end{proof} We are now able to complete the proof of Theorem \ref{thm:morita}. \begin{proof}[Proof of Theorem \ref{thm:morita}] It follows as a corollary of \cite[Thm.~3.5]{Hoyo-Ort} that the map $$ F: \underbrace{(\times_{\G'}^q T\G') \times_\G \phi^*(\V^*)}_{\bG'} \to (\times_\G^q T\G) \times_\G \V^* $$ induced by the differential of $\phi: \G' \to \G$ and the natural pull-back map $\phi^*(\V^*) \to \V^*$ is a Morita map. As such, it defines an isomorphism between the differentiable cochain cohomologies, $H^\bullet(\bG) \stackrel{\sim}\to H^\bullet(\bG')$. Now, it is straightforward to see that $F$ preserves the $C^\bullet_{\rm ext}$ complexes, and the existence of a chain projection $P_{\rm ext}: C^\bullet \to C^\bullet_{\rm ext}$ (see \cite{Cab-Drum}) implies that $F$ induces an isomorphism between the $\rm ext$-chain cohomologies.
Now, the proof follows from the commutativity of the diagram $$ \begin{CD} C^{p,q}(\V) @> \phi^* >> C^{p,q}(\phi^*\V) \\ @V\F VV @VV \F V\\ C^p_{\rm ext }(\bG) @>> F^* > C^p_{\rm ext}(\bG') \end{CD} $$ and the fact that $\F$ is a quasi-isomorphism. \end{proof} \section{Multiplicative distributions} In this section, we study multiplicative distributions on Lie groupoids. These are subbundles $\H\subset T\G$ such that $\H \toto \H_M$ is a subgroupoid of $T\G \toto TM$, where $\H_M \subset TM$ is a subbundle. The Lie theory of such structures was studied in \cite{CSS} in the case $\H_M = TM$, which we refer to here as the \textit{wide case}. We aim here to extend their results to the general case. \subsection{Some properties} \noindent For a multiplicative distribution $\H \toto \H_M$, note that $T\sour(\H) = \sour^*\H_M$ and the exact sequence \eqref{core_ses} induces \begin{equation}\label{distr_ses} 0 \to \tar^*K \stackrel{r}\to \H\stackrel{T\sour} \to \sour^*\H_M \to 0, \end{equation} where $K=\H|_M \cap A$ and $r(g,k) = k \bullet 0_g$ is the right-multiplication. In general, we say that an arbitrary distribution $\H \subset T\G$ is \textit{right-invariant} if it fits in the short exact sequence \eqref{distr_ses}. Note that, for right-invariant distributions, $T\sour: \H \to \H_M$ is automatically a surjective submersion. We shall refer to the pair $(\H_M, K)$ as the \textit{profile} of the right-invariant distribution $\H$. \begin{example}\em For a regular groupoid (i.e. one whose orbits have constant dimension), the isotropy distribution $\H= \ker(T\sour) \cap \ker(T\tar)$ is multiplicative. Here, $$ \H_M = 0, \,\, K = \ker(\rho). $$ In the case $\G = \G(P)$, the gauge groupoid associated to a principal bundle $P \to M$, it can be proved that connections on $P$ correspond to multiplicative complements to the isotropy distribution (see \cite[Prop.~3.4]{Bur-Drum2}). \end{example} \begin{example}\em For a multiplicative distribution on a Lie group, $\H_M = 0$ and $K$ is an ideal of the Lie algebra. The distribution is the bi-invariant distribution corresponding to $K$ \cite{Ort1}. \end{example} \begin{example}\em A Cartan connection on $\G$ is a multiplicative distribution with profile $$ \H_M = TM, \,\, K = 0. $$ The existence of a Cartan connection on $\G$ imposes strong restrictions on the Lie groupoid. Indeed, assuming $\G$ has 1-connected source fibers and $M$ is compact and simply connected, the existence of a Cartan connection is equivalent to $\G$ being the action groupoid corresponding to a Lie group action on $M$ (see \cite[Rem.~2.14]{Arias-Crai1}). \end{example} \begin{example}\em Multiplicative distributions on a vector bundle $E \to M$ (seen as a Lie groupoid with multiplication given by fiberwise sum) are called \textit{linear distributions}. An important example is given by the horizontal distribution corresponding to a connection $\nabla: \mathfrak{X}(M) \times \Gamma(E) \to \Gamma(E)$. In fact, any linear distribution on $E$ with profile $(TM, 0)$ is the horizontal distribution of a connection on $E$. \end{example} For a right-invariant distribution, let us consider pointwise splittings of \eqref{distr_ses} as follows: $$ \F_\H(T\G) = \{ (g,b) \in \F(T\G) \,\, | \,\, b(X) \in \H, \,\,\, \forall \, X \in (\H_M)_{\sour(g)}\}, \,\,\, J^1_\H\G= J^1\G \cap \F_\H(T\G). $$ The next proposition gives criteria for a right-invariant distribution to be multiplicative.
\begin{proposition} A right-invariant distribution $\H$ with profile $(\H_M, K)$ is multiplicative if and only if \begin{itemize} \item[(i)]$\rho(K) \subset \H_M$; \item[(ii)]$\F_\H(T\G) \toto M$ is a Lie subcategory of $\F(T\G) \toto M$; \item[(iii)] the fat representation of $\F(T\G)$ on $\rho: A \to TM$ restricts to a representation of $\F_\H(T\G)$ on $\rho: K \to \H_M$. \end{itemize} \end{proposition} \begin{proof} It is straightforward to check that if $\H$ is multiplicative, then (i), (ii) and (iii) hold. Conversely, let us assume that (i), (ii) and (iii) hold. First note that $\H_M = \H|_M \cap TM$, since the units are in $\F_\H(T\G)$. Also, $T\tar(\H) \subset \H_M$, because $\rho(K)\subset \H_M$ and $\Psi_b$ preserves $\H_M$, for $b \in \F_\H(T\G)$. The map $T\tar|_\H$ will be a surjective submersion onto $\H_M$ once we prove that the inversion preserves $\H$ (since $T\tar = T\sour \circ T\iota$). Let $U_1, U_2 \in \H$ be composable vectors and write $ U_1 = b_1(X_1) + k_1 \bullet 0_{g_1}, \,\,\, U_2 = b_2(X_2) + k_2\bullet 0_{g_2}, $ for $b_1, b_2 \in \F_\H(T\G)$, $X_1, X_2 \in \H_M$ and $k_1, k_2 \in K$. One can check that $$ U_1\bullet U_2 = (b_1 \cdot b_2)(X_2) + (\Psi_{b_1}(k_2) + k_1) \bullet 0_{g_1g_2} \in \H. $$ Let us now prove that $U \in \H \Rightarrow U^{-1} \in \H$. Write $U=b(X)+k\bullet 0_g$, for $b \in \F_\H(T\G)$ and $k \in K$. Note that $$ k^{-1} = \rho(k) - k \in \H_M \oplus K \subset \H|_M \Rightarrow (k\bullet 0_g)^{-1}=0_{g^{-1}}\bullet k^{-1} \in \H. $$ Choose $\hat{b}:T_{\tar(g)}M \to T_{g^{-1}}\G$ in $\F_\H(T\G)$ and write $b(X)^{-1} = \hat{b}(\Psi_b(X)) + c \bullet 0_{g^{-1}}, \,\, \text{ for } c \in A$. Now, $$ X=b(X)^{-1} \bullet b(X) = (\hat{b}\cdot b)(X) + c \Rightarrow c \in \H|_M \cap A = K \Rightarrow b(X)^{-1} \in \H. $$ Hence, $U^{-1} =b(X)^{-1} + (k \bullet 0_{g})^{-1} \in \H$, and this proves that $\H \toto \H_M$ is a Lie subgroupoid. \end{proof} From now on, let us fix a multiplicative distribution $\H \subset T\G$ with profile $(\H_M, K)$, and consider the $\VB$-groupoid $\V = T\G/\H \toto TM/\H_M$, with core $A/K$. The core anchor of $\V$ is the map induced by $\rho: A \to TM$ on the quotient bundles, $\overline{\rho}: A/K \to TM/\H_M$. Any $(g,b) \in \F_{\H}(T\G)$ defines a linear map from $(TM/\H_M)_{\sour(g)}$ to $(T\G/\H)_{g}$, which we denote by $[(g,b)]$. Also, $$ \F_{\rm inv}(\V) = \{[(g,b)] \,\,| \,\, (g,b) \in J^1_{\H}\G\}. $$ The fat representation of $\F_{\rm inv}(\V)$ on $\overline{\rho}: A/K \to TM/\H_M$ is given by $\Psi_{[(g,b)]} = [\Psi_{(g,b)}]$, where $[\Psi_{(g,b)}]$ is the quotient chain map induced by the adjoint representation of $J^1\G$ on $\rho:A \to TM$. Infinitesimally, we have a similar picture. Let $\frakh \subset TA$ and $\v = TA/\frakh$ be the $\VB$-algebroids of $\H$ and $T\G/\H$, respectively. Define $$ J^1_\frakh A = \{ \eta \in J^1A\,\,|\,\, \eta(\H_M) \subset \frakh\}, \,\,\,\Hom_\frakh(TM,A) = J^1_\frakh A \cap \Hom(TM,A). $$ The short exact sequence \eqref{linear_ses} for $TA \to TM$ induces $$ 0 \to \Hom_\frakh(TM,A) \to J^1_\frakh A \to A \to 0. $$ The surjectivity of $J^1_\frakh A \to A$ follows from the existence of adapted connections\footnote{A connection $\nabla: \mathfrak{X}(M) \times \Gamma(A) \to \Gamma(A)$ is \textit{adapted} if the linear vector fields on $A$ corresponding to the derivations $\nabla_X$, $X \in \H_M$, belong to the distribution $\frakh$.} (see \cite[Prop.~5.5]{Dru-Jotz-Ort}).
Any section $\eta \in \Gamma(J^1_\frakh A)$ induces a linear section $[\eta]: TM/\H_M \to \v$, and the map $\eta \mapsto [\eta]$ defines a surjection $\Gamma(J^1_\frakh A) \to \Gamma_{lin}(TM/\H_M, \v)$. \begin{example}[Wide case]\em In the case $\H_M = TM$, the $\VB$-groupoid $\V$ has no side bundle, i.e. $ \V=\tar^*(A/K)$. So, in this case, $\F_{\rm inv}(\V) \cong \G$ and the fat representation on $A/K$ coincides with the $\G$-representation defined as follows: for $g \in \G$ and $a \in A_{\sour(g)}$, choose any $U \in \H_g$ such that $T\sour(U) = \rho(a)$ and define \begin{equation*} g \cdot [a] = [U \bullet a \bullet 0_{g^{-1}}], \end{equation*} where $[\cdot]$ is the class mod $K$. It is straightforward to check that $\cdot$ does not depend on the choices made. Similarly, $\F(\v) \cong A$ and the fat representation of $A$ on $A/K$ is given as follows: \begin{equation}\label{A_connection} \nabla_\alpha [\beta] = \pi_A([\alpha, \beta] + D^{\rm clas}_{\rho(\beta)}(\eta)), \end{equation} where $\alpha, \beta \in \Gamma(A)$, $\pi_A: A \to A/K$ is the quotient projection and $\eta \in \Gamma(J^1_\frakh A)$ is any section such that $\pr(\eta)=\alpha$. Again, the formula is well-defined, and it is a consequence of \eqref{jet_representation}. \end{example} \subsection{IM distributions} We shall now focus on the infinitesimal picture. It is well-known (see \cite[Thm.~5.7]{Dru-Jotz-Ort}) that there is a 1-1 correspondence between linear distributions $\frakh \subset TA$ with profile $(\H_M, K)$ and operators $\bbD: \Gamma(A) \to \Gamma(\H_M^* \otimes (A/K))$ satisfying the Leibniz identity \begin{equation}\label{bbd:leibniz} \bbD_X(f\alpha) = f\bbD_X(\alpha) + (\Lie_Xf) \pi_A(\alpha). \end{equation} The operator is obtained from $\frakh$ as follows: \begin{equation}\label{bbD2} \bbD_X(\alpha) = \pi_A([\widehat{X}, \alpha^\uparrow](0_x)), \,\, X \in (\H_M)_x, \end{equation} where $\widehat{X} \in \Gamma(A, \frakh)$ is a projectable vector field (with respect to $TA \to TM$) satisfying $\widehat{X}(0_x) = X$. Reciprocally, one can reconstruct $\frakh$ from $\bbD$ by using the splitting $\sigma: A \to J^1A$ of \eqref{linear_ses} associated to any connection $\nabla$ on $A$ \textit{adapted to $\bbD$} (i.e. such that $\pi_A(\nabla_X \alpha) = \bbD_X(\alpha)$, for all $X \in \H_M$). In this case, \begin{equation}\label{h_from_D} \frakh_\bbD = \{\sigma(a)(X) +_A (0_a +_{TM} \overline{k}) \,\, | \,\, (a,k,X) \in A \times_M K \times_M \H_M\}. \end{equation} \begin{remark}\em In the wide case, formula \eqref{bbD2} implies that $\bbD$ is the Spencer operator associated to $\frakh$ obtained in \cite{CSS}. \end{remark} Let us now define $$ \mathcal{J}_\bbD = \{ \eta \in \Gamma(J^1A) \,\, | \,\, \pi_A(D^{\rm clas}_X(\eta)) = \bbD_X(\pr(\eta)), \,\, \forall \, X \in \H_M\}. $$ \begin{proposition} For a linear distribution, one has that $$ \mathcal{J}_\bbD = \Gamma(J^1_\frakh A). $$ Moreover, for $\eta \in \mathcal{J}_\bbD$, the following facts hold for the fat connection $\nabla_\eta$ on $\rho: A \to TM$: \begin{itemize} \item[(a)] $\nabla_\eta \text{ preserves } \H_M \Longleftrightarrow \overline{\rho}(\bbD_{X}(\alpha)) = -\pi_M([\rho(\alpha), X ])$, $\forall \, X \in \Gamma(\H_M)$; \item[(b)] $\nabla_\eta \text{ preserves } K \Longleftrightarrow \bbD_{\rho(k)}(\alpha) = -\pi_A([\alpha, k]), \,\,\, \forall \, k \in \Gamma(K)$; \end{itemize} where $\alpha = \pr(\eta)$.
\end{proposition} \begin{proof} Choose any connection $\nabla$ on $A$ adapted to $\bbD$ and let $\sigma: A \to J^1A$ be the corresponding splitting of \eqref{linear_ses}. Decompose $\eta = \sigma(\alpha) + \B\Phi$, where $\alpha= \pr(\eta)$ and $\Phi: TM \to A$ is a vector bundle morphism. Now, from \eqref{h_from_D}, $\eta(\H_M) \subset \frakh_\bbD$ if, and only if, $\Phi(X) \in K$, for all $X \in \H_M$. Using that $D^{\rm clas}_X(\eta) = \nabla_X \alpha - \Phi(X)$, one has that $$ \pi_A(\Phi(X)) = 0 \Leftrightarrow \pi_A( \nabla_X \alpha - D^{\rm clas}_X(\eta)) = 0 \Leftrightarrow \pi_A(D^{\rm clas}_X(\eta)) = \bbD_X(\alpha). $$ The rest of the proposition follows from the explicit formulas \eqref{jet_representation} for the fat representation of $J^1A$ on $\rho: A \to TM$. \end{proof} In case the fat connection $\nabla_\eta$ preserves the profile $(\H_M, K)$ of a linear distribution $\frakh$, for $\eta \in J^1 A$, we define $\overline{\nabla}_\eta$ as the induced connection on the quotient complex $\overline{\rho}: A/K \to TM/\H_M$. \begin{definition} An IM distribution on a Lie algebroid $A$ is a triple $(\H_M, K, \mathbb{D})$, where $\H_M \subset TM$, $K \subset A$ are subbundles with $\rho(K) \subset \H_M$, and $ \mathbb{D}: \Gamma(A) \to \Gamma(\H_M^* \otimes (A/K)) $ is an $\R$-linear operator satisfying the Leibniz condition \eqref{bbd:leibniz} and the IM equations: \begin{align} \label{IM_dist1} \overline{\rho}(\mathbb{D}_X(\alpha)) & = - \pi_M([\rho(\alpha),X]),\\ \label{IM_dist2} \mathbb{D}_{\rho(k)}(\alpha) & = -\pi_A([\alpha,k]),\\ \label{IM_dist3} \mathbb{D}_X([\pr(\eta_1), \pr(\eta_2)]) & = \overline{\nabla}_{\eta_1} \mathbb{D}_X(\pr(\eta_2)) - \pi_A(D^{\rm clas}_{[\rho(\pr(\eta_1)), X]}(\eta_2))\\ \nonumber & \hspace{-30pt} - \overline{\nabla}_{\eta_2} \mathbb{D}_X(\pr(\eta_1)) + \pi_A(D^{\rm clas}_{[\rho(\pr(\eta_2)), X]}(\eta_1)), \end{align} for $\alpha \in \Gamma(A), k \in \Gamma(K), \, X \in \Gamma(\H_M), \, \eta_1, \eta_2 \in \mathcal{J}_\bbD$. \end{definition} In the following remarks, we show how IM distributions relate to the Spencer operators obtained in \cite{CSS} and to the infinitesimal data obtained in \cite{Dru-Jotz-Ort}. \begin{remark}\em In the case $\H_M = TM$, one has that $\overline{\rho} \equiv 0$ and $\pi_M \equiv 0$, so \eqref{IM_dist1} is trivially satisfied. Also, for $\eta \in \mathcal{J}_\bbD$, the quotient connection $\overline{\nabla}_\eta$ only depends on $\pr(\eta)=\alpha$. Indeed, $$ \overline{\nabla}_\eta [\beta] = \pi_A([\alpha, \beta]) + \pi_A(D^{\rm clas}_{\rho(\beta)}(\eta)) = \pi_A([\alpha, \beta]) + \bbD_{\rho(\beta)}(\alpha) $$ (this is exactly the $A$-connection \eqref{A_connection}). Hence, \eqref{IM_dist3} can be rewritten as \begin{align*} \mathbb{D}_X([\alpha_1, \alpha_2]) = \nabla_{\alpha_1} \mathbb{D}_X(\alpha_2) - \bbD_{[\rho(\alpha_1), X]}(\alpha_2) - \nabla_{\alpha_2} \mathbb{D}_X(\alpha_1) + \bbD_{[\rho(\alpha_2), X]}(\alpha_1). \end{align*} So, in the wide case, an IM distribution $(TM, K, \bbD)$ is the same as a Spencer operator on $A$ relative to $K$ (see \cite[Dfn.~2.16]{CSS}). \end{remark} \begin{remark}\em By choosing a connection $\nabla^0$ adapted to $\bbD$, one can show that \eqref{IM_dist3} is equivalent to equation (5.16) in \cite[Thm.~5.17]{Dru-Jotz-Ort} (a result giving conditions for a linear distribution on $A$ to be a $\VB$-subalgebroid of $TA \to TM$).
Indeed, any linear section $\eta \in \mathcal{J}_\bbD$ can be written as $ \eta = j^1\alpha - (\B \nabla^0_{\cdot} \alpha + \B\Phi), $ where $\Phi: TM \to A$ satisfies $\Phi(\H_M) \subset K$. Using this decomposition, one can check that \eqref{IM_dist3} can be rewritten as \begin{align*} \bbD_X([\alpha_1, \alpha_2]) & = \underbrace{\widehat{\nabla}^{\rm bas}_{\alpha_1}\bbD_X(\alpha_2) - \widehat{\nabla}^{\rm bas}_{\alpha_2}\bbD_X(\alpha_1) + \pi_A(\nabla^0_{[\rho(\alpha_2), X]} \alpha_1 - \nabla^0_{[\rho(\alpha_1), X]} \alpha_2)}_{(\ast)}\\ & \hspace{-50pt} + \underbrace{ \pi_A(\Phi_1(\nabla_{\alpha_2}^{\rm bas} X) - \Phi_2(\nabla_{\alpha_1}^{\rm bas} X) - \Phi_1\circ \rho \circ \Phi_2(X) + \Phi_2 \circ \rho \circ \Phi_1(X))}_{(\ast \ast)}, \end{align*} where $\nabla^{\rm bas}$ is the $A$-connection of the adjoint representation up to homotopy associated to $\nabla^0$ and $\widehat{\nabla}^{\rm bas}_{\alpha}$ is the quotient $A$-connection on $\overline{\rho}:A/K \to TM/\H_M$. Now, $(\ast)$ is exactly the expression appearing in \cite{Dru-Jotz-Ort}, and it is straightforward to check that $(\ast \ast) = 0$ using that $\Phi_i(\H_M) \subset K$ and that $\nabla^{\rm bas}$ preserves $\H_M$. \end{remark} The following proposition gives alternative characterizations of IM distributions. \begin{proposition}\label{prop:IM_equiv} Let $K \subset A$, $\H_M \subset TM$ be subbundles and $\bbD: \Gamma(A) \to \Gamma(\H_M^*\otimes A/K)$ be an $\R$-linear operator satisfying the Leibniz equation \eqref{bbd:leibniz}. Consider the linear distribution $\frakh \subset TA$ corresponding to $\bbD$ via \eqref{h_from_D}. The following are equivalent: \begin{itemize} \item[(a)] $(\H_M, K, \bbD)$ is an IM distribution; \item[(b)] $\frakh \subset TA$ is a $\VB$-subalgebroid; \item[(c)] $J^1_{\frakh} A \subset J^1A$ is a Lie subalgebroid, $\rho(K) \subset \H_M$ and the fat representation of $J^1A$ on $\rho: A \to TM$ restricts to a representation of $J^1_\frakh A$ on $\rho: K \to \H_M$; \item[(d)] the quotient double vector bundle $\v:=TA/\frakh \to TM/\H_M$ with core $A/K$ is a $\VB$-algebroid, and $\pi_A: A \to A/K$, $\pi_M: TM \to TM/\H_M$ and the operator $D: \Gamma_{lin}(TM/\H_M, \v) \to \Omega^1(M, A/K)$ given by $$ D([\eta]) = \pi_A(D^{\rm clas}(\eta)), \, \text{ for } \eta \in J^1_\frakh A, $$ define an IM 1-form on $A$ with values in $\v$. \end{itemize} \end{proposition} \begin{proof} \noindent \paragraph{$(a) \Leftrightarrow (b)$} This is the content of Theorem 5.17 in \cite{Dru-Jotz-Ort}. \medskip \paragraph{$(b) \Leftrightarrow (c)$} It follows from the description of the Lie algebroid structure on $TA \to TM$ using linear and core sections. More concretely, $\Gamma(\H_M,\frakh)$ is generated by $\eta \in \Gamma(J^1_\frakh A)$ and $\B k$, for $k \in K$, as a $C^\infty(\H_M)$-module. Now, from Proposition \ref{prop:derivation}, $$ \rho_{TA}(\eta) = W_{\nabla_\eta} \in T\H_M \Leftrightarrow \nabla_\eta \text{ preserves } \H_M, $$ where $\nabla_\eta$ is the fat connection on $TM$. Similarly, $\rho_{TA}(\B k) = \rho(k)^\uparrow \in T\H_M \Leftrightarrow \rho(k) \in \H_M$. As for the Lie bracket, $\Gamma(\H_M,\frakh)$ will be involutive if and only if $J^1_\frakh A \subset J^1A$ is a Lie subalgebroid and $\nabla_\eta$ preserves $K$, for every $\eta \in J^1_\frakh A$. \medskip \paragraph{$(b) \Rightarrow (d)$} The fact that $\frakh \subset TA$ is a $\VB$-subalgebroid implies that $\v=TA/\frakh \to TM/\H_M$ is a $\VB$-algebroid and the quotient map $TA \to \v$ is a $\VB$-algebroid morphism.
The $\VB$-algebroid structure on $\v$ is determined by the following equations: for $\eta, \eta_1, \eta_2 \in \Gamma(J^1_\frakh A)$, $\beta \in \Gamma(A)$, $X \in \mathfrak{X}(M)$, \begin{align} \label{IM1_dist}[[\eta_1], [\eta_2]] & = [\,[\eta_1, \eta_2]\,] \,\,\,\,\, \textit{(linear-linear bracket)}\\ \label{IM2_dist}\nabla_{[\eta]}\pi_A(\beta) & = \pi_A(\nabla_\eta \beta) \,\,\,\textit{(linear-core bracket)}\\ \label{IM4_dist}\nabla_{[\eta]} \pi_M(X) & = \pi_M(\nabla_\eta X)\,\,\,\,\,\, \textit{(anchor on linear sections)}\\ \label{IM5_dist}\partial(\pi_A(\beta)) &= \pi_M(\rho(\beta))\,\,\,\,\, \textit{(anchor on core sections)} \end{align} where $\nabla_{[\eta]}$ is the fat representation on the core anchor complex $\partial: A/K \to TM/\H_M$. It is straightforward to check that \eqref{IM2_dist}, \eqref{IM4_dist} and \eqref{IM5_dist} are exactly the IM-equations (IM2), (IM4) and (IM5), respectively, for $(D,\pi_A,\pi_M)$. Also, \eqref{IM1_dist} and the fact that $(D^{\rm clas}, \mathrm{id}_A, \mathrm{id}_{TM})$ is an IM 1-form on $A$ with values in $TA$ imply the remaining equation (IM1) for $(D,\pi_A, \pi_M)$. Indeed, \begin{align*} D_X([[\eta_1],[\eta_2]]) & = \pi_A(D_X^{\rm clas}([\eta_1,\eta_2])) \\ & \hspace{-50pt} = \pi_A\left(\nabla_{\eta_1} D_X^{\rm clas}(\eta_2) - D^{\rm clas}_{[\rho(\alpha_2), X]}(\eta_1) - \nabla_{\eta_2} D_X^{\rm clas}(\eta_1) + D^{\rm clas}_{[\rho(\alpha_1), X]}(\eta_2)\right)\\ & \hspace{-50pt} = \nabla_{[\eta_1]} D_X([\eta_2]) - D_{[\rho(\alpha_2), X]}([\eta_1]) - \nabla_{[\eta_2]} D_X([\eta_1]) + D_{[\rho(\alpha_1), X]}([\eta_2]). \end{align*} \paragraph{$(d) \Rightarrow (c)$} The IM-equations (IM2), (IM4) and (IM5) for $(D, \pi_A, \pi_M)$ are exactly \eqref{IM2_dist}, \eqref{IM4_dist} and \eqref{IM5_dist}, respectively. It is straightforward to check that \eqref{IM5_dist} implies that $\rho(K) \subset \H_M$, and that \eqref{IM2_dist} together with \eqref{IM4_dist} implies that the fat connection $\nabla_\eta$ preserves $\rho: K \to \H_M$ if $\eta \in J^1_\frakh A$. The remaining equation (IM1) can be written as $$ D([[\eta_1], [\eta_2]]) = \pi_A(D^{\rm clas}([\eta_1, \eta_2])), $$ for $\eta_1, \eta_2 \in \Gamma(J^1_\frakh A)$. Let $\eta \in \Gamma(J^1_\frakh A)$ be any section such that $[\eta] = [[\eta_1], [\eta_2]]$. As $\pr(\eta) = \pr([\eta_1, \eta_2])$, there exists $\Phi: TM \to A$ such that $\eta = [\eta_1, \eta_2] + \B \Phi$. Now, as $$ \pi_A(D^{\rm clas}(\eta)) = D([[\eta_1], [\eta_2]]) = \pi_A(D^{\rm clas}[\eta_1, \eta_2]), $$ one has that $\Phi(TM) \subset K$. This implies that $[\eta_1,\eta_2] \in \Gamma(J^1_\frakh A)$, as we wanted to prove. \end{proof} We are now able to state our main result regarding multiplicative distributions. It generalizes \cite[Thm.~2]{CSS}. \begin{theorem}\label{thm:IM_dist} Let $\G \toto M$ be a source 1-connected groupoid. There is a 1-1 correspondence between multiplicative distributions $\H \subset T\G$ with profile $(\H_M, K)$ and IM distributions $\bbD: \Gamma(A) \to \Gamma(\H_M^* \otimes (A/K))$ on $A$. For $X \in (\H_M)_x$, \begin{equation}\label{IM_dist_integration} \bbD_X(\alpha) = \pi_A([\widetilde{X}, \overrightarrow{\alpha}](x)), \end{equation} where $\widetilde{X} \in \Gamma(\H)$ is any section with $\widetilde{X}(x) = X$. \end{theorem} \begin{proof} The 1-1 correspondence between multiplicative distributions and IM distributions is a straightforward consequence of the Lie theory of $\VB$-groupoids.
Indeed, given a multiplicative distribution $\H \subset T\G$, its $\VB$-algebroid $\frakh \subset TA$ corresponds to an IM distribution via \eqref{bbD2} and Proposition \ref{prop:IM_equiv}. Reciprocally, given an IM distribution, the corresponding $\VB$-subalgebroid $\frakh \subset TA$ \eqref{h_from_D} can be integrated to a $\VB$-subgroupoid $\H \subset T\G$ using \cite[Cor.~4.3.7]{Bur-Cab-Hoy}. So, what remains to be shown is \eqref{IM_dist_integration}. For this, consider the quotient projection $\vartheta: T\G \to T\G/\H$. It is a multiplicative 1-form with values in the $\VB$-groupoid $\V:= T\G/\H$. Hence, from Theorem \ref{thm:main}, there exists an IM 1-form $(D, l, \theta)$ on $A$ with values in $\v:=TA/\frakh$, where $\frakh \subset TA$ is the $\VB$-algebroid of $\H$ and $$ D_X(\eta) = \Delta_{\overrightarrow{\eta}}(\vartheta(\widetilde{X})) - \vartheta([\overrightarrow{\alpha}, \widetilde{X}]), $$ where $\eta \in \Gamma_{lin}(TM/\H_M, \v)$, $\alpha = \pr(\eta)$, $X \in T_xM$ and $\widetilde{X} \in \frakx(\G)$ is any vector field with $\widetilde{X}(x) = X$. If $X \in \H_M$, one can always choose $\widetilde{X} \in \Gamma(\H)$, so that $\vartheta(\widetilde{X})=0$. In this case, $$ D_X(\eta) = \vartheta([\widetilde{X}, \overrightarrow{\alpha}](x)) = \pi_A([\widetilde{X}, \overrightarrow{\alpha}](x)), \,\,\,\, \forall \, X \in \H_M, $$ where we have used that $\vartheta|_A = l$ and $l=\pi_A: A \to A/K$, the quotient projection. This result can now be applied to the linear distribution $\frakh \subset TA$ to obtain the following fact regarding the IM distribution \eqref{bbD2}: $ D^{\tau}_X(\eta) = \bbD_X(\alpha), $ where $\tau: TA \to TA/\frakh$ is the quotient projection and $D^{\tau}$ is the IM 1-form associated to $\tau$. The result now follows from the coincidence between the infinitesimal components of $\vartheta$ and $\tau=\mathrm{Lie}(\vartheta)$ (see Remark \ref{Lie_on_forms}). \end{proof} \begin{remark}\em The problem of involutivity of $\H \subset T\G$ is studied in \cite{CSS,Jotz-Ortiz}. \end{remark}
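We close with a simple consistency check of formula \eqref{IM_dist_integration}; it uses only the classical identity $[X^h, \alpha^\uparrow] = (\nabla_X\alpha)^\uparrow$ relating horizontal and vertical lifts for a linear connection. \begin{example}\em Let $\G = E \toto M$ be a vector bundle with fiberwise addition and let $\H \subset TE$ be the horizontal distribution of a linear connection $\nabla$, a multiplicative distribution with profile $(TM, 0)$. Here $A = E$ and the right-invariant vector fields are the vertical lifts $\alpha^\uparrow$, $\alpha \in \Gamma(E)$. Taking $\widetilde{X} \in \Gamma(\H)$ to be the horizontal lift $X^h$ of (an extension of) $X \in T_xM$, formula \eqref{IM_dist_integration} gives $$ \bbD_X(\alpha) = \pi_A([\widetilde{X}, \overrightarrow{\alpha}](x)) = (\nabla_X \alpha)(x), $$ since $K=0$ and $\pi_A = \mathrm{id}$. So the IM distribution associated to a horizontal distribution is the connection itself, in accordance with the example of linear distributions above. \end{example}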
{ "timestamp": "2018-04-17T02:09:01", "yymm": "1804", "arxiv_id": "1804.05289", "language": "en", "url": "https://arxiv.org/abs/1804.05289" }
\section{Experiments} \label{sec:experiments} \subsection{Experimental Setting} \label{subsec:experiment-setup} \noindent\textbf{Datasets.} We experiment on the ImageNet VID dataset\footnote{\url{http://www.image-net.org/challenges/LSVRC/}}, a large-scale benchmark for video object detection, which contains 3862 training videos and 555 validation videos with annotated bounding boxes of 30 classes. Following the standard practice, we train our models on the training set and measure the performance on the validation set using the mean average precision (mAP) metric. We use a subset of the ImageNet DET dataset together with the VID dataset to train our base detector, following~\cite{kang2016object,zhu2017flow,feichtenhofer2017detect}. \noindent\textbf{Implementation details.} We train a Faster R-CNN as our base detector. We use ResNet-101 as the backbone network and select 15 anchors corresponding to 5 scales and 3 aspect ratios for the RPN. A total of 200k iterations of SGD training is performed on 8 GPUs. We keep boxes with an objectness score higher than 0.001, which results in a mAP of $74.5$ and a recall rate of $91.6$ with an average of 37 boxes per image. During the joint training of the PRU, two random frames are sampled from a video with a temporal interval between $6$ and $18$. We use the results of the base detector as input ROIs for propagation. To obtain the MHI between frame $t$ and frame $t+\tau$, when $\tau$ is larger than $6$ we uniformly sample five intermediate frames in addition to frames $t$ and $t+\tau$, for acceleration. The batch size is set to 64 and each GPU holds 8 images in each iteration. Training lasts 90 epochs with a learning rate of $0.002$, followed by 30 epochs with a learning rate of $0.0002$. At each stage of the inference, we apply non-maximum suppression (NMS) with a threshold of $0.5$ to bidirectionally propagated boxes with the same class label before they are further refined. The propagation sources of suppressed boxes are considered linked with those of the retained ones, so as to form object tubes. For the tube rescoring, we train a classifier with a ResNet-101 backbone, and $K=6$ frames are sampled from each tube during inference. \subsection{Results} \label{subsec:results} We summarize the cost/performance curves of our approach, designed based on the Scale-Time Lattice (ST-Lattice), and of existing methods in Figure~\ref{fig:overall-results}. The tradeoff is made by varying the temporal interval between key frames. The proposed ST-Lattice is clearly better than baselines such as na\"{i}ve interpolation and DFF~\cite{zhu2017deep}, which achieves a real-time detection rate by using optical flow to propagate features. ST-Lattice also achieves a better tradeoff than state-of-the-art methods, including D\&T~\cite{feichtenhofer2017detect}, TPN$+$LSTM~\cite{kang2017object}, and FGFA~\cite{zhu2017flow}. In particular, our method achieves a mAP of $79.6$ at $20$ fps, which is competitive with D\&T~\cite{feichtenhofer2017detect}, which achieves $79.8$ at about $5$ fps. By further trading accuracy for speed in key frame selection, our approach still maintains a mAP of $79.0$ at an impressive $62$ fps. We show the detailed class-wise performance in the supplementary material. To further demonstrate how the performance and computational cost are balanced in the ST-Lattice space, we pick a configuration (with a fixed key frame interval of 24) and show the time cost of each edge and the mAP of each node in Figure~\ref{fig:cost-allocation}. Thanks to the ST-Lattice, we can flexibly seek a suitable configuration to meet a variety of demands.
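To make the cost accounting concrete, the following minimal sketch estimates the throughput of one configuration from a simple two-term cost model (expensive detector on sparse key frames plus cheap per-frame lattice operations); the function name and all numbers below are illustrative placeholders, not measurements from our implementation. \begin{verbatim}
def config_fps(detector_ms, edge_ms_per_frame, key_interval,
               n_frames=1000):
    """Rough throughput of one lattice configuration.

    detector_ms       -- cost of the base detector per key frame
    edge_ms_per_frame -- amortized propagation/refinement cost
                         per output frame
    key_interval      -- temporal spacing of key frames
    Total cost = expensive detector on sparse key frames
               + cheap lattice operations on all frames.
    """
    n_keys = n_frames // key_interval + 1
    total_ms = n_keys * detector_ms + n_frames * edge_ms_per_frame
    return 1000.0 * n_frames / total_ms

# Placeholder costs: a 300 ms detector with 3 ms/frame lattice ops.
print(config_fps(300.0, 3.0, key_interval=24))  # ~64 fps
print(config_fps(300.0, 3.0, key_interval=6))   # ~19 fps
\end{verbatim} Under such a model, enlarging the key frame interval shifts the budget from the detector to the cheap lattice edges, which is exactly the tradeoff explored above.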
We provide some qualitative examples in Fig.~\ref{fig:examples}, showing the results of the per-frame baseline and of different nodes in the proposed ST-Lattice. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/overall_results.pdf} \vskip -0.45cm \caption[Performance and runtime on ImageNet VID dataset compared with existing methods.]{Performance and runtime on the ImageNet VID dataset compared with existing methods.\footnotemark} \label{fig:overall-results} \vskip -0.2cm \end{figure} \footnotetext{The mAP is evaluated on all frames, except for the fast version of D\&T, which is evaluated on sparse key frames. We expect its performance would be lower under the full all-frame evaluation if the detections on the other frames were interpolated.} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/cost_flow.pdf} \vspace{-0.4cm} \caption{Cost allocation and mAP in the Scale-Time Lattice space. The value in parentheses refers to the improvement relative to interpolation.} \label{fig:cost-allocation} \vspace{-0.2cm} \end{figure} \begin{figure*}[hbt] \centering \includegraphics[width=0.88\linewidth]{figures/example.pdf} \caption{\small Example video clips of the proposed Scale-Time Lattice. The per-frame baseline and the detection results at different nodes are shown in different rows.} \label{fig:examples} \vspace{-0.2cm} \end{figure*} \subsection{Ablation Study} \label{subsec:ablation-study} In the following ablation study, we use a fixed key frame interval of 24 unless otherwise indicated and run only the first stage of our approach. \vspace{0.1cm} \noindent \textbf{Temporal propagation.} In the design space of the ST-Lattice, there are many propagation methods that can be explored. We compare the proposed propagation module with other alternatives, such as linear interpolation and RGB-difference-based regression, under different temporal intervals. For a selected key frame interval, we evaluate the mAP of the propagated results on the intermediate frame between two consecutive key frames, without any refinement or rescoring. We use different intervals (from 2 to 24) to examine the balance between runtime and mAP. Results are shown in Figure~\ref{fig:propagation}. The fps is computed \wrt the detection time plus the propagation/interpolation time. The MHI-based method outperforms the other baselines by a large margin. It even surpasses per-frame detection results when the temporal interval is small ($10$ frames apart). To take a deeper look into the differences among those propagation methods, we divide the ground truths into three groups according to object motion, following~\cite{zhu2017flow}. We find that the gain mainly originates from objects with fast motion, which are considered more difficult than those with slow motion. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/propagation.pdf} \vskip -0.3cm \caption{Results of different propagation methods under different key frame intervals. (Left) The overall results. (Right) Detailed results based on different object motion.} \label{fig:propagation} \vspace{-0.4cm} \end{figure} \vspace{0.1cm} \noindent \textbf{Designs of PRU.} Our basic unit is a two-step regression component, the PRU, which takes $B_{t,s}$ and $B_{t+2\tau,s}$ as input and outputs $B_{t+\tau,s+1}$. Here, we test some variants of the PRU as well as a single-step regression module, as shown in Figure~\ref{fig:basic-unit-variants}. $M$ represents the motion displacement and $O$ denotes the offset \wrt the ground truth.
The results are shown in Table~\ref{tab:basic-unit}. We find that design (a), which decouples the estimation of the temporal motion displacement and the spatial offset, simplifies the learning target of the regressors, thus yielding better results than designs (b) and (d). In addition, comparing (a) and (c), joint training of the two-stage regression also improves the results by back-propagating the gradient of the refinement component to the propagation component, which in turn increases the mAP of the first-step results. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/unit_variants.pdf} \vspace{-0.2cm} \caption{Variants of the basic unit. (a) is our design in Section~\ref{sec:technical-design}, which regresses the motion and the offset respectively at two stages; (b) is a variant of our design that regresses the overall offset instead of the motion at the first stage; (c) is the same as (a) in structure but not trained jointly; (d) is a single-step regression unit.} \label{fig:basic-unit-variants} \end{figure} \begin{table}[htb] \centering \caption{Performance of different designs of the basic unit. $v_T$ and $v_S$ refer to $B_{t+\tau,s}$ (the blue node) and $B_{t+\tau,s+1}$ (the green node) in Figure~\ref{fig:basic-unit-variants}, respectively.} \small \begin{tabular}{*{4}{c}} \toprule & $v_T$ mAP (\%) & $v_S$ mAP (\%) & Runtime (ms) \\ \midrule unit (a) & 71.6 & 73.9 & 21 \\ unit (b) & 70.6 & 72.1 & 21 \\ unit (c) & 71.4 & 73.7 & 21 \\ unit (d) & N/A & 71.0 & 12 \\ \bottomrule \end{tabular} \label{tab:basic-unit} \end{table} \vspace{0.1cm} \noindent \textbf{Cost allocation.} We investigate different cost allocation strategies by trying networks of different depths for the propagation and refinement components. Allocating computational cost to different edges on the ST-Lattice does not have the same effect, so we test different strategies by replacing the networks of the propagation and refinement components with cheaper or more expensive ones. The results in Table~\ref{tab:network-size} indicate that the performance increases as the network gets deeper for both propagation and refinement. Notably, it is more fruitful to use a deeper network for spatial refinement than for temporal propagation. Specifically, keeping the other component at medium, increasing the network size for spatial refinement from small to large results in a gain of $1.2$ mAP ($72.5\rightarrow73.7$), while adding the same computational cost to $\cF_T$ only leads to an improvement of $0.8$ mAP ($72.7\rightarrow73.5$). \begin{table}[t] \centering \caption{Performance of different combinations of propagation (T) and refinement (S) components. The two numbers ($x$/$y$) represent the mAP after propagation and after spatial refinement, respectively. \textit{Small}, \textit{medium} and \textit{large} refer to channel-reduced ResNet-18, ResNet-18 and ResNet-34, respectively.} \small \begin{tabular}{c|c|ccc} \hline \multicolumn{2}{l|}{\multirow{2}{*}{}} & \multicolumn{3}{c}{Net S} \\ \cline{3-5} \multicolumn{2}{l|}{} & small & medium & large \\ \hline \multirow{3}{*}{\rotatebox{90}{Net T}} & small & 67.7/71.1 & 67.7/72.7 & 67.8/72.6 \\ & medium & 71.5/72.5 & 71.6/73.9 & 71.5/73.7 \\ & large & 72.8/73.1 & 72.0/73.5 & 71.8/74.2 \\ \hline \end{tabular} \label{tab:network-size} \end{table} \vspace{0.1cm} \noindent \textbf{Key frame selection.} The selection of input nodes is another design option available on the ST-Lattice.
In order to compare the effects of different key frame selection strategies, we evaluate the na\"{i}ve interpolation approach and the proposed ST-Lattice based on both uniformly sampled and adaptively selected key frames. The results are shown in Figure~\ref{fig:keyframe}. For the na\"{i}ve interpolation, the adaptive scheme leads to a large performance gain. Though adaptive key frame selection does not bring as much improvement to the ST-Lattice as it does to interpolation, it is still superior to uniform sampling. In particular, its advantage stands out as the interval gets larger. Adaptive selection works better because, through our formulation, harder samples are selected for the per-frame detector (rather than for propagation), leaving easier samples to propagation. This phenomenon can be observed by comparing the mAP of detections on adaptively selected key frames with that on uniformly sampled ones ($73.3$ vs $74.1$): the lower mAP indicates that harder samples are indeed selected by the adaptive scheme. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/adaptive.pdf} \vspace{-0.2cm} \caption{Uniformly sampled and adaptively selected key frames.} \label{fig:keyframe} \vspace{-0.3cm} \end{figure} \section{Scale-Time Lattice} \label{sec:framework} \begin{figure*} \centering \includegraphics[height=158pt]{figures/framework.pdf} \vspace{-0.1cm} \caption{\small The Scale-Time Lattice, where each node represents the detection results at a certain scale and time point, and each edge represents an operation from one node to another. In particular, the horizontal edges (in blue) represent the temporal propagation from one time step to the next, while the vertical edges (in green) represent the spatial refinement from low to high resolutions. Given a video, image-based detection is only performed at sparsely chosen key frames, and the results are propagated along a pre-defined path to the bottom row. The final results at the bottom cover all time points. } \label{fig:stlattice} \end{figure*} In developing a framework for video object detection, our primary goal is to \emph{precisely} localize objects in each frame, while meeting runtime requirements, \eg~a high detection speed. One way to achieve this is to apply the expensive object detectors on as few key frames as possible, and rely on spatial and temporal connections to generate detection results for the intermediate frames. While this is a natural idea, finding an optimal design is non-trivial. In this work, we propose the \emph{Scale-Time Lattice}, which unifies sparse image-based detection and the construction of dense video detection results into a single framework. A good balance between computational cost and detection performance can then be achieved by carefully allocating resources to different components within this framework. The \emph{Scale-Time Lattice}, as shown in Fig.~\ref{fig:stlattice}, is formulated as a directed acyclic graph. Each node in this graph stands for the intermediate detection results at a certain spatial resolution and time point, in the form of bounding boxes. The nodes are arranged in a way similar to a lattice: from left to right, they follow the temporal order, while from top to bottom, their scales increase gradually. An edge in the graph represents a certain operation that takes the detection results from the head node and produces the detection results at the tail node.
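To make this graph structure concrete, the following minimal sketch represents the lattice as a DAG with the two kinds of edges depicted in Fig.~\ref{fig:stlattice}; class and method names are our own illustration and do not correspond to the actual implementation. \begin{verbatim}
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    t: int  # time step (left to right)
    s: int  # scale level (top to bottom, increasing resolution)

@dataclass
class Lattice:
    # (head, tail) -> cost of the operation on that edge, in ms
    edges: dict = field(default_factory=dict)

    def add_temporal_edge(self, head, tail, cost):
        # horizontal edge: same scale, advancing in time
        assert head.s == tail.s and head.t != tail.t
        self.edges[(head, tail)] = cost

    def add_spatial_edge(self, head, tail, cost):
        # vertical edge: same time step, next (finer) scale
        assert head.t == tail.t and tail.s == head.s + 1
        self.edges[(head, tail)] = cost

    def path_cost(self, path):
        # total cost of carrying detections along a path of nodes
        return sum(self.edges[(a, b)]
                   for a, b in zip(path, path[1:]))
\end{verbatim}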
In this work, we define two key operations, \emph{temporal propagation} and \emph{spatial refinement}, which respectively correspond to the horizontal and vertical edges in the graph. In particular, the \emph{temporal propagation} edges connect nodes at the same spatial scale but adjacent time steps, while the \emph{spatial refinement} edges connect nodes at the same time step but neighboring scales. Along this graph, detection results are propagated from one node to another via the operations introduced above, following certain paths. Eventually, the video detection results can be read off from the nodes at the bottom row, which are at the finest scale and cover every time step. On top of the Scale-Time Lattice, a video detection pipeline involves three steps: 1) generating object detection results on sparse key frames; 2) planning the paths from the image-based detection results (input nodes) to the dense video detection results (output nodes); 3) propagating key frame detection results to the intermediate frames and refining them across scales. The detection accuracy of the approach is measured at the output nodes. The Scale-Time Lattice framework provides a rich design space for optimizing the detection pipeline. Since the total computational cost equals the sum of the costs along all paths, including the cost of invoking image-based detectors, it is convenient to seek a cost/performance tradeoff by carefully allocating the computational budget to different elements of the lattice. For example, by sampling more key frames, we can improve detection performance, but this also introduces heavy computational cost. On the other hand, we find that with much cheaper networks, the propagation/refinement edges can carry the detection results over a long path while still maintaining competitive accuracy. Hence, we may obtain a much better accuracy/cost tradeoff if the cost budget is instead spent on the right components. Unlike previous pursuits of an accuracy/cost balance, such as spatial pyramids or feature flow, the Scale-Time Lattice operates from coarse to fine both temporally and spatially: the operation flow across the scale-time lattice narrows the temporal interval while increasing the spatial resolution. In the following section, we describe the technical details of the individual operations along the lattice. \section{Conclusion} \label{sec:conclusion} We have presented the Scale-Time Lattice, a flexible framework that offers a rich design space for balancing performance and cost in video object detection. It provides a joint perspective that integrates detection, temporal propagation, and across-scale refinement. We have shown various configurations designed within this space and demonstrated their competitive performance against state-of-the-art video object detectors at much faster speed. The proposed Scale-Time Lattice is not only useful for designing video object detection algorithms, but can also be applied to other video-related domains such as video object segmentation and tracking. \vspace{-0.2cm} \paragraph{Acknowledgment} This work is partially supported by the Big Data Collaboration Research grant from SenseTime Group (CUHK Agreement No. TS1610626), the Early Career Scheme (ECS) of Hong Kong (No. 24204215), and the General Research Fund (GRF) of Hong Kong (No. 14236516).
\section{Related Work} \label{sec:related} \noindent \textbf{Object detection in images.} Contemporary object detection methods have been dominated by deep CNNs, most of which follow two paradigms, \emph{two-stage} and \emph{single-stage}. A two-stage pipeline first generates region proposals, which are then classified and refined. In the seminal work~\cite{girshick2014rich}, Girshick \etal proposed R-CNN, an initial instantiation of the two-stage paradigm. More efficient frameworks have been developed since then. Fast R-CNN~\cite{girshick2015fast} accelerates feature extraction by sharing computation. Faster R-CNN~\cite{ren2015faster} takes a step further by introducing a Region Proposal Network (RPN) to generate region proposals and sharing features across stages. Recently, new variants, \eg~R-FCN~\cite{dai2016r}, FPN~\cite{lin2017feature}, and Mask R-CNN~\cite{he2017mask}, have further improved the performance. Compared to two-stage pipelines, a single-stage method is often more efficient but less accurate. Liu \etal~\cite{liu2016ssd} proposed the Single Shot Detector (SSD), an early attempt of this paradigm, which generates outputs from default boxes on a pyramid of feature maps. Shen \etal~\cite{shen2017dsod} proposed DSOD, which is similar but based on DenseNet~\cite{huang2017densely}. YOLO~\cite{redmon2016you} and YOLOv2~\cite{redmon2017yolo9000} present an alternative that frames detection as a regression problem. Lin \etal~\cite{lin2017focal} proposed the use of focal loss along with RetinaNet, which tackles the imbalance between foreground and background classes. \vspace{5pt} \noindent \textbf{Object detection in videos.} Compared with object detection in images, video object detection was less studied until the VID challenge was introduced in ImageNet. Han \etal~\cite{han2016seq} proposed Seq-NMS, which builds high-confidence bounding box sequences and rescores boxes to the average or maximum confidence. The method serves as a post-processing step, thus requiring extra runtime over per-frame detection. Kang \etal~\cite{kang2016object,kang2017object} proposed a framework that integrates per-frame proposal generation, bounding box tracking and tubelet re-scoring. It is very expensive, as it requires per-frame feature computation by deep networks. Zhu \etal~\cite{zhu2017deep} proposed an efficient framework that runs expensive CNNs on sparse, regularly selected key frames; features are propagated to other frames with optical flow. The method achieves a 10$\times$ speedup over per-frame detection at the cost of a $4.4\%$ mAP drop (from $73.9\%$ to $69.5\%$). Our work differs from~\cite{zhu2017deep} in that we select key frames adaptively rather than at fixed intervals. In addition, we perform temporal propagation in a scale-time lattice space rather than in a single step as in~\cite{zhu2017deep}. Based on the aforementioned work, Zhu \etal~\cite{zhu2017flow} proposed to aggregate nearby features along the motion path, improving the feature quality. However, this method runs slowly, at around $1$ fps, due to dense detection and flow computation. Feichtenhofer \etal~\cite{feichtenhofer2017detect} proposed to learn object detection and cross-frame tracking with a multi-task objective and to link frame-level detections to tubes; they do not explore temporal propagation, performing only interpolation between frames. There are also weakly supervised methods~\cite{misra2015watch,prest2012learning,chen2017discover} that learn object detectors from videos.
\vspace{5pt} \noindent \textbf{Coarse-to-fine approaches.} The coarse-to-fine design has been adopted for various problems such as face alignment~\cite{zhang2014coarse,zhu2015face}, optical flow estimation~\cite{hu2016efficient,ilg2017flownet}, semantic segmentation~\cite{li2017not}, and super-resolution~\cite{huang2015single,lai2017deep}. These approaches mainly adopt cascaded structures to refine results from low resolution to high resolution. Our approach, however, adopts the coarse-to-fine behavior in two dimensions, both spatially and temporally. The refinement process forms a 2-D Scale-Time Lattice space that allows gradual discovery of denser and more precise bounding boxes. \section{Technical Design} \label{sec:technical-design} In this section, we introduce the design of the key components in the Scale-Time Lattice framework and show how they work together to achieve an improved balance between performance and cost. As shown in Figure~\ref{fig:teaser}, the lattice comprises compound structures that connect with each other repeatedly to perform temporal propagation and spatial refinement. We call them \emph{Propagation and Refinement Units (PRUs)}. After selecting a small number of key frames and obtaining the detection results thereon, we propagate the results across time and scales via PRUs until they reach the output nodes. Finally, the detection results at the output nodes are integrated into spatio-temporal tubes, and we use a tube-level classifier to reinforce the results. \subsection{Propagation and Refinement Unit (PRU)} \label{subsec:pru} The PRU takes the detection results on two consecutive key frames as input, propagates them to an intermediate frame, and then refines the outputs to the next scale, as shown in Figure~\ref{fig:pru}. Formally, we denote the detection results at time $t$ and scale level $s$ as $B_{t,s}=\{b_{t,s}^0,b_{t,s}^1,\dots,b_{t,s}^{n_t}\}$, which is a set of bounding boxes $b_{t,s}^i=(x_{t,s}^i, y_{t,s}^i, w_{t,s}^i, h_{t,s}^i)$. Similarly, we denote the ground truth bounding boxes as $G_t=\{g_t^0,g_t^1,\dots,g_t^{m_t}\}$. In addition, we use $I_t$ to denote the frame image at time $t$ and $M_{t \rightarrow t+\tau}$ to denote the motion representation from frame $t$ to $t+\tau$. A PRU at the $s$-level consists of a temporal propagation operator $\cF_T$, a spatial refinement operator $\cF_S$, and a simple rescaling operator $\cF_R$. Its workflow is to output $(B_{t,s+1}, B_{t+\tau,s+1}, B_{t+2\tau,s+1})$ given $B_{t,s}$ and $B_{t+2\tau,s}$. The process can be formalized as \begin{align} B_{t+\tau,s}^L &= \cF_T(B_{t,s}, M_{t\rightarrow t+\tau}), \\ B_{t+\tau,s}^R &= \cF_T(B_{t+2\tau,s}, M_{t+2\tau\rightarrow t+\tau}), \\ B_{t+\tau,s} &= B_{t+\tau,s}^L \cup B_{t+\tau,s}^R, \\ B_{t+\tau,s+1} &= \cF_S(B_{t+\tau,s}, I_{t+\tau}), \\ B_{t,s+1} &= \cF_R(B_{t, s}), \ B_{t+2\tau,s+1} = \cF_R(B_{t+2\tau, s}). \end{align} The procedure can be briefly explained as follows: (1) $B_{t,s}$ is propagated temporally to the time step $t+\tau$ via $\cF_T$, resulting in $B^L_{t+\tau,s}$. (2) Similarly, $B_{t+2\tau,s}$ is propagated to the time step $t+\tau$ in the opposite direction, resulting in $B^R_{t+\tau,s}$. (3) $B_{t+\tau,s}$, the results at time $t + \tau$, are then formed by their union. (4) $B_{t+\tau,s}$ is refined to $B_{t+\tau,s+1}$ at the next scale via $\cF_S$. (5) $B_{t,s+1}$ and $B_{t+2\tau,s+1}$ are simply obtained by rescaling $B_{t, s}$ and $B_{t+2\tau, s}$.
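The workflow above can be summarized in a few lines of Python. This is only a schematic sketch: \texttt{F\_T}, \texttt{F\_S} and \texttt{F\_R} stand in for the learned propagation and refinement networks and the rescaling operator, and the motion and image inputs for the representations described in the following paragraphs.
\begin{verbatim}
# Schematic of one Propagation and Refinement Unit (PRU); the operators
# and inputs are stand-ins for the components defined in the text.
def pru_step(B_t, B_t2tau, I_mid, M_fwd, M_bwd, F_T, F_S, F_R):
    B_mid_L = F_T(B_t,     M_fwd)    # propagate t        -> t + tau
    B_mid_R = F_T(B_t2tau, M_bwd)    # propagate t + 2tau -> t + tau
    B_mid = B_mid_L + B_mid_R        # union of the two propagated sets
    B_mid_up = F_S(B_mid, I_mid)     # refine to the next (finer) scale
    # rescale the endpoint results and return all next-scale outputs
    return F_R(B_t), B_mid_up, F_R(B_t2tau)
\end{verbatim}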
Designing an effective PRU pipeline is non-trivial. Since the key frames are sampled sparsely to achieve high efficiency, there can be large motion displacement and scale variance in between. Our solution, as outlined above, is to factorize the workflow into two key operations, $\cF_T$ and $\cF_S$. In particular, $\cF_T$ deals with the large motion displacement between frames, taking into account the motion information; this operation roughly localizes the objects at time $t+\tau$. However, $\cF_T$ focuses on the object movement and does not consider the offset between the detection results $B_{t,s}$ and the ground truth $G_t$. Such deviations accumulate into the gap between $B_{t+\tau,s}$ and $G_{t+\tau}$. $\cF_S$ is designed to remedy this effect by regressing the bounding box offsets in a coarse-to-fine manner, thus leading to more precise localization. These two operations work together and are applied iteratively along the scale-time lattice to produce the final detection results. \begin{figure} \centering \includegraphics[width=0.68\linewidth]{figures/pru.pdf} \caption{\small A Propagation and Refinement Unit.} \label{fig:pru} \vspace{-0.3cm} \end{figure} \vspace{-12pt} \paragraph{Temporal propagation} The idea of temporal propagation has previously been explored in the video object detection literature~\cite{zhu2017deep,zhu2017flow,kang2016object}. Many of these methods~\cite{zhu2017deep,zhu2017flow} rely on optical flow to propagate detection results. Despite its accuracy, this approach is expensive for a real-time system and is not tailored to encoding motion information over a long time span. In our work, we adopt the \emph{Motion History Image (MHI)}~\cite{bobick2001recognition} as the motion representation, which can be computed very efficiently and preserves sufficient motion information for the propagation. We represent the motion from time $t$ to $t + \tau$ as $M_{t\rightarrow t+\tau}=(H_{t\rightarrow t+\tau}, I_t^{(g)}, I_{t+\tau}^{(g)})$. Here $H_{t\rightarrow t+\tau}$ denotes the MHI from $t$ to $t+\tau$, and $I_t^{(g)}$ and $I_{t+\tau}^{(g)}$ denote the gray-scale images of the two frames, which serve as additional channels to enrich the motion representation with more details. We use a small network (ResNet-18 in our experiments) with an RoIAlign layer~\cite{he2017mask} to extract the features of each box region. On top of the RoI-wise features, a regressor is learned to predict the object movement from $t$ to $t+\tau$. To train the regressor, we adopt a supervision similar to~\cite{kang2017object}, which learns the relative movement from $G_t$ to $G_{t+\tau}$. The regression target of the $j$-th bounding box, $\Delta_{\cF_T}^{*j}$, is defined as the relative movement between the best-overlapping ground truth box $g_t^j$ and the corresponding box $g_{t+\tau}^j$ on frame $t+\tau$, adopting the same transformation and normalization used in most detection methods~\cite{girshick2014rich,girshick2015fast}. \vspace{-12pt} \paragraph{Coarse-to-fine refinement} After propagation, $B_{t+\tau,s}$ is supposed to be around the target objects but may not be precisely localized. The refinement operator $\cF_S$ adopts a structure similar to the propagation operator and aims to refine the propagated results. It takes $I_{t+\tau}$ and the propagated boxes $B_{t+\tau,s}$ as inputs and yields refined boxes $B_{t+\tau,s+1}$. The regression target $\Delta_{\cF_S}^*$ is calculated as the offset of $B_{t+\tau,s}$ \wrt~$G_{t+\tau}$.
In the scale-time lattice, smaller scales are used in early stages and larger scales in later stages; thereby, the detection results are refined in a coarse-to-fine manner. \vspace{-12pt} \paragraph{Joint optimization} The temporal propagation network $\cF_T$ and the spatial refinement network $\cF_S$ are jointly optimized with a multi-task loss in an end-to-end fashion: \begin{equation} \begin{split} &L(\Delta_{\cF_T}, \Delta_{\cF_S}, \Delta_{\cF_T}^*, \Delta_{\cF_S}^*) = \\ &\frac{1}{N}\sum_{j=1}^N L_{\cF_T}(\Delta_{\cF_T}^j, \Delta_{\cF_T}^{*j}) + \lambda\frac{1}{N}\sum_{j=1}^NL_{\cF_S}(\Delta_{\cF_S}^j, \Delta_{\cF_S}^{*j}), \end{split} \end{equation} where $N$ is the number of bounding boxes in a mini-batch, $\Delta_{\cF_T}$ and $\Delta_{\cF_S}$ are the network outputs of $\cF_T$ and $\cF_S$, and $L_{\cF_T}$ and $L_{\cF_S}$ are the smooth L1 losses of the temporal propagation and spatial refinement networks, respectively. \subsection{Key Frame Selection and Path Planning} \label{subsec:keyframe-selection} Under the Scale-Time Lattice framework, the selected key frames form the input nodes, whose number and quality are critical to both detection accuracy and efficiency. The most straightforward approach to key frame selection is uniform sampling, which is widely adopted in previous methods~\cite{zhu2017deep,feichtenhofer2017detect}. While the uniform sampling strategy is simple and effective, it ignores the key fact that not all frames are equally important and effective for detection and propagation. Thus a non-uniform frame selection strategy could be more desirable. To this end, we propose an adaptive selection scheme based on our observation that temporal propagation results tend to be inferior to single-frame image-based detection when the objects are small and moving quickly. Thus the density of key frames should depend on the propagation difficulty; namely, we should select key frames more frequently in the presence of small or fast-moving objects. The adaptive frame selection process works as follows. We first run the detector on very sparse frames $\{t_0, t_1, t_2, \dots\}$ which are uniformly distributed. Given the detection results, we evaluate how \emph{easily} the results can be propagated, based on both the object size and motion. The \emph{easiness measure} is computed as \begin{align} e_{i,i+1} = \frac{1}{|I|}\sum_{(j,k)\in I}s_{t_i,t_{i+1}}^{j,k}m_{t_i,t_{i+1}}^{j,k} \end{align} where $I$ is the set of matched indices of $B^\prime_{t_i}$ and $B^\prime_{t_{i+1}}$, obtained through bipartite matching based on confidence scores and bounding box IoUs, $s_{t_i,t_{i+1}}^{j,k}=\frac{1}{2}(\sqrt{\text{area}(b_{t_i}^j)} + \sqrt{\text{area}(b_{t_{i+1}}^k)})$ is the object size measure, and $m_{t_i,t_{i+1}}^{j,k}=\text{IoU}(b_{t_i}^j, b_{t_{i+1}}^k)$ is the motion measure. Note that since the results can be noisy, we only consider boxes with high confidence scores. If $e_{i,i+1}$ falls below a certain threshold, an extra key frame $\bar{t}_{i,i+1}=\frac{t_i+t_{i+1}}{2}$ is added (a code sketch of this test is given below). This process is conducted for only one pass in our experiments. With the selected key frames, we propose a simple scheme to plan the paths in the scale-time lattice from input nodes to output nodes. In each stage, we use propagation edges to link the nodes at different time steps, and then use a refinement edge to connect the nodes across scales.
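As an illustration of the selection test, the following Python sketch computes the easiness measure for a set of matched box pairs and decides whether an extra key frame is needed; the helper names are hypothetical, and the bipartite matching step is abstracted away.
\begin{verbatim}
import math

# Boxes are (x, y, w, h); matched_pairs come from a bipartite matching on
# confidence scores and IoUs (not shown). All names are illustrative.
def iou(a, b):
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0.0 else 0.0

def easiness(matched_pairs):
    # mean over pairs of (size measure) * (motion measure), as in the text
    vals = [0.5 * (math.sqrt(bi[2] * bi[3]) + math.sqrt(bk[2] * bk[3]))
            * iou(bi, bk) for bi, bk in matched_pairs]
    return sum(vals) / len(vals) if vals else 0.0

def need_extra_keyframe(matched_pairs, threshold):
    # low easiness (small or fast-moving objects) -> insert a key frame
    return easiness(matched_pairs) < threshold
\end{verbatim}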
For the planned paths, results at nodes $(t_i, s)$ and $(t_{i+1}, s)$, at time points $t_i$ and $t_{i+1}$ of scale level $s$, are propagated to $((t_i + t_{i+1})/2, s)$ and then refined to $((t_i + t_{i+1})/2, s+1)$. We set the maximum number of stages to $2$. After two stages, we use linear interpolation as a very cheap propagation approach to generate results for the remaining nodes. More complex path planning may further improve the performance at the same cost, which is left for future work. \subsection{Tube Rescoring} \label{subsec:tube-rescoring} By associating the bounding boxes across frames at the last stage with the propagation relations, we can construct \emph{object tubes}. Given a linked tube $\cT=(b_{t_0}, \dots, b_{t_n})$ of bounding boxes that starts at frame $t_0$ and terminates at $t_n$, with label $l$ given by the original detector, we train an R-CNN-like classifier to re-classify it following the scheme of Temporal Segment Networks (TSN)~\cite{wang2016temporal}. During inference, we uniformly sample $K$ cropped bounding boxes from each tube as the input of the classifier, and the class scores are fused to yield a \emph{tube-level} prediction. After the classification, the scores of the bounding boxes in $\cT$ are adjusted by the following equation: \[ s_i = \begin{cases} s_i + s^\prime, & \text{if } l = l^\prime \\ \frac{1}{n}\sum_{i=0}^n s_i, & \text{otherwise} \end{cases} \] where $s_i$ is the class score of $b_{t_i}$ given by the detector, and $s^\prime$ and $l^\prime$ are the score and label prediction of $\cT$ given by the classifier. After the rescoring, the scores of hard positive samples are boosted and false positives are suppressed. \section{Introduction} \label{sec:intro} Object detection in videos has received increasing attention as it sees immense potential in real-world applications such as video-based surveillance. Despite the remarkable progress in image-based object detectors~\cite{girshick2015fast,ren2015faster,dai2016r}, extending them to the video domain remains challenging. Conventional CNN-based methods~\cite{kang2016object,kang2017object} typically detect objects on a per-frame basis and integrate the results via temporal association and box-level post-processing. Such methods are slow, resource-demanding, and often unable to meet the requirements of real-time systems. For example, a competitive detector based on Faster R-CNN~\cite{ren2015faster} can only operate at $7$ \textit{fps} on a high-end GPU like Titan X. A typical approach to this problem is to optimize the underlying networks, \eg~via model compression~\cite{iandola2016squeezenet,howard2017mobilenets,zhang2017shufflenet}, but this requires tremendous engineering effort. On the other hand, videos, by their special nature, provide a different dimension for optimizing the detection framework. Specifically, there exists strong continuity among consecutive frames in a natural video, which suggests an alternative way to reduce computational cost, namely to propagate the computation temporally. Recently, several attempts along this direction were made, \eg~tracking bounding boxes~\cite{kang2017object} or warping features~\cite{zhu2017deep}. However, the improvement in the overall performance/cost tradeoff remains limited -- the pursuit of one side often causes significant expense to the other. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/teaser.pdf} \caption{The proposed Scale-Time Lattice permits a flexible design space for performance-cost tradeoff.
} \label{fig:teaser} \vspace{-0.2cm} \end{figure} Moving beyond such limitations requires a \emph{joint} perspective. Generally, detecting objects in a video is a multi-step process. The tasks studied in previous work, \eg~image-based detection, temporal propagation, and coarse-to-fine refinement, are just individual steps in this process. While improvements to individual steps have been studied extensively, a key question is still left open: \emph{``what is the most cost-effective strategy to combine them?''} Driven by this joint perspective, we propose to explore a new strategy, namely pursuing a \emph{balanced} design over a \emph{Scale-Time Lattice}, as shown in Figure~\ref{fig:teaser}. The Scale-Time Lattice is a unified formulation in which the steps mentioned above are directed links between nodes at different scale-time positions. From this unified view, one can readily see how different steps contribute and how the computational cost is distributed. More importantly, this formulation comes with a \emph{rich design space}, where one can flexibly reallocate computation on demand. In this work, we develop a balanced design by leveraging this flexibility. Given a video, the proposed framework first applies expensive object detectors to key frames selected \emph{sparsely} and \emph{adaptively} based on object motion and scale, to obtain effective bounding boxes for propagation. These boxes are then propagated to intermediate frames and refined across scales (from coarse to fine) via substantially cheaper networks. For this purpose, we devise a new component based on motion history that can propagate bounding boxes effectively and efficiently. This framework remarkably reduces the amortized cost by invoking the detector only sparsely, while maintaining competitive performance with a cheap but very effective propagation component. This also makes it convenient to seek a good performance/cost tradeoff, \eg~by tuning the key frame selection strategy or the network complexity at individual steps. The main contributions of this work lie in several aspects: (1) the \emph{Scale-Time Lattice}, which provides a joint perspective and a rich design space; (2) a detection framework devised thereon that achieves a better speed/accuracy tradeoff; and (3) several new technical components, \eg~a network for more effective temporal propagation and an adaptive scheme for key frame selection. Without bells and whistles, \eg~model ensembling and multi-scale testing, we obtain competitive performance on par with the methods~\cite{zhu2017deep,zhu2017flow} that won the ImageNet VID challenge 2017, but with a significantly faster running speed of 20 fps.
\section{Introduction and results} The gauge/gravity duality \cite{Maldacena,Witten}, as the most concrete realization of the holographic principle, has been of interest to physicists over the years \cite{Dine,Gross,Witten,Shuryak,Mojaza}. The basic idea is that a gravitational theory defined on a $d+1$ dimensional background, the bulk, is equivalent to a gauge theory defined on the $d$ dimensional spacetime that forms the bulk's boundary. This correspondence is also a weak/strong duality, which makes it a useful and powerful tool for studying strongly coupled field theories through a gravitational description \cite{Maldacena,Gubser,Witten,Aharony}. Remarkably, it extends to time-dependent situations and is therefore appropriate for studying non-equilibrium phenomena. Various areas, ranging from the Relativistic Heavy Ion Collider to condensed matter physics, have been explored with this duality (for a review see \cite{Sachdev,Kovchegov,Gelis,Muller}).\\ Entanglement entropy is one of the most intriguing non-local quantities; it measures the quantum entanglement between two sub-systems of a given system. It can also be used to classify various quantum phase transitions and critical points \cite{Ali-Akbari,Klebanov,Vidal}. Since quantum field theories have infinitely many degrees of freedom, the entanglement entropy is divergent. Thus it is a scheme-dependent quantity and needs to be regulated. It has been shown that the leading divergent term is proportional to the area of the entangling surface (for $d>2$) \cite{Bombelli,Srednicki} \begin{eqnarray} S_{EE}\propto \frac{Area}{\epsilon ^{d-2}}, \end{eqnarray} where $\epsilon$ is the $UV$ cut-off of the quantum field theory. This is called the area law (see also \cite{Casini1,Das}). Note that the cut-off dependence of the entanglement entropy makes it a non-universal quantity. \\ Due to the $UV$ divergence structure of the entanglement entropy, it is natural to introduce an appropriate quantity called the mutual information, an important concept in information theory with several advantages over the entanglement entropy. It is a finite, positive semi-definite quantity which measures the total correlation between two sub-systems $A$ and $B$ \cite{Fischler}. The tripartite information is another useful quantity in this context; it is defined for a system consisting of three spatial regions and measures the extensivity of the mutual information. It is also free of divergences and can take any value depending on the underlying field theory. In contrast to the mutual information, the tripartite information remains finite even when the regions share boundaries \cite{Balasubramanian}.\\ To understand $AdS/CFT$ (for a review see \cite{Aharony}), a particular case of the gauge/gravity duality where gravity lives in a background with a negative cosmological constant, it is highly important to study how the information in the $CFT$ is encoded in the gravity theory. Since the amount of information in a sub-system $A$ can be measured by the entanglement entropy of that sub-system, it is natural to ask how one can calculate this quantity on the gravity side. In \cite{Ryu1,Ryu2}, applying the $AdS/CFT$ correspondence, the authors showed that the entanglement entropy of a region $A$ in a $CFT$ is proportional to the area of the surface of minimal area among all surfaces whose boundaries coincide with the boundary of the region $A$; this is known as the Ryu-Takayanagi ($RT$) prescription.
Since both the mutual information and the tripartite information are combinations of entanglement entropies, they can then be calculated by the $RT$ prescription. Consequently, if one would like to quantify the amount of correlation between two sub-systems $A$ and $B$, the mutual information is the quantity that needs to be computed, and the tripartite information is the quantity to study for the degree of extensivity of the mutual information.\\ In this paper we study the time evolution of the holographic mutual and tripartite information of a strongly coupled $CFT$ (the initial state) which is driven, by a quantum quench, to a non-relativistic fixed point with Lifshitz scaling (the final state) as time evolves. On the gravity side, this non-equilibrium dynamics is equivalent to a background interpolating between a pure $AdS$ at past infinity and an asymptotically Lifshitz black hole at future infinity. We find the following interesting results for the mutual and tripartite information of the underlying background. \begin{itemize} \item The non-equilibrium dynamics following the breaking of the relativistic scaling symmetry leads to more correlation between the two sub-systems. Namely, the less symmetry, the greater the correlation. \item For slow quenches the mutual information approaches the adiabatic regime in the final state, $i.e.$ there is no dependence on the separation length between the two sub-systems. \item The mutual information undergoes a disentangling transition, at a given value of the separation length between the two sub-systems, beyond which it is identically zero. Moreover, the separation length of the disentangling transition in the final state is larger than that of the initial state. \item There is a specific regime of the parameters, namely small enough sub-system lengths and separation length, in the phase-space diagram of the two sub-systems where the mutual information is independent of the time evolution. \item The tripartite information is always non-positive during the symmetry-breaking quench. \end{itemize} \section{Review on background} The gauge/gravity duality \cite{Maldacena,Witten} provides a wide domain in which to study strongly coupled quantum field theories whose duals are gravitational theories in one higher dimension. This conjectured duality has been used to explore applications in condensed matter physics and quantum chromodynamics (for a review see \cite{Adams}). In the context of condensed matter, there are quantum systems exhibiting a non-relativistic scaling, referred to as Lifshitz scaling in the literature, of the following form in $d+1$ dimensions \begin{eqnarray} (t,x)\longrightarrow(\lambda^{z} t,\lambda x^{i}),\label{scale} \end{eqnarray} where $z$ is a dynamical critical exponent governing the anisotropy between spatial and temporal scaling and $x^{i}$ ($i=1,2,\dots,d$) denotes the spatial coordinates. The gauge/gravity logic suggests looking for a background metric in one higher dimension whose symmetries match those of the field theory living on the boundary.
In our case the following Lifshitz geometry was proposed in \cite{Hartnoll,Kachru} as a candidate background for the holographic dual of such a non-relativistic theory \begin{eqnarray}\label{Lif} ds^{2}= - \frac{r^{2z}}{L^{2z}}dt^{2}+\frac{r^2}{L^2}d\textbf{x}^2+\frac{L^2}{r^2}dr^{2},\label{metric} \end{eqnarray} where $l_{AdS}\equiv L(z=1)$, $z$ can take any positive value, and the scale transformation acts as (\ref{scale}) together with $r\rightarrow \lambda^{-1}r$. This metric enjoys nice properties: $(i)$ it is nonsingular and $(ii)$ all local invariants constructed from the Riemann tensor are constant and finite everywhere \cite{Camilo}. The case $z = 1$ is the famous Anti-de Sitter spacetime, whose symmetry, and that of its dual scale-invariant theory, is substantially enhanced. The $AdS$ geometry is a vacuum solution of a simple $d+1$ dimensional theory of gravity, namely general relativity with a negative cosmological constant \begin{eqnarray} S=\frac{1}{16\pi G_{d+1}}\int d^{d+1}x \sqrt{-g}(R+\frac{d(d-1)}{L^{2}}),\label{action1} \end{eqnarray} where $G_{d+1}$ is the Newton constant and $R$ is the Ricci scalar. Solutions with Lifshitz isometries were first presented in \cite{Kachru}. Einstein gravity with a negative cosmological constant alone does not support this geometry, and hence general relativity must be coupled to some matter content. Many models have been proposed in the literature to obtain such Lifshitz solutions, for example Einstein-Proca, Einstein-Maxwell-Dilaton and Einstein-$p$-form actions \cite{Kachru,Pang,Taylor,Camilo}, or the nonrelativistic Ho\v{r}ava-Lifshitz theory of gravity \cite{Griffin}. Here we consider a model involving gravity with a negative cosmological constant and a massive gauge field, whose action has the following form \cite{Camilo} \begin{eqnarray}\label{action2} S=\frac{1}{16\pi G_{d+1}}\int d^{d+1}x \sqrt{-g}[R+d(d-1)-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2} M^{2}A^{\mu}A_{\mu}], \end{eqnarray} where $F^{\mu\nu}$ is the rescaled field strength corresponding to the rescaled massive gauge field $A^{\mu}$ of mass $M$ (for more details see \cite{Korovin}). The Einstein-Proca equations of motion for the metric and gauge field are respectively given by \begin{subequations} \begin{align} &R_{\mu \nu}=-d g_{\mu \nu}+\frac{M^2}{2}A_{\mu} A_{\nu}+\frac{1}{2}F_{\mu}^{\sigma}F_{\nu \sigma}+\frac{1}{4(1-d)}F^{\rho \sigma}F_{\rho \sigma}g_{\mu \nu}\, ,\\ &\nabla _{\mu}F^{\mu \nu}=M^{2}A^{\nu}. \end{align} \end{subequations} If one defines \begin{eqnarray} M^{2}=\frac{zd(d-1)^2}{z^2+z(d-2)+(d-1)^2}\, , \qquad L^{2}=\frac{z^{2}+z(d-2)+(d-1)^{2}}{d(d-1)}, \end{eqnarray} a solution with Lifshitz scaling symmetry can be obtained from the action \eqref{action2} \begin{subequations} \begin{align} &\label{static} ds^{2}= - \frac{r^{2z}}{L^{2z}}dt^{2}+\frac{r^2}{L^2}d\textbf{x}^2+\frac{L^2}{r^2}dr^{2}\, ,\\ & A=\sqrt{\frac{2(z-1)}{z}}\frac{r^{z}}{L^{z}} dt\, , \end{align} \end{subequations} where the Lifshitz scaling can be seen from the following transformation \begin{eqnarray}\label{Lifshitzscaling} t\longrightarrow \lambda^{z} t\, , \,\,\,\,\,\,\,x^{i} \longrightarrow \lambda x^{i}\, , \,\,\,\,\,\,\, r\longrightarrow \lambda^{-1}r\, .
\end{eqnarray} It is obvious that for $z = 1$ the above solution reduces to the famous $AdS_{d+1}$ solution with unit curvature radius $l_{AdS}=1$.\\ The standard $AdS/CFT$ dictionary states that the presence of the massive gauge field $A_{\mu}$ in the bulk is dual to a vector primary operator $\zeta^{a}$ ($a=0,1,\dots,d$) of dimension $\Delta$ \cite{Korovin} \begin{eqnarray} \Delta=\frac{1}{2}[d+\sqrt{(d-2)^2+4M^2}]=\frac{d}{2}+\sqrt{\frac{(d-2)^2}{4}+\frac{zd(d-1)^2}{z^2+z(d-2)+(d-1)^2}}\, . \end{eqnarray} In other words, one can say that the action (\ref{action2}) controls the dynamics of a $CFT$ whose spectrum contains a vector primary operator of dimension $\Delta$. The asymptotic expansion of the bulk gauge field is given by \begin{eqnarray} A_{t}=r^{\Delta -d +1} A_{t}^{(0)}+\dots +\,r^{-(\Delta -1)} A_{t}^{(d)}+\dots\, , \end{eqnarray} where $A_{t}^{(0)}$ is the source of the dual operator and $A_{t}^{(d)}$ is related to its expectation value. It was shown in \cite{Korovin} that the Lifshitz geometries get close to $AdS$ when the dynamical exponent $z$ is close to unity, $i.e.$ $z=1+\epsilon^{2}$ with $\epsilon\ll1$. In this case the static solution (\ref{static}) reads \begin{subequations} \begin{align} &ds^2=-r^2[1+2\epsilon^2\ln r+\frac{\epsilon^2}{1-d}]dt^{2}+r^{2}[1+\frac{\epsilon^2}{1-d}]d\textbf{x}^2+[1-\frac{\epsilon^2}{1-d}]\frac{dr^2}{r^2}+O(\epsilon^4)\, , \\ &\label{gauge1} A=\sqrt{2}\epsilon r dt + O(\epsilon ^{3}) \,, \end{align} \end{subequations} and the corresponding mass $M$ and dual operator dimension $\Delta$ have the expansions \begin{eqnarray} M^{2}=d-1+(d-2)\epsilon^{2}+O(\epsilon^{4}) \, ,\qquad\Delta=d+\frac{d-2}{d}\epsilon^{2}+O(\epsilon^{4}) \,. \end{eqnarray} In this case the asymptotic expansion of the bulk gauge field is given by \begin{eqnarray}\label{gauge2} A_{t}=r(1+O(\epsilon ^{2})) A_{t}^{(0)}+\dots+\, r^{-(d -1)}(1+O(\epsilon ^{2})) A_{t}^{(d)}+\dots\, . \end{eqnarray} According to (\ref{gauge1}) and (\ref{gauge2}), and with the identifications \begin{eqnarray} A_{t}^{(0)}\equiv \sqrt{2}\epsilon + O(\epsilon ^{3})\, , \qquad A_{t}^{(d)}\equiv O(\epsilon^{3})\, , \qquad \Delta=d \, , \end{eqnarray} one can extend the standard $AdS/CFT$ dictionary in order to study the dual field theory. Indeed, this special class of Lifshitz spacetimes can be considered holographically as a continuous deformation of the corresponding $CFT$ by the time component of a vector primary operator $\zeta^{a}$ of conformal dimension $\Delta=d$, namely \begin{equation}\label{action lif} S_{Lif}=S_{CFT}+\sqrt{2}\epsilon\int d^{ d}x \zeta^{t}(x). \end{equation} It is worth mentioning that many Lifshitz-invariant solutions exist which are not of the above form, and for those one cannot holographically access the key features of the dual theory in this way.\\ In \cite{Camilo} the authors considered a nice mechanism to study the breaking of the symmetry of a $CFT$ towards a non-relativistic Lifshitz scaling with $z=1+\epsilon^{2}$. In fact, they considered a quantum quench profile $j(t)\equiv \sqrt{2}\epsilon J(t)$, coupled to the vector primary operator $\zeta^{t}(x)$ in the action (\ref{action lif}), which interpolates smoothly between $0$ and $\sqrt{2}\epsilon$. The former corresponds to a strongly coupled $CFT$ at zero temperature (the initial state) and the latter to a finite temperature fixed point with Lifshitz scaling (\ref{Lifshitzscaling}) (the thermal final state), as time evolves from past infinity to future infinity.
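As a quick numerical cross-check of the expressions for $M^{2}$, $L^{2}$ and $\Delta$ above, the following Python sketch verifies that at $z=1$ they reduce to $M^{2}=d-1$, $L=1$ and $\Delta=d$, in agreement with the statements in the text.
\begin{verbatim}
import math

# M^2, L^2 and Delta as functions of (z, d), following the formulas above.
def lifshitz_params(z, d):
    denom = z**2 + z*(d - 2) + (d - 1)**2
    M2 = z * d * (d - 1)**2 / denom
    L2 = denom / (d * (d - 1))
    Delta = d / 2.0 + math.sqrt((d - 2)**2 / 4.0 + M2)
    return M2, L2, Delta

for d in (3, 4, 5):
    M2, L2, Delta = lifshitz_params(1.0, d)   # the AdS point z = 1
    assert abs(M2 - (d - 1)) < 1e-12
    assert abs(L2 - 1.0) < 1e-12
    assert abs(Delta - d) < 1e-12
\end{verbatim}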
The new action governing the quench process has the following form \begin{eqnarray} S=S_{CFT}+\sqrt{2}\epsilon\int d^{ d}x J(t) \zeta^{t}(x). \end{eqnarray} In the following we briefly review this mechanism, which was carried out in \cite{Camilo}. Working in the ingoing Eddington-Finkelstein ($EF$) coordinate system ($\nu , r , \textbf{x}$) with arbitrary exponent $z$, consider the following ansatz for the metric and the gauge field \begin{subequations}\label{ansatz} \begin{align} &ds^{2}=2h(\nu ,r)d\nu dr-f(\nu ,r)d\nu ^{2}+r^{2}d\textbf{x}^2 \, , \\ &A(\nu ,r)=a(\nu ,r)d\nu +b(\nu ,r)dr \, , \end{align} \end{subequations} where $h,f,a$ and $b$ are four unknown functions. In order to focus on the case of interest, $i.e.$ $z=1+\epsilon ^{2}$, one should expand $h(\nu ,r),f(\nu ,r),a(\nu ,r)$ and $b(\nu ,r)$ in the ansatz (\ref{ansatz}) as power series in $\epsilon$, that is \begin{subequations} \begin{align} &f(\nu ,r)=\sum _{n=0}^{\infty}f^{n}(\nu ,r)\epsilon^{n} \, ,\\ &h(\nu ,r)=\sum _{n=0}^{\infty}h^{n}(\nu ,r)\epsilon^{n} \, ,\\ &a(\nu ,r)=\sum _{n=0}^{\infty}a^{n}(\nu ,r)\epsilon^{n} \, ,\\ &b(\nu ,r)=\sum _{n=0}^{\infty}b^{n}(\nu ,r)\epsilon^{n} \, , \end{align} \end{subequations} and then solve the equations of motion corresponding to the action (\ref{action2}) order by order in $\epsilon$, to leading non-trivial order for each function (here up to $\epsilon ^{2}$). To solve the equations of motion at a given order in $\epsilon$, consider the following ansatz for $f^{n}(\nu ,r),h^{n}(\nu ,r),a^{n}(\nu ,r)$ and $b^{n}(\nu ,r)$ \begin{subequations} \begin{align} &f^{n}(\nu ,r)=r^{2}\sum _{l=0}^{\infty}(f_{l}^{(n)}(\nu)+\tilde{f}_{l}^{(n)}(\nu)\ln r)\, r^{-l} \, , \\ &h^{n}(\nu ,r)=\sum _{l=0}^{\infty}(h_{l}^{(n)}(\nu)+\tilde{h}_{l}^{(n)}(\nu)\ln r)\,r^{-l} \, , \\ &a^{n}(\nu ,r)=r\sum _{l=0}^{\infty}(a_{l}^{(n)}(\nu)+\tilde{a}_{l}^{(n)}(\nu)\ln r)\,r^{-l} \, , \\ &b^{n}(\nu ,r)=\frac{1}{r}\sum _{l=0}^{\infty}(b_{l}^{(n)}(\nu)+\tilde{b}_{l}^{(n)}(\nu)\ln r)\,r^{-l} \, , \end{align} \end{subequations} along with the initial conditions (note that on the boundary side the initial state corresponds to a zero temperature state of the strongly coupled $CFT$, which is represented in the bulk by a pure $AdS$ geometry with no gauge field) \begin{subequations}\label{initial con} \begin{align} &f(\nu\rightarrow -\infty ,r)=r^{2}\, ,\\ &h(\nu\rightarrow -\infty ,r)=1\, ,\\ &a(\nu\rightarrow -\infty ,r)=0\, ,\\ &b(\nu\rightarrow -\infty ,r)=0\, , \end{align} \end{subequations} and the boundary conditions at $r\rightarrow\infty$ to order $\epsilon^{2}$ \begin{subequations}\label{boundary con} \begin{align} &f(\nu ,r\rightarrow \infty)=r^{2}(1+2\epsilon^{2}J(\nu)^{2}\ln r+\dots)\, ,\\ &h(\nu ,r\rightarrow \infty)=1+\epsilon^{2}J(\nu)^{2}\ln r+\dots\, ,\\ &a(\nu ,r\rightarrow \infty)=\sqrt{2}\epsilon J(\nu) r+\dots\, ,\\ &b(\nu ,r\rightarrow \infty)=0 \, , \end{align} \end{subequations} where $J(\nu)$ is the quench profile, which specifies how energy is injected into the system.
Considering a quantum quench of a $CFT$ living in ($2+1$) dimensions, and following the above scheme with the initial conditions (\ref{initial con}) and the boundary conditions (\ref{boundary con}), the solution for the metric and gauge field to order $\epsilon^{2}$ reads \cite{Camilo} \begin{subequations} \begin{align} &\label{final form1} A(\nu ,r)=\epsilon[a^{(1)}(\nu ,r)d\nu +b^{(1)}(\nu ,r)dr]+O(\epsilon^{3})\, ,\\ &\label{final form2} ds^{2}=2[1+\epsilon^{2}h^{(2)}(\nu ,r)]d\nu dr-[r^{2}+\epsilon^{2}f^{(2)}(\nu ,r)]d\nu ^{2}+r^{2}(dx_{1}^{2}+dx_{2}^{2})+O(\epsilon^{4}) \, , \end{align} \end{subequations} where $a^{(1)}(\nu ,r),b^{(1)}(\nu ,r),f^{(2)}(\nu ,r)$ and $h^{(2)}(\nu ,r)$ are given by \begin{subequations} \begin{align} &a^{(1)}(\nu ,r)=\sqrt{2}r(J(\nu)+\frac{\dot{J}(\nu)}{r}+\frac{\ddot{J}(\nu)}{2r^{2}})\, ,\\ &b^{(1)}(\nu ,r)=\frac{-\sqrt{2}}{r}(J(\nu)+\frac{\dot{J}(\nu)}{2r})\, ,\\ &f^{(2)}(\nu ,r)=2r^{2}(\ln r-\frac{1}{4})J(\nu)^{2}-3rJ(\nu)\ddot{J}(\nu)-\ddot{J}(\nu)^{2}-\frac{I(\nu)}{r}\, ,\\ &h^{(2)}(\nu ,r)=J(\nu)^{2}\ln r-\frac{J(\nu)\ddot{J}(\nu)}{r}-\frac{\ddot{J}(\nu)^{2}}{8r^{2}} \, , \end{align} \end{subequations} and the coefficient $I(\nu)$ is defined as \begin{eqnarray} I(\nu)=\frac{1}{2}\int _{-\infty}^{\nu} \ddot{J}(\omega)^{2}d\omega \, . \end{eqnarray} In the limit $\nu\rightarrow-\infty$, for which $J(\nu)$ goes to zero, we are left with the static $AdS$ solution with no gauge field (the zero temperature initial state), while in the limit $\nu\rightarrow\infty$, for which $J(\nu)$ goes to one, the final state corresponds to an asymptotically Lifshitz black brane (the thermal final state) \begin{eqnarray} ds_{f}^{2}=2(1+\epsilon^{2}\ln r)d\nu dr-r^{2}[1+2\epsilon^{2}(\ln r-\frac{1}{4})-\epsilon^{2}\frac{I_{f}}{r^{3}}]d\nu ^{2}+r^{2}(dx_{1}^{2}+dx_{2}^{2})+O(\epsilon^{4}) \, , \end{eqnarray} whose event horizon is located at $r=r_{h}$, given by the largest solution of the equation \begin{eqnarray}\label{horizon1} 1+2\epsilon^{2}(\ln r_{h}-\frac{1}{4})-\epsilon^{2}\frac{I_{f}}{r_{h}^{3}}=0 \, . \end{eqnarray} In \cite{Camilo} two specific quench profiles were considered, as probes of the quench dynamics, to study both local observables, such as the vacuum expectation values of the stress-energy tensor and of the quenching operator, and non-local ones, such as the entanglement entropy. In this paper we concentrate on the following profile \begin{eqnarray} J(\nu)=\frac{1}{2}(1+\tanh \frac{\nu}{\delta t}), \end{eqnarray} where $\delta t$ is a time scale which we call the quenching time. At the asymptotic boundary $r=\infty$ the coordinates $\nu$ and $t$ coincide; thus one can interpret the bulk quench profile $J(\nu)$ as $J(t)$ for an observer living on the boundary. The mechanism discussed above is only valid from the boundary region $r\rightarrow\infty$ down to $r\sim r_{h}$, where the event horizon of the final-state black hole, obtained from (\ref{horizon1}), is given by \cite{Camilo} \begin{eqnarray}\label{horizon2} r_{h}\simeq \frac{0.5 \epsilon ^{\frac{2}{3}}}{\delta t}. \end{eqnarray} Note that the temperature of the Lifshitz final state, $T$, is proportional to $r_{h}$, or equivalently \begin{eqnarray}\label{temperature} T \propto r_{h}\propto \frac{\epsilon ^{\frac{2}{3}}}{\delta t}.
\end{eqnarray} \section{Review on the entanglement entropy, mutual information and tripartite information} \begin{itemize} \item \textbf{Entanglement entropy:} The entanglement entropy is one of the most important quantities measuring the quantum entanglement among different degrees of freedom of a quantum mechanical system \cite{Horodecki,Casini}. In fact, entanglement entropy has emerged as a valuable tool to probe the physical information in quantum systems.\\ To define the entanglement entropy we decompose the total system into a sub-system $A$ and its complement $\bar{A}$. Accordingly, the total Hilbert space $\mathcal{H}$ becomes a direct product of $\mathcal{H}_{A}$ and $\mathcal{H}_{\bar{A}}$, \begin{eqnarray} \mathcal{H} = \mathcal{H}_{A} \otimes \mathcal{H}_{\bar{A}}. \end{eqnarray} We then define the reduced density matrix $\rho_{A}$ for the sub-system $A$ by integrating out the degrees of freedom in $\bar{A}$ \begin{eqnarray} \rho_{A} = Tr _{\bar{A}} [\rho], \end{eqnarray} where $\rho$ is the total density matrix of the entire system. The entanglement entropy is then defined as the von Neumann entropy of $\rho_{A}$ \begin{eqnarray} S_{A} = -Tr [\rho_{A} \log \rho_{A} ]. \end{eqnarray} When the system is in a pure state the von Neumann entropy of the complete system is zero and the following property is fulfilled \begin{eqnarray} S_{A} = S_{ \bar{A}}. \end{eqnarray} Notably, in a quantum field theory the entanglement entropy of a region $A$ contains short-distance divergences and behaves according to an area law \cite{Srednicki}. It can be shown that the entanglement entropy of two disjoint sub-systems $A_{1}$ and $A_{2}$ satisfies the so-called strong subadditivity condition \cite{Lieb} \begin{eqnarray}\label{subadd} S(A_{1}) + S(A_{2}) \geq S(A_{1}\cup A_{2})+S(A_{1}\cap A_{2}). \end{eqnarray} According to the $AdS/CFT$ correspondence, for large $N$ theories on the boundary there exist dual gravity theories in the bulk described by classical Einstein gravity with suitable matter content. The holographic entanglement entropy of a sub-system $A$ of the boundary field theory can be computed using the $RT$ prescription proposed in \cite{Ryu1,Ryu2} \begin{eqnarray}\label{RT} S_{A}=\frac{Area (\gamma_{A})}{4 G_{d+1}} \, , \end{eqnarray} where $\gamma_{A}$ is the minimal-area surface, extended into the bulk, whose boundary coincides with the boundary of $A$ (so that $\partial \gamma_{A} = \partial A $). We also require that $\gamma_{A}$ be homologous to $A$. \end{itemize} \begin{itemize} \item \textbf{Mutual information:} Having introduced the entanglement entropy for a sub-system $A$ and its complement $\bar{A}$, if one would like to measure the amount of correlation between two disjoint regions $A$ and $B$, the most interesting finite quantity is the mutual information \begin{eqnarray}\label{MI} I(A,B)\equiv S(A)+S(B)-S(A\cup B) \, , \end{eqnarray} where $S(X)$ denotes the entanglement entropy of the region $X$. While the entanglement entropy of a region contains a $UV$ divergence proportional to the area of the boundary of the region, the mutual information is free of divergences (finite), since in this linear combination the leading divergences due to the area law cancel.
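To see this cancellation explicitly, consider schematically two strips of common transverse size $l_{\perp}$ in a $(2+1)$-dimensional boundary theory, where each entangling line contributes the same divergent piece $\kappa\propto \frac{l_{\perp}}{\epsilon}$ (a sketch, with $\epsilon$ the $UV$ cut-off):
\begin{eqnarray}
S(A)=2\kappa+S_{fin}(A)\, ,\qquad S(B)=2\kappa+S_{fin}(B)\, ,\qquad S(A\cup B)=4\kappa+S_{fin}(A\cup B)\, ,\nonumber
\end{eqnarray}
so that
\begin{eqnarray}
I(A,B)=S_{fin}(A)+S_{fin}(B)-S_{fin}(A\cup B)\, ,\nonumber
\end{eqnarray}
which is manifestly cut-off independent.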
Note that due to the subadditivity condition (\ref{subadd}) the mutual information satisfies the inequality \cite{Allais} \begin{eqnarray}\label{Sub} I(A,B)\geq 0\, , \end{eqnarray} \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{fig55} \caption{Two different configurations for computing $S_{A\cup B}$. The time coordinate is suppressed.} \label{fig:s} \end{figure} with equality if and only if $A$ and $B$ are uncorrelated. It was pointed out in \cite{Headrick} that the mutual information undergoes a disentangling transition as one increases the separation between the two sub-systems $A$ and $B$: for small separation $I(A, B) \neq 0$, while $I(A, B) = 0$ for large separation. When $I(A, B) = 0$ there is no correlation between the two sub-systems and hence they become completely decoupled \cite{Fischler}.\\ Imagine two disjoint sub-systems $A$ and $B$ of the same length $l$, separated by $x$; the entanglement entropy of each sub-system can be computed from (\ref{RT}). The computation of $S(A\cup B)$, however, is more interesting. In the bulk, depending on the ratio $\frac{x}{l}$, there are two candidate minimal surfaces, which are schematically shown in Fig. \ref{fig:s}, and we therefore have \begin{eqnarray}\label{sAB} {S_{A\cup B}}= \begin{cases} 2 S(l), & \text{large}\: \frac{x}{l}, \\ S(2l+x) +S(x), & \text{small}\: \frac{x}{l}, \\ \end{cases} \end{eqnarray} where $S(Y)$ denotes the area of the minimal surface whose boundary coincides with the boundary of the underlying sub-system. Accordingly, one immediately obtains the following result for the mutual information \begin{eqnarray}\label{MIcases} {I(A,B)}= \begin{cases} 0, & \text{large}\: \frac{x}{l}, \\ 2S(l) - S(2l+x) -S(x), & \text{small}\: \frac{x}{l}. \\ \end{cases} \end{eqnarray} Mutual information can potentially provide a powerful description of how correlations evolve and spread in an out-of-equilibrium system, which is of interest in this paper. \end{itemize} \begin{figure}[h] \centering {\includegraphics[width=0.70\textwidth]{I31} }\\ {\includegraphics[width=0.70\textwidth]{I32} } \caption{Four different configurations for computing $S_{A\cup B \cup C}$. The time coordinate is suppressed. } \label{figI3} \end{figure} \begin{itemize} \item \textbf{Tripartite information:} In addition to the mutual information there is another interesting quantity, defined from the entanglement entropy, called the tripartite information \begin{eqnarray}\label{I3} I^{[3]}(A\cup B\cup C)\equiv S(A)+S(B)+S(C)-S(A\cup B)-S(A\cup C)-S(B\cup C)+S(A\cup B\cup C)\, , \end{eqnarray} where $A , B$ and $C$ are three disjoint intervals. This quantity is symmetric under permutations of its arguments and is free of divergences even when the regions share boundaries \cite{Hayden}. According to (\ref{I3}) the tripartite information can also be written in terms of the mutual information as \begin{eqnarray} I^{[3]}(A\cup B\cup C)= I(A, B) + I(A, C)- I(A, B\cup C)\, . \end{eqnarray} The tripartite information measures the degree of extensivity of the mutual information, in the sense that the mutual information is extensive when $I^{[3]} = 0$, superextensive when $I^{[3]} < 0$ and subextensive when $I^{[3]} > 0$.
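The two-configuration rule above translates directly into a few lines of Python; the toy entropy function below (finite part $\sim -1/\text{width}$, an $AdS_{4}$-like behavior) is purely illustrative.
\begin{verbatim}
# Sketch: holographic mutual information of two equal strips of width l
# separated by x, given a single-strip entropy function S(width).
def mutual_information(S, l, x):
    # compare the connected and disconnected extremal configurations
    connected = 2.0 * S(l) - S(2.0 * l + x) - S(x)
    return max(connected, 0.0)   # disconnected configuration gives I = 0

S_toy = lambda w: -1.0 / w   # illustrative finite part of a strip entropy

for x in (0.05, 0.2, 0.5, 1.0):
    print(x, mutual_information(S_toy, 1.0, x))
# I decreases with x and vanishes at a finite separation: the
# disentangling transition discussed above.
\end{verbatim}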
In the extensive and superextensive cases the mutual information is said to be monogamous. Note that when $I^{[3]} = 0$ the mutual information of $A$ with $B\cup C$ is the sum of its mutual information with $B$ and with $C$ individually. While in a generic quantum system the tripartite information can be positive, negative or zero, depending on the choice of the regions, it was shown in \cite{Hayden} that according to the RT prescription the mutual information is always monogamous, $i.e.$ \begin{eqnarray} I^{[3]}(A \cup B \cup C) \leq 0\, , \end{eqnarray} for any regions $A, B$ and $C$ in the boundary field theory. To calculate the tripartite information for three disjoint regions $A , B$ and $C$ of the same length $l$ with separation $x$, the computation of $S(A\cup B\cup C)$ is more challenging: for the union of three sub-systems one should consider different configurations of the extremal surfaces. In fact, for $N$ intervals one should compare $(2N-1)!!$ configurations ($N=3\Rightarrow 15$ configurations in our case). However, one can show that for $N = 3$ equal intervals only the four configurations depicted in Fig. \ref{figI3} remain \cite{Allais}.\\ \end{itemize} In \cite{Camilo} the authors specialized to the $3$-dimensional boundary and considered a strip-like entangling region $A$ of length $l$ in the $x^{1}$ direction and regulated length $l_{\perp}\rightarrow \infty$ in the $x^{2}$ direction on a constant time slice, reaching the following expression for the entanglement entropy \begin{equation}\begin{split}\label{EE} \delta s_{A_{finite}}(t)&\equiv s_{A}(t)-s_{A}^{(0)}\\ &=\epsilon^{2}\frac{l_{\perp}}{4G_{3}}\left\{ \int _{r_{\ast}}^{\infty} dr \frac{\sqrt{r^{4}-r_{\ast}^{4}}}{r^{2}}\,\Big[\frac{J(t-\frac{1}{r})^{2}-J(t)^{2}}{2}+\frac{J(t-\frac{1}{r})\dot{J}(t-\frac{1}{r})}{r}+\frac{3\dot{J}(t-\frac{1}{r})^{2}}{4r^{2}}+\frac{I(t-\frac{1}{r})}{r^{3}}\Big]\right.\\ &\left. +\frac{\sqrt{\pi}\Gamma (\frac{-1}{4})}{16\Gamma(\frac{5}{4})}r_{\ast}J(t)^{2}+\frac{5}{2}J(t)\dot{J}(t) \right\}, \end{split} \end{equation} where $r_{\ast}$ is related to the boundary size of the entangling region through $r_{\ast}=\frac{1.19814}{l}$, and the time-independent background contribution to the entanglement entropy, $s_{A}^{(0)}$, has been subtracted in order to study the time evolution of the entanglement entropy \cite{Camilo}. They also defined $\delta S_{A_{finite}}(t)\equiv \frac{4G_{3}}{l_{\perp}}\delta s_{A_{finite}}(t)$ as a more convenient quantity for studying the effect of the quenching process on the time evolution of the underlying theory. It is worth recalling that one can trust the upcoming calculations by demanding $r_{\ast}> 2 r_{h}$, which means that the size of the sub-systems must satisfy the inequality \begin{eqnarray}\label{condition} l < \frac{1.19814\, \delta t}{\epsilon^{\frac{2}{3}}}\, , \end{eqnarray} so that the minimal surfaces stay sufficiently far from the event horizon of the final-state Lifshitz black brane. \\ In the following we consider sub-systems of strip-like shape in the field theory described by the background \eqref{final form2} and study the time evolution of the mutual and tripartite information. \section{Numerical results} \begin{figure}[h] \centering {\includegraphics[width=0.48\textwidth]{fig1} } {\includegraphics[width=0.48\textwidth]{fig11} } \caption{$Left$: The rescaled holographic mutual information $I$ as a function of the boundary time $t$ at fixed $l=1$ and $\delta t=0.4$.
The different curves are characterized by different values of $x$, from $0.1$ (top) to $0.4$ (bottom); some curves are not visible since they vanish everywhere. $Right$: Rescaled equilibration time $t_{eq} \delta t ^{-1}$ as a function of the separation $x$ at fixed $l=1$ and $\delta t=0.4$. The larger the separation $x$, the longer the rescaled equilibration time $t_{eq} \delta t ^{-1}$. }\label{fig1} \end{figure} In Fig. \ref{fig1} we show the time evolution of the mutual information of two sub-systems with the same length $l$ in the left plot, and the dependence of the rescaled equilibration time on the separation length $x$ in the right plot. We set $\epsilon =0.1$, which means that the dynamical exponent of the final-state Lifshitz theory is $z=1.01$. In the left plot the length of the two sub-systems and the injection time are fixed to $l=1$ and $\delta t=0.4$, respectively, in order to study the effect of the separation length $x$ on the mutual information. It can be seen that by varying the separation $x$ between them four different behaviors occur. First, as expected, for very large $x$ the mutual information is zero at all times (pink curve, $x=1$). Second, for large enough $x$ the mutual information undergoes a transition beyond which it is identically zero; in fact, when $I(A,B)=0$ the two sub-systems $A$ and $B$ become completely decoupled, and hence one can say that a disentangling transition occurs. Third, for given values of $x$, \textit{i.e.} $x=0.25,0.3,0.35,0.4$ in our case, the mutual information is (approximately) zero at $t=-\infty$, reaches positive values at intermediate times, before the thermal state is reached, and then vanishes at the end. Finally, and most interestingly, for small enough $x$ we find that the mutual information is positive at any boundary time $t$, which means that there always exists correlation between the two sub-systems. From another point of view, in this case one can say that the connected configuration, see Fig. \ref{fig:s}, is always minimal. Consequently, the mutual information respects the subadditivity condition (\ref{Sub}). These results are in complete agreement with those reported in the literature, see $e.g.$ \cite{Allais, Balasubramanian,Fischler,Hayden}. One can also study the time evolution of the mutual information for different length scales $l$, $x$ and different injection times $\delta t$; the logic is identical and the same results are obtained. We define $t_{eq}$ as the specific time above which the mutual information reaches its equilibrium value. To do so, we consider the following time-dependent function \begin{eqnarray} \varepsilon(t) =\vert\frac{I(t=\infty)-I(t)}{I(t=\infty)}\vert, \end{eqnarray} and define the equilibration time as the time which satisfies $\varepsilon(t_{eq})<0.001$, with $\varepsilon(t)$ staying below this value afterwards. In the right plot of Fig. \ref{fig1} we set $l=1$ and $\delta t=0.4$ to analyze the relation between the separation length $x$ and the rescaled equilibration time $t_{eq} \delta t ^{-1}$. Note that we have only considered curves with $I(A, B)\neq0$ as $t\rightarrow\infty$. Here we list the interesting features of Fig. \ref{fig1}. \begin{enumerate} \item \textbf{Left plot:} \begin{itemize} \item Interestingly, for large enough $x$ there is a disentangling transition before the two sub-systems reach the final equilibrium.
That is, when the two sub-systems are widely separated they decouple before having time to reach equilibrium, at least from the mutual information point of view. \item There is always a value of $x$ at which the disentangling transition occurs, either in the initial state or in the final state, which we call $x^{DT}$. Our results indicate that $x^{DT}_{Lif}>x^{DT}_{AdS}$, $i.e.$ in the final state, with lower symmetry, $x^{DT}$ increases. Note, however, that $x^{DT}$ is independent of how strongly the symmetry is broken, since $\epsilon$ merely shifts the mutual information according to (\ref{EE}). \item We observe that small enough separations $x < l$ (top curves) cause the mutual information at late times ($t=+\infty$) to be always greater than at earlier times ($t=-\infty$). In fact, the non-equilibrium dynamics following the breaking of the relativistic scaling symmetry leads to more correlation between the two sub-systems; namely, the less symmetry, the greater the correlation. Note that although the final state is thermal, and the temperature tends to decrease the mutual information, the above statement still holds. \end{itemize} \item \textbf{Right plot:} \begin{itemize} \item It is obvious from the figure that if we decrease the separation length $x$, the two sub-systems reach their final state faster. In fact, the smaller the separation, the shorter the rescaled equilibration time. \item Interestingly, there is a linear relationship between the separation length $x$ and the rescaled equilibration time $t_{eq} \delta t ^{-1}$ of the following form \begin{eqnarray} t_{eq} \delta t ^{-1} = 10.636 x +5.319. \end{eqnarray} \item According to (\ref{MIcases}) and (\ref{EE}), if one breaks the symmetry of the initial state more strongly, $i.e.$ chooses a larger $\epsilon$, the holographic mutual information merely experiences a shift, and the rescaled equilibration time is independent of the rate of symmetry breaking. \end{itemize} \end{enumerate} \begin{figure}[h] \centering {\includegraphics[width=0.48\textwidth]{new1} } {\includegraphics[width=0.48\textwidth]{fig31} }\\ {\includegraphics[width=0.49\textwidth]{fig25} } {\includegraphics[width=0.49\textwidth]{fig22} } \caption{Top: The holographic mutual information $I$ as a function of the boundary time $t$ at fixed $\delta t=0.4$. In the left panel $l_{1}=1.4$ and $x=0.1$, and in the right one $l_{1}=1$ and $x=0.3$. The various curves correspond to different values of $l_{2}$; some curves are not visible since they vanish everywhere, such as $l_{2}=0.2$ (the purple curve).\\ Bottom: $Left$: The equilibrated value of the mutual information as a function of $l_{2}$ at fixed $l_{1}=0.5$ and $x=0.2$. The different curves correspond to different values of $\delta t$, from $2.5$ (top) to $0.4$ (bottom). $Right$: Maximum value of the equilibrated mutual information $I_{max}$ as a function of $\delta t$ for different values of $l_{2}=0.8,1.1,\dots,10.1$ at fixed $l_{1}=0.5$, $x=0.2$. }\label{fig2} \end{figure} Having studied the mutual information of two sub-systems of the same length $l$, we now extend the previous results to the situation where the two intervals have different lengths $l_{1}$ and $l_{2}$. As already mentioned, the mutual information for two sub-systems of different lengths is given by \begin{eqnarray} I(l_1,l_2) = S(l_{1})+S(l_{2})-S(l_{1}\cup l_{2}). \end{eqnarray}
\begin{figure}[h] \label{fig:MI1} \centering {\includegraphics[width=0.48\textwidth]{new1} } {\includegraphics[width=0.48\textwidth]{fig31} }\\ {\includegraphics[width=0.49\textwidth]{fig25} } {\includegraphics[width=0.49\textwidth]{fig22} } \caption{Top: The holographic mutual information $I$ as a function of the boundary time $t$ at fixed $\delta t=0.4$. In the left panel $l_{1}=1.4$ and $x=0.1$, and in the right one $l_{1}=1$ and $x=0.3$. The various curves correspond to different values of $l_{2}$. Some of the curves are invisible since they vanish everywhere, such as $l_{2}=0.2$ (the purple curve).\\ Down: $Left$: The equilibrated value of the mutual information as a function of $l_{2}$ at fixed $l_{1}=0.5$ and $x=0.2$. The different curves correspond to different values of $\delta t$, from $\delta t=2.5$ (top) to $\delta t=0.4$ (bottom). $Right$: Maximum value of the equilibrated mutual information $I_{max}$ as a function of $\delta t$ for different values of $l_{2}=0.8,1.1,\ldots,10.1$ at fixed $l_{1}=0.5$, $x=0.2$. }\label{fig2} \end{figure} Having studied the mutual information of two sub-systems of the same length $l$, we shall extend the previous results to a situation where the two intervals have different lengths $l_{1}$ and $l_{2}$. As already mentioned, the mutual information for two sub-systems of different lengths is given by \begin{eqnarray} I(l_1,l_2) = S(l_{1})+S(l_{2})-S(l_{1}\cup l_{2}). \end{eqnarray} In Fig.~\ref{fig2}-top, we plot the time evolution of the holographic mutual information for two sub-systems $A$ and $B$, with lengths $l_{1}$ and $l_{2}$ respectively and separation length $x$, for two different regimes of $l_{2}$. Note that there is an upper bound on $l_2$ due to (\ref{condition}). In the left panel one can easily see that the mutual information increases as $l_2$ increases. This is intuitively comprehensible, since for large $l_2$ the correlation between the two sub-systems $A$ and $B$ increases. On the other hand, for very large $l_2$ this correlation, or equivalently the mutual information, does not change substantially and remains approximately constant. However, the same intuitive argument cannot be applied to describe the behavior of the mutual information in the right panel. In fact, by increasing $l_2$ the final value of the mutual information can be larger or smaller depending on the choice of $l_2$. As an example, if we consider $l_2=0.4, 0.9$ and $1.4$ we get $I(l_2=0.4)<I(l_2=0.9)>I(l_2=1.4)$, indicating that the mutual information has a maximum value, $I_{max}$. Note that $\epsilon$ and $\delta t$ are kept fixed so that the final temperature, introduced in (\ref{temperature}), is the same for both left and right plots. The (maximum) mutual information as a function of $l_{2}$ ($\delta t$) for different values of $\delta t$ ($l_2$) is plotted in the left (right) panel of Fig.~\ref{fig2}-down. In fact, the strategy is to change the final temperature of the system, and thus we consider various injection times. At low temperatures, corresponding to larger values of $\delta t$, the mutual information increases with $l_2$, but the change is not substantial for large enough $l_2$. Upon raising the temperature, though the available range of $l_2$ is more limited according to (\ref{condition}), a maximum appears in the mutual information. Therefore, one can conclude that the final temperature and the length $l_2$ have opposite effects on the mutual information, which is in complete agreement with the results reported in \cite{Balasubramanian,Allais}. Note that the general results indeed coincide with the case $l_{1}=l_{2}$, so we merely highlight the following interesting outcomes. \\ \newpage \begin{enumerate} \item \textbf{Top plots:} \begin{itemize} \item In both plots, if one increases the length of the second sub-system $l_{2}$, the two sub-systems become more and more entangled before they reach the final state, and hence the peak of the mutual information moves upward. \item Remarkably, depending on the value of $l_{2}$, two different behaviors of the mutual information are observed at the final state. While in the right plot increasing $l_{2}$ causes the mutual information to decrease at the final state, in the left plot increasing $l_{2}$ increases the mutual information as well. \item It is evident from the right plot that there is a length scale, $l_{2}\simeq0.3$, beyond which a disentangling transition occurs and there is no correlation between the two sub-systems. \end{itemize} \item \textbf{Down plots:} \begin{itemize} \item The right panel indicates that if one decreases the final-state temperature (corresponding to increasing the injection time $\delta t$), the maximum value of the mutual information increases gradually and finally reaches an approximately constant value, \textit{i.e.} in the low-temperature regime $I_{max}$ stays constant.
In other words, the higher the temperature of the final state, the smaller the maximum value of the mutual information. \end{itemize} \end{enumerate} \begin{figure}[h] \label{fig:MI3} \centering {\includegraphics[width=0.4\textwidth]{fig3} } {\includegraphics[width=0.4\textwidth]{fig5} } \caption{Two-dimensional parameter space of the two sub-systems living on the boundary theory. All of the curves correspond to $I=0$ (the transition curves), below which the two sub-systems become entangled. $Left$: The transition curves of two sub-systems with the same length $l_{1}=l_{2}$, plotted at fixed $\delta t=0.4$ (for which $l , x < 2.1$) for different times, from $t=0.5$ (red) to $t=4$ (black). $Right$: The transition curves of two sub-systems with lengths $ l_{1}=l$, $ l_{2}=2 l_{1}=2l$, plotted at fixed $\delta t=0.4$ (for which $l , x < 2.1$) for different times, from $t=0.5$ (red) to $t=4$ (black). }\label{fig3} \end{figure} In order to study the holographic mutual information more accurately, we consider two sub-systems of lengths $l_1$ and $l_2$ separated by a length $x$. Defining (for the equal-length case $l_1=l_2=l$) $H(t,l,x)\equiv S(t,2l+x)+S(t,x)-2S(t,l)$, we would like to find a family of curves in the configuration space spanned by $x$ and $l$, parameterized by $t$, satisfying the equation \begin{eqnarray} H(t,l,x)=0, \end{eqnarray} corresponding to the time-dependent disentangling transition. In Fig.~\ref{fig3}, this transition is shown at different times for $l_{2}=l_{1}=l$ (left plot) and $l_{2}=2 l_{1}=2l$ (right plot). The area below each curve is the region where the two sub-systems have non-zero mutual information, \textit{i.e.} where the two sub-systems are entangled. In the following we list some interesting points regarding these two plots. \begin{figure}[h] \centering {\includegraphics[width=0.48\textwidth]{fig4} } {\includegraphics[width=0.48\textwidth]{fig6} } \caption{$Left$: The rescaled holographic mutual information as a function of the boundary time $t$ at fixed $ l=1$ and $x=0.3$. The different curves are characterized by different values of $\delta t=0.2,0.3,0.4,0.6,0.9,1.5,2$. $Right$: The rescaled equilibration time $t_{eq} \delta t ^{-1}$ as a function of the injection time $\delta t$ at fixed $ l=1$ for different separation lengths $x=0.2,0.3,0.4$. }\label{fig4} \end{figure} \begin{itemize} \item One of the main features we observe is that there is a specific regime of the parameters, small enough $l$ and $x$, where the mutual information is indeed independent of the time evolution. Hence one can say that the transition curves do not feel the time lapse in this regime. \item Another interesting point is that, following (\ref{EE}), the configuration space is independent of the rate of symmetry breaking, which is specified by $\epsilon$; namely, whatever $\epsilon$ is, the transition curves remain the same. Consequently, the strength of the symmetry breaking plays no role in the phase space of the two sub-systems. \item Moreover, it can be observed from these plots that the region where the mutual information has non-vanishing values at out-of-equilibrium times (\textit{e.g.} $t=0.5$, top curve) is wider than at equilibrium times. In other words, there is a wide region of parameters at out-of-equilibrium times where the two sub-systems are entangled. In fact, during the time evolution towards the final equilibrium state the phase space becomes more restricted.
\item Comparing the two plots, we clearly see that the qualitative features and behaviors are the same, but it is worth mentioning that in the case $ l_{2}=2 l_{1} $ (right plot) there is an increase in the area of non-vanishing mutual information in the configuration space with respect to that of $l_{2}= l_{1} $ (left plot). \end{itemize} In Fig.~\ref{fig4}, we plot the effect of the quenching time $\delta t$ on the time evolution of the holographic mutual information for two sub-systems of the same length $l=1$ separated by $x=0.3$ (left plot), and the rescaled equilibration time $t_{eq} \delta t ^{-1}$ as a function of the quenching time $\delta t$ for different separation lengths $x$ (right plot). It can be seen from the left plot that although the quenching function $J(t)$ is a monotonically increasing function, the time evolution of the mutual information behaves in a different manner depending on the quenching rate $\delta t$. For fast enough quenches, $0.2\lesssim\delta t\lesssim0.8$, the mutual information starts at roughly zero in the initial state, reaches a peak at intermediate times, and finally declines to zero or to a constant value in the final state. Slower quenches, $\delta t>0.8$, behave quite smoothly. Namely, if one increases the value of $\delta t$, there is no considerable gap for the mutual information between its initial state and its final state. Remarkably, we observe that for slow quenches the mutual information approaches the adiabatic regime in the final state; that is, there is no dependence on the separation length $x$. Another interesting aspect of these plots is that this adiabatic behavior is completely independent of how strongly the symmetry of the initial state is broken. In fact, both the adiabatic behavior and the disentangling transition are independent of the rate of symmetry breaking. \begin{figure}[h] \centering { \includegraphics[width=0.48\textwidth]{fig12} } {\includegraphics[width=0.48\textwidth]{fig77} } \caption{$Left$: The rescaled holographic tripartite information $I^{[3]}$ as a function of the boundary time $t$ at fixed $x=0.1$ and $\delta t=0.4$. The different curves are characterized by different values of $l=0.3,0.35,0.4,0.45,0.5,0.52$, which decrease from right to left. (Some of the curves are not visible since they vanish everywhere.) $Right$: The rescaled holographic tripartite information $I^{[3]}$ as a function of the boundary time $t$ at fixed $x=0.2$ and $\delta t=0.4$. The different curves are characterized by different values of $l=0.28,0.29,0.3,0.31$, which decrease from right to left.} \label{fig5} \end{figure} Now consider the time evolution of the tripartite information for three sub-systems of the same length $l$ with separation length $x$ living on the boundary. We plot in Fig.~\ref{fig5} the results for the tripartite information for different values of $l$ and $x$ as a function of the boundary time. The most important point is that the tripartite information is generically non-positive at all times, \textit{i.e.} the mutual information is extensive or superextensive. Hence, one can say that the holographic mutual information is monogamous, in agreement with \cite{Hayden}. Moreover, it is evident that the tripartite information starts at its initial value (roughly zero) and ends at its final value, zero or more negative than the initial value, passing through an intermediate phase where it is strictly negative.
If one also decreases the length of the sub-systems, the peak of the tripartite information tends to zero; that is, when the size $l$ of the sub-systems approaches the separation length $x$, the tripartite information vanishes. \\
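For readers who wish to reproduce these quantities from an entropy routine, the following Python sketch (schematic, and not part of our actual computation) assembles the mutual and tripartite information from a single-interval entropy function \texttt{S(t, a, b)}, which is assumed to be supplied by the numerical extremal-surface calculation. For the entropy of a union of disjoint intervals it minimizes over non-crossing pairings of the endpoints, as in the static RT prescription; we adopt this rule here purely as an illustration.
\begin{verbatim}
def S_union(S, t, pts):
    """Entropy of a union of disjoint intervals with ordered endpoints
    pts = (a1, a2, b1, b2, ...): minimize the total 'area' over all
    non-crossing geodesic pairings of the endpoints (schematic RT rule)."""
    def rec(seq):
        if not seq:
            return 0.0
        best = float("inf")
        for k in range(1, len(seq), 2):   # partner of seq[0], keeping parity
            inside, outside = seq[1:k], seq[k + 1:]
            best = min(best, S(t, seq[0], seq[k]) + rec(inside) + rec(outside))
        return best
    return rec(tuple(pts))

def I2(S, t, A, B):
    """Mutual information I(A,B) = S(A) + S(B) - S(A u B)."""
    return S(t, *A) + S(t, *B) - S_union(S, t, A + B)

def I3(S, t, A, B, C):
    """Tripartite information by inclusion-exclusion."""
    return (S(t, *A) + S(t, *B) + S(t, *C)
            - S_union(S, t, A + B) - S_union(S, t, A + C)
            - S_union(S, t, B + C) + S_union(S, t, A + B + C))

# usage for three intervals of length l separated by x (S assumed given):
# l, x = 0.3, 0.1
# A, B, C = (0, l), (l + x, 2*l + x), (2*l + 2*x, 3*l + 2*x)
# print(I3(S, t, A, B, C))   # expected non-positive (monogamy)
\end{verbatim}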
{ "timestamp": "2018-04-17T02:15:48", "yymm": "1804", "arxiv_id": "1804.05604", "language": "en", "url": "https://arxiv.org/abs/1804.05604" }
\subsection{Setup and notation} We set the stage for the proof by introducing some notation and definitions. Let $Y = (V,E)$ be an undirected graph, and let $\vec{E}$ be the set of directed edges associated with $E$, so that \[ \vec{E} = \{(u,v) : \{u,v\} \in E\}, \] and $|\vec{E}| = 2|E|$. To limit confusion, we will use plain bold letters $\be$ to denote edges in $E$ and decorated bold letters $\vec{\be}$ to denote arcs in $\vec{E}$. For an arc $\vec{\be} = (u,v)$, we let $(\vec{\be})^{-1} = (v,u)$. Let $n \in \Z^+$, let $Y_n = (V_n, E_n)$ be an $n$-lift of $Y$ as defined in \Cref{sec:lift-notation}, and let $X_n = (V_n,E_n, \xi_n)$ be a random signing of $Y_n$ with signs $\xi_n: E_n \to \R$.\footnote{In our setting, we will choose $\xi_n(e) \in\{ \pm 1\}$ independently and uniformly for each $e \in E_n$.} In the $n$-lift, each edge $e \in E_n$ (arc $\vec{e} \in \vec{E_n}$) is associated with an edge $\{u,v\} \in E$ (arc $(u,v) \in \vec{E}$), and with a pair of labels $i,j \in [n]$, so that $e = \{(u,i),(v,j)\}$ ($\vec{e} = ((u,i),(v,j))$). Again to limit confusion, we will use non-bold, plain letters to denote edges $e \in E_n$ and decorated, non-bold letters to denote arcs $\vec{e}\in \vec{E_n}$. We let $S^{E}_n$ be the set of tuples of $|E|$ permutations on $[n]$. Each $n$-lift is associated with some $\sigma = \{\sigma_{\be}\}_{\be \in E} \in S^{E}_n$, so that $E_n = \{ \{(u,i),(v,\sigma_{u,v}(i))\}\}$ (where we take $u$ to precede $v$ lexicographically, in order to ensure that the bijection between $\sigma$ and lifts is unique).\footnote{Again, in our setting we will choose each $\sigma_{\be}$ uniformly at random in $S_n$.} We sometimes refer to the lift specified by $\sigma \in S_n^E$ as $Y_n(\sigma)$. We also define $B_n$ to be the weighted non-backtracking matrix of $X_n$ as in \Cref{sec:matrix-notation}, so that for directed edges $(u,v),(x,y) \in \vec{E_n}$, \[ B_n[(u,v),(x,y)] = \xi_n(\{u,v\}) \cdot \Ind[v = x] \cdot \Ind[u \neq y]. \] We will apply the trace method to $B_n$; that is, we will relate $\rho(B_n)$ to the expected trace of a power of $B_n$. \begin{fact}\label{fact:trace-method} If $A \in \C^{n\times n}$ is a random complex matrix, $m,\ell \in \Z^+$, $\eps,c,R \in \R^+$, and $\E[\tr((A^\ell(A^\ell)^*)^{m})] \le R^{2m\ell}$, then for $\ell\cdot m \ge \frac{c}{\eps} R \log n$ and $\eps < R/2 $, \[ \Pr[\rho(A) \ge R + \eps ] \le n^{-c} \] \end{fact} \begin{proof} This follows by noticing that $\rho(A)^\ell \le \sup_{x \in \C^n}\frac{\|A^\ell x\|_2}{\|x\|_2} = \| A^{\ell} (A^{\ell})^* \|^{1/2}$, and then applying Markov's inequality: \begin{align*} \Pr[ \| A^{\ell} (A^{\ell})^* \|^{1/2\ell} \ge t] \le \frac{\E[\tr((A^{\ell} (A^{\ell})^*)^{m})]}{t^{2m\ell}} \le \left(\frac{R}{t}\right)^{2m\ell}, \end{align*} and choosing $t = R + \eps$ with $2\eps < R$, \[ \left(\frac{1}{1 + \eps/R}\right)^{2m\ell} \le \left(1 - \frac{\eps}{2R}\right)^{2m\ell} \le \exp\left(- \frac{\eps m \ell}{R}\right), \] so for $\ell \cdot m \ge \frac{c}{\eps} R\log n$ the conclusion follows. \end{proof} In our computations, we will bound the contribution of sequences of \emph{half-edges} (so as to be consistent with \cite{Bor17}). \begin{definition}[half-edge] A {\em half-edge} $\gamma$ is given by an arc $(u,v) \in \vec{E}$, and an index $i \in [n]$ corresponding to the index of $u$. We think of $\gamma = ((u,v),i)$ as an arc leaving the $i$th copy of $u$ in the lift, and going to vertex $v$ at some unspecified index; colloquially, $\gamma = ((u,i), (v,?))$.
We call the set of all possible half-edges $\Pi$. In the interest of promoting clarity, we point out that $\Pi$ does not depend on the specific choice of lift, $\sigma$. \end{definition} \begin{definition}[valid sequence of half-edges] We will say that a sequence of half-edges $(\gamma_1,\ldots,\gamma_{2k})$ is {\em valid} if it satisfies the following constraints: \begin{enumerate} \item {\em Admissibility of pairs}: consecutive pairs of half-edges correspond to the same edge in $Y$. Formally, for each $t \in [k]$ with $\gamma_{2t -1} = (\vec{\be}_{2t-1}, i_{2t-1})$ and $\gamma_{2t} = (\vec{\be}_{2t},i_{2t})$, we have that $\vec{\be}_{2t-1} = (\vec{\be}_{2t})^{-1}$. \item {\em Consistency}: if two half-edges are paired once, they remain paired for the remainder of the sequence. Formally, if there exists $t^*$ such that the half-edge $g = \gamma_{2t^*-1}$ is succeeded by the half-edge $h = \gamma_{2t^*}$, then for all $t$ such that $\gamma_{2t-1} = g$, we must also have $\gamma_{2t} = h$. Similarly, for all $t$ with $\gamma_{2t} = g$, we must also have $\gamma_{2t-1} = h$. \item {\em Consecutiveness}: the sequence of half-edges, when glued together, must correspond to a valid walk. Formally, for every $t$, if we have $\gamma_{2t} = ((u_{2t},v_{2t}), i_{2t})$ and $\gamma_{2t+1} = ((u_{2t+1}, v_{2t+1}),i_{2t+1})$, then we must have $u_{2t+1} = u_{2t}$ and $i_{2t+1} = i_{2t}$. \end{enumerate} \end{definition} Colloquially, if two half-edges $\gamma= (\be,i),\gamma'=((\be)^{-1},j)$ appear consecutively in a sequence with $\gamma$ in an odd position and $\gamma'$ in an even position, we will say that they are {\em glued together} to give the edge $\{(\be_1,i),(\be_2,j)\}$ (where $\be_1,\be_2$ are the first and second endpoints of $\be$, respectively). \begin{definition}[non-backtracking sequence] A sequence of half-edges $(\gamma_1,\ldots,\gamma_{2k})$ is called {\em non-backtracking} if it does not define a walk that backtracks; that is, for each $t \in [k]$, if $\gamma_{2t} = (\be_{2t}, i_{2t})$ and $\gamma_{2t+1} = (\be_{2t+1}, i_{2t+1})$, we require that $\be_{2t} \neq \be_{2t+1}$. \end{definition} We define $\Gamma^{2k}$ to be the set of all valid, non-backtracking sequences of $2k$ half-edges. \subsection{Walk decomposition} For $\be = \{u,v\} \in E$, define $M_{\be}$ to be the $n\times n$ signed permutation matrix which encodes $\sigma_{\be}$, so that $(M_{\be})_{ij} = \xi(\{(u,i),(v,j)\})$ if $\sigma_{\be}(i) = j$, and $(M_{\be})_{ij} = 0$ otherwise. Further, for two half-edges $\gamma = (\vec{\be},i),\gamma' = (\vec{{\bf f}},j)$, we let $M_{\gamma,\gamma'} = \Ind[\vec{\be} = (\vec{{\bf f}})^{-1}] \cdot \Ind[\sigma_{\be}(i) = j] \cdot \xi((\be_1,i),(\be_2,j))$ (where $\be$ is the undirected version of $\vec{\be}$). For two arcs $\vec{e},\vec{f} \in \vec{E}_n$, let $\Gamma_{\vec{e},\vec{f}}^{2k}$ be the set of all valid, non-backtracking sequences of $2k$ half-edges $(\gamma_1,\ldots,\gamma_{2k})$, such that $\gamma_1,\gamma_2$ form $\vec{e}$ when glued together, with the direction of $\vec{e}$ specified by $\gamma_1$, and such that $\gamma_{2k-1},\gamma_{2k}$ form $\vec{f}$ when glued together, with the direction of $\vec{f}$ specified by $\gamma_{2k-1}$. We have by definition that \begin{align} (B_n^{k})_{\vec{e}\vec{f}} &= \sum_{\gamma \in \Gamma_{\vec{e},{\vec{f}}}^{2k+2}} \prod_{s=1}^k M_{\gamma_{2s-1}\gamma_{2s}}, \label{eq:sum} \end{align} since any sequence $\gamma$ that fails to be valid or non-backtracking contributes $0$.
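As a quick sanity check on these definitions, the following Python sketch (purely illustrative, on a small fixed graph rather than a random lift) builds the signed non-backtracking matrix directly from the entrywise definition above and verifies numerically the inequality behind \Cref{fact:trace-method}, namely $\rho(B_n) \le \tr\big((B_n^\ell(B_n^\ell)^*)^m\big)^{1/(2\ell m)}$.
\begin{verbatim}
import itertools
import numpy as np

def signed_nonbacktracking(edges, signs):
    """B[(u,v),(x,y)] = xi({u,v}) * 1[v == x] * 1[u != y], indexed by arcs."""
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    xi = {frozenset(e): s for e, s in zip(edges, signs)}
    B = np.zeros((len(arcs), len(arcs)))
    for a, (u, v) in enumerate(arcs):
        for b, (x, y) in enumerate(arcs):
            if v == x and u != y:
                B[a, b] = xi[frozenset((u, v))]
    return B

# toy example: a uniformly random signing of the complete graph K_4
rng = np.random.default_rng(0)
edges = list(itertools.combinations(range(4), 2))
B = signed_nonbacktracking(edges, rng.choice([-1, 1], size=len(edges)))

rho = max(abs(np.linalg.eigvals(B)))
l, m = 3, 4
M = np.linalg.matrix_power(B, l)
bound = np.trace(np.linalg.matrix_power(M @ M.T, m)) ** (1 / (2 * l * m))
print(rho, bound)   # the trace bound always dominates the spectral radius
\end{verbatim}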
We now define {\em tangles}, which are undesirable, low-probability walk structures (we will be able to discard their contribution to \Cref{eq:sum}). \begin{definition}[tangle-free] For a positive integer $\ell$, a graph $G$ is {\em $\ell$-tangle-free} if it contains at most one cycle in every neighborhood of radius at most $\ell$. A valid sequence $\gamma \in \Gamma^{2k}$ is $\ell$-tangle-free if the graph given by the edges and vertices visited by $\gamma$ does not contain more than one cycle in any neighborhood of radius at most $\ell$. \end{definition} The following lemma from \cite{Bor17} proves that with high probability, $Y_n$ is $\ell$-tangle-free. \begin{lemma}[{\cite[Lemma 24]{Bor17}}]\label{lem:no-tangle} If $\ell \le \kappa \log_{d-1} n$ with $\kappa \in [0,1/4]$ and $d$ the maximum degree of a vertex in $Y$, then with high probability $Y_n$ is $\ell$-tangle-free. \end{lemma} Finally, we will require the following definition. \begin{definition} A valid sequence $\gamma$ is {\em even} if the walk it induces contains every undirected edge with even multiplicity. \end{definition} \subsection{Bounding the expectation of a single walk}\label{sec:onewalk} Now, we bound the expectation of the product of entries along a walk. For a sequence $\gamma= (\gamma_1,\ldots,\gamma_{2k})$ of length $2k$, with $\gamma_t = ((u_t,v_t),i_t)$, let $E_\gamma$ be the set of lifted edges in $\gamma$, \[ E_\gamma = \{ \{(u_{2t-1}, i_{2t-1}),(v_{2t-1},i_{2t})\} \mid t \in [k] \}. \] \begin{proposition}\label{prop:onewalk} Suppose that $\gamma$ is a valid sequence of length $2k \ll \sqrt{n}$. Let $\ell < \frac{1}{4} \log_{d-1} n$. Then we have \[ \E_{\sigma,\xi} \left[\prod_{s=1}^{k} M_{\gamma_{2s-1}\gamma_{2s}} \right] \le \Ind[\gamma \text{ even}]\cdot(1+o_n(1))\cdot \left(\frac{1}{n}\right)^{|E_{\gamma}|}. \] \end{proposition} \begin{proof} Consider some valid sequence of half-edges $\gamma = (\gamma_1,\ldots,\gamma_{2k})$, and let $\gamma_t = ((u_t,v_t), i_t)$ and $\vec{\be_t} = (u_t,v_t)$, $\be_t = \{u_t,v_t\}$ for convenience. We have that \begin{align} \E_{\sigma,\xi} \left[\prod_{s=1}^{k} M_{\gamma_{2s-1}\gamma_{2s}} \right] &= \E_{\sigma,\xi} \left[\prod_{\be \in \gamma} \prod_{\substack{t \in [k]\\ \be_{2t-1} =\be}} M_{\gamma_{2t -1}\gamma_{2t}}\right] = \prod_{\be \in \gamma} \E_{\sigma,\xi} \left[\prod_{\substack{t \in [k]\\ \be_{2t-1} =\be}} M_{\gamma_{2t -1}\gamma_{2t}}\right],\label{eq:edge-prod} \end{align} since for $\be \neq \be'$, $\sigma_{\be}$ and $\sigma_{\be'}$ are independent, and by the independence of $\xi_n$. Expanding the entries of $M$ according to $M$'s definition, \begin{align} \cref{eq:edge-prod} &= \prod_{\be \in \gamma} \E_{\sigma,\xi} \left[\prod_{\substack{t \in [k]\\ \be_{2t-1} = \be}} \Ind[\sigma_{\be_{2t-1}}(i_{2t-1}) = i_{2t}] \cdot \xi((u_{2t-1},i_{2t-1}),(v_{2t-1},i_{2t})) \right].\label{eq:expand} \end{align} By the independence of the signing $\xi$, the expectation of any sequence in which some (undirected) edge is visited an odd number of times is $0$. Using this fact, \begin{align} \cref{eq:expand} = \Ind[\gamma \text{ even}] \cdot \prod_{\be \in \gamma} \E_{\sigma} \left[\prod_{\substack{t \in [k]\\ \be_{2t-1} = \be}} \Ind[\sigma_{\be_{2t-1}}(i_{2t-1}) = i_{2t}] \right].\label{eq:unsign} \end{align} Now, suppose that $k_{\be}$ distinct lifted copies of the edge $\be \in E$ appear in $\gamma$.
Since $\gamma$ is consistent, and because we may assume every edge appears with even multiplicity, the term within the expectation just corresponds to fixing $k_{\be}$ edges of a permutation on $n$ elements. Thus we simplify, \begin{align} \cref{eq:unsign} &= \Ind[\gamma \text{ even}] \cdot \prod_{\be \in \gamma} \frac{(n-k_{\be})!}{n!} \le \Ind[\gamma\text{ even}] \cdot \prod_{\be \in \gamma} \left(\frac{1}{n}\left(1 + \frac{2k_{\be}}{n}\right)\right)^{k_{\be}}, \end{align} where to obtain the last inequality we have used that for $i \le k_{\be}\ll \sqrt{n}$, \[ \frac{1}{n - i} \le \frac{1}{n}\left(1+\frac{2i}{n}\right) \le \frac{1}{n} \left(1 + \frac{2k_{\be}}{n}\right). \] Now, since $\sum_{\be \in \gamma} k_{\be} = |E_\gamma|$ is the number of distinct lifted edges in $\gamma$, and the number of base edges is at most the number of lifted edges, \begin{align} \le \Ind[\gamma\text{ even}] \cdot \left(\frac{1}{n}\right)^{|E_\gamma|} \left(1 + \frac{2k}{n}\right)^{2k}. \end{align} Using that $2k \ll \sqrt{n}$ we obtain our conclusion. \end{proof} \subsection{Counting walks} To apply \Cref{fact:trace-method}, we will need to bound the trace of a power of $B_n^{\ell}(B_n^{\ell})^*$. Since the trace corresponds to a sum over walks, and because in \Cref{sec:onewalk} we have a bound on the expectation of each walk as a function of the number of distinct edges and the evenness of the walk, we have reduced our problem to counting the number of walks of various types. We will follow the definitions of Bordenave rather closely, so that we may recycle his bounds. Since $B_n$ has real entries, $((B_n^\ell)^*)_{ef} = (B_n^\ell)_{fe}$, and we have \begin{align}\label{eq:trace-full} \tr\left((B_n^{\ell}(B_n^\ell)^{*})^m\right) &= \sum_{e_1,\ldots,e_{2m} \in \vec{E}_n} \prod_{s = 1}^{m} (B_n^\ell)_{e_{2s-1}, e_{2s}} (B_n^\ell)_{e_{2s+1}, e_{2s}}, \end{align} where we have taken the index $2s+1$ modulo $2m$. To characterize the summation, it is useful for us to define the following set of sequences of half-edges, which have the property that large sub-sequences are tangle-free. \begin{definition} Let $W_{\ell,m}$ be the set of sequences of half-edges $\gamma$ of length $2\ell\times 2m$ with the property that, writing $\gamma$ as a sequence of sub-sequences $\gamma = (\gamma^{(1)},\ldots,\gamma^{(2m)})$, \begin{enumerate} \item For each $s \in [2m]$, the sub-sequence $\gamma^{(s)}$ is valid, non-backtracking, and {\bf tangle-free}. \item For each $s \in [2m]$, the final edge in $\gamma^{(s)}$ is equal to the first edge in $\gamma^{(s+1)}$ (where we take addition mod $2m$). Formally, if $\gamma^{(t)} = (((u^{(t)}_1,v^{(t)}_1),i^{(t)}_1),\ldots,((u^{(t)}_{2\ell},v^{(t)}_{2\ell}),i^{(t)}_{2\ell}))$, then we require $u^{(s)}_{2\ell-1} = u^{(s+1)}_1$, $v^{(s)}_{2\ell-1} = v^{(s+1)}_1$, $i^{(s)}_{2\ell-1} = i^{(s+1)}_1$ and $i^{(s)}_{2\ell} = i^{(s+1)}_2$. \end{enumerate} \end{definition} Recall that we have defined $\Pi$ to be the set of all half-edges (not necessarily present in $Y_n$). \begin{definition} We define an equivalence relation $\sim$ on $\Pi^m$: let $\gamma,\gamma' \in \Pi^m$, with $\gamma_t = ((u_t,v_t),i_t)$ and $\gamma'_t = ((u'_t,v'_t),i'_t)$ for $t\in [m]$. We say that $\gamma \sim \gamma'$ if for all $t \in [m]$ we have $(u_t,v_t) = (u'_t,v'_t)$, and if in addition there exists a tuple of permutations in $S_n$, one for each vertex $u \in V$ of the base graph, $(\sigma_u)_{u \in V}$, such that $i'_t = \sigma_{u_t}(i_t)$. \end{definition} We observe that if $\gamma$ is {\em even}, then any $\gamma' \sim \gamma$ is even as well.
Similarly, if $\gamma \sim \gamma'$, then $|E_\gamma| = |E_{\gamma'}|$. We choose a canonical representative for each equivalence class: \begin{definition}[Canonical sequence] Let $V_\gamma(u) \subseteq \{u\} \times [n]$ be the set of all vertices of $Y_n$ visited by $\gamma$ which include $u$. We'll call $\gamma \in \Pi^m$ {\em canonical} if for all $u \in V$, $V_\gamma(u) = \{(u,1),\ldots,(u,|V_{\gamma}(u)|)\}$, and if the vertices of $V_{\gamma}(u)$ appear in lexicographical order in $\gamma$. \end{definition} The following lemmas are given in \cite{Bor17}. \begin{lemma}[{\cite[Lemma 27]{Bor17}}]\label{lem:bord-bound-equiv} Let $\gamma \in \Pi^m$, and let $V_{\gamma} \subseteq V\times [n]$ be the set of vertices of $Y_n$ which appear in $\gamma$. Suppose that $|V_\gamma| = s$. Then $\gamma$ is equivalent to at most $n^s$ elements in $\Pi^m$. \end{lemma} \begin{lemma}[{\cite[Lemma 28]{Bor17}}]\label{lem:bord-bound-cW} Let $\calW_{\ell,m}(s,a)$ be the subset of canonical paths in $W_{\ell,m}$ with $|V_\gamma| = s$ and $|E_\gamma| = a$. There exists a constant $\kappa$ depending on $\rho$ and $Y$ such that \[ |\calW_{\ell,m}(s,a)| \le \rho^s (\kappa \ell m)^{8m (a-s+1) + 10m}. \] \end{lemma} We are now ready to bound the contribution of the sums of tangle-free sections. \begin{proposition}\label{prop:full-bd} For $m = \lfloor \frac{\log n}{17 \log \log n}\rfloor$, $n \ge 3$, $\ell \le \frac{1}{4}\log_{d-1} n$, and $\rho > 1$, there is a constant $c$ independent of $n$ such that \[ \E\left[\sum_{\gamma \in W_{\ell,m}} \prod_{i=1}^{2m} \prod_{t=1}^{\ell} M_{\gamma^{(i)}_{2t-1},\gamma^{(i)}_{2t}}\right] \le n (c\ell m)^{10m} \rho^{(\ell + 2)m}. \] \end{proposition} \begin{proof} We split the left-hand side according to the $\calW$ equivalence classes, \begin{align} \E\left[\sum_{\gamma \in W_{\ell,m}} \prod_{i=1}^{2m} \prod_{t=1}^{\ell} M_{\gamma^{(i)}_{2t-1},\gamma^{(i)}_{2t}}\right] &\le \sum_{s=1}^{\infty} \sum_{a = s-1}^{\infty} n^s \sum_{\gamma \in \calW_{\ell,m}(s,a)} \E\left[\prod_{i=1}^{2m} \prod_{t=1}^{\ell} M_{\gamma^{(i)}_{2t-1},\gamma^{(i)}_{2t}}\right],\label{eq:bd} \end{align} where we have used that $|V_\gamma|-1 \le |E_\gamma|$, since the graph $G_\gamma$ spanned by $\gamma$ is connected. Now applying \Cref{prop:onewalk} (using that $\ell m \ll \sqrt{n}$), we have that for $\gamma \in \calW_{\ell,m}(s,a)$, \[ \E\left[\prod_{i=1}^{2m} \prod_{t=1}^{\ell} M_{\gamma^{(i)}_{2t-1},\gamma^{(i)}_{2t}}\right] \le \Ind[\gamma \text{ even}]\cdot (1+o_n(1))\cdot \left(\frac{1}{ n}\right)^{a}. \] Plugging this in above, along with the bound on $|\calW_{\ell,m}(s,a)|$ from \Cref{lem:bord-bound-cW}, we have \begin{align*} \text{\cref{eq:bd}} &\le \sum_{s=1}^{(\ell + 2)m + 1} n^s \sum_{a = s-1}^{(\ell+2)m} \rho^s (\kappa \ell m)^{8m(a - s + 1) + 10m} \cdot (1+o_n(1))\cdot\left(\frac{1}{n}\right)^a, \end{align*} where we use the fact that $\gamma$ must be even to obtain that $|E_\gamma| = a \le (\ell + 2)m$ (as there are only $2(\ell + 2)m$ edges in the sequence $\gamma$, and each must appear at least twice), and we have adjusted the upper limits of the summation accordingly.
We re-index the above summation, setting $a' = a - s + 1$ and beginning the sum at $a' = 0$ (and summing to $a' = \infty$, as this yields a valid upper bound): \begin{align} \text{\cref{eq:bd}} &\le (1+o_n(1))\cdot(\kappa \ell m)^{10m} \cdot \sum_{s=1}^{(\ell + 2)m + 1} n^s\rho^s \cdot \left(\frac{1}{n}\right)^{s-1}\sum_{a' = 0}^{\infty} (\kappa \ell m)^{8ma'} \cdot \left(\frac{1}{n}\right)^{a'}\nonumber\\ &= (1+o_n(1))\cdot n(\kappa \ell m)^{10m} \cdot \sum_{s=1}^{(\ell + 2)m + 1} \rho^s \sum_{a' = 0}^{\infty} (\kappa \ell m)^{8ma'} \cdot \left(\frac{1}{n}\right)^{a'}.\label{eq:blah} \end{align} For our chosen $m$, when $n$ is large enough, $\frac{(\kappa \ell m)^{8m}}{n} \le \frac{(\log n)^{16 m}}{n} \le n^{-1/17}$. Combining this observation with the fact that the rightmost sum is a geometric sum, there is a constant $c$ such that \begin{align*} \text{\cref{eq:blah}} &\le cn(\kappa \ell m)^{10m} \cdot \sum_{s=1}^{(\ell + 2)m + 1} \rho^s. \end{align*} Finally, we are left again with a geometric sum; since we have $\rho > 1$, there is a constant $c'$ so that \begin{align*} \text{\cref{eq:blah}} &\le c'n(\kappa \ell m)^{10m} \cdot \rho^{(\ell + 2)m + 1}. \end{align*} Using that $\rho$ is independent of $n$ to push $\rho$ into the constant, we have our conclusion. \end{proof} \subsection{Putting things together} We now finally have the ingredients to prove \Cref{thm:signed-bordenave}. \begin{proof}[Proof of \Cref{thm:signed-bordenave}] Define $\rho := \rho(B)$, fix $\eps > 0$, and set $\ell = \kappa \log_{d-1} n$ for a constant $\kappa \in (0,1/4)$ and $m = \lfloor \frac{\log n}{17\log\log n}\rfloor$. By \Cref{lem:no-tangle}, if $\calE$ is the event that $Y_n$ is $\ell$-tangle-free, \[ \Pr( \rho(B_n) \ge \sqrt{\rho} + \eps) \le \Pr(\rho(B_n) \ge \sqrt{\rho} + \eps, \calE) + o(1) \le \Pr(\|B_n^{\ell}(B_n^\ell)^*\|^{1/2\ell} \ge \sqrt{\rho} + \eps,\calE) +o(1). \] If $Y_n$ is $\ell$-tangle-free, then only sequences $\gamma \in W_{\ell,m}$ contribute to \Cref{eq:trace-full}, as any (consecutive) sub-sequence $\gamma^{(i)} \subset \gamma$ of length $2\ell$ defines a length-$\ell$ walk in $Y_n$. So using \Cref{fact:trace-method} in conjunction with \Cref{eq:trace-full} and \Cref{prop:full-bd}, we have that \[ \E[\tr((B_n^{\ell}(B_n^{\ell})^*)^m) \cdot \Ind[ \calE]] \le n(c\ell m)^{10m} \rho^{(\ell+2)m}. \] Taking the $2\ell m$th root on the right: by our choice of $\ell = \Theta(\log n)$ and $m = \Theta(\log n/\log\log n)$, we have $(c\ell m)^{10m/(2\ell m)} = (c\ell m)^{5/\ell} = 1 + o(1)$ and $n^{1/2\ell m} \le 2^{\Theta(\log\log n/\log n)} = 1 + o(1)$, and since $\rho$ is independent of $n$, $\rho^{1/\ell} = 1 + o_n(1)$; we thus have the desired conclusion. \end{proof} \section{Introduction} A randomly chosen $n$-variable constraint satisfaction problem (CSP) will typically be unsatisfiable once the constraint density~$\alpha$ (ratio of constraints to variables) is a sufficiently large constant. Taking 3SAT as an example, the conjectural satisfiability threshold~\cite{MPZ02,MMZ06} is $\alpha_c \approx 4.2667$, and the trivial first moment method already establishes unsatisfiability (whp) once $\alpha > \log_{7/8}(1/2) \approx 5.19$. Despite this, there is no known efficient algorithm that can refute random 3SAT instances (whp) for any large constant~$\alpha$. The best known algorithms~\cite{FGK05,GL03,CGL07,FO07,FKO06}, all of which use spectral or semidefinite-programming (SDP) techniques, work only once $\alpha \gtrapprox \sqrt{n}$.
Indeed, there are lower bounds~\cite{Sch10,Tul09,KMOW17} showing that any polynomial-time algorithm based on such techniques --- more generally, based on the constant-degree ``Sum of Squares'' method --- will fail to refute unless $\alpha \gtrapprox \sqrt{n}$. The most general of these results~\cite{KMOW17} applies to \emph{any} CSP for which the constraint predicate supports a pairwise-uniform probability distribution.\footnote{That is, there is a distribution $D$ over satisfying assignments $x$ to the predicate, with the property that the order $1$ and $2$ moments of $D$ are identical to those of the uniform distribution.} On the other hand, for any CSP whose predicate does \emph{not} support a pairwise-uniform probability distribution, it has been shown~\cite{AOW15} that there \emph{is} an efficient SDP-based algorithm for refuting random instances once the constraint density~$\alpha$ is a sufficiently large constant.\footnote{In~\cite{AOW15}, it is stated that $\alpha = n^{k/2-1}\polylog n$ suffices when no $k$-wise uniform distribution is supported; however, in the particular case of $k = 2$ one can show that the $\polylog n$ is unnecessary, using the (worst-case) strong refutation algorithm for 2XOR-SAT~\cite{CW04}.} For such CSPs, where ``all of the action'' is in the sparse regime of $O(n)$ constraints, it is more plausible to hope for an efficient refutation algorithm that works just above the satisfiability threshold --- or at least to identify sharp thresholds for when efficient refutation algorithms succeed. Perhaps the simplest and most natural $\mathsf{NP}$-complete CSP of this type is NAE-3SAT. This is the variant of 3SAT in which a clause is considered ``satisfied'' if and only if it has at least one true literal \emph{and} one false literal; i.e., the literals' truth values are Not All Equal. (The further variant wherein all literals appear positively is equivalent to the problem of $2$-coloring a $3$-uniform hypergraph.) Being a more symmetric --- and in some sense, simpler --- variant of 3SAT, the NAE-3SAT problem has received a great deal of attention in the study of random CSPs; see, e.g.,~\cite{AS93,ACIM01,AM02,GJ03,CNRZ03,DRZ08,DKR15,DSS16}. In particular, by 2003 Goerdt and Jurdzi{\'n}ski~\cite{GJ03} had already proven that SDP methods could refute random NAE-3SAT instances at sufficiently high constant constraint density. NAE-3SAT is also closely related to the Max-Cut and 2XOR-SAT CSPs and has a natural basic SDP relaxation; for this reason, the problem has also been well-studied from the point of view of worst-case approximation algorithms~\cite{KLP96,AE98,Zwi98,Zwi99}. This paper is motivated by the question of whether efficient algorithms might be able to refute random NAE-3SAT instances at densities all the way down to the satisfiability threshold --- or whether there is still a range of constant densities where random instances are unsatisfiable, but this is hard for efficient algorithms to certify. The latter case seems to prevail for 3SAT, and one would likely pessimistically guess the same is true for NAE-3SAT. However, one may need a finer analysis for NAE-3SAT; the range of presumably-hard densities for refuting 3SAT is between a constant and~$\sqrt{n}$, whereas for NAE-3SAT it is between two universal constants.
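As an aside, it is easy to verify directly that NAE-3SAT falls on the refutable side of the pairwise-uniformity dichotomy discussed above. (This one-line check is standard, and we include it only for the reader's convenience.) Encoding truth values as $\pm 1$, every $x \in \{\pm 1\}^3$ satisfies
\[
x_1x_2 + x_2x_3 + x_3x_1 = \begin{cases} 3 & \text{if } x_1 = x_2 = x_3, \\ -1 & \text{otherwise.} \end{cases}
\]
Hence any distribution supported on assignments satisfying the NAE predicate has $\E[x_1x_2 + x_2x_3 + x_3x_1] = -1$, whereas a pairwise-uniform distribution would require $\E[x_ix_j] = 0$ for all $i \neq j$; so no such distribution exists.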
One way to give evidence for the existence of hard densities for NAE-3SAT refutation would be to study the \emph{SDP-satisfiability threshold} for random instances; i.e., the largest density for which the basic SDP algorithm fails to refute satisfiability. The goal would be to give a lower bound on the SDP-satisfiability threshold that exceeds the actual NAE-3SAT satisfiability threshold. In fact, the main result of this paper is a determination of the \emph{exact} SDP-satisfiability threshold of random NAE-3SAT instances, in the setting of random regular instances. This threshold provably exceeds the actual satisfiability threshold, thus establishing a range of degrees for which random regular NAE-3SAT refutation is hard for SDP algorithms. \subsection{Our results} \label{sec:our-results} For technical simplicity, we work in the setting of random \emph{regular} instances of NAE-3SAT, where every variable participates in the same number, $d$, of 3NAE-constraints. (This is in contrast to the ``Erd\H{o}s--R\'{e}nyi'' setting with clause density~$\alpha$, in which the degree of each variable is like a Poisson random variable with mean~$3\alpha$.) We also use the ``random lift'' model for $d$-regular instances, rather than, say, the ``configuration'' model. For precise details see \Cref{sec:lift-notation}, but in brief, our random $d$-regular instances are chosen as follows: \begin{enumerate}[label=\roman*] \item Start with the bipartite graph $K_{d,3}$. \item Choose a uniformly random $n$-lift~$\bH$, a bipartite graph with $dn$ vertices of degree $3$ in one part and $3n$ vertices of degree $d$ in the other part. \item Treat the degree-$d$ vertices as CSP variables and the degree-$3$ vertices as 3NAE constraints on the adjacent variables. \item In each constraint, randomly replace each variable-appearance with its negation, uniformly and independently. \end{enumerate} Notice that for \emph{any} $(3,d)$-biregular graph~$H$ and any truth assignment to the variables, the randomness from the negations alone gives us that each constraint is independently satisfied with probability~$3/4$. Thus the first moment method implies the following: \begin{fact} \label{fact:sat-thresh-upper} For $d > \log_{\frac43} 8 \approx 7.228$ (i.e., for $d \geq 8$) a random $d$-regular NAE-3SAT instance will be unsatisfiable with high probability (indeed, in any model with random negations).\footnote{In fact, the unsatisfiability threshold is more likely to be lower, specifically $d \geq 7$, based on heuristics from statistical physics. The ``1RSB'' prediction for the unsatisfiability threshold of random NAE-3SAT --- which was rigorously verified for NAE-$k$SAT, $k \geq k_0$, in~\cite{DSS16} --- was determined to be at average degree $3 \cdot 2.105 = 6.315$ in the Erd\H{o}s--R\'{e}nyi case~\cite{CNRZ03}, and at degree at most~$7$ in the regular case~\cite{DRZ08} (albeit these predictions were for the ``coloring'' version of NAE-3SAT without negations).} \end{fact} Our main theorem is the following sharp threshold for SDP-satisfiability: \begin{theorem} \label{thm:our-mainest} Let $\bI$ be a random $d$-regular instance of NAE-3SAT. Then with high probability (meaning probability $1-o_{n \to \infty}(1)$): \begin{itemize} \item For $d < 13.5$, the natural SDP relaxation will \emph{not} refute satisfiability of~$\bI$. \item For $d > 13.5$, the natural SDP relaxation \emph{will} refute satisfiability of~$\bI$.
\end{itemize} \end{theorem} Of course, since $d$ is always an integer we could have phrased the two cases as $d \leq 13$ and $d \geq 14$. However, as will be seen below, there is a sense in which the precise non-integer $13.5$ is the sharp threshold. In any case, these results show that for $d = 8, 9, 10, 11, 12, 13$ (and likely also $d = 7$), a random $d$-regular NAE-3SAT instance is unsatisfiable, yet this cannot be efficiently refuted using the basic SDP relaxation.\\ In fact, our results are somewhat stronger than what is stated in \Cref{thm:our-mainest}. Let us define \[ f(d) = \frac98 - \frac38\cdot\frac{\parens*{\sqrt{d-1} - \sqrt{2}}^2}{d}, \] a quantity that decreases on $[3, \infty)$, with $f(13.5) = 1$ and $\lim_{d \to \infty} f(d) = 3/4$. We show: \begin{itemize} \item (See \Cref{thm:sdp-for-girth1,thm:sdp-for-girth2} for details.) Even when augmented with the triangle inequalities, the SDP ``thinks'' that a random $d$-regular NAE-3SAT instance has a solution satisfying at least an $f(d) - \eps$ fraction of the constraints; in particular, it thinks the instance is satisfiable if $d < 13.5$. Indeed this holds for \emph{any} $d$-regular NAE-3SAT instance of sufficiently large constant girth. \item (See \Cref{thm:refutation-upper} for details.) Even the basic ``eigenvalue bound'' (a special case of the SDP method) shows that a random $d$-regular NAE-3SAT instance has no solution satisfying at least an $f(d) + \eps$ fraction of the constraints; in particular, it refutes satisfiability if $d > 13.5$. \end{itemize} \section{Methodology, further generalizations, and related work} \subsection{2XOR-SAT and semidefinite programming} One reason that semidefinite programming algorithms are particularly natural for NAE-3SAT is that the CSP is essentially a form of 2XOR-SAT. Recall that the 2XOR-SAT CSP has constraints on pairs of literals, with the constraint being satisfied if the literals are assigned unequal truth values. Now for literals $\ell_1, \ell_2, \ell_3$: \begin{align*} \text{NAE}(\ell_1,\ell_2,\ell_3) \text{ \phantom{un}satisfied} &\iff \text{exactly $2$ of } \text{XOR}(\ell_1, \ell_2), \text{ XOR}(\ell_2, \ell_3), \text{ XOR}(\ell_3, \ell_1) \text{ satisfied;} \\ \text{NAE}(\ell_1,\ell_2,\ell_3) \text{ unsatisfied} &\iff \text{exactly $0$ of } \text{XOR}(\ell_1, \ell_2), \text{ XOR}(\ell_2, \ell_3), \text{ XOR}(\ell_3, \ell_1) \text{ satisfied.} \end{align*} (In case all the literals are variables appearing positively, the resulting 2XOR-SAT instance is in fact a ``Max-Cut'' instance.) If we convert an NAE-3SAT CSP with $m$ constraints to a 2XOR-SAT CSP with $3m$ constraints in the above way, every truth assignment satisfying a $\beta$ fraction of NAE-3SAT constraints satisfies a $(2/3)\beta$ fraction of 2XOR-SAT constraints. Indeed, the standard SDP relaxation for NAE-3SAT, first studied by Kann, Lagergren, and Panconesi~\cite{KLP96}, is nothing more than $3/2$ times the basic Goemans--Williamson~\cite{GW95} SDP for the associated 2XOR-SAT instance. We recall here the basic definitions: \begin{definition} Let $I$ be an instance of 2XOR-SAT with $m$ constraints on $n$ variables, to be assigned values in $\{\pm 1\}$. We identify the instance with its (multi)set of constraints. Each constraint is a triple $(u,v,\xi)$ for $u,v \in [n]$ distinct and $\xi \in \{\pm 1\}$; this is thought of as the constraint $x_ux_v = -\xi$. 
The SDP relaxation value is defined to be \[ \mathrm{SDP}(I) = \sup\braces*{ \frac{1}{m} \sum_{(u,v,\xi) \in I} \parens*{\frac12 - \frac12 \xi \la X_u, X_v \ra}} \in [0,1], \] where the $\sup$ is over all choices of vectors $(X_v)_{v \in [n]}$ satisfying $\la X_v, X_v \ra = 1$ for all~$v$. Equivalently, instead of vectors, the $X_v$'s may be jointly (centered) Gaussian random variables, with $\la X_u, X_v \ra$ interpreted as $\E[X_u X_v]$. The quantity $\mathrm{SDP}(I)$ always upper-bounds $\mathrm{OPT}(I)$, the maximum fraction of simultaneously satisfiable 2XOR-SAT constraints, since for any truth assignment $x \in \{\pm 1\}^n$ we may take the joint Gaussians $X_u = x_u Z$, where $Z$ is a standard Gaussian. The advantage of $\mathrm{SDP}(I)$ is that while computing $\mathrm{OPT}(I)$ is $\mathsf{NP}$-hard, one can compute $\mathrm{SDP}(I)$ (to additive accuracy $2^{-n}$) in polynomial time. \end{definition} \begin{definition} A common algorithmic technique is to also enforce the \emph{triangle inequalities}, meaning to only take the $\sup$ over $X_v$'s satisfying \[ \la X_u, X_v \ra + \la X_v, X_w \ra + \la X_w, X_u \ra \geq -1, \qquad \la X_u, X_v \ra - \la X_v, X_w \ra - \la X_w, X_u \ra \geq -1. \] The resulting value, $\mathrm{SDP}_\triangle(I)$, is a tighter relaxation: $\mathrm{OPT}(I) \leq \mathrm{SDP}_\triangle(I) \leq \mathrm{SDP}(I)$. \end{definition} \begin{definition} A related quantity is the \emph{Lov\'{a}sz theta function}~\cite{Lov79}; for a graph~$G$, the Lov\'asz theta function (of its complement), $\LTheta{G}$, is the least~$k$ such that there are centered joint Gaussians $(X_u)$ with $\la X_u, X_u \ra = 1$ for all vertices~$u$ and $\la X_u, X_v \ra = -\frac{1}{k-1}$ for all edges~$(u,v)$. In particular, if $G$ is thought of as a Max-Cut instance, then $\mathrm{SDP}(G) \geq \frac12 + \frac12\frac{1}{\LTheta{G}-1}$. \end{definition} \begin{definition} The SDP for 2XOR-SAT is also known to have a \emph{dual} characterization~\cite{DP93}: \[ \mathrm{SDP}(I) = \inf_{\substack{w \in \R^n \\ \sum_u w_u = 0}} \braces*{\frac{n}{4m} \cdot \lambda_{\text{max}}(L_I + \diag(w))}, \] where $L_I$ denotes the \emph{Laplacian} matrix for~$I$ (defined in \Cref{sec:matrix-notation}), and $\lambda_{\text{max}}$ denotes the largest eigenvalue. Note that by taking $w = 0$ we get an upper bound on~$\mathrm{SDP}(I)$; we refer to this as the \emph{eigenvalue bound}, \[ \mathrm{EIG}(I) = \frac{n}{4m} \cdot \lambda_{\text{max}}(L_I) = \frac{1}{2d} \cdot\lambda_{\text{max}}(L_I), \] the latter equality holding in case $I$ is $d$-regular. The certificate $\mathrm{OPT}(I) \leq \mathrm{EIG}(I)$ is easy to see; it is a consequence of the definitions that $\mathrm{OPT}(I) = \frac{n}{4m} \cdot \max\{x^\top L_I x : x \in \{\pm \frac{1}{\sqrt{n}}\}^n\}$, and $\lambda_{\text{max}}(L_I)$ allows taking the max over all unit vectors. \end{definition}
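The eigenvalue bound is also trivial to experiment with. The following Python sketch (an illustration on a small random instance, not on the triangle-structured instances studied in this paper) computes $\mathrm{EIG}(I)$ from the signed Laplacian and compares it against a brute-force $\mathrm{OPT}(I)$.
\begin{verbatim}
import numpy as np
from itertools import product

def eig_bound(constraints, n):
    """EIG(I) = n/(4m) * lambda_max(L_I) for constraints (u, v, xi),
    each encoding x_u * x_v = -xi; the edge {u,v} carries label xi."""
    A = np.zeros((n, n)); D = np.zeros((n, n))
    for u, v, xi in constraints:
        A[u, v] += xi; A[v, u] += xi
        D[u, u] += 1;  D[v, v] += 1
    return n / (4 * len(constraints)) * np.linalg.eigvalsh(D - A)[-1]

def opt(constraints, n):
    """Brute-force OPT(I): maximum fraction of satisfied constraints."""
    return max(sum(x[u] * x[v] == -xi for u, v, xi in constraints)
               for x in product([-1, 1], repeat=n)) / len(constraints)

rng = np.random.default_rng(1)
n = 8
cons = [(u, v, int(rng.choice([-1, 1])))
        for u in range(n) for v in range(u + 1, n) if rng.random() < 0.5]
print(opt(cons, n), "<=", eig_bound(cons, n))   # certificate OPT <= EIG
\end{verbatim}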
\subsection{Methodology and related work} \label{sec:prior} To prove \Cref{thm:our-mainest}, we convert our random NAE-3SAT instances into random 2XOR-SAT instances, and then try to analyze whether or not the SDP-value of these instances is as large as~$\frac23$. (Recall that this corresponds to the SDP-value of the NAE-3SAT instances being as large as~$1$.) There are a number of prior works on analyzing the Goemans--Williamson SDP on random graphs (see below); however, our situation is a bit different. The main difference is that the graphs underlying our random 2XOR-SAT instances are not uniformly random $2d$-regular graphs, but rather have a peculiar ``triangle-structure''. Recall that they are generated by first choosing a large random $(3,d)$-biregular graph (by randomly lifting $K_{d,3}$), then replacing each degree-$3$ constraint vertex with a triangle on its three neighboring variables. Thus, locally, the resulting graphs look like the graph on the right in \Cref{fig:infinite-graphs} (for $d = 4$). An additional small complication is that these random ``triangle-graphs'' effectively get random edge-signings when the random literal-negations are taken into account, converting the Max-Cut instance to a 2XOR-SAT instance. Finally, in the remainder of the paper we will focus on the generalized problem in which triangles are replaced by $c$-cliques, for $c \geq 3$. This generalization does not correspond to any well-known CSP, but analyzing general~$c$ turns out to be no harder than analyzing the $c = 3$ special case. For the part of our main theorem showing that the simple eigenvalue bound succeeds as $d$ becomes large, we need to show tight bounds on the eigenvalues of the random ``triangle-graphs'' (more generally, $c$-clique graphs) that arise in our model. If we simply had random $d$-regular graphs, Friedman's famous almost-Ramanujan theorem~\cite{Fri03} would have sufficed. Instead, we relate the eigenvalues of our random graphs to those of a randomly lifted $(c,d)$-biregular bipartite graph. We then use Bordenave's recent reproof~\cite{Bor17} of Friedman's theorem (revised to also include random edge-signings), as well as the Ihara--Bass formula, to show that with high probability the nontrivial spectrum of such random bipartite graphs is contained in $\pm [\sqrt{d-1} - \sqrt{c-1}, \sqrt{d-1} + \sqrt{c-1}]$. Inspiration for these computations comes from~\cite{FM16}. For the part of our main theorem showing that large-value SDP solutions exist, the tools we use come from a fairly recent line of work concerning ``Gaussian waves'' in infinite regular graphs~\cite{Elo09,CGHV15,HV15}. This work can be seen as giving a way to convert eigenfunctions on the infinite regular tree (and other vertex-transitive infinite graphs) into Goemans--Williamson SDP solutions --- in fact, Lov\'{a}sz theta function solutions. These may be converted to such solutions on high-girth finite graphs that locally resemble the infinite graphs. Several works in this area~\cite{CGHV15,HV15,Cso16,Lyo17} used this method to show, e.g., that high-girth $3$-regular graphs must contain large independent sets, using techniques resembling the randomized rounding of independent-set SDPs (cf.~\cite{KMS98}) and also local improvement techniques applicable to cubic graphs (cf.~\cite{HLZ04}). These techniques were also used to show limits on the performance of SDP for Max-Cut, Min-Bisection, and community detection problems in, e.g.,~\cite{MS16,FM16}. See~\cite{BKM17} for similar approaches in the context of graph-coloring, and~\cite{JMR16} for more on phase transitions for SDPs in the context of community detection. \section{Preliminaries on graphs, lifts, and eigenvalues} \subsection{Graphs, hypergraphs, and edge-labeled graphs} \label{sec:graph-notation} We begin with some general notation. $H$ will typically denote a simple $(c,d)$-biregular bipartite graph with $c, d \geq 2$. The setting of most interest to us is $d \geq c = 3$.
Sometimes we will refer to the vertices on the $c$-regular side as \emph{constraints} and the vertices on the $d$-regular side as \emph{variables}. \Cref{fig:k43} shows an example, $K_{4,3}$, with the variables depicted as circles and the constraints depicted as squares. \myfig{.1}{figures/k43.pdf}{$H = K_{4,3}$}{fig:k43} We may also think of $H$ as a $c$-uniform $d$-regular hypergraph, with the variables as vertices and the constraints as hyperedges. $X$ will denote an edge-signed version of $H$ (thought of as a bipartite graph, not a hypergraph); i.e., one in which each edge of~$H$ is labeled with~$\pm 1$. (In the unsigned case, we think of all edges as being labeled~$+1$.) We say that $X$ is a ``random signing'' of $H$ if it is formed by independently labeling each edge of $H$ with~$\pm 1$, uniformly at random. Given $H$, we will write $G = G_H$ for the (loopless multi-)graph formed by first thinking of $H$ as a hypergraph and then replacing each hyperedge by a $c$-clique. As a result, $G$ is a $(c-1)d$-regular graph, called the \emph{primal graph} for~$H$. Given an edge-signed version $X$ of $H$, we will write $I = I_X$ for the primal graph of~$X$, an edge-signed version of $G$ defined as follows: whenever constraint $a$ is adjacent to variables~$i, j$ with edge-signs $\xi_{ai}, \xi_{aj} \in \{\pm 1\}$, we place the sign $\xi_{ai}\xi_{aj}$ on the resulting $\{i,j\}$ edge of~$G$. We may think of~$I$ as a 2XOR-SAT instance, where the vertices are to be assigned values $x_i \in \{\pm 1\}$, and an edge $\{i,j\}$ with label $\xi$ corresponds to the constraint $x_ix_j = -\xi$. In the special case of $c=3$, we can think of~$X$ as a NAE-3SAT instance, where the variables are to be assigned values $x_i \in \{\pm 1\}$, and a constraint $a$ adjacent to variables $i,j,k$ with labels $\xi_{ai}, \xi_{aj}, \xi_{ak}$ corresponds to the constraint that $\xi_{ai} x_i, \xi_{aj} x_j, \xi_{ak} x_k$ are not all equal. In this case there is a precise relationship between the NAE-3SAT instance~$X$ and the 2XOR-SAT instance~$I$: any assignment to the vertices satisfying exactly a $\beta$ fraction of the NAE-3SAT constraints will necessarily satisfy exactly a $\frac23\beta$ fraction of the 2XOR-SAT constraints. \subsection{Associated matrices} \label{sec:matrix-notation} Given any $Y \in \{H, X, G, I\}$, we will write $A_Y$ for the adjacency matrix. More precisely, $A_Y[i,j]$ is the sum of the (positive and negative) edge-labels on all edges connecting~$i$ and~$j$. We will write $D_Y$ for the diagonal degree matrix of~$Y$, whose entry $D_Y[i,i]$ equals the degree of vertex~$i$. (Edges count~$1$ toward the degree regardless of their sign.) We write $L_Y = D_Y - A_Y$ for the Laplacian matrix of~$Y$; we also write $L_Y(u) = (1-u^2) \Id + u^2 D_Y - uA_Y$ for the ``deformed Laplacian'', parameterized by $u \in \R$, which reduces to the basic Laplacian when $u = 1$. (Here $\Id$ denotes the identity operator.) Finally, we will write $B_Y$ for the non-backtracking matrix of~$Y$. Recall that this matrix is formed as follows: First, each undirected edge in $Y$ is converted to two directed edges (both having the same sign, in case $Y$ is edge-signed). Then $B_Y$ is the square (non-symmetric) matrix indexed by the directed edges, in which the entry $B_Y[(i,j),(k,\ell)]$ is nonzero if and only if $j = k$ and $i \neq \ell$, in which case it equals the sign-label of~$(i,j)$.
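To make the primal-graph construction and the $\frac23$ relationship concrete, here is a minimal Python sketch (a toy illustration, not code from the paper); it takes a small signed instance $X$, given as clauses of (variable, sign) pairs as in the recipe from \Cref{sec:our-results}, builds the 2XOR-SAT instance $I_X$, and checks the exact relationship on all assignments.
\begin{verbatim}
import itertools

def primal_instance(clauses):
    """Each clique edge {i, j} coming from constraint a gets sign
    xi_ai * xi_aj, encoding the 2XOR constraint x_i * x_j = -xi."""
    return [(i, j, si * sj)
            for clause in clauses
            for (i, si), (j, sj) in itertools.combinations(clause, 2)]

def nae_frac(clauses, x):   # fraction of satisfied NAE constraints
    return sum(len({s * x[v] for v, s in cl}) == 2
               for cl in clauses) / len(clauses)

def xor_frac(I, x):         # fraction of satisfied 2XOR constraints
    return sum(x[u] * x[v] == -xi for u, v, xi in I) / len(I)

# toy NAE-3SAT instance (c = 3) on 4 variables; signs are literal negations
X = [[(0, +1), (1, -1), (2, +1)],
     [(1, +1), (2, +1), (3, -1)]]
I = primal_instance(X)

for x in itertools.product([-1, 1], repeat=4):
    assert abs(xor_frac(I, x) - (2 / 3) * nae_frac(X, x)) < 1e-12
\end{verbatim}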
\subsection{Lifts} \label{sec:lift-notation} Suppose now that $Y = (V,E)$ denotes any undirected (multi-)graph. For $n \in \Z^+$, an $n$-lift of $Y$ is a graph $Y_n$ whose vertex set is $V \times [n]$ and whose edges consist of a perfect matching between $\{u\} \times [n]$ and $\{v\} \times [n]$ for each edge $\{u,v\} \in E$. When the $|E|$ perfect matchings are chosen independently and uniformly at random, we call $Y_n$ a random $n$-lift of $Y$. Note that if $Y$ is a $d$-regular graph, then so is $Y_n$, and if $Y$ is a $(c,d)$-biregular bipartite graph, then so is $Y_n$. If $B$ (respectively, $B_n$) denotes the non-backtracking matrix of~$Y$ (respectively, $Y_n$), it is known that the multiset of $B_n$'s eigenvalues contains the multiset of $B$'s eigenvalues. The remaining eigenvalues are referred to as the ``new'' eigenvalues of~$B_n$. \subsection{Eigenvalues} \label{sec:eig-notation} Given an $N$-dimensional matrix $M$, we write $\mathrm{spec}(M) \subset \C$ for its spectrum, the cardinality-$N$ \emph{multiset} of roots of its characteristic polynomial. We also write $\rho(M)$ for its spectral radius, $\max\{|\lambda| : \lambda \in \mathrm{spec}(M)\}$. The adjacency matrix of a (possibly edge-signed) graph is symmetric, and hence its spectrum is real; the Laplacian is furthermore positive semidefinite, and hence its spectrum is nonnegative. A non-backtracking matrix, however, will in general have complex spectrum. We are particularly interested in bipartite graphs, so we record some facts concerning them here. Suppose $X$ is a possibly edge-signed bipartite graph, with vertex parts of size $m \geq n$. Then it is well known that \[ \mathrm{spec}(A_X) = \{0 : \text{with multiplicity } m-n\} \cup \{\pm \lambda : \lambda \in \mathrm{PS}(A_X)\} \] for some multiset $\mathrm{PS}(A_X) \subset \R^{\geq 0}$.\footnote{We chose ``$\mathrm{PS}$'' to stand for Positive Spectrum, notwithstanding our warning that it may contain~$0$.} Further, if $X$ is $(c,d)$-biregular, we'll have $\mathrm{PS}(A_X) \subset [0, \sqrt{cd}]$. The set $\pm \mathrm{PS}(A_X)$ may be called the ``nontrivial'' part of $A_X$'s spectrum. A warning, though: $\pm \mathrm{PS}(A_X)$ is not the same as the ``nonzero'' part of $A_X$'s spectrum, since $\mathrm{PS}(A_X)$ may contain~$0$ with positive multiplicity. Indeed, this happens in one of the simplest cases, as is well known: \begin{fact} \label{fact:Kcd-A-spec} Let $H = K_{d,c}$, the complete bipartite graph with vertex parts of size $d \geq c$. Then $\mathrm{PS}(A_H)$ consists of $c-1$ copies of~$0$ and $1$ copy of $\sqrt{cd}$. \end{fact} We also record below the spectrum of the non-backtracking matrix of $K_{d,c}$, which we'll derive in \Cref{sec:ihara--bass} using the Ihara--Bass formula. But first, some notation we'll use heavily in this paper: \begin{notation} \label{not:cd} For $c, d \geq 2$, we write \[ s_c = \sqrt{c-1}, \quad s_d = \sqrt{d-1}, \quad \rho_1 = s_c s_d, \quad \ol{\lambda} = s_d + s_c, \quad \ul{\lambda} = |s_d - s_c|, \quad \kappa = (c-1)d = \rho_1^2 + s_c^2. \] We will often assume $d \geq c$, in which case $\ul{\lambda} = s_d - s_c$. \end{notation}
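As a quick numerical sanity check on \Cref{fact:Kcd-A-spec} (illustrative only; $c=3$, $d=4$ as in \Cref{fig:k43}), one can verify the adjacency spectrum of $K_{d,c}$ in a few lines of Python; the analogous check of the non-backtracking spectrum matches the proposition that follows.
\begin{verbatim}
import numpy as np

c, d = 3, 4
# adjacency matrix of K_{d,c}: d vertices on one side, c on the other
A = np.block([[np.zeros((d, d)), np.ones((d, c))],
              [np.ones((c, d)),  np.zeros((c, c))]])
ev = np.sort(np.linalg.eigvalsh(A))
print(np.round(ev, 6))
# expect: -sqrt(cd), then 0 with multiplicity c + d - 2, then +sqrt(cd);
# i.e., PS(A) consists of c - 1 copies of 0 and one copy of sqrt(cd)
assert np.isclose(ev[-1], np.sqrt(c * d))
\end{verbatim}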
\begin{proposition} \label{prop:Kcd-B-spec} Let $B$ be the non-backtracking matrix of $K_{d,c}$, where $d \geq c \geq 2$, $d \neq 2$, and let $i$ denote a primitive fourth root of unity. Then \[ \mathrm{spec}(B) = \begin{cases} \pm 1& \text{with multiplicity $(c-1)(d-1)$ each;}\\ \pm is_c & \text{with multiplicity $(d-1)$ each;}\\ \pm is_d & \text{with multiplicity $(c-1)$ each;}\\ \pm s_cs_d & \text{with multiplicity $1$ each;}\\ \end{cases} \qquad \text{and hence, } \rho(B) = s_c s_d = \rho_1. \] \end{proposition} As described in \Cref{sec:graph-notation}, we will often consider forming the primal graph~$G$ of a $(c,d)$-biregular graph~$H$. It is simple to work out the relationship between the eigenvalues of~$H$ and the eigenvalues of~$G$; this is done in, e.g.,~\cite[Section~4.1]{LS96}. The analysis is unchanged for the edge-signed variant, and it yields: \begin{proposition} \label{prop:triangle-replace-eigs} Let $X$ be an edge-signed $(c,d)$-biregular graph, and let $I = I_X$ be the corresponding edge-signed primal graph. Then \[ \mathrm{spec}(A_I) = \{\lambda^2 - d : \lambda \in \mathrm{PS}(A_X)\}. \] Since $I$ is $\kappa$-regular, where $\kappa = cd - d$, we can also conclude that \[ \mathrm{spec}(L_I) = \{cd - \lambda^2 : \lambda \in \mathrm{PS}(A_X)\}. \] \end{proposition} \subsection{The infinite biregular tree and distance-regular graph} \label{sec:infinite-bireg-notation} Since a large random $(c,d)$-biregular graph looks locally like a tree, we will want to study the infinite $(c,d)$-biregular tree, which we denote by $\mathbb{T}_{d,c}$. More to the point, we will want to study its (infinite) primal graph, which we denote by $\mathbb{G}_{d,c}$. Fragments of these graphs, in the case $c = 3$, $d = 4$, are pictured in \Cref{fig:infinite-graphs}. \begin{figure}[H] \centering \includegraphics[width=.3\textwidth]{figures/t34.pdf} \qquad \qquad \includegraphics[width=.3\textwidth]{figures/g34.pdf} \caption{Fragments of the infinite biregular tree $\mathbb{T}_{4,3}$, and its primal graph $\mathbb{G}_{4,3}$} \label{fig:infinite-graphs} \end{figure} As shown by Ivanov~\cite{Iva83}, the graphs $\mathbb{G}_{d,c}$ are precisely the infinite graphs~$G$ that are \emph{distance-regular}, meaning that there exist constants~$p^h_{j,k}$ such that for every pair $u,v \in V(G)$ with $\dist_G(u,v) = h$, the number of vertices $w \in V(G)$ having $\dist_G(w,u) = j$ and $\dist_G(w,v) = k$ is equal to~$p^h_{j,k}$. It is elementary to compute these quantities for~$\mathbb{G}_{d,c}$, and the results appear below. Only the cases $h = 0, 1$ are truly essential for the paper, and the reader might like to verify them while referring to \Cref{fig:infinite-graphs}.
\begin{proposition} \label{prop:pijk} In the distance-regular graph $\mathbb{G}_{d,c}$, recalling the notation \[ s_c^2 - 1 = c-2, \quad \rho_1^2 = (c-1)(d-1), \quad \rho_1^2 + s_c^2 = \kappa = (c-1)d, \quad \rho_1^2 - s_c^2 = (c-1)(d-2), \] we have \[ p^0_{\ell,\ell} = \begin{cases} 1 & \text{if }\ell = 0\\ (\rho_1^2 + s_c^2)\rho_1^{2(\ell-1)} & \text{if }\ell \geq 1; \end{cases} \] and, for $h \geq 1$, $0 \leq t \leq h$, \begin{align*} \text{if $h$ and $t$ have the same parity:} \quad p^{h}_{\ell, \ell + t} = p^{h}_{\ell + t, \ell} &= \begin{cases} 0 & \text{if }\ell < \frac{h-t}{2}\\ 1 & \text{if }\ell = \frac{h-t}{2}\\ \rho_1^{2\ell} & \text{if }\ell > \frac{h-t}{2} \text{ and } t = h \\ (\rho_1^2 - s_c^2)\rho_1^{2(\ell-(\frac{h-t+2}{2}))} & \text{if }\ell > \frac{h-t}{2} \text{ and } t \neq h; \end{cases} \\ \text{if $h$ and $t$ have opposite parity:} \quad p^{h}_{\ell, \ell + t} = p^{h}_{\ell + t, \ell} &= \begin{cases} 0 & \text{if }\ell < \frac{h-t+1}{2}\\ (s_c^2-1)\rho_1^{2(\ell-\frac{h-t+1}{2})} & \text{if }\ell \geq \frac{h-t+1}{2}; \end{cases} \end{align*} and finally, $p^h_{j,k} = 0$ otherwise. \end{proposition} The spectrum of the adjacency ``matrix'' (operator) of $\mathbb{G}_{d,c}$ --- and indeed, the whole ``spectral measure'' --- has been known since the early '80s. (There are appropriate definitions for these terms, generalizing the definitions in the finitary case. We will not give them here since, strictly speaking, this paper does not rely on them.) In particular, \begin{equation} \label{eqn:infinite-spectra} \Spec(A_{\mathbb{T}_{d,c}}) = \{0\} \cup \pm [\ul{\lambda}, \ol{\lambda}], \qquad \Spec(A_{\mathbb{G}_{d,c}}) = [\ul{\lambda}^2 - d, \ol{\lambda}^2 - d]; \end{equation} (the latter holding under the assumption $d \geq c$; if $d < c$ then also $-d \in \Spec(A_{\mathbb{G}_{d,c}})$). The history of these results can be found in \cite[Section~7E]{MW89} and~\cite[Section~5.2]{GM88}, the latter of which also shows that the spectral measures of large random $(c,d)$-biregular graphs converge to a measure with support $\Spec(A_{\mathbb{T}_{d,c}})$ (and similarly for their primal graphs and $\Spec(A_{\mathbb{G}_{d,c}})$). \section{Eigenvalues of random lifts and signings} Generalizing Friedman's celebrated characterization of the spectrum of random $d$-regular graphs \cite{Fri03}, Bordenave recently proved the following theorem: \begin{theorem} (\cite[Theorem 20]{Bor17}.) \label{thm:bordenave} Let $Y$ be a connected multigraph (with more edges than vertices) having non-backtracking matrix $B$. Fix $\eps > 0$. Let $\bY_n$ be a random $n$-lift of $Y$, and let $\bB_n$ be its non-backtracking matrix. Then \[ \Pr[\text{$\bB_n$ has a \emph{new} eigenvalue of magnitude} \geq \sqrt{\rho(B)} + \eps] = o_{n \to \infty}(1). \] \end{theorem} We will need a variant of this theorem in which the graph is randomly lifted and then randomly signed. The statement and proof are actually a little bit simpler. \begin{theorem} \label{thm:signed-bordenave} Let $Y$ be a connected graph (with more edges than vertices) having non-backtracking matrix $B$. Fix $\eps > 0$. Let $\bX_n$ be a random signing of a random $n$-lift $\bY_n$ of $Y$, and let $\bB_n$ denote the non-backtracking matrix of $\bX_n$. Then \[ \Pr[\rho(\bB_n) \geq \sqrt{\rho(B)} + \eps] = o_{n \to \infty}(1).
\] \end{theorem} The proof, which closely follows that of~\cite[Theorem 20]{Bor17}, appears in \Cref{sec:bordenave}.\\ We will also quote some basic results about the scarcity of cycles in randomly lifted graphs: \begin{theorem} \label{thm:girthy} (Greenhill--Janson--Ruci{\'n}ski~\cite[Lemma~5.1]{GJR10}.) Let $\bY_n$ be as in \Cref{thm:bordenave} or \Cref{thm:signed-bordenave}, and write $\bZ_k$ for the number of length-$k$ cycles in $\bY_n$. Let $\bP_2, \bP_3, \dots$ be independent Poisson random variables with $\bP_k$ of mean $w_k/(2k)$, where $w_k = \tr(B^k)$ is the number of closed non-backtracking walks of length~$k$ in~$Y$. Then for any $g \in \N^+$, the random variables $(\bZ_2, \bZ_3, \dots, \bZ_g)$ converge jointly in distribution to $(\bP_2, \bP_3, \dots, \bP_g)$. In particular, for a fixed~$g$ and $n$ sufficiently large, there is a positive probability (depending only on~$g$ and~$Y$) that $\bY_n$ has girth exceeding~$g$. \end{theorem} \begin{theorem} (Easily extracted from the proof of \cite[Lemma~24]{Bor17}.) \label{thm:cycleless-neighborhoods} Let $\bY_n$ be as in \Cref{thm:bordenave} or \Cref{thm:signed-bordenave} and write $d$ for the maximum degree of~$Y$. Call a vertex of $\bY_n$ \emph{$g$-bad} if its distance-$g$ neighborhood contains a cycle. Then the expected number of $g$-bad vertices in~$\bY_n$ is $O((d+1)^g)$. \end{theorem} \subsection{The Ihara--Bass formula} \label{sec:ihara--bass} The Ihara--Bass formula relates the eigenvalues of a graph's adjacency matrix and its non-backtracking matrix. Originally proved by Ihara~\cite{Iha66} for regular graphs, it was subsequently generalized to irregular graphs~\cite{Has92,Bas92,ST96,KS00}, vertex-weighted graphs~\cite{Kem16}, and most generally, edge-weighted graphs~\cite{WF09,FM16}. We will need the last of these, but only in the special case that all edge-weights are~$\pm 1$. In this case, the resulting formula looks identical to the usual (irregular, unweighted) Ihara--Bass formula: \begin{theorem} (\cite[Theorem~2]{WF09}, specialized to all edge-weights $\pm 1$.) \label{thm:ihara-bass1} Let $X$ be an edge-signed graph, having adjacency matrix~$A$, non-backtracking matrix~$B$, and deformed Laplacian $L(u) = (1-u^2) \Id + u^2 D - uA$. Then for all real $u \neq \pm 1$, \[ \det(\Id - uB) = \det(L(u)) \cdot (1-u^2)^{\#E(X) - \#V(X)}. \] \end{theorem} In the special case when $X$ is $(c,d)$-biregular, one can use this formula to work out a very explicit mapping between the eigenvalues of~$A$ and the eigenvalues of~$B$. The computations appear in~\cite[Section~4.2]{Kem16}; that paper only considered unsigned edges, but the result is the same because the Ihara--Bass formula is identical. Recalling the notation from \Cref{sec:eig-notation}: \begin{theorem} (Follows from \cite[Theorem~6]{Kem16} using \Cref{thm:ihara-bass1}.) \label{thm:ihara-bass2} Let $X$ be an edge-signed $(c,d)$-biregular graph, with $m$ vertices on the $c$-regular side and $n$ vertices on the $d$-regular side, so $e = cm = dn$ is the number of edges. Let $A$ denote the adjacency matrix of~$X$. Then~$B$, the non-backtracking matrix of~$X$, has the following $2e$ eigenvalues: \begin{itemize} \item $e - (m+n)$ copies each of $\pm 1$. \item $m - n$ copies each of $\pm is_c$. \item $4n$ ``nontrivial'' eigenvalues, all roots of $p_\lambda(u) = u^4 + (s_c^2+s_d^2 - \lambda^2)u^2 + \rho_1^2$ for $\lambda \in \mathrm{PS}(A)$. \end{itemize} \end{theorem} We would now like to understand the location of the $4$ roots of $p_\lambda(u)$ in~$\C$ as $\lambda$ varies in~$[0,\sqrt{cd}]$.
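Before the formal derivation, here is a quick numerical sanity check (a sketch assuming only \texttt{numpy}; the choice $c = 3$, $d = 7$ is arbitrary): evaluating the roots of $p_\lambda$ on a grid of $\lambda$ values shows that their maximum magnitude equals $\sqrt{\rho_1}$ precisely when $\lambda \in [\ul{\lambda}, \ol{\lambda}]$, matching \Cref{prop:the-picture} below.
\begin{verbatim}
import numpy as np

c, d = 3, 7
sc, sd = np.sqrt(c - 1), np.sqrt(d - 1)
rho1 = sc * sd
lam_lo, lam_hi = sd - sc, sd + sc       # underline/overline lambda (d >= c)

for lam in np.linspace(0.0, np.sqrt(c * d), 13):
    # p_lambda(u) = u^4 + (sc^2 + sd^2 - lam^2) u^2 + rho1^2
    roots = np.roots([1, 0, sc**2 + sd**2 - lam**2, 0, rho1**2])
    print(f"lam = {lam:5.2f}   max|u| = {np.abs(roots).max():.4f}   "
          f"in [lo, hi]: {lam_lo <= lam <= lam_hi}")
print(f"sqrt(rho1) = {np.sqrt(rho1):.4f}")
\end{verbatim}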
\ignore{ Since $p_\lambda$ is just a quadratic in~$u^2$, one can easily work out the following picture. When $\lambda = \sqrt{cd}$, the four roots are at $\pm \rho_1$ and $\pm 1$. As $\lambda$ decreases, the roots travel along the real axis towards $\pm \sqrt{\rho_1}$, reaching there simultaneously when $\lambda = \ol{\lambda}$. As $\lambda$ further decreases, the $4$ roots travel separately in the $4$ quadrants, tracing out the circle of radius $\sqrt{\rho_1}$ until they reach the complex axis at $\pm \sqrt{\rho_1} i$ when $\lambda = \ul{\lambda}$. As $\lambda$ decreases from $\ul{\lambda}$, the pairs split up and travel along the complex axis, ending at final positions $\pm is_c$ and $\pm is_d$ when $\lambda = 0$. } To do this, write \[ s_c = \frac{\ol{\lambda} - \ul{\lambda}}{2}, \quad s_d = \frac{\ol{\lambda} + \ul{\lambda}}{2}, \quad \alpha = \frac{\lambda^2 - \ul{\lambda}^2}{2}, \quad \beta = \frac{\lambda^2 - {\ol{\lambda}}^2}{2}, \quad U = u^2. \] Then \[ p_\lambda(u) = U^2 - (\alpha+\beta) U + \parens*{\frac{\alpha - \beta}{2}}^2, \] which has roots \[ U = \frac12 \parens*{\sqrt{\alpha} \pm \sqrt{\beta}}^2. \] If $\ul{\lambda}^2 \leq \lambda^2 \leq \ol{\lambda}^2$ then $\beta \leq 0 \leq \alpha$ and \[ |U| = \frac12\parens*{\bigl(\sqrt{\alpha}\bigr)^2 + \bigl(\sqrt{-\beta}\bigr)^2} =\frac{\alpha - \beta}{2} = \frac{\ol{\lambda}^2 - \ul{\lambda}^2}{4} = s_cs_d = \rho_1. \] On the other hand, if $\lambda^2 \not \in \bracks*{\ul{\lambda}^2, \ol{\lambda}^2}$, then $\alpha$ and $\beta$ have the same sign and \[ |U| = \frac12\parens*{|\alpha| + |\beta| \pm 2\sqrt{|\alpha|\,|\beta|}}, \text{ the larger of which exceeds } \frac{\ol{\lambda}^2 - \ul{\lambda}^2}{4} = \rho_1. \] We conclude: \begin{proposition} \label{prop:the-picture} For real $\lambda$, the roots of $p_\lambda(u)$ simultaneously have magnitude at most $\sqrt{\rho_1}$ if and only if $\lambda^2 \in \bracks*{\ul{\lambda}^2, \ol{\lambda}^2 }$ (i.e., $\lambda \in \pm \bracks*{\ul{\lambda}, \ol{\lambda}}$). \end{proposition} Also, when $\lambda = 0$ we have $p_\lambda(u) = u^4 + (s_c^2 + s_d^2)u^2 + s_c^2s_d^2$, and when $\lambda = \sqrt{cd}$ we have $p_\lambda(u) = u^4 - (\rho_1^2 + 1)u^2 + \rho_1^2$. Thus we can directly verify: \begin{proposition} \label{prop:lambda-cd} For $\lambda = 0$, the $4$ roots of $p_\lambda(u)$ are $\pm is_c$, $\pm is_d$. And, for $\lambda = \sqrt{cd}$, the $4$ roots of $p_\lambda(u)$ are $\pm \rho_1$, $\pm 1$. \end{proposition} At this point, we can combine \Cref{thm:ihara-bass2}, \Cref{fact:Kcd-A-spec}, and \Cref{prop:lambda-cd} to obtain \Cref{prop:Kcd-B-spec} as stated in \Cref{sec:eig-notation}. We may furthermore put together all the results in this section: \begin{theorem} \label{thm:random-signed-lift-eigenvalues} Let $d \geq c \geq 2$, $d \neq 2$. Fix $\eps > 0$. Let $\bX_n$ be a random signing of a random $n$-lift of the complete bipartite graph $K_{d,c}$, and let $\bA_n$ denote its adjacency matrix. Then \[ \Pr\bigl[\mathrm{PS}(\bA_n) \not \subset [\ul{\lambda} -\eps, \ol{\lambda} + \eps]\bigr] = o_{n \to \infty}(1). \] \end{theorem} \begin{proof} We apply \Cref{thm:signed-bordenave} with $Y = K_{d,c}$ and some sufficiently small $\eps' = \eps'(\eps, c, d) > 0$. The non-backtracking matrix $B$ of~$Y$ has spectral radius $\rho_1$, by \Cref{prop:Kcd-B-spec}. Thus if $\bB_n$ is the non-backtracking matrix of the randomly signed random lift~$\bX_n$ of~$Y$, we get \[ \Pr\bigl[\rho(\bB_n) \geq \sqrt{\rho_1} + \eps'\bigr] = o_{n \to \infty}(1). \] Thus with probability $1 - o(1)$ we have $\rho(\bB_n) < \sqrt{\rho_1} + \eps'$.
In this case, taking $\eps'$ sufficiently small and using the fact that the roots of a polynomial are continuous in its coefficients, \Cref{prop:the-picture} and \Cref{thm:ihara-bass2} imply that $\mathrm{PS}(\bA_n) \subset [\ul{\lambda} -\eps, \ol{\lambda} + \eps]$. The proof is complete. \end{proof} \begin{remark} This theorem is ``to be expected'' in light of the Godsil--Mohar work on spectral convergence mentioned at the end of \Cref{sec:infinite-bireg-notation}. But of course one needs the hard work of Bordenave's Theorem to show that random $(c,d)$-biregular graphs typically do not have \emph{any} eigenvalues outside the spectral bulk. In fact, to emphasize that care is needed, we remark that the random signing in \Cref{thm:random-signed-lift-eigenvalues} is essential; without it, it's not hard to show that $\mathrm{PS}(\bA_n)$ will contain~$0$ with probability~$1$. \end{remark} \begin{corollary} \label{cor:random-triangle-graph-eigenvalues} Let $d \geq c \geq 2$, $d \neq 2$. Fix $\eps > 0$. Let $\bX_n$ be a random signing of a random $n$-lift of the complete bipartite graph $K_{d,c}$, let $\bI_n$ be the associated 2XOR-SAT instance (as in \Cref{sec:graph-notation}), and let $\bL_n$ be its Laplacian matrix. Then \[ \Pr\bigl[\bL_n \text{ has an eigenvalue outside } [(1-\rho_1)^2 - \eps, (1+\rho_1)^2 +\eps]\bigr] = o_{n \to \infty}(1). \] \end{corollary} \begin{proof} This follows from \Cref{thm:random-signed-lift-eigenvalues} and \Cref{prop:triangle-replace-eigs}, together with the identities $cd - \ol{\lambda}^2 = (1-\rho_1)^2$ and $cd - \ul{\lambda}^2 = (1+\rho_1)^2$. \end{proof} \Cref{cor:random-triangle-graph-eigenvalues} now directly implies the following: \begin{theorem} \label{thm:refutation-upper} Let $d \geq c \geq 2$, $d \neq 2$. Fix $\eps > 0$. Let $\bI_n$ be a random 2XOR-SAT instance as in \Cref{cor:random-triangle-graph-eigenvalues}, so $\bI_n$ is $\kappa$-regular ($\kappa = (c-1)d$) with $cn$ variables and $\binom{c}{2}dn$ constraints. Then \[ \Pr\bracks*{\mathrm{EIG}(\bI_n) \geq \frac{(1+\rho_1)^2}{2\kappa} + \eps} = o_{n\to \infty}(1), \] where $\rho_1 = \sqrt{c-1}\sqrt{d-1}$. In case $c = 3$, if we view $\bI_n$ as a random $d$-regular NAE-3SAT instance on~$3n$ variables (chosen according to the random lift/sign model), we have \[ \Pr\bracks*{\mathrm{EIG}(\bI_n) \geq \frac98 - \frac38\cdot\frac{\parens*{\sqrt{d-1} - \sqrt{2}}^2}{d} + \eps} = o_{n\to \infty}(1). \] \end{theorem} As mentioned in \Cref{sec:our-results}, the quantity $\frac98 - \frac38\cdot\frac{\parens*{\sqrt{d-1} - \sqrt{2}}^2}{d}$ decreases from $\frac98$ to $\frac34$ on~$[3,\infty)$ and takes value~$1$ at $d = 13.5$. Thus the above theorem shows that the basic eigenvalue bound refutes a random $d$-regular instance of NAE-3SAT (whp) provided $d > 13.5$. \section{SDP solutions for random instances} As a guide for our construction, let us imagine SDP solutions for the Max-Cut problem on the infinite graph $\mathbb{G}_{d,c}$. (As these imaginings are only for intuition's sake, we will not be completely formal.) To lower bound $\mathrm{SDP}(\mathbb{G}_{d,c})$, it is necessary and sufficient to construct jointly standard Gaussian random variables~$(\bX_v)_{v \in V(\mathbb{G}_{d,c})}$ for which the correlation~$\E[\bX_{u}\bX_{v}]$ --- ``on average'', over all edges $\{u,v\} \in E(\mathbb{G}_{d,c})$ --- is very negative. It's simpler, and stronger, to look for such a Gaussian process in which $\E[\bX_u \bX_v] = \varrho$ for \emph{every} edge~$\{u,v\}$, with $\varrho$ as negative as possible.
Such solutions would give an upper bound for the Lov{\'a}sz theta value, $\LTheta{\mathbb{G}_{d,c}} \leq 1-1/\varrho$, while still giving an SDP lower bound of $\mathrm{SDP}(\mathbb{G}_{d,c}) \geq \frac12 - \frac12 \varrho$. In turn, we would have such a Gaussian process provided it satisfied \begin{equation} \label{eqn:infinite-eigenvalue} \phantom{\quad \text{for all } v \in V(\mathbb{G}_{d,c})} \frac{1}{\kappa}\sum_{u \sim v } \bX_u = \varrho \bX_v \quad \text{for all } v \in V(\mathbb{G}_{d,c}), \end{equation} where, as before, $\kappa = (c-1)d$ is the degree of each~$v$. This is the ``eigenvalue equation'' for $A_{\mathbb{G}_{d,c}}$ for $\lambda = \kappa\varrho$. Thus one may suspect that \Cref{eqn:infinite-eigenvalue} is possible whenever $\lambda = \kappa\varrho \in \Spec(A_{\mathbb{G}_{d,c}})$. Given $\Spec(A_{\mathbb{G}_{d,c}})$ as in \Cref{eqn:infinite-spectra}, we may therefore hope to obtain the desired Gaussian process for any \begin{equation} \label{eqn:cf} \varrho \in \bracks*{\frac{\ul{\lambda}^2 - d}{\kappa}, \frac{\ol{\lambda}^2 - d}{\kappa}} = \bracks*{1 - \frac{(1+\rho_1)^2}{\kappa}, 1 - \frac{(1-\rho_1)^2}{\kappa}}; \end{equation} in particular, for the most negative such value, \begin{equation} \label{eqn:vrstar} \vr^* = 1 - \frac{(1+\rho_1)^2}{\kappa}. \end{equation} This would lead to the lower bound \[ \mathrm{SDP}(\mathbb{G}_{d,c}) \geq \frac12 - \frac12 \vr^* = \frac{(1+\rho_1)^2}{2\kappa}. \] In fact, since $\mathbb{G}_{d,c}$ is a vertex-transitive graph, it follows from a theorem of Harangi and Vir{\'a}g that such Gaussian processes do exist, and they can be constructed in a simple fashion as ``linear block factors of IIDs'': \begin{theorem} (\cite[Theorem~4]{HV15}.) \label{thm:harangi-virag} Let $G$ be an infinite vertex-transitive graph with adjacency operator~$A_G$. Then for each $\lambda \in \Spec(A_G)$, there is an $\mathrm{Aut}(G)$-invariant standard Gaussian process $(\bX_v)_{v \in V(G)}$ for which $\sum_{u \sim v} \bX_u = \lambda \bX_v$ holds for all $v \in V(G)$. Furthermore, the process can be approximated (in distribution) by a ``linear block factor of IID process'', meaning one that is constructed as follows: $(\bZ_v)_{v \in V(G)}$ are chosen as IID standard Gaussians, and then $\bX_v$ is set to be a fixed linear function~$f$ of those $\bZ_u$'s which have $\dist_G(u, v) \leq L$, where $L$ is a finite ``radius''. \end{theorem} As mentioned in \Cref{sec:prior}, results of this nature date back at least to the work of Elon~\cite{Elo09}, who constructed such ``Gaussian waves'' on the infinite $d$-regular tree~$\mathbb{T}_d$. An important aspect of \Cref{thm:harangi-virag} is the ``block'' aspect, meaning that each~$\bX_v$ is defined just from a ``local'', finite number of~$\bZ_u$'s. Thus we can hope to use the construction for (primal graphs of) large but finite $(c,d)$-biregular graphs with large girth, which locally look tree-like. That said, we cannot quite use \Cref{thm:harangi-virag} as a black box for our purposes, for a few reasons. One reason is that we want to apply it to large random biregular graphs, which will not, strictly speaking, have high girth, but will merely have ``few'', ``far apart'' short cycles. Second, we will be constructing SDP solutions for \emph{edge-signed} graphs, a slight generalization of \Cref{thm:harangi-virag}'s framework. Finally, it will be nice for us to reason about $\E[\bX_u \bX_v]$ not just for adjacent $u$,~$v$.
On the other hand, the construction of the linear block factor of IID process for $\mathbb{G}_{d,c}$ is a fairly straightforward generalization of earlier concrete constructions for~$\mathbb{T}_d$ such as the one in~\cite{CGHV15}. We present it in the next section. \subsection{Linear factors of IIDs} Here we essentially prove \Cref{thm:harangi-virag} in the special case of $\mathbb{G}_{d,c}$. The proof closely follows~\cite[Section~3]{CGHV15}. \begin{theorem} \label{thm:idealized-FIID} Let $c,d \geq 2$ and let $\lambda \in \Spec(A_{\mathbb{G}_{d,c}})^\circ = (\ul{\lambda}^2 - d, \ol{\lambda}^2 - d)$. Then there exist~$L \in \N$ and reals $a_0, a_1, \dots, a_L$ such that the following holds: When $(\bZ_v)_{v \in V(\mathbb{G}_{d,c})}$ are IID standard Gaussians, and the random variables $(\bX_v)_{v \in V(\mathbb{G}_{d,c})}$ are formed via \begin{equation} \label{eqn:the-FIID} \bX_v = \sum_{\ell = 0}^L \sum_{\substack{w \in V(\mathbb{G}_{d,c}) \\ \dist(w,v) = \ell}} a_\ell \bZ_w, \end{equation} then we have $\E[\bX_v^2] = 1$ for all $v$ (so that the $\bX_v$'s are jointly standard Gaussians), and ${\E[\bX_u\bX_v] = \frac{\lambda}{\kappa}}$ for all $\{u,v\} \in E(\mathbb{G}_{d,c})$. In other words (cf.~\Cref{eqn:cf}): \begin{equation} \label{eqn:finish} \text{for any} \quad 1 - \frac{(1+\rho_1)^2}{\kappa} < \varrho < 1 - \frac{(1-\rho_1)^2}{\kappa} \quad \text{we can achieve $\E[\bX_u\bX_v] = \varrho$} \quad \forall \{u,v\} \in E(\mathbb{G}_{d,c}). \end{equation} \end{theorem} \begin{proof} Let us temporarily relax the requirement that $L$ be finite. To that end, we will consider defining \begin{equation} \label{eqn:X-FIID} \bX_v = \gamma \cdot \sum_{\ell=0}^\infty \ \sum_{\substack{w \in V(\mathbb{G}_{d,c}) \\ \mathrm{dist}(w,v) = \ell}} r^\ell \bZ_w, \end{equation} for constants $\gamma \in \R^+$, $r \in \R$. It follows that for two vertices $u, v \in V(\mathbb{G}_{d,c})$ with $\dist(u,v) = h$, we have \begin{equation} \label{eqn:general-corr} \E[\bX_u \bX_v] = \gamma^2 \cdot \sum_{j,k = 0}^\infty p^h_{j,k} r^{j+k}. \end{equation} In this proof we focus only on $h = 0, 1$, saving $h > 1$ for \Cref{thm:full-FIID}. By \Cref{prop:pijk} we have \[ \#\braces{w : \mathrm{dist}(w,v) = \ell} = p^0_{\ell,\ell} = \begin{cases} 1 & \text{if $\ell = 0$,} \\ (\rho_1^2 + s_c^2) \cdot \rho_1^{2(\ell-1)} & \text{if $\ell > 0$,} \end{cases} \] where recall $\rho_1^2 + s_c^2 = (c-1)d$ and $\rho_1^2 = (c-1)(d-1)$. Thus \begin{equation} \label{eqn:the-variance} \E[\bX_v^2] = \Var[\bX_v] = \gamma^2 \cdot \parens*{1 + \sum_{\ell=1}^\infty (\rho_1^2 + s_c^2) \cdot \rho_1^{2(\ell-1)} \cdot r^{2\ell}} = \gamma^2 \cdot \frac{1+(s_c r)^2}{1-(\rho_1 r)^2}, \quad \text{provided $|r| < \rho_1^{-1}$.} \end{equation} By choosing $\gamma$ such that \[ \gamma^2 = \frac{1-(\rho_1 r)^2}{1+(s_c r)^2} \] we get $\Var[\bX_v] = 1$. On the other hand, for fixed $u, v$ with $\mathrm{dist}(u,v) = 1$ we have \[ \#\braces*{w : \mathrm{dist}(u,w) = \ell_1, \mathrm{dist}(v,w) = \ell_2} = p^1_{\ell_1,\ell_2} = \begin{cases} (s_c^2-1) \cdot \rho_1^{2(\ell_1 - 1)} & \text{if $\ell_1 = \ell_2 > 0$,} \\ \rho_1^{2\ell_1} & \text{if $\ell_2 = \ell_1 + 1$,} \\ \rho_1^{2\ell_2} & \text{if $\ell_1 = \ell_2 + 1$,} \\ 0 & \text{else,} \end{cases} \] where recall $s_c^2 -1 = c-2$.
Thus \begin{equation} \label{eqn:the-corr} \E[\bX_u \bX_v] = \gamma^2 \cdot \parens*{\sum_{\ell = 1}^\infty (s_c^2-1)\cdot \rho_1^{2(\ell-1)}\cdot r^{2\ell} + \sum_{\ell = 0}^\infty 2\cdot \rho_1^{2\ell} \cdot r^{2\ell+1}} = \gamma^2 \cdot \frac{1+(s_c r)^2 - (1-r)^2}{1-(\rho_1 r)^2}, \end{equation} and so by our choice of $\gamma$ we conclude \[ \E[\bX_u \bX_v] = 1 - \frac{(1-r)^2}{1+(s_c r)^2}. \] Calculus shows that the expression on the right is increasing for $r$ in the range $[-s_c^{-2}, 1]$, which is a superset of the range that \Cref{eqn:the-variance} allows us for~$r$, namely $(-\rho_1^{-1}, \rho_1^{-1})$. This establishes \Cref{eqn:finish}; the only catch is that we haven't used a finite~$L$. But this can be achieved by truncating the sum in \Cref{eqn:X-FIID} to $\ell \leq L$ for $L$ sufficiently large. This truncation only changes \Cref{eqn:the-variance,eqn:the-corr} by a quantity that decays like~$(\rho_1 r)^L$. Thus the change in $\E[\bX_u \bX_v]$ from truncation can be made arbitrarily small, and this is acceptable for the conclusion \Cref{eqn:finish} because the desired interval of~$\varrho$'s is open. \end{proof} \begin{corollary} \label{cor:signed-FIID} \Cref{thm:idealized-FIID} also holds for the primal graph~$\mathbb{I}$ of any edge-signed version $\mathbb{X}$ of $\mathbb{T}_{d,c}$ (as defined in \Cref{sec:graph-notation}), in the sense of having $\E[\bX_u\bX_v] = \xi_{uv} \varrho$ for all $\{u,v\} \in E(\mathbb{I})$, where $\xi_{uv}$ denotes the sign of edge $\{u,v\}$. \end{corollary} \begin{proof} Assume we have signs $\xi_{av} \in \{\pm 1\}$ for each constraint/variable edge $\{a,v\}$ in $\mathbb{X}$, and therefore signs $\xi_{uv} = \xi_{au}\xi_{av}$ for each edge $\{u,v\}$ in $\mathbb{I}$. It's clear that for any closed walk in the tree~$\mathbb{X}$, the product of the edge-signs along the walk is~$1$; by construction, it follows that the same is true in~$\mathbb{I}$. Thus for any $u,v \in V(\mathbb{I})$ (not necessarily adjacent) we can unambiguously define $\xi[u \leftrightarrow v]$ as the product of edge-signs along any $uv$-path in~$\mathbb{I}$. We now alter the construction in \Cref{eqn:X-FIID} as follows: \[ \bX_v = \gamma \cdot \sum_{\ell=0}^\infty \ \sum_{\substack{w \in V(\mathbb{G}_{d,c}) \\ \mathrm{dist}(w,v) = \ell}} \xi[w \leftrightarrow v] r^\ell \bZ_w. \] Clearly $\Var[\bX_v]$ is unchanged. As for $\E[\bX_u \bX_v]$, the contribution from each $\bZ_w$ now yields an additional factor of $\xi[w\leftrightarrow u]\xi[w \leftrightarrow v] = \xi[u \leftrightarrow v] = \xi_{uv}$. Thus each $\E[\bX_u \bX_v]$ changes by a factor of $\xi_{uv}$, as desired. The rest of the proof is the same. \end{proof} \begin{theorem} \label{thm:full-FIID} In the $L = \infty$ setting of \Cref{thm:idealized-FIID}, we in fact obtain, for all $r \in (-\rho_1^{-1}, \rho_1^{-1})$ and all $u,v \in V(\mathbb{G}_{d,c})$, \[ \E[\bX_u \bX_v] = r^h\parens*{1 + \frac{h(1-r)(1 + s_c^2 r)}{1+(s_c r)^2}}, \quad \text{where $h = \dist(u,v)$.} \] (The $r = 0$ case is of course trivial, with $\bX_v = \bZ_v$.) \end{theorem} \begin{proof} Allowing $L$ to be infinite and returning to \Cref{eqn:general-corr}: for $u,v \in V(\mathbb{G}_{d,c})$ with ${\dist(u,v) = h}$, one can use \Cref{prop:pijk} to show (calculations omitted) that \[ \E[\bX_u\bX_v] = \gamma^2 \cdot \frac{r^h(1+(s_c r)^2 + h(1-r)(1 + s_c^2 r))}{1-(\rho_1 r)^2} \] provided $|r| < \rho_1^{-1}$. The result follows.
\end{proof} \begin{remark} \label{rem:for-triangle-ineqs} One can show that the expression in \Cref{thm:full-FIID} has the property that its absolute value is a strictly decreasing function of~$h$ for every $r \neq 0$. (Indeed, it decreases exponentially.) This is the key takeaway of the theorem, implying that in the setting of \Cref{cor:signed-FIID}, $\abs{\E[\bX_u \bX_v]} \leq \abs{\varrho}$ for \emph{all} distinct pairs $u,v \in V(\mathbb{I})$ (with equality when $\{u,v\} \in E(\mathbb{G}_{d,c})$). \end{remark} \subsection{SDP solutions for randomly lifted/signed graphs} In this section, let us fix $d \geq c \geq 2$, a small $\eps > 0$, \[ \varrho = 1 - \frac{(1+\rho_1)^2}{\kappa} + \eps, \] and an~$L = L(\eps, c, d)$ such that \Cref{thm:idealized-FIID} and \Cref{cor:signed-FIID} hold. Since each $\bX_v$ constructed therein depends only on the $\bZ_v$'s at distance at most~$L$ in $\mathbb{G}_{d,c}$ (and hence distance at most $2L$ in $\mathbb{T}_{d,c}$), we see that the exact same construction works equally well on any finite primal graph constructed from a $(c,d)$-biregular graph of girth exceeding~$4L$. Thus (using also \Cref{rem:for-triangle-ineqs}) we immediately obtain: \begin{theorem} \label{thm:sdp-for-girth1} Let $H$ be any edge-signed $(c,d)$-biregular graph of girth exceeding $4L$ and let $I$ be its associated primal graph, with edge signs $\xi_{uv}$, $\{u,v\} \in E(I)$. Then one can assign jointly standard Gaussian random variables $\bX_v$ to the vertices $v \in V(I)$ such that $\E[\bX_u \bX_v] = \xi_{uv} \varrho$ for each edge $\{u,v\} \in E(I)$. Furthermore, $\abs{\E[\bX_u \bX_v]} \leq \abs{\varrho}$ for all distinct $u,v \in V(I)$. As consequences: \begin{enumerate}[label=(\roman*)] \item \label{item:lovasz} If $H$ is unsigned, $\LTheta{I} \leq 1-1/\varrho$. \item \label{item:2xor} If we view $I$ as a 2XOR-SAT instance, we have $\mathrm{SDP}_\triangle(I) \geq \half - \half \varrho \geq \frac{(1+\rho_1)^2}{2\kappa} - \eps$. \item \label{item:3nae} If $c = 3$ and we view $I$ as a $d$-regular NAE-3SAT instance, we have $ \mathrm{SDP}_\triangle(I) \geq \frac34 - \frac34 \varrho \geq \frac98 - \frac38\cdot\frac{\parens*{\sqrt{d-1} - \sqrt{2}}^2}{d} - \eps. $ \end{enumerate} \end{theorem} We have the following corollary: \begin{theorem} \label{thm:sdp-for-girth2} Let $Y$ be a $(c,d)$-biregular bipartite graph and let $\bY_n$ be a random $n$-lift of~$Y$. Let~$\bH_n$ denote an \emph{arbitrary} edge-signing of~$\bY_n$, and $\bI_n$ its associated primal graph. Then: \begin{enumerate} \item With positive probability (depending only on~$Y$ and~$\eps$), \Crefrange{item:lovasz}{item:3nae} of \Cref{thm:sdp-for-girth1} all hold. \item With high probability, \Cref{item:2xor,item:3nae} of \Cref{thm:sdp-for-girth1} hold with an additive loss of~$O(1/n)$. \end{enumerate} \end{theorem} \begin{proof} The first statement is an immediate consequence of \Cref{thm:girthy}. As for the second statement, \Cref{thm:cycleless-neighborhoods} and Markov's inequality imply that, with high probability, only an $O((d+1)^{2L+2}/n) = O(1/n)$ fraction of vertices in~$\bY_n$ are ``$(2L+2)$-bad'' (i.e., have a cycle within their distance-$(2L+2)$ neighborhood). Assuming this holds, we use the linear block factor of IID solution from \Cref{thm:idealized-FIID} and \Cref{cor:signed-FIID} but with a small twist: For each vertex~$v$ that is $2L$-bad in $\bY_n$, rather than using \Cref{eqn:the-FIID} we simply set $\bX_v = \bZ'_v$, where the random variables $\bZ'_v$ are new standard Gaussians independent of all other random variables.
Now for the $1-O(1/n)$ fraction of ``$(2L+2)$-good'' vertices, all their neighbors are still $2L$-good and thus are using the linear block factor of IID solution. We therefore still have $\E[\bX_u \bX_v] = \xi_{uv} \varrho$ for each edge $\{u,v\} \in E(I)$ where $u$ or $v$ is $(2L+2)$-good. Furthermore, we still have $\abs{\E[\bX_u \bX_v]} \leq \abs{\varrho}$ for all distinct $u,v \in V(I)$, since $\E[\bX_u \bX_v] = 0$ when one of $u$ or $v$ is $2L$-bad. The second statement in the theorem therefore follows. \end{proof} \section{Conclusions} In this work we have shown a sharp threshold for the SDP-satisfiability of random $d$-regular NAE-3SAT instances in the model of random lifts. Some open questions that remain are the following: \begin{itemize} \item Can we show similar sharp threshold results in the configuration model? The main challenge is proving Friedman-style bounds on the spectra of random $(c,d)$-biregular bipartite graphs in this model. An advantage to doing this would be the potential to show similar sharp thresholds for $2$-coloring random $d$-regular $3$-uniform hypergraphs (i.e., random $d$-regular NAE-3SAT \emph{without} negations). \item Can we show similar sharp threshold results in the Erd\H{o}s--R{\'e}nyi random model? \item Can our analysis of the 2XOR-SAT SDP / Lov\'{a}sz theta function for the infinite biregular tree $\mathbb{T}_{d,c}$ and its primal graph $\mathbb{G}_{d,c}$ be extended to other interesting classes of infinite graphs (say, vertex-transitive ones)? Are there applications to other finite CSPs? \item A difficult but important open question: can we analyze the performance of higher-degree ``Sum of Squares'' relaxations for refuting random sparse CSPs (that do not support pairwise-uniform distributions)? Even analyzing the degree-$4$ Sum of Squares relaxation for NAE-3SAT or graph $3$-colorability seems very challenging. \end{itemize} \section*{Acknowledgments} This work began at the American Institute of Mathematics workshop ``Phase transitions in randomized computational problems''; the authors would like to thank AIM, as well as the organizers Amir Dembo, Jian Ding, and Nike Sun, for the invitation. R.~O.~would like to thank Charles Bordenave, Sidhanth Mohanty, Doron Puder, Nike Sun, and David Witmer for helpful comments. \bibliographystyle{alpha}
{ "timestamp": "2018-04-17T02:06:57", "yymm": "1804", "arxiv_id": "1804.05230", "language": "en", "url": "https://arxiv.org/abs/1804.05230" }
\section{Introduction} The task of \emph{cognate detection}, i.e., the search for genetically related words in different languages, has traditionally been regarded as a task that is barely automatable. During the last decades, however, automatic cognate detection approaches, starting with \citet{Covington:96} and advanced by \citet{kondrak2002algorithms}, have been constantly improved, both regarding the quality of the inferences \citep{List2017c,Jaeger2017} and the sophistication of the methods \citep{hauer-kondrak:2011:IJCNLP-2011,rama2016siamese,Jaeger2017}, which have been expanded to account for the detection of partial cognates \cite{list-lopez-bapteste:2016:P16-2}, language-specific sound-transition weights \citep{list:2012:LINGVIS2012}, or the search for cognates in whole dictionaries \citep{st2017identifying}. Despite this progress, none of the automated cognate detection methods has been used for the purpose of inferring phylogenetic trees with modern Bayesian phylogenetic methods \citep{yang1997bayesian} from computational biology. Phylogenetic trees are hypotheses of how sets of related languages evolved in time. They can in turn be used for testing additional hypotheses of language evolution, such as the age of language families \citep{gray2003language,chang2015ancestry}, their spread \citep{bouckaert2012mapping,gray2009language}, the rates of lexical change \citep{greenhill2017evolutionary}, or as a proxy for tasks like cognate detection and linguistic reconstruction \citep{bouchardcote2013}. By plotting shared traits on a tree and testing how they could have evolved, trees can even be used to test hypotheses independent from language evolution, such as the universality of typological statements \citep{dunn2011evolved}, or the ancestry of cultural traits \citep{jordan2009matrilocal}. In the majority of these approaches, scholars infer phylogenetic trees with the help of \emph{expert-annotated cognate sets}, which serve as input to phylogenetic software that usually follows a Bayesian likelihood framework. Unfortunately, expert cognate judgments are only available for a small number of language families which look back on a long tradition of classical comparative linguistic research \citep{campbell2008language}. Despite the claims that automatic cognate detection is useful for linguists working on less well studied language families, none of these studies actually tested whether automatically inferred cognates can likewise be used for the important downstream task of Bayesian phylogenetic inference. So far, scholars have only tested distance-based approaches to phylogenetic reconstruction \citep{wichmann2010evaluating,rama2013bchap,jager2013phylogenetic}, which employ aggregated linguistic distances computed from string similarity algorithms to infer phylogenetic trees. In order to test whether automatic cognate detection is useful for phylogenetic inference, we collected multilingual wordlists for five different language families (230 languages, cf. section \ref{subsec:data}) and then applied different cognate detection methods (cf. section \ref{sec:autocog}) to infer cognate sets. We then applied the Bayesian phylogenetic inference procedure (cf. section \ref{sec:bayinf}) to the automated and the expert-annotated cognate sets in order to infer phylogenetic trees. These trees were then evaluated against the \emph{family gold standard trees}, based on external linguistic knowledge \citep{Hammarstroem2017}, using the \emph{Generalized Quartet Distance} (cf. section \ref{subsec:gqd}).
The results are provided in table \ref{tab:gqd}, and the paper is concluded in section \ref{sec:concl}. To the best of our knowledge, this is the first study in which the performance of several automatic cognate detection methods is compared on the downstream task of phylogenetic inference. While we find that on average the trees inferred from the expert-annotated cognate sets come closer to the gold standard trees, the trees inferred from automated cognate sets come surprisingly close to the trees inferred from the expert-annotated ones. \begin{table}[htb] \centering \begin{tabular}{lp{1.5cm}p{1cm}p{1cm}} \toprule \textbf{Dataset} & \textbf{Mngs.} & \textbf{Lngs.} & \textbf{AMC} \\ \midrule Austronesian & 210 & 45 & 0.79 \\ Austro-Asiatic & 200 & 58 & 0.90 \\ Indo-European & 208 & 42 & 0.95 \\ Pama-Nyungan & 183 & 67 & 0.89 \\ Sino-Tibetan & 110 & 64 & 0.91 \\ \bottomrule \end{tabular} \caption{Datasets used in our study. The second, third, and fourth columns show the number of meanings, the number of languages, and the average mutual coverage for each language family, respectively.} \label{tab:data} \end{table} \section{Materials and Methods}\label{sec:exps} \subsection{Datasets}\label{subsec:data} Our wordlists were extracted from publicly available datasets from five different language families: Austronesian \citep{Greenhill2008}, Austro-Asiatic \citep{Sidwell2015}, Indo-European \citep{Dunn2012}, Pama-Nyungan \citep{Bowern2012}, and Sino-Tibetan \citep{Peiros2004}. In order to make the datasets amenable to automatic cognate detection, we had to ensure that the transcriptions employed are readily recognized and that the data are sufficient for those methods which rely on the identification of regular sound correspondences. The problem of transcriptions was solved by applying intensive semi-automatic cleaning. In order to guarantee an optimal data size, we selected a subset of languages from each dataset so as to ensure a high \emph{average mutual coverage} (AMC). AMC is calculated as the average proportion of words shared by all language pairs in a given dataset. All analyses were carried out with version 2.6.2 of LingPy \citep{List2017i}. Table \ref{tab:data} gives an overview of the number of languages, concepts, and the AMC score for all datasets.\footnote{In order to allow for an easy re-use of our datasets, we linked all language varieties to Glottolog \citep{Hammarstroem2017} and all concepts to Concepticon \citep{List2016a}. In addition to the tabular data formats required to run the analyses with our software tools, we also provide the data in form of the format specifications suggested by the Cross-Linguistic Data Formats initiative \citep{Forkel2017a}. Data and source code are provided along with the supplementary material accompanying this paper.} \subsection{Automatic Cognate Detection}\label{sec:autocog} The basic workflow for automatic cognate detection methods applied to multilingual wordlists has been extensively described in the literature \citep{hauer-kondrak:2011:IJCNLP-2011,List2014d}. The workflow can be divided into two major steps: (a)\ word similarity calculation, and (b)\ cognate set partitioning. In the first step, similarity or distance scores for all word pairs in the same concept slot in the data are computed. In the second step, these scores are used to partition the words into sets of presumably related words. A minimal end-to-end sketch of this workflow is given below.
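To make the two steps concrete, the following self-contained sketch (plain Python, with illustrative words and threshold; normalized edit distance stands in for step (a), and a simple single-linkage flat clustering stands in for the flat UPGMA variant used in step (b)) runs the workflow on a toy concept slot:
\begin{verbatim}
from itertools import combinations

def ned(a, b):
    # Normalized edit distance between two sound sequences.
    m, n = len(a), len(b)
    D = list(range(n + 1))
    for i in range(1, m + 1):
        prev, D[0] = D[0], i
        for j in range(1, n + 1):
            prev, D[j] = D[j], min(D[j] + 1, D[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return D[n] / max(m, n, 1)

def flat_clusters(words, threshold=0.75):
    # Step (b): partition words whose pairwise distance falls below the
    # threshold (union-find, i.e. single-linkage flat clustering).
    parent = list(range(len(words)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(words)), 2):
        if ned(words[i], words[j]) <= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i, w in enumerate(words):
        groups.setdefault(find(i), []).append(w)
    return list(groups.values())

# Toy concept slot: two presumed cognate sets for the concept "hand"
print(flat_clusters(["hand", "hant", "hænd", "ruka", "reka"]))
\end{verbatim}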
Since the second step is a mere clustering task for which many solutions exist, the most crucial differences among algorithms can be noted for step\ (a). For our analysis, we tested six different methods for cognate detection: the Consonant-Class-Matching (CCM) method \citep{turchin2010analyzing}, the Normalized Edit Distance (NED) approach \citep{levenshtein1965binary}, the Sound-Class-Based Alignment (SCA) method \citep{List2014d}, the LexStat-Infomap method \citep{List2017c}, the SVM method \citep{Jaeger2017}, and the Online PMI approach \citep{rama2017fast}. The \textbf{CCM} approach first reduces the size of the alphabets in the phonetic transcriptions by mapping consonants to \emph{consonant classes} and discarding vowels. Assuming that different sounds which share the same sound class are likely to go back to the same ancestral sound, words which share the first two consonant classes are judged to be cognate, while words which differ regarding their first two classes are regarded as non-cognate. The \textbf{NED} approach first computes the \emph{normalized edit distance} \citep{Nerbonne:97} for all word pairs in a given semantic slot and then clusters the words into cognate sets using a flat version of the UPGMA algorithm \citep{Sokal1958} and a user-defined threshold of maximal distance among the words. We follow \citet{List2017c} in setting this threshold to 0.75. The \textbf{SCA} approach is very similar to NED, but the pairwise distances are computed with the help of the Sound-Class-Based Phonetic Alignment algorithm \citep{List2014d}, which employs an extended sound-class model and a linguistically informed scoring function. Following \citet{List2017c}, we set the threshold for this approach to 0.45. The \textbf{LexStat-Infomap} method builds on the SCA method by employing the same sound-class model, but individual scoring functions are inferred from the data for each language pair by applying a permutation method and computing the \emph{log-odds scores} \citep{Eddy2004} from the expected and the attested distribution of sound matches \citep{List2014d}. While SCA and NED employ flat UPGMA clustering for step 2 of the workflow, LexStat-Infomap further uses the Infomap community detection algorithm \citep{rosvall2008maps} to partition the words into cognate sets. Following \citet{List2017c}, we set the threshold for LexStat-Infomap to 0.55. The \textbf{OnlinePMI} approach \citep{rama2017fast} estimates the sound-pair PMI matrix using the online procedure described in \citet{liang2009online}. The approach starts with an empty PMI matrix and a list of synonymous word pairs from all the language pairs. It proceeds by aligning each minibatch of word pairs with the current PMI matrix, calculating a PMI matrix from the resulting alignments, and combining the matrix calculated from the latest minibatch with the current one. This procedure is repeated for a fixed number of iterations. We employ the final PMI matrix to calculate a pairwise word similarity matrix for each meaning. In an additional step, each similarity score $x$ is transformed into a distance score using the sigmoid transformation $1.0-(1+\exp(-x))^{-1}$. The word distance matrix is then supplied as an input to the Label Propagation algorithm \citep{raghavan2007near} to infer cognate clusters. We set the threshold for the algorithm to 0.5.
For the \textbf{SVM} approach \citep{Jaeger2017}, a linear SVM classifier was trained with PMI similarity \citep{jager2013phylogenetic}, LexStat distance, mean word length, and the distance between the languages as features, on cognate and non-cognate pairs extracted from the word lists of \citet{wichmann2013languages} and \citet{List2014d}. The details of the training dataset are given in table 1 of \citet{Jaeger2017}. We used the same training settings as reported in that paper to train our SVM model. The trained SVM model is then employed to compute the probability that a word pair is cognate. The word pair probability matrix is then given as input to the Infomap algorithm for inferring word clusters. The threshold for the Infomap algorithm is set to 0.57 after cross-validation experiments on the training data. \begin{table*}[!ht] \small \centering \begin{tabular}{lccccc} \toprule Method & Austro-Asiatic & Austronesian & Indo-European & Pama-Nyungan & Sino-Tibetan\\ \midrule CCM & 0.71 & 0.7 & 0.75 & 0.74 & 0.48 \\ NED & 0.73 & 0.77 & 0.69 & 0.53 & 0.49 \\ SCA & 0.76 & 0.78 & 0.81 & 0.71 & 0.56 \\ LexStat & 0.76 & \cellcolor{lightgray}0.84 & \cellcolor{lightgray}0.83 & 0.84 & \cellcolor{lightgray}0.6 \\ OnlinePMI & 0.76 & 0.81 & 0.82 & 0.72 & 0.56 \\ SVM & \cellcolor{lightgray}0.82 & 0.81 & 0.79 & \cellcolor{lightgray}0.86 & 0.5 \\ \bottomrule \end{tabular} \caption{B-cubed F-scores for different cognate detection methods across the language families.} \label{tab:bcubedfscores} \end{table*} We evaluate the quality of the cognate sets inferred by the methods described above using the B-cubed F-score \citep{amigo2009comparison}, which is widely used for evaluating the quality of automatically inferred cognate clusters \citep{hauer-kondrak:2011:IJCNLP-2011}. We present the cognate evaluation results in table \ref{tab:bcubedfscores}. The SVM system is the best in the case of Austro-Asiatic and Pama-Nyungan, whereas the LexStat algorithm performs best on the rest of the datasets. This is surprising, since LexStat scores are used as features for the SVM and we would expect the SVM system to perform better than LexStat in all the language families. On the other hand, both the OnlinePMI and SCA systems generally perform better than the algorithmically simpler systems such as CCM and NED. Given these F-scores, we hypothesize that the cognate sets output by the best cognate identification systems should also yield the highest-quality phylogenetic trees. However, we find the opposite in our phylogenetic experiments. \section{Bayesian Phylogenetic Inference}\label{sec:bayinf} Bayesian phylogenetic inference is based on Bayes' rule, as given in equation \ref{eq:bphy}: \begin{equation}\label{eq:bphy} f(\tau, v,\theta|X) = \frac{f(X|\tau, v,\theta)f(\tau, v,\theta)}{f(X)} \end{equation} where $X$ is the data matrix, $\tau$ is the topology of the tree, $v$ is the vector of branch lengths, and $\theta$ is the vector of substitution model parameters. The data matrix $X$ is a binary matrix of dimensions $N \times C$, where $N$ is the number of languages and $C$ is the number of cognate clusters in a language family. The posterior distribution $f(\tau, v, \theta|X)$ is difficult to calculate analytically, since one would have to sum over all $\frac{(2N-3)!}{2^{N-2}(N-2)!}$ possible topologies to compute the marginal likelihood in the denominator. However, the posterior probability of all the parameters of interest (here, $ \Psi = \{\tau, v, \theta\}$) can be computed from samples drawn using a Markov chain Monte Carlo (MCMC) method.
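To make the data matrix concrete before turning to the sampler, the following sketch (with toy, entirely hypothetical cognate judgments) binarizes cognate clusters into the $N \times C$ matrix described above, one binary character per cognate set:
\begin{verbatim}
# Toy judgments: language -> {concept: cognate cluster id}
cognates = {
    "English": {"hand": 1, "water": 4},
    "German":  {"hand": 1, "water": 4},
    "French":  {"hand": 2, "water": 5},
}
# One binary character per (concept, cluster): a language scores 1 iff
# its word for that concept belongs to that cognate cluster.
chars = sorted({(c, k) for judg in cognates.values() for c, k in judg.items()})
matrix = {lang: [int(judg.get(c) == k) for (c, k) in chars]
          for lang, judg in cognates.items()}
for lang, row in matrix.items():
    print(f"{lang:8s} {''.join(map(str, row))}")
# English 1010, German 1010, French 0101 (so here N = 3, C = 4)
\end{verbatim}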
Typically, the Metropolis--Hastings (MH) algorithm is used to sample phylogenies from the posterior distribution \citep{huelsenbeck2001bayesian}. The MH algorithm constructs a Markov chain over the parameters' states by proposing changes to a single parameter or a block of parameters in $\Psi$. Suppose the current state in the Markov chain has a parameter value $\theta$, and a new value $\theta^*$ is proposed from a distribution $q(\theta^*|\theta)$; then $\theta^*$ is accepted with probability $\min(1, r)$, where \begin{equation}\label{eq:mhr} r = \frac{f(X|\tau, v,\theta^*)}{f(X|\tau, v,\theta)}\frac{f(\theta^*)}{f(\theta)} \frac{q(\theta|\theta^*)}{q(\theta^*|\theta)}. \end{equation} The likelihood of the data $f(X|\Psi)$ is computed using Felsenstein's pruning algorithm \citep{felsenstein1981evolutionary}, also known as the sum-product algorithm \citep{jordan2004graphical}. We assume that $\tau, \theta, v$ are a priori independent of each other. \section{Experiments}\label{sec:results} In this section, we report the experimental settings, the evaluation measure, and the results of our experiments. \begin{table*}[t] \small \centering \begin{tabular}{llllll} \toprule Method & Austro-Asiatic & Austronesian & Indo-European & Pama-Nyungan & Sino-Tibetan\\ \midrule Expert cognate sets & \bfseries 0.0081 $\pm$ 0.001 &0.1056 $\pm$ 0.0118 &\bfseries 0.0249 $\pm$ 0.0079 &\bfseries 0.1384 $\pm$ 0.0225 &\bfseries 0.0561 $\pm$ 0.0123 \\\midrule CCM & 0.0243 $\pm$ 0.018 &0.0854 $\pm$ 0.0176 &0.0369 $\pm$ 0.0148 &0.1617 $\pm$ 0.0162 &0.1424 $\pm$ 0.027 \\ NED & 0.0265 $\pm$ 0.007 &\cellcolor{lightgray}0.0458 $\pm$ 0.0152 &0.046 $\pm$ 0.0132 &0.196 $\pm$ 0.0166 &0.1614 $\pm$ 0.0282 \\ SCA &0.0152 $\pm$ 0.0035 &0.0514 $\pm$ 0.013 &\cellcolor{lightgray}0.0256 $\pm$ 0.009 &0.166 $\pm$ 0.0153 &\cellcolor{lightgray}0.0704 $\pm$ 0.0206 \\ LexStat & 0.0267 $\pm$ 0.0085 &0.0848 $\pm$ 0.0226 & 0.0314 $\pm$ 0.0091 &\cellcolor{lightgray}0.1507 $\pm$ 0.0143 &0.0786 $\pm$ 0.0209 \\ OnlinePMI & 0.0158 $\pm$ 0.0048 &0.1056 $\pm$ 0.0198 & 0.0457 $\pm$ 0.0135 & 0.1717 $\pm$ 0.0185 &0.1184 $\pm$ 0.031 \\ SVM & \cellcolor{lightgray}0.0146 $\pm$ 0.0039 &0.0989 $\pm$ 0.0224 & 0.0452 $\pm$ 0.011 & 0.1827 $\pm$ 0.0237 &0.1199 $\pm$ 0.0269 \\ \bottomrule \end{tabular} \caption{The mean and standard deviation for each method and family is computed from 7500 posterior trees. The automatic method which comes closest to the gold standard phylogeny is shaded in gray, and where the expert cognate sets perform best, this is indicated with a \textbf{bold} font.} \label{tab:gqd} \end{table*} All our Bayesian analyses use binary datasets with states $0$ and $1$. We employ the General Time Reversible model \citep[chapter 1]{yang2014molecular} for computing the transition probabilities between individual states. The rate variation across sites is modeled using a four-category discrete $\Gamma$ distribution \citep{yang1994maximum}. We follow \citet{lewis2001likelihood} and \citet{felsenstein1992phylogenies} in correcting the likelihood calculation for the ascertainment bias resulting from unobserved all-\texttt{0} patterns. We used a uniform tree prior \citep{ronquist2012total} in all our analyses, which constructs a rooted tree and draws the internal node heights from a uniform distribution.
In our analysis, we assume an Independent Gamma Rates relaxed clock model \citep{lepage2007general}, where the rate for a branch $j$ of length $b_j$ in the tree is drawn from a Gamma distribution with mean 1 and variance $\sigma^2_{IG}/b_j$, where $\sigma^2_{IG}$ is a parameter sampled in the MCMC analysis. We infer $\tau, v, \theta$ from two independent random starting points and sample every 1000th state in the chain until the phylogenies from the two independent runs do not differ beyond $0.01$. For each dataset, we ran the chains for 15 million generations and discarded the initial $50\%$ of the chain's states as burn-in. After that, we computed the generalized quartet distance from each of the posterior trees to the gold standard tree, as described in subsection \ref{subsec:gqd}. All our experiments are performed using MrBayes 3.2.6 \cite{zhang2015total}. \subsection{GQD}\label{subsec:gqd} \citet{pompei2011accuracy} introduced the Generalized Quartet Distance (GQD) as an extension of the Quartet Distance (QD) for comparing binary trees with a polytomous tree, since gold standard trees can have non-binary internal nodes. It has been widely used for comparing inferred language phylogenies with gold standard phylogenies \citep{greenhill2010accurate,Wichmann:2011:2210-5824:205,jager2013phylogenetic}. QD measures the distance between two trees in terms of the number of different quartets \citep{estabrook1985comparison}. A quartet is defined as a set of four leaves selected from a set of leaves without replacement. A tree with $n$ leaves has ${n \choose 4}$ quartets in total. A quartet defined on four leaves $a,b,c,d$ can have four different topologies: $ab|cd$, $ac|bd$, $ad|bc$, and $ab\times cd$. The first three topologies have an internal edge separating two pairs of leaves. Such quartets are called \emph{butterflies}. The fourth quartet has no internal edge and is known as a star quartet. Given a tree $\tau$ with $n$ leaves, the quartets can be partitioned into a set of butterflies, $B(\tau)$, and a set of stars, $S(\tau)$. Then, the QD between $\tau$ and $\tau_g$ is defined as $1-\frac{|S(\tau)\cap S(\tau_g)|+|B(\tau)\cap B(\tau_g)|}{{n \choose 4}}$. The QD formulation thus counts a butterfly in an inferred tree $\tau$ as an error even when the corresponding quartet in the gold standard tree is a star. The tree $\tau$ should not be penalized if an internal node in the gold standard tree $\tau_g$ is $m$-ary. To this end, \citet{pompei2011accuracy} defined a new measure known as GQD to discount the presence of star quartets in $\tau_g$: GQD is defined as $DB(\tau, \tau_g)/B(\tau_g)$, where $DB(\tau, \tau_g)$ is the number of quartets that form butterflies in both trees but with differing topologies. We extracted gold standard trees from Glottolog \citep{Hammarstroem2017} for the purpose of evaluating the inferred posterior trees from each automated cognate identification system. We note that the Bayesian inference procedure produces rooted trees with branch lengths, whereas the gold standard trees do not have any branch lengths. Although there are other linguistic phylogenetic inference algorithms, such as that of \citet{ringe2002indo}, we do not test them, since the software is not available and does not scale to datasets with more than twenty languages.
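As a toy illustration of this quartet bookkeeping, the following brute-force sketch (plain Python; the nested-tuple tree encoding and the use of the four-point condition on path lengths to classify quartets are our own choices) computes GQD between a polytomous gold tree and a binary tree:
\begin{verbatim}
from collections import deque
from itertools import combinations

def tree_adj(tree):
    # Adjacency of a leaf-labelled tree given as nested tuples; inner
    # tuples may have any arity, so polytomies are representable.
    adj, cnt = {}, [0]
    def walk(node):
        if not isinstance(node, tuple):
            adj.setdefault(node, set())
            return node
        cnt[0] += 1
        me = ("int", cnt[0])
        adj.setdefault(me, set())
        for child in node:
            ch = walk(child)
            adj[me].add(ch)
            adj[ch].add(me)
        return me
    walk(tree)
    return adj

def all_dists(adj):
    # Breadth-first path lengths between all pairs of nodes.
    out = {}
    for s in adj:
        dd, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dd:
                    dd[w] = dd[u] + 1
                    q.append(w)
        out[s] = dd
    return out

def topology(dist, quartet):
    # Four-point condition: the pairing with the strictly smallest sum of
    # path lengths is the butterfly; a tie for the minimum means a star.
    a, b, c, d = quartet
    sums = sorted([
        (dist[a][b] + dist[c][d], frozenset([frozenset([a, b]), frozenset([c, d])])),
        (dist[a][c] + dist[b][d], frozenset([frozenset([a, c]), frozenset([b, d])])),
        (dist[a][d] + dist[b][c], frozenset([frozenset([a, d]), frozenset([b, c])])),
    ], key=lambda t: t[0])
    return None if sums[0][0] == sums[1][0] else sums[0][1]

def gqd(gold, test):
    dg, dt = all_dists(tree_adj(gold)), all_dists(tree_adj(test))
    leaves = sorted(v for v in dg if isinstance(v, str))
    diff = butterflies = 0
    for q in combinations(leaves, 4):
        tg = topology(dg, q)
        if tg is None:
            continue        # star quartets in the gold tree are not penalized
        butterflies += 1
        diff += topology(dt, q) != tg
    return diff / butterflies

gold = ((("a", "b", "c"), "d"), "e")    # gold tree with one polytomy {a,b,c}
test = ((("a", "b"), ("c", "d")), "e")  # a binary "inferred" tree
print(gqd(gold, test))                  # 0.666...: 2 of 3 gold butterflies differ
\end{verbatim}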
\subsection{Results} The results of our experiments are given in table \ref{tab:gqd}. A lower average GQD score implies that the inferred trees are closer to the gold standard phylogeny. Except for Austronesian, Bayesian inference based on expert cognate sets yields trees that are very close to the gold standard tree. Surprisingly, the algorithmically simple NED and CCM systems often show better performance than the machine-learned SVM model, with Austro-Asiatic and Sino-Tibetan being the exceptions. SCA is a subsystem of LexStat but emerges as the winner in two language families (Indo-European and Sino-Tibetan). Given that SCA is outperformed by SVM and LexStat in automatic cognate detection, this is very surprising, and further research is needed to find out why the simpler models perform well on phylogenetic reconstruction. Although our results indicate that expert-coded cognate sets are generally more suitable for phylogenetic reconstruction, we can also see that the difference from trees inferred from automated cognate sets is not very large. \section{Conclusion}\label{sec:concl} In this paper, we carried out a preliminary evaluation of the usefulness of automated cognate detection methods for phylogenetic inference. Although the cognate sets predicted by automated cognate detection methods yield phylogenetic trees that come close to expert trees, there is still room for improvement, and future research is needed to further enhance automatic cognate detection methods. However, as our experiments show, expert-annotated cognate sets are also not free from errors, and it seems likewise useful to investigate how the consistency of cognate coding by experts could be further improved. As future work, we intend to create a cognate identification system that combines the output of different algorithms in a more systematic way, to infer cognate sets with the combined system, and to evaluate the resulting phylogenies against the gold standard trees. {\begin{spacing}{1.2} \section*{Acknowledgments} \footnotesize This research was supported by the ERC Advanced Grant 324246 EVOLAEMP (GJ, JW), the DFG-KFG 2237 Words, Bones, Genes, Tools (GJ), the ERC Starting Grant 715618 CALC (JML), and the BIGMED project (TR). We thank our anonymous reviewers for helpful comments, Mei-Shin Wu for helping with the Sino-Tibetan data, Claire Bowern and Tiago Tresoldi for helping with the Pama-Nyungan data, Paul Sidwell for helping with the Austro-Asiatic data, as well as the audience at the CESC 2017 conference (MPI-SHH, Jena) for their helpful comments on an earlier version of the paper.\end{spacing}}
{ "timestamp": "2018-04-17T02:11:50", "yymm": "1804", "arxiv_id": "1804.05416", "language": "en", "url": "https://arxiv.org/abs/1804.05416" }
\section{Introduction} Learning to translate between two image domains is a common problem in computer vision and graphics, and has many potentially useful applications including colorization \cite{pix2pix}, photo generation from sketches \cite{pix2pix}, inpainting \cite{inpainting}, future frame prediction \cite{framepredict}, superresolution \cite{superres}, style transfer \cite{style}, and dataset augmentation. It can be particularly useful when images from one of the two domains are scarce or expensive to obtain (for example by requiring human annotation or modification). \par Until recently, the problem was posed as a supervised learning problem, i.e., a one-to-one mapping, with training datasets of paired images from each domain \cite{pix2pix}. However, obtaining paired images is difficult and resource-intensive, so it is helpful to learn to map between unpaired image distributions. Multiple approaches have successfully been applied to this task in recent months \cite{cyclegan,cogan,unit,xgan,stargan}. Most of the work done in this area deals with translation of images between a single pair of distributions. In this work, we generalize this translation mechanism to multiple pairs. In other words, given a set of distributions which share an underlying joint distribution, we construct a set of translators that can convert images belonging to any of the distributions on which they were trained into any of the other distributions. The effectiveness of these translators is exhibited by treating them as a set of composable functions which can be applied on top of one another. Further, we explore whether these models are capable of disentangling the shared and individual components of different distributions. For example, instead of learning to translate from a smiling person that is wearing glasses to a person that is not smiling and not wearing glasses, or a horse in a field on a summer's day to a zebra in a field on a winter's day, we learn to translate from wearing glasses to not wearing glasses, smiling to not smiling, horse to zebra, and summer to winter separately, then compose the results. \par There are a number of potential advantages to this approach. It becomes possible to learn granular unpaired image to image translations, whilst only having access to either less granular or no labels. It facilitates training on larger datasets since only the marginal, more general labels are required. It gives finer grained control to users of the translation process since they can compose different translation functions to achieve their desired results. Finally, it makes it possible to generate entirely new combinations, by translating to combinations of the marginal distributions that never appeared in the training set. We also experiment with different training mechanisms to efficiently train models on multiple distributions and show that decoupled training performs better than joint training; overall, decoupled training followed by some fine-tuning via joint training produces the best results. \section{Related Work} \label{ref:related-work} \textbf{Generative Adversarial Networks}: Image generation through GANs \cite{gan} and their several variants such as DCGAN \cite{dcgan} and WGAN \cite{wgan} has been groundbreaking in terms of how realistic the generated samples are.
The adversarial loss originally introduced in \cite{gan} has led to the creation of a new kind of architecture in generative modelling and has subsequently been applied in several areas such as \cite{pix2pix, inpainting, framepredict}. It consists of a generator and a discriminator, wherein the former learns to generate novel realistic samples in order to fool the latter, while the latter's objective is to distinguish between real samples and generated ones. The two are trained against each other in a minimax game over the adversarial loss. \textbf{Image-to-Image translation}: Supervised image-to-image translation \cite{pix2pix} has achieved outstanding results where the data used for training is available in one-to-one pairs. Apart from the adversarial loss, it uses an L1 (reconstruction) loss as well, which has since become common practice in these types of tasks. Unsupervised methods take samples of images from two distributions and learn to cross-translate between them. This introduces the well-known issue of there being infinitely many mappings between the two unpaired image domains \cite{cyclegan,cogan,unit,xgan,stargan}, and so further constraints are required to do well on the problem. \cite{cyclegan} introduces the requirement that translations be cycle-consistent; mapping an image $x \in X$ to domain $Y$ and back again to $X$ must yield an image that is close to the original. \cite{cogan} takes a different approach, enforcing weight sharing between the early layers of the generators and later layers of the discriminators. \cite{unit} combines these two ideas and models each image domain using a VAE-GAN. \cite{xgan} utilizes a reconstruction loss and a teacher loss instead of a VAE, using a pretrained teacher network to ensure that the encoder output lies in a meaningful subregion. To our knowledge, only \cite{stargan} has presented results on generating translations between samples of multiple distributions. However, their generator is conditioned on supervised labels. \section{Method} \label{ref-method} Our work broadly builds on the shared-latent-space assumption \cite{cogan}, which posits that we can learn a latent code $z$ that represents the joint distribution $P(x_1, x_2)$, given samples from the marginal distributions $P(x_1)$ and $P(x_2)$. The generator or translator consists of an encoder $E_i$, a shared latent code $z$, and a decoder $G_i$, such that $z = E_1(x_1)$, $z = E_2(x_2)$ and $x_1 = G_1(z)$, $x_2 = G_2(z)$. The composability property of these translators would then be as follows: $x_2 = G_2 \circ E_1(x_1)$ and vice versa. In other words, we want the encoder to map the characteristics that image samples from the two distributions share to $z$, while each decoder should learn the characteristics unique to the distribution on which it was trained and apply that transformation to any given input sample. We extend this framework to learn composite functions for $|N|$ distributions. To formalize, given sets of samples from distributions $N = \{X_1, X_2, ..., X_{|N|}\}$ with an unknown but existing joint distribution $P(X_1, X_2, ..., X_{|N|})$, we learn a set of composite functions and a shared latent space such that $x_j = G_j \circ E_i (x_i)$ for any $i, j \in \{1, \dots, |N|\}$, giving us a total of $|N|^2$ possible transformations. We approach this problem bottom-up and take $|N| = 4$ sets of sample images. \subsection{Model architecture} We extend the model proposed by Liu et al.\ \cite{unit}
to learn to simultaneously translate between two pairs of image distributions (making four distributions in total). There are four encoders, four decoders, and four discriminators in our model, one for each image distribution. Additionally, there is a shared latent space following \cite{unit}, consisting of the last layers of the encoders and the first layers of the decoders. See Figure \ref{fig-model} for more detail. \par \begin{figure}[h] \includegraphics[width=12cm, height=6cm]{im2im2im.png} \centering \caption{Model architecture. Distributions have been selected from the CelebA dataset \cite{celeba} having the following unique properties: smiling, not-smiling, eyeglasses, no-eyeglasses.} \label{fig-model} \end{figure} We felt that sharing a latent space, introduced in \cite{cogan} and used to great effect in \cite{unit}, has applications beyond improving pairwise domain translations, and could improve the composability of image translations. Sharing a latent space implements the assumption that there exists a single latent code $z$ from which images in any of the four domains can be recovered \cite{unit}. If this assumption holds, then complex image translations can be disentangled into simpler image translations which learn to map to and from this shared latent code. \par \subsection{Objective} We adapted the objective function from \cite{unit}, benefiting from the extensive tuning that its authors carried out. Since we had access to limited computational resources, we kept the same weightings as \cite{unit} on the individual components of the loss function for a single pairing. There are three components to the objective function for each learned translation, making twelve elements in total. \begin{align} &\min_{E_1,E_2,E_3,E_4,G_1,G_2,G_3,G_4}\max_{D_1,D_2,D_3,D_4} \mathcal{L} = \nonumber\\ &\mathcal{L}_{\text{\tiny VAE}_1}(E_1,G_1) +\mathcal{L}_{\text{\tiny GAN}_1}(E_1,G_1,D_1) +\mathcal{L}_{\text{\tiny CC}_1}(E_1,G_1,E_2,G_2)\nonumber\\ + &\mathcal{L}_{\text{\tiny VAE}_2}(E_2,G_2) + \mathcal{L}_{\text{\tiny GAN}_2}(E_2,G_2,D_2)+\mathcal{L}_{\text{\tiny CC}_2}(E_2,G_2,E_1,G_1)\nonumber\\ + &\mathcal{L}_{\text{\tiny VAE}_3}(E_3,G_3) +\mathcal{L}_{\text{\tiny GAN}_3}(E_3,G_3,D_3) +\mathcal{L}_{\text{\tiny CC}_3}(E_3,G_3,E_4,G_4)\nonumber\\ + &\mathcal{L}_{\text{\tiny VAE}_4}(E_4,G_4) + \mathcal{L}_{\text{\tiny GAN}_4}(E_4,G_4,D_4)+\mathcal{L}_{\text{\tiny CC}_4}(E_4,G_4,E_3,G_3) \end{align} The VAE loss objective is responsible for ensuring that the model can reconstruct an image from the same domain. That is, $$G_i(E_i(x)) \approx x$$ The adversarial loss objective is responsible for ensuring that the decoder (or generator $G_i$) generates realistic samples when translating an image into domain $X_i$, as evaluated by the discriminator $D_i$. Finally, the cycle-consistency component ensures that when the model translates an image from domain $X_1$ to $X_2$ and back to $X_1$, the resulting image is similar to the original. That is, $$G_1(E_2(G_2(E_1(x)))) \approx x $$ We refer readers to \cite{unit} for a full explanation and motivation of these different elements. \subsection{Training and Inference} The model is conceptually split into two, with each part responsible for learning to translate between one pair of distributions. Each of the three loss components (reconstruction, GAN, and cycle-consistency) is enforced within the pair; a minimal sketch of the resulting objective and of the composed inference follows, and the pairing itself is enumerated after it.
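The sketch below is a minimal, hypothetical PyTorch rendering of this setup, not our actual implementation (which builds on the public UNIT codebase): the \texttt{Encoder} and \texttt{Decoder} modules, the image size, and the unit loss weights are illustrative stand-ins, and the KL and adversarial terms are omitted for brevity.
\begin{verbatim}
# Minimal sketch (PyTorch). Module definitions and weights are
# hypothetical stand-ins, not the released UNIT-based code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):          # image -> shared latent code z
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, 2, 1))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):          # shared latent code z -> image
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

E = nn.ModuleList([Encoder() for _ in range(4)])   # E_1..E_4 (0-based)
G = nn.ModuleList([Decoder() for _ in range(4)])   # G_1..G_4 (0-based)

def translate(x, i, j):
    # x_j = G_j(E_i(x_i))
    return G[j](E[i](x))

def pair_loss(x_a, x_b, a, b):
    # Reconstruction (VAE stand-in, KL omitted) plus cycle-consistency
    # for one pair (a, b); adversarial terms via D_a, D_b omitted.
    recon = F.l1_loss(translate(x_a, a, a), x_a) + \
            F.l1_loss(translate(x_b, b, b), x_b)
    cycle = F.l1_loss(translate(translate(x_a, a, b), b, a), x_a) + \
            F.l1_loss(translate(translate(x_b, b, a), a, b), x_b)
    return recon + cycle

x1, x2, x3, x4 = (torch.randn(2, 3, 32, 32) for _ in range(4))
loss = pair_loss(x1, x2, 0, 1) + pair_loss(x3, x4, 2, 3)  # decoupled pairs

# Composed inference ("double loop") described below: f_2(f_4(x)),
# i.e. G_1(E_2(G_3(E_4(x)))) in the paper's 1-based notation.
y = translate(translate(x4, 3, 2), 1, 0)
\end{verbatim}
The last two statements show the key point: translators trained only within their own pair are stacked as ordinary function compositions at inference time.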
\begin{enumerate} \item (E1,E2,G1,G2,D1,D2): Learns $f_1:X_1 \Rightarrow X_2$, and $f_2:X_2 \Rightarrow X_1$ \begin{itemize} \item $f_1(x) = G_2(E_1(x))$ \item $f_2(x) = G_1(E_2(x))$ \end{itemize} \item (E3,E4,G3,G4,D3,D4): Learns $f_3:X_3 \Rightarrow X_4$, and $f_4:X_4 \Rightarrow X_3$ \begin{itemize} \item $f_3(x) = G_4(E_3(x))$ \item $f_4(x) = G_3(E_4(x))$ \end{itemize} \end{enumerate} The shared latent space between all of the encoders and generators is responsible for ensuring realistic translations to image distributions the model has not seen before. \par At inference time we complete a ``double-loop'' through the model. Suppose we had learned the following translations: \begin{itemize} \item $f_1$: glasses to no glasses \item $f_2$: no glasses to glasses \item $f_3$: smiling to not smiling \item $f_4$: not smiling to smiling \end{itemize} Then to translate from someone who is not smiling and not wearing glasses to smiling and wearing glasses, we do: \begin{equation} \begin{aligned} & \text{not smiling, no glasses} \Rightarrow \text{smiling, no glasses} \Rightarrow \text{smiling, glasses} \\ & \Leftrightarrow f_2(f_4(x)) \\ & \Leftrightarrow G_1(E_2(G_3(E_4(x)))) \end{aligned} \end{equation} In contrast to the above approach, the straightforward choice would be to train this model jointly with the objective of minimizing $\mathcal{L}$. However, as $|N| \rightarrow \infty$, this method does not scale. Hence we present the training strategy given above, which splits the shared latent space $z$ and trains $\frac{|N|}{2}$ pairs separately. It must be noted that at this point in the training, there has been no weight sharing between pair 1 ($X_1$ and $X_2$) and pair 2 ($X_3$ and $X_4$). In Section \ref{ref:expriments} we see that this method results in better quality samples at inference time as compared to joint training from scratch. However, we hypothesized that having a shared latent space would improve the translation quality, so we experimented with training the models in a decoupled manner first and then jointly training all the models, sharing a latent space, for a few iterations of fine-tuning. We found that this approach yielded the best results (see Section \ref{ref:expriments}). \section{Experiments} \label{ref:expriments} We conducted all of our experiments using the CelebA dataset \cite{celeba}. This dataset consists of 202,599 images, each labeled with 40 binary attributes, for example brown hair, smiling, eyeglasses, beard, and mustache \cite{celeba}. These binary attributes naturally lend themselves to composition, making this an ideal dataset to test our proposed model. We focused on translating between glasses, no glasses, smiling, not smiling (experiment $1$), and blonde hair, brown hair, smiling and not smiling (experiment $2$). For each experiment we constructed four datasets, one corresponding to each image distribution with the relevant characteristic. So that we could test our models for their ability to generate combinations of characteristics that did not appear in the training set, we ensured that there were no faces which were smiling and wearing glasses in experiment $1$, and no faces which were smiling with either blonde or brown hair in experiment $2$. \par We experimented with the following training approaches. \begin{itemize} \item \textbf{Four way}: Training the model described in Section \ref{ref-method} from scratch \item \textbf{Separately Trained (Baseline)}: Following the method and model architectures from \cite{unit}.
To compose the image translation we first passed an image through one model, then another. \item \textbf{Warm start}: First training separate models. Then initializing the model described in Section \ref{ref-method} with the weights from the separately trained models, and continuing to train in order to fine-tune. \end{itemize} The baseline model was intended to help test the role of the shared latent space between all four distributions. If the latent space is helpful, the translations of the four way or warm start model should be better than those of the baseline model. High quality translations should have the following characteristics: realism, variety, and the clear presence of the translated feature, distinct from the pre-translated image. To evaluate our models on these criteria we used three evaluation metrics. \begin{itemize} \item \textbf{Realism}: qualitative, manual examination of the generated images \item \textbf{Variety}: Low cycle consistency loss. The lower this loss, the less likely a model is to have mode collapse. If a model experiences mode collapse and translates all examples to only a few images, then it will be unable to reconstruct the original image from the translated image well. \item \textbf{Presence of translated feature}: We trained an 11-layer VGG \cite{vgg} net using original images from the dataset to classify examples into four classes, one for each possible combination of features for each experiment. Then we selected a batch of 100 original images from a single class (e.g. blonde and not smiling, eyeglasses and not smiling), translated them to every other class using our model, and classified them after every translation. If a model is making clear translations, the class they are classified into should change with each translation. To mitigate the fact that our classifier was imperfect, we excluded any images that the classifier was not able to classify correctly. \end{itemize} \section{Results} \textbf{Realism}: Overall the warm start model described in Section \ref{ref:expriments} generated the most visually appealing and coherent double translations (see Figures \ref{fig-eye-smile} and \ref{fig-hair-smile}). The presence of the translated features is clear and generally integrated in a coherent way, with minimal distortions or artifacts. The model is able to successfully handle atypical translations, such as adding glasses when one eye is occluded (see Figure \ref{fig-eye-smile}). The warm start model is significantly better than a joint model trained from scratch. This is clear from Figure \ref{fig-eye-smile}, and we were not able to train a joint model from scratch for experiment 2 that exhibited results on par with the other methods. Interestingly, the separately trained models generated reasonably good double translations, particularly in experiment two (Figure \ref{fig-hair-smile}), and were significantly better than a joint model trained from scratch. This suggests that unpaired image to image translation already exhibits some composability. However, enforcing a shared latent space and fine tuning these models (the warm start training approach) does seem to improve the overall quality of images. This is particularly apparent in the results from experiment 1 (Figure \ref{fig-eye-smile}). These results suggest that the more scalable decoupled training strategy, in which $\frac{|N|}{2}$ pairs are trained separately and then fine-tuned through joint training, is also the approach which yields the highest quality results.
We were, however, unable to experiment with more than four distributions due to time and resource constraints. \par Finally, what is particularly exciting about these results is that our best model has no problem generating images with combinations of characteristics that never appeared in the training set. In experiment 1, there are no pictures of people wearing glasses and smiling, and yet the model generates high quality images of people smiling and wearing glasses (see the rightmost image in the triplets in Figure \ref{fig-eye-smile}). Similarly, for experiment 2 the training data contained no smiling faces with either blonde or brown hair. \begin{figure}[h] \includegraphics[width=14cm]{eye_smile.png} \centering \caption{Selected results from experiment 1 for all three models. For each triplet of images, the image on the left is the original image, selected from the CelebA dataset \cite{celeba}. They are all not smiling and not wearing glasses. The center image is the translation to not smiling and wearing glasses. The image on the right is the second translation to smiling and wearing glasses.} \label{fig-eye-smile} \end{figure} \begin{figure}[h] \includegraphics[width=10cm]{hair_smile.png} \centering \caption{Selected results from experiment 2 for the warm start and baseline models. For each triplet of images, the image on the left is the original image, selected from the CelebA dataset \cite{celeba}. They are all not smiling and have either blonde or brown hair. The center image is the translation to not smiling and either blonde or brunette, depending on the original hair color. The image on the right is the second translation to smiling.} \label{fig-hair-smile} \end{figure} \textbf{Variety}: Generally, our models were able to reconstruct the original image from the translated image well, suggesting they did not suffer from mode collapse. This is also consistent with what we observed by inspecting the generated images.\par \textbf{Presence of translated features} (Quantitative Analysis): Figure \ref{ref:clf1} shows our assessment of translation quality. An 11-layer VGG classifier trained on the classes blonde \& not smiling, brunette \& not smiling, blonde \& smiling, and brunette \& smiling (87\% accuracy) is able to separate the translated images into their respective classes very effectively for the baseline model. For the warm start model, the classifier gets somewhat confused between smiling and not smiling images. This suggests that fine-tuning the model may distort the generated samples in a way that yields mixed classification decisions. We leave investigating this for future work. \begin{figure}[h] \includegraphics[width=14cm]{clf1.png} \centering \caption{A batch of 100 blonde \& not smiling images is classified and then translated; the correctly classified images are translated again, and so on. Some samples from the batch are displayed here. The labels are 0: Blonde \& Not Smiling, 1: Brunette \& Not Smiling, 2: Blonde \& Smiling, 3: Brunette \& Smiling.} \label{ref:clf1} \end{figure} \section{Further Work} The joint models we trained sometimes dropped one of the translation modes, most noticeably translating from not smiling to smiling. We hypothesize that this was because this translation was the most difficult of the four. This could potentially be remedied by increasing the contribution to the loss function from this translation.
More generally, it would be interesting to explore the effect of varying the contribution from the many different loss components more fully. Time and computational resource constraints prevented us from doing this.\par We constructed four image domains from a single more general domain, celebrity faces \cite{celeba}. This ensured that the domains were fundamentally related. It would be interesting to explore the degree of relatedness between different image domains required to achieve good results. For example, given outdoor scenes labeled with the weather (e.g. snow, sun, rain), and outdoor scenes containing different animals (e.g. horse, zebra), could we learn to translate from horses in sunshine to zebras in snow? \section{Conclusion} In this work we extend an existing model of unpaired image to image translation to handle multiple pairs of distributions. We devise scalable training methods, with a modified architecture and objective, for this kind of model, and compare the results obtained with each of these methods. We set qualitative and quantitative evaluation criteria and assess the performance of our model in various training scenarios. Moreover, we demonstrate the translation flexibility of our model by using the translators as stacked composable functions for multi-way translation into novel distributions. \section*{Acknowledgments} We are grateful to M. Liu, T. Breuel, and J. Kautz for making their research and codebase publicly available, and to Professor Rob Fergus for his valuable advice.
{ "timestamp": "2018-04-17T02:12:46", "yymm": "1804", "arxiv_id": "1804.05470", "language": "en", "url": "https://arxiv.org/abs/1804.05470" }
\section{Introduction} Cataclysmic variables (CVs) are close binary systems in which a white dwarf (WD) primary accretes material from a late-type secondary star, via Roche-lobe overflow (see \citealt{warner95a} for a review). Non-magnetic CVs are classified into three main sub-types -- the novae, the dwarf novae and the nova-likes. The {\em novae} are defined as systems in which only a single nova eruption has been observed. Nova eruptions have typical amplitudes of 10 magnitudes and are believed to be due to the thermonuclear runaway of hydrogen-rich material accreted onto the surface of the white dwarf. The {\em dwarf novae} (DNe) are defined as systems which undergo quasi-regular (on timescales of weeks--months) outbursts of much smaller amplitude (typically 6 magnitudes). Dwarf nova outbursts are believed to be due to instabilities in the accretion disc causing it to collapse onto the white dwarf. The {\em nova-like} variables (NLs) are the non-eruptive CVs, i.e. objects which have never been observed to show nova or dwarf nova outbursts. The absence of dwarf nova outbursts in NLs is believed to be due to their high mass-transfer rates, producing ionised accretion discs in which the disc-instability mechanism that causes outbursts is suppressed \citep{osaki74}; the mass transfer rates in NLs are $\dot{M} \sim 10^{-9}$ M$_{\odot}$ yr$^{-1}$ whereas DNe have rates of $\dot{M} \sim 10^{-11}$ M$_{\odot}$ yr$^{-1}$ \citep{warner95a}. Our understanding of CV evolution has made great strides in recent years (e.g. \citealt{knigge10}, \citealt{knigge11}). However, one of the main unsolved problems in CV evolution is: how can the different types of CV co-exist at the same orbital period? Theory predicts that all CVs evolve from longer to shorter orbital periods on timescales of gigayears, and as they do so the mass-transfer rate also declines. At periods longer than approximately 5 hours, all CVs should have high mass-transfer rates and appear as nova-likes, whereas below this period the lower mass-transfer rate allows the disc-instability mechanism to operate and all CVs should appear as dwarf novae \citep{knigge11}. This theoretical expectation, however, is in stark contrast to observations, which show that nova-likes are far more common than dwarf novae in the 3--4 hr period range \citep{rodriguez07}. Two possible explanations for the coexistence of nova-likes and dwarf novae at the same orbital periods have been proposed, both of which invoke cycles in $\dot{M}$ on timescales shorter than the gigayear evolutionary timescale of CVs. The first explanation is that the $\dot{M}$ cycles are caused by irradiation from the accreting WD, which bloats the secondary and hence increases $\dot{M}$ (e.g. \citealt{buning04}). \citet{knigge11} found that irradiation would cause bloating of ${<}3\%$ above the period gap, leading to modest fluctuations in $\dot{M}$ on timescales of the order of $10^6-10^9$ yr, insufficient to explain the full range in $\dot{M}$ that is observed. The second explanation for variable $\dot{M}$ is a nova-induced cycle. Some fraction of the energy released in the nova event will heat up the WD, leading to irradiation and subsequent bloating of the secondary. Following the nova event, the system would have a high $\dot{M}$ and appear as a NL. As the WD cools, $\dot{M}$ reduces and the system changes to a DN, or possibly $\dot{M}$ ceases altogether and the system goes into hibernation.
Hence CVs are expected to cycle between nova, NL and DN states, on timescales of $10^4-10^5$ yrs (see \citealt{shara86}). The cyclical evolution of CVs through nova, NL and DN phases recently received observational support from the discovery that BK Lyn appears to have evolved through all three phases since its likely nova outburst in the year AD 101 \citep{patterson13}. A second piece of evidence has come from the discovery of nova shells around the dwarf novae Z Cam and AT Cnc (\citealt{shara07}, \citealt{shara13}), verifying that they must have passed through an earlier nova phase. \citet{shara17} also found a nova shell from Nova Sco 1437 and were able to associate it with a nearby dwarf nova using its proper motion. A more obvious place than DNe to find nova shells is actually around NLs, as the nova-induced cycle theory suggests that the high $\dot{M}$ in NLs is due to a recent nova outburst. Finding shells around NLs would lend further support to the existence of nova-induced cycles, and hence help explain why systems with different $\dot{M}$ are found at the same orbital period. In our earlier paper (\citealt{sahman15}; hereafter S15), we presented the initial results of our search for nova shells around CVs. We reported the tentative discovery of a possible shell around the nova-like V1315 Aql (orbital period 3.35 hr). We subsequently obtained intermediate-resolution spectroscopy of this shell, in an effort to determine its physical characteristics and to ascertain if it is associated with the nova-like. The results of these spectroscopic observations, along with a more in-depth analysis of the H$\alpha$ images of the V1315 Aql shell shown in S15, are presented in this paper. \section{Observations and Data Reduction} \subsection{Observations} \subsubsection{INT images} We used the Wide Field Camera\footnote{http://www.ing.iac.es/astronomy/instruments/wfc/} at the prime focus of the 2.5m Isaac Newton Telescope on La Palma to image V1315 Aql on the night of 2014~August~2. This setup gave a platescale of 0.33$\arcsec$/pixel and a field of view of approximately $34\arcmin \times 34\arcmin$. H$\alpha$ is generally the strongest feature in the spectra of nova shells, with a velocity width of up to 2000 km\,s$^{-1}$ (e.g. \citealt{duerbeck87}). In order to maximise the detection of light from the shell and minimise the contribution of sky, we therefore used a narrow-band (95\AA\ FWHM = 4300 km\,s$^{-1}$) interference filter centred on the rest wavelength of H$\alpha$ (ING filter number 197\footnote{http://catserver.ing.iac.es/filter/list.php?instrument=WFC}). We took eight 900s H$\alpha$ exposures, with four of the images dithered by $\pm 20\arcsec$ in both RA and Dec. The observing conditions were good throughout the run: the sky was always photometric, there was no evidence of dust and the seeing was 1.5$\arcsec$. \subsubsection{Keck DEIMOS spectra} We used the DEIMOS \citep{faber03} multi-slit spectrograph on the 10m Keck II telescope on Hawaii, on the night of 2015~June~13. We obtained 39 spectra of 300s duration each, using the 1200G grating centred on 6000\AA\,\, and the GG455 order-blocking filter. This gave a wavelength coverage of 4550--7500\AA, with a FWHM resolution of 1.6\AA. The seeing was 0.7$\arcsec$, and there was some thin cloud present. The slit mask design requires that the slits cannot overlap in the spatial direction, so we placed seven slits around the edges of the roughly circular shell. We also placed a slit on V1315 Aql itself and chose four nearby stars for flux calibration.
We identified two areas of blank sky for sky subtraction. The positions of each slit on the sky are shown in Figure \ref{fig:mask1}, and full details of the position, orientation and wavelength coverage of each slit are given in Table \ref{tab:journal}. \begin{table*} \caption[]{V1315 Aql DEIMOS slit positions, sizes and spectral range coverage. The RA and Dec positions are for the centres of the slits.} \begin{center} \begin{tabular}{lllccccc} \hline \multicolumn{1}{l}{Slit name} & \multicolumn{1}{l}{RA} & \multicolumn{1}{l}{Dec} & \multicolumn{1}{c}{Slit} & \multicolumn{1}{c}{Slit} & \multicolumn{1}{c}{Slit} & \multicolumn{2}{c}{Wavelength range} \\ & \multicolumn{1}{l}{(degs)} & \multicolumn{1}{l}{(degs)} & \multicolumn{1}{c}{Length} & \multicolumn{1}{c}{Position} & \multicolumn{1}{c}{Width} & \multicolumn{1}{c}{Start(\AA)} & \multicolumn{1}{c}{End (\AA)} \\ & & & \multicolumn{1}{c}{(arcsecs)} & angle & \multicolumn{1}{c}{(arcsecs)} & & \\ & & & & (degs) & & & \\ \hline Blank\,\,Sky 1 & 288.5230602 & 12.2217710 & 58.872 & 154.4 & 0.7 & 4780 & 7432 \\ Blank Sky 2 & 288.4896981 & 12.3217610 & 49.423 & 154.4 & 0.7 & 4867 & 7514 \\ Shell 1 & 288.4995975 & 12.3003892 & 32.904 & 154.4 & 1.0 & 4868 & 7521 \\ Shell 2 & 288.5116182 & 12.2922447 & 46.933 & 150.0 & 1.0 & 4915 & 7576 \\ Shell 3 & 288.4560525 & 12.3369014 & 74.181 & 170.0 & 1.0 & 4686 & 7342 \\ Shell 4 & 288.4507472 & 12.3566674 & 65.419 & 170.0 & 1.0 & 4699 & 7343 \\ Shell 5 & 288.5171327 & 12.2647206 & 59.595 & 154.4 & 1.0 & 4877 & 7529 \\ Shell 6 & 288.4757487 & 12.2611586 & 43.389 & 150.0 & 1.0 & 4614 & 7272 \\ Shell 7 & 288.4367229 & 12.3103574 & 43.386 & 130.0 & 1.0 & 4487 & 7170 \\ V1315 Aql & 288.4769928 & 12.3013719 & 42.173 & 154.4 & 1.0 & 4735 & 7382 \\ Star 1 & 288.5492156 & 12.2172214 & 49.155 & 154.4 & 1.0 & 5058 & 7565 \\ Star 2 & 288.5403078 & 12.2461305 & 45.629 & 154.4 & 1.0 & 5047 & 7592 \\ Star 3 & 288.5135945 & 12.2472213 & 42.009 & 154.4 & 1.0 & 4898 & 7455 \\ Star 4 & 288.5604882 & 12.2053443 & 61.154 & 154.4 & 1.0 & 5038 & 7586 \\ \hline \end{tabular} \end{center} \label{tab:journal} \end{table*} \begin{figure*} \centering \includegraphics[width=140mm,angle=0]{v1315.pdf} \caption{INT WFC H$\alpha$ image of the nova shell around V1315 Aql. The binary is located at the centre of the image. North is up and East is left.} \label{fig:zoom} \end{figure*} \begin{figure*} \centering \includegraphics[width=140mm,angle=0]{v1315slit9.pdf} \caption{INT WFC H$\alpha$ image of the nova shell around V1315 Aql with the Keck DEIMOS slit positions and sizes overlaid. V1315 Aql is situated at \textit{x}=0, \textit{y}=0. The seven shell slits are numbered, and the two blank sky slits are also shown. The four flux calibration stars are marked Star 1--4. North is up and East is left. The horizontal band across the image at \textit{y}$\sim -380$ is the gap between two of the CCDs in the WFC mosaic.} \label{fig:mask1} \end{figure*} \subsubsection{0.5m Telescope -- La Palma} Our Keck spectra included four stars for flux calibration but unfortunately they did not appear in any photometric catalogues. In order to allow us to perform flux calibration, we therefore obtained additional images of the four stars together with two catalogue stars (TYC 1049-408-1 and IPHAS J1911411.93+121357.7) using the 0.5m robotic telescope \textit{pt5m} on La Palma \citep{hardy15}. 
The observations were taken on 2016 October 7, when we took four images in each of the \textit{B, V, R, I} filters with an exposure time of 1 minute each, and on 2016 November 18, when we took four 40 sec \textit{R}-band images and four 360 sec \textit{B}-band images. \subsection{Data reduction} \subsubsection{INT images} The INT images were debiased using the median level of the overscan strip and flat-fielded using normalised twilight sky flats. All image processing was carried out using {\sc theli}\footnote{http://www.ing.iac.es/astronomy/instruments/wfc/WFC-THELI-reduction.html}. Figure \ref{fig:zoom} shows the final stacked image of the shell. \subsubsection{Keck DEIMOS spectra} We used {\sc iraf} to reduce the DEIMOS spectra. The spectra were bias corrected using the overscan strip on the chips, and were flat-fielded using quartz lamp flats. We had difficulty in performing the background sky subtraction because the two blank sky slits we had chosen both contain small residual H$\alpha$ emission lines, possibly from the nova shell. We then tried using the sky portion of our four flux calibration stars, but we found that the spectra of the three closest to the shell (Stars 1--3) also contained low levels of residual H$\alpha$ emission (see Figure \ref{fig:haspec}). The best results were obtained with sky from Star 4, which is furthest from the shell and showed negligible H$\alpha$ emission -- this was used for all subsequent background sky subtraction. \subsubsection{pt5m images} The images were bias and flat field corrected using standard {\sc iraf} procedures. This allowed us to derive magnitudes for the four flux calibration stars, as shown in Table \ref{tab:mags}. We also found 2MASS infrared magnitudes \citep{skrutskie06} for Star 2, making it the star with the most complete set of magnitudes. We input these values into the Virtual Observatory SED Analyser (VOSA -- see \citealt{bayo08}) to determine the spectral type of Star 2, obtaining M4V ($\pm 2$). We then used VOSA to generate a template spectrum of an M4V star, which we used to flux calibrate the Keck spectra in {\sc iraf}. \begin{table} \caption[]{Magnitudes of the four flux-calibration stars observed with Keck DEIMOS. The errors on the \textit{B, V, R, I} magnitudes are $\pm0.3$ magnitudes. See Table \ref{tab:journal} for the positions of the stars on the sky.} \begin{center} \begin{tabular}{lrrrrr} \hline \multicolumn{1}{l}{Band} & \multicolumn{1}{r}{Star 1} & \multicolumn{1}{r}{Star 2} & \multicolumn{1}{r}{Star 3} & \multicolumn{1}{r}{Star 4} & \multicolumn{1}{r}{} \\ \hline \textit{B} & 18.8 & 16.5 & -- & -- \\ \textit{V} & 17.6 & 15.3 & 19.5 & 18.3 \\ \textit{R} & 16.9 & 14.5 & 18.4 & 16.5 \\ \textit{I} & 15.1 & 13.7 & 17.2 & 14.6 \\ 2MASS \textit{J} & -- & 12.508 & -- & -- \\ 2MASS \textit{H} & -- & 11.798 & -- & -- \\ 2MASS \textit{K} & -- & 11.646 & -- & -- \\ \hline \end{tabular} \end{center} \label{tab:mags} \end{table} \subsubsection{Review of satellite imagery} We searched the GALEX UV satellite footprint using the GalexView interface (\citealt{bianchi14}), but no observations were taken of the field around V1315 Aql. We also examined the WISE 22$\mu$m data (\citealt{wright10}), and there was no emission in the vicinity of V1315 Aql. \section{Results} \label{sec:res} \subsection{INT image} The H$\alpha$ image of the shell surrounding V1315 Aql is shown in Figure \ref{fig:zoom}. The images clearly show one, possibly two, roughly spherical shells centred on V1315 Aql.
The lobe towards the West has the most prominent emission. There was no evidence of nebulosity on wider scales than shown in Figure \ref{fig:mask1}. There is a possibility that the shell is unrelated to V1315 Aql, and may just be a line-of-sight alignment with a foreground or background cloud of gas in the Milky Way. To establish that the shell does indeed originate from V1315 Aql, we need to determine whether it has the same systemic velocity as the binary and whether its composition is comparable to other nova shells, and to rule out other types of nebulosity, e.g. planetary nebulae and supernova remnants. \subsection{Geometry of the shell} In Figure \ref{fig:geocirc} we show the image of the shell with circles centred on V1315 Aql overlaid. The radii of the circles are 100$\arcsec$, 180$\arcsec$ and 240$\arcsec$. The inner annulus between 100$\arcsec$ and 180$\arcsec$ contains the most prominent areas of emission (from the North around to the West), and appears to be centred on V1315 Aql. The outer annulus also contains a fainter arc of emission to the North-West, and some fainter areas of emission to the South-East, which also appear to be centred on V1315 Aql. \begin{figure*} \centering \includegraphics[width=140mm,angle=0]{v1315circ1.pdf} \caption{INT WFC H$\alpha$ image of the shell with overlaid circles centred on V1315 Aql of radii 100$\arcsec$, 180$\arcsec$ and 240$\arcsec$. North is up and East is left.} \label{fig:geocirc} \end{figure*} \subsection{Keck DEIMOS spectra} In Figure \ref{fig:vtot} we show the spectrum of V1315 Aql. The spectrum shows strong, broad (the FWHM of H$\alpha$ is 900 km\,s$^{-1}$) Balmer and He{\sc{I}} emission lines from the accretion disc. The spectrum is very similar to that shown in \citet{dhillon95b}. \begin{figure} \centering \includegraphics[width=85mm,angle=0]{v1315tot3.pdf} \caption{Keck DEIMOS spectrum of V1315 Aql. Note that we did not flux calibrate this spectrum because the flux calibration stars do not cover its whole wavelength range.} \label{fig:vtot} \end{figure} \subsubsection{Emission lines} \label{emline} The spectra of the seven shell slits and the blank sky slits in the range 6540--6600\AA\, are shown in Fig. \ref{fig:haspec}. Note that the blank sky 2 slit spanned two CCDs in the spectrograph and each part is shown separately. The shell spectra all show single-peaked emission lines of H$\alpha$ and a pair of N[II] lines at 6548 and 6583\AA. These lines are characteristic of old nova shells \citep{downes01}. \begin{figure*} \centering \includegraphics[width=180mm,angle=0]{shellspec3-4.pdf} \caption{Spectra of the seven shell slits (see Fig. \ref{fig:mask1}), and the blank sky slits from 6540--6600\AA. The flux is in units of $\mu$Jy arcsec$^{-2}$. The slit spectra all show the presence of H$\alpha$ and N[II] 6548\AA\,\,and N[II] 6583\AA. H$\alpha$ is also present in the blank sky slits. The Blank Sky 2 slit fell across two CCDs on the detector and so we show each spectrum separately, as 2a and 2b.} \label{fig:haspec} \end{figure*} We also detected H$\beta$ in those shell spectra that covered 4861\AA. Unfortunately, none of the spectra of the four flux-calibration stars covered this wavelength, and hence we were unable to flux calibrate the H$\beta$ lines. We also found the S[II] 6716 and 6731\AA\, lines in shell slits 2, 3, 5, 6 and 7, as shown in Figure \ref{fig:siispec}. The average ratio of the two S[II] lines is 1:1.4.
\begin{figure*} \centering \includegraphics[width=150mm,angle=0]{v1315sii-5.pdf} \caption{Spectra of the five shell slits from 6710--6740\AA\, showing the S[II] emission lines at 6716 and 6731\AA.} \label{fig:siispec} \end{figure*} We searched for the emission lines N[II] 5755\AA\,, O[I] 6300, 6364\AA\, and O[III] 4363, 4959, 5007\AA, often seen in nova shell spectra, but none were detected. There are faint lines at 5679, 5740 and 5742\AA, presumably from NI and NII, in shell slits 3 and 4, but these are not present in any other slits. There is also H$\alpha$ emission present in both sky portions of the slit centred on V1315 Aql, though any N[II] lines present are lost in the noise. We show the H$\alpha$ line profile from the sky on the South-East side of the V1315 Aql slit in Figure \ref{fig:hazoom}. \begin{figure} \centering \includegraphics[width=85mm,angle=0]{v1315spec1.pdf} \caption{H$\alpha$ spectrum of the sky from the South-East side of the V1315 Aql slit. The error bars show the noise levels of the background sky.} \label{fig:hazoom} \end{figure} The FWHM of the H$\alpha$ and N[II] lines of all the shell slits are listed in Table \ref{tab:fwhm}. \begin{table*} \caption[]{The first column shows the radial velocities in km s$^{-1}$ of the H$\alpha$ emission line in the spectra of the seven V1315 Aql shell slits and the sky portion of the four flux calibration stars. The errors on the velocities are $\pm5$ km\,s$^{-1}$. The other columns show the FWHM in km s$^{-1}$ of each line. The errors on the FWHM are $\pm8$ km\,s$^{-1}$. The H$\alpha$ line in the V1315 Aql sky has a double peak and both radial velocities are shown. All values and errors were obtained from Gaussian fits to the emission lines.} \begin{center} \begin{tabular}{lrrrrrrrrrr} \hline \multicolumn{1}{l}{Slit name} & \multicolumn{1}{r}{Radial} & \multicolumn{1}{r}{H$\alpha$} & \multicolumn{1}{r}{N[II]} & \multicolumn{1}{r}{N[II]} & \multicolumn{1}{r}{H$\beta$} & \multicolumn{1}{r}{NI} & \multicolumn{1}{r}{NII} & \multicolumn{1}{r}{NI} & \multicolumn{1}{r}{SII} & \multicolumn{1}{r}{SII} \\ & \multicolumn{1}{r}{Velocity} & & \multicolumn{1}{r}{6548\AA} & \multicolumn{1}{r}{6583\AA} & & \multicolumn{1}{r}{5676\AA} & \multicolumn{1}{r}{5740\AA} & \multicolumn{1}{r}{5742\AA} & \multicolumn{1}{r}{6716\AA} & \multicolumn{1}{r}{6731\AA} \\ \hline Shell 1 & $-8$ & 71 & 82 & 72 & 0 & 0 & 0 & 0 & 0 & 0 \\ Shell 2 & $-8$ & 86 & 65 & 74 & 0 & 0 & 0 & 0 & 67 & 78 \\ Shell 3 & $-5$ & 80 & 79 & 79 & 122 & 143 & 206 & 0 & 68 & 68 \\ Shell 4 & $-3$ & 79 & 59 & 75 & 89 & 78 & 0 & 124 & 0 & 0 \\ Shell 5 & $-9$ & 75 & 54 & 75 & 0 & 0 & 0 & 0 & 52 & 64 \\ Shell 6 & $-26$ & 84 & 87 & 80 & 0 & 0 & 0 & 0 & 78 & 76 \\ Shell 7 & 4 & 89 & 87 & 86 & 0 & 0 & 0 & 0 & 91 & 84 \\ V1315 Aql Sky& $-33$ \& 14 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Star 1 & 11 & & & & & & & & & \\ Star 2 & $-23$ & & & & & & & & & \\ Star 3 & 29 & & & & & & & & & \\ Star 4 & 14 & & & & & & & & & \\ \hline \end{tabular} \end{center} \label{tab:fwhm} \end{table*} \subsubsection{Systemic velocity of V1315 Aql} Historically, the systemic velocity of V1315 Aql, $\gamma$, has been difficult to determine because of the complex behaviour of its disc emission lines and the lack of absorption lines from the primary and secondary stars. \citet{downes86} presented radial velocity data for the H$\beta$, H$\gamma$ and HeII 4686\AA\, emission lines.
They derived values for $\gamma$ consistent with zero from the H$\beta$ and HeII 4686\AA\, lines, but the H$\gamma$ line gave a value of 100 km\,s$^{-1}$. \citet{dhillon91} also used the H$\beta$, H$\gamma$ and HeII 4686\AA\, emission lines and the HeI 4471\AA\, line and derived a $\gamma$ range of $-4$ to +93 km\,s$^{-1}$. Given the unreliability of determining $\gamma$ from the broad emission lines of the accretion disc, in the following subsection we use our own measurements of the radial velocity of the shell to determine whether the two are consistent. \subsubsection{Radial velocities of shell emission lines} To measure the radial velocities of the emission lines, we fitted a Gaussian to the H$\alpha$ line of the shell and measured the wavelength at the centre of the Gaussian. The resulting shell radial velocities are shown in Table \ref{tab:fwhm}. The spectrum of the sky on the South-East side of the slit centred on V1315 Aql is shown in Figure \ref{fig:hazoom}. The plot shows tentative evidence of a double-peaked structure. We measured the radial velocity of each peak to be $-33$ and 14 km\,s$^{-1}$. If we assume that the two peaks represent emission from the front and back sides of a spherically-expanding shell, then the average of the two gives a systemic velocity of $\gamma \approx -10$ km\,s$^{-1}$, and an expansion velocity of $\sim$25 km\,s$^{-1}$. We analysed the sky on the North-East side of V1315 Aql and it too showed a double-peaked structure, although it is less pronounced. The seven shell slits were placed at the edges of the shell. The expansion velocity of the edge of the shell will be tangential to the line of sight and will not affect the radial velocities, which should be similar to the overall systemic velocity. The measured shell radial velocities are shown in Table \ref{tab:fwhm} and are broadly comparable with the systemic velocity of $-10$ km\,s$^{-1}$ derived above, apart from shells 6 and 7 which differ by 16 and 14 km\,s$^{-1}$ respectively. The Galactic velocity of V1315 Aql relative to the Sun can be derived from its Galactic coordinates, $l=46.4^{\circ}, b=0^{\circ}$, which give a radial velocity of 7 km\,s$^{-1}$. This is broadly consistent with the systemic velocity derived above. \subsubsection{Line fluxes} The fluxes of the emission lines from the shell in each of the slits are given in Table \ref{tab:flux}. Assuming a shell radius of 220$\arcsec$, a distance of 489 parsecs \citep{ak2008}, and using the H$\alpha$ flux from each slit, we can estimate the total flux from the whole shell. However, we can see by examining Figure \ref{fig:geocirc} that the shell is fragmented and clumpy and only a small fraction is actually emitting. If we assume that 10\% of the full shell is emitting and take an average H$\alpha$ flux from the seven shell slits of $1.70 \times 10^{-17}$ ergs/cm$^2$/sec/arcsec$^{2}$, we obtain a total H$\alpha$ luminosity of $7.1 \times 10^{30}$ ergs/sec. The plots of \citet{downes01} showing the temporal reduction in the H$\alpha$ luminosities of shells from fast and slow novae have a lower limit of $\log L = 30$ at 100 yrs. We note that the source of the V1315 Aql luminosity is likely to include emission from shock interaction with pre-existing ISM. This would enhance the flux, and lead to an underestimate of the age of the shell. We conclude that the shell is likely to be significantly older than 100 yrs. \begin{table} \caption[]{H$\alpha$ and N[II] flux (ergs/s/cm$^2$/arcsec$^2$ $\times$ $ 10^{-18}$) from the seven shell slits.
The errors on the flux values are $\pm 25\%$. } \begin{center} \begin{tabular}{crrr} \hline \multicolumn{1}{l}{Shell slit No.} & \multicolumn{1}{r}{H$\alpha$} & \multicolumn{1}{r}{N[II]} & \multicolumn{1}{r}{N[II]} \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{r}{6548\AA} & \multicolumn{1}{r}{6583\AA} \\ \hline 1 & 17.2 & 4.22 & 10.8 \\ 2 & 18.1 & 2.10 & 7.64 \\ 3 & 18.3 & 4.16 & 11.8 \\ 4 & 24.2 & 1.47 & 3.03 \\ 5 & 9.58 & 0.73 & 3.62 \\ 6 & 15.4 & 2.64 & 5.61 \\ 7 & 16.2 & 4.80 & 11.9 \\ \hline \end{tabular} \end{center} \label{tab:flux} \end{table} \subsubsection{Time of nova eruption} \label{time} The distance to V1315 Aql was measured by \citet{ak2008} as $489 \pm 49$ pc, computed from the Period-Luminosity-Colours (PLCs) relation of CVs calibrated with {\em 2MASS} photometric data. The angular radius of the shell on our image is $\sim$4$\arcmin$, giving a physical radius of $1.7 \times 10^{13}$ km. \citet{duerbeck87} found that the velocity of nova shells reduces by half every 50--100 yrs. Using our measured expansion velocity of $\sim$25 km\,s$^{-1}$, and assuming an initial velocity of 2,000 km\,s$^{-1}$ (see Table 8.1 of \citealt{bode08}), we estimate that the nova explosion occurred $\sim$500--600 yrs ago. However, if we take more extreme values for the initial ejection velocity, say 700 km\,s$^{-1}$, and a deceleration half-life of 200 yrs, then the age of the nova increases to $\sim$1,200 yrs. Assuming the visual magnitude of V1315 Aql was the same prior to the nova event as it is now, m$_{V}=14.3$, and taking the average brightening of a nova to be $\sim$11 magnitudes (\citealt{bode08}), the system would have been at m$_{V}\sim$ 3.3 at peak brightness, clearly visible to the naked eye. Novae decline rapidly and so it would have dropped below the naked-eye visibility limit of m$_{V}\sim$ 6 within a few days. We reviewed the catalogues of ancient Chinese and Asian novae and supernovae sightings by \citet{stephenson76}, which include sightings from 532 BC up to 1604 AD. We could find no record of an event close to the coordinates of V1315 Aql. If the nova eruption occurred when V1315 Aql was close to the Sun in the sky, and it was brighter than m$_{V}\sim$ 6 for only a few days, it may well have been hidden in twilight and hence gone unnoticed. \subsubsection{Temperature and density of the shell} The method most often used to determine the temperature and density of gaseous nebulae is to measure the ratio of the intensities of particular emission lines from the same species of ions. Two ions which are often used are N[II] and O[III]. We were unable to detect any O[III] lines in our spectra, and the N[II] ratio requires a flux measurement of the 5755\AA\,\,line, which we were only able to detect very weakly in shell slit 1. It was not present in any other slit. Hence we can only place an upper limit on the electron temperature ($T_e$) of the shell of 5,000\,K, using Figure 5.1 from \citet{osterbrock89}. \subsubsection{Mass of the shell} We can derive a rough estimate of the mass of the shell using the technique set out in \citet{corradi2015}.
They derived the ionised hydrogen masses of several planetary nebulae using the formula \begin{equation} m_{\mathrm{shell}}(H^{+})=\frac{4 \pi \,D^2\,F(\mathrm{H}\beta)\,m_{\mathrm{p}}}{h\nu_{\mathrm{H}\beta}\,n_\mathrm{e}\,\alpha_{\mathrm{H}\beta}^{eff}(H^0,T_\mathrm{e})}, \end{equation} where \textit{D} is the distance to the object, \textit{F}(H$\beta$) is the H$\beta$ flux, \textit{m}$_{\mathrm{p}}$ is the mass of a proton, \textit{h$\nu_{\mathrm{H}\beta}$} is the energy of an H$\beta$ photon, \textit{n$_{\mathrm{e}}$} is the electron density per cm$^3$, and \textit{$\alpha_{\mathrm{H}\beta}^{eff}(H^0,T_{\mathrm{e}})$} is the effective recombination coefficient for H$\beta$. This formula is also applicable to nova shells \citep{osterbrock89}. As we pointed out in Section \ref{emline}, the spectra of our four flux calibration stars do not cover H$\beta$, so we are unable to derive a flux directly. However, we can make a rough estimate as follows. The H$\beta$ line is present in four shell slits (Nos. 3, 4, 6 and 7). We can measure the counts for both H$\alpha$ and H$\beta$. The DEIMOS exposure time calculator for a source that is flat in frequency gives the ratio of counts for H$\alpha$\,:\,H$\beta$ as approximately 1\,:\,0.3. Assuming that 10\%\ of the full shell is emitting and taking an average H$\alpha$ flux from the seven shell slits of $1.70 \times 10^{-17}$ ergs/cm$^2$/sec/arcsec$^{2}$, we obtain a total H$\alpha$ flux of $2.49 \times 10^{-13}$ ergs/cm$^2$/sec from the whole shell, allowing us to derive an H$\beta$ flux of \textit{F}(H$\beta$) $= 8.9 \times 10^{-14}$ ergs/cm$^2$/sec. The electron density, \textit{$n_e$}, can be estimated using the S[II] 6716 and 6731 line ratio, which we found to be 1.4 (see Section \ref{emline}). Figure 5.8 in \citet{osterbrock89} shows the electron density versus intensity ratio at \textit{T$_{\mathrm{e}}$} = 10,000\,K and indicates a scaling of \textit{n$_e$}(10$^4/$\textit{T$_{\mathrm{e}}$})$^{1/2} $. We found a maximum temperature of 5,000\,K, which gives an electron density of $\sim$ 22 cm$^{-3}$. Finally, using the distance of 489 pc measured by \citet{ak2008}, and a value for $\alpha_{\mathrm{H}\beta}^{eff}(H^0,T_\mathrm{e})$ of $3.78 \times 10^{-14}$ for Case A conditions at $T_{\mathrm{e}}$ = 5000\,K, listed in Table 4.1 of \citet{osterbrock89}, we obtain a maximum mass of \begin{equation} m_{\mathrm{shell}}(H^{+}) \simeq 2 \times 10^{-4} M_{\odot}. \end{equation} There is no need to correct for extinction, as \citet{rutten92b} found $E(B-V)=0$ for V1315 Aql using \textit{IUE} spectra of interstellar absorption bands around 2200\AA. In view of the many assumptions used to estimate the mass of the shell, it should be treated as an order of magnitude approximation. As nova shells expand they decelerate as they sweep up pre-existing circumstellar gas, which leads to a doubling of their mass every 50--100 yrs \citep{duerbeck87}. We estimated the age of the shell in Section \ref{time} as 500--1200 yrs, so the original ejected mass of the shell would have been substantially lower than the value we have derived above, giving a maximum ejected mass of $\la 10^{-5}$ M$_{\odot}$. This rules out a planetary nebula origin, since planetary nebulae typically have masses in the range 0.1--1.0 M$_{\odot}$ \citep{osterbrock89}. Nova shells typically have masses in the range 10$^{-4}$--10$^{-6}$~M$_{\odot}$ (\citealt{yaron05}), so our estimate of the shell mass in V1315 Aql of $\sim 10^{-5}$ M$_{\odot}$ is in accordance with this.
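As a consistency check, the order-of-magnitude estimates in this and the preceding subsections can be reproduced from the quoted inputs with a few lines of Python. The sketch below uses only numbers stated in the text; residual differences of a factor of a few (most visibly in the mass) reflect rounding and the approximations already discussed, so it should be read as an illustration rather than a re-derivation.
\begin{verbatim}
# Order-of-magnitude checks, cgs units; inputs as quoted in the text.
import math

PC_CM, M_SUN, M_P = 3.086e18, 1.989e33, 1.673e-24
D = 489 * PC_CM                         # distance (ak2008), cm
area = 4 * math.pi * D**2               # 4*pi*D^2, cm^2

# Age from the decelerating expansion (velocity halves every 50-100 yr,
# duerbeck87): from ~2000 km/s down to the measured ~25 km/s.
halvings = math.log2(2000 / 25)         # ~6.3 halvings
print("age ~ %.0f-%.0f yr" % (50 * halvings, 100 * halvings))
# -> ~320-630 yr, bracketing the ~500-600 yr quoted above

# Total H-alpha flux and luminosity: mean surface brightness times the
# projected shell area, with 10% of the shell assumed to be emitting.
S, theta, f_emit = 1.70e-17, 220.0, 0.10
F_Ha = S * math.pi * theta**2 * f_emit
print("F(Ha) ~ %.1e erg/cm2/s" % F_Ha)        # ~2.5e-13, as quoted
print("L(Ha) ~ %.1e erg/s" % (area * F_Ha))   # ~7e30, as quoted

# Ionised hydrogen mass from the equation above.
F_Hb = 8.9e-14                          # inferred H-beta flux
E_Hb = 6.626e-27 * 2.998e10 / 4.861e-5  # H-beta photon energy, erg
n_e, alpha = 22.0, 3.78e-14             # density, recomb. coefficient
m = area * F_Hb * M_P / (E_Hb * n_e * alpha)
print("m_shell ~ %.0e Msun" % (m / M_SUN))
# -> a few x 1e-4 Msun, the same order as the ~2e-4 quoted above
\end{verbatim}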
\section{Discussion} \label{sec:disc} We can summarise our findings as follows. The shell is broadly spherical and appears to be centred on V1315 Aql, strongly suggesting that the shell is associated with the central binary. The systemic velocities of the shell measured from the sky portion of the V1315 Aql slit and at the edges of the shell are broadly consistent. The absence of 22$\mu$m emission precludes a planetary nebula origin (\citealt{mizuno10}). We derive an order-of-magnitude estimate of the ejected mass of the shell of $\sim 10^{-5}$ M$_{\odot}$, which rules out a planetary nebula or supernova origin. We conclude that these results indicate that the shell is associated with V1315 Aql. At this stage of the shell's evolution, the luminosity of the outer edges of the shell is most likely fuelled by two processes, recombination and shock interaction with pre-existing circumstellar material. Our flux measurement will include contributions from both of these processes, making it difficult to estimate the physical conditions in the shell as a whole. Furthermore, the lack of other forbidden emission lines in the shell spectra, especially N[II] 5755\AA\,\,and O[III], means we cannot determine the physical parameters of the shell to confirm conclusively that it exhibits properties consistent with a nova origin. In S15, we estimated that the nova-like phase following a nova eruption lasts $\sim2400$ yrs. This is comparable to the $\sim2000$ yrs order-of-magnitude estimate by \citet{patterson13}, based on the transition of BK Lyn to a dwarf nova in the year 2011. However, the AAVSO light curve of BK Lyn\footnote{https://www.aavso.org/} suggests that it has now reverted back to a nova-like state, indicating that the object is a Z Cam-type dwarf nova that has likely been transitioning from the nova-like to dwarf nova state for much less than the $\sim2000$-yr estimate of \citet{patterson13}. \citet{shara17a} found that the transition time for AT Cnc was much shorter, at $330^{+135}_{-90}$ yrs. Our estimate of the time since the nova eruption on V1315 Aql of 500--1200 yrs is consistent with both these timescales, and lies within the overall nova recurrence timescale of 13000 yrs found by \citet{schmidtobreick15}. \section{Conclusions} We present images and spectra of the shell surrounding V1315 Aql. Our results strongly suggest that the shell originated from a nova eruption on the CV. This discovery of the first nova shell around a nova-like variable adds further support to the theory of nova-induced cycles in the mass transfer rates of CVs. \section*{Acknowledgments} We would like to thank the referee for his helpful comments and for pointing out the latest AAVSO light curve of BK Lyn demonstrating Z Cam-like behaviour. VSD and SPL were supported under grants from the Science and Technology Facilities Council (STFC). This publication makes use of VOSA, developed under the Spanish Virtual Observatory project supported from the Spanish MICINN through grant AyA2011-24052. The INT is operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \bibliographystyle{mn2e}
{ "timestamp": "2018-04-17T02:15:29", "yymm": "1804", "arxiv_id": "1804.05596", "language": "en", "url": "https://arxiv.org/abs/1804.05596" }
\section{Introduction} Recent coreference resolution systems have relied heavily on first-order models~\cite{clark:2016b,e2e-coref}, where only pairs of entity mentions are scored by the model. These models are computationally efficient and scalable to long documents. However, because they make independent decisions about coreference links, they are susceptible to predicting clusters that are locally consistent but globally inconsistent. Figure~\ref{fig:consistency} shows an example from \newcite{wiseman:2016} that illustrates this failure case. The plurality of \textbf{[you]} is underspecified, making it locally compatible with both \textbf{[I]} and \textbf{[all of you]}, while the full cluster would have mixed plurality, resulting in global inconsistency. We introduce an approximation of higher-order inference that uses the span-ranking architecture from \newcite{e2e-coref} in an iterative manner. At each iteration, the antecedent distribution is used as an attention mechanism to optionally update existing span representations, enabling later coreference decisions to softly condition on earlier coreference decisions. For the example in Figure~\ref{fig:consistency}, this enables the linking of \textbf{[you]} and \textbf{[all of you]} to depend on the linking of \textbf{[I]} and \textbf{[you]}. To alleviate computational challenges from this higher-order inference, we also propose a coarse-to-fine approach that is learned with a single end-to-end objective. We introduce a less accurate but more efficient coarse factor in the pairwise scoring function. This additional factor enables an extra pruning step during inference that reduces the number of antecedents considered by the more accurate but inefficient fine factor. Intuitively, the model cheaply computes a rough sketch of \emph{likely} antecedents before applying a more expensive scoring function. Our experiments show that both of the above contributions improve the performance of coreference resolution on the English OntoNotes benchmark. We observe a significant increase in average F1 with a second-order model, but returns quickly diminish with a third-order model. Additionally, our analysis shows that the coarse-to-fine approach makes the model performance relatively insensitive to more aggressive antecedent pruning, compared to the distance-based heuristic pruning from previous work. \input{figures/consistency} \section{Background} \paragraph{Task definition} We formulate the coreference resolution task as a set of antecedent assignments $y_i$ for each span $i$ in the given document, following \newcite{e2e-coref}. The set of possible assignments for each $y_i$ is $\mathcal{Y}(i) = \{\epsilon, 1, \ldots, i - 1\}$, consisting of a dummy antecedent $\epsilon$ and all preceding spans. Non-dummy antecedents represent coreference links between $i$ and $y_i$. The dummy antecedent $\epsilon$ represents two possible scenarios: (1) the span is not an entity mention or (2) the span is an entity mention but not coreferent with any previous span. These decisions implicitly define a final clustering, which can be recovered by grouping together all spans that are connected by the set of antecedent predictions. \paragraph{Baseline} We describe the baseline model~\cite{e2e-coref}, which we will improve to address the modeling and computational limitations discussed previously.
The goal is to learn a distribution $P(y_i)$ over antecedents for each span $i$: \begin{align} P(y_i) &= \frac{e^{s(i, y_i)}}{\sum_{y' \in \mathcal{Y}(i)}e^{s(i, y')}} \end{align} where $s(i, j)$ is a pairwise score for a coreference link between span $i$ and span $j$. The baseline model includes three factors for this pairwise coreference score: (1) $\mscore{i}$, whether span $i$ is a mention, (2) $\mscore{j}$, whether span $j$ is a mention, and (3) $\ascore{i}{j}$, whether $j$ is an antecedent of $i$: \begin{align} \cscore{i}{j} &=\mscore{i} + \mscore{j} + \ascore{i}{j} \end{align} In the special case of the dummy antecedent, the score $s(i, \epsilon)$ is instead fixed to 0. A common component used throughout the model is the vector representation $\V{g}_i$ for each possible span $i$. These are computed via bidirectional LSTMs~\cite{lstm} that learn context-dependent boundary and head representations. The scoring functions $s_\text{m}$ and $s_\text{a}$ take these span representations as input: \begin{align} \mscore{i}&= \V{w}_\text{m}^\top \ffnn{m}{\V{g}_i}\\ \hspace{-10pt}\ascore{i}{j} &= \V{w}_\text{a}^\top \ffnn{a}{[\V{g}_i, \V{g}_j, \V{g}_i \circ \V{g}_j, \phi(i, j)]}\hspace{-10pt} \end{align} \noindent where $\circ$ denotes element-wise multiplication, $\textsc{ffnn}$ denotes a feed-forward neural network, and the antecedent scoring function $\ascore{i}{j}$ includes explicit element-wise similarity of each span pair, $\V{g}_i \circ \V{g}_j$, and a feature vector $\phi(i, j)$ encoding speaker and genre information from the metadata and the distance between the two spans. The model above is factored to enable a two-stage beam search. A beam of up to $M$ potential mentions is computed (where $M$ is proportional to the document length) based on the spans with the highest mention scores $\mscore{i}$. Pairwise coreference scores are only computed between surviving mentions during both training and inference. Given supervision of gold coreference clusters, the model is learned by optimizing the marginal log-likelihood of the possibly correct antecedents. This marginalization is required since the best antecedent for each span is a latent variable. \section{Higher-order Coreference Resolution} \label{sec:higher_order} The baseline above is a first-order model, since it only considers pairs of spans. First-order models are susceptible to consistency errors as demonstrated in Figure~\ref{fig:consistency}. Unlike in sentence-level semantics, where higher-order decisions can be implicitly modeled by the LSTMs, modeling these decisions at the document level requires explicit inference due to the potentially very large surface distance between mentions. We propose an inference procedure that allows the model to condition on higher-order structures, while being fully differentiable. This inference involves $N$ iterations of refining span representations, denoted as $\V{g}_i^n$ for the representation of span $i$ at iteration $n$. At iteration $n$, $\V{g}_i^n$ is computed with an attention mechanism that averages over previous representations $\V{g}_j^{n-1}$ weighted according to how likely each mention $j$ is to be an antecedent for $i$, as defined below. The baseline model is used to initialize the span representations $\V{g}_i^1$.
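To make this concrete, the following is a minimal sketch of one refinement step for a batch of surviving spans, mirroring the equations formalized next. The dimensions and the pairwise scorer are illustrative stand-ins for the full model, and the dummy antecedent is handled here as a zero vector, which simply damps the update.
\begin{verbatim}
# One higher-order refinement step (illustrative; toy dimensions).
import torch

M, H = 50, 16                        # surviving spans, span dimension
g = torch.randn(M, H)                # span representations g_i^1
W_f = torch.nn.Linear(2 * H, H)      # gate parameters

def pairwise_scores(g):
    # Stand-in for the full s(g_i, g_j); any pairwise scorer fits here.
    return g @ g.t()                 # [M, M]

def refine(g):
    s = pairwise_scores(g)
    # Spans may only attend to preceding spans, so mask j >= i.
    s = s.masked_fill(torch.ones(M, M).triu().bool(), float('-inf'))
    # Dummy antecedent with fixed score 0, prepended as column 0.
    s = torch.cat([torch.zeros(M, 1), s], dim=1)
    p = torch.softmax(s, dim=1)      # P_n(y_i), shape [M, M+1]
    # Expected antecedent representation a_i^n; the dummy contributes
    # a zero vector here.
    g_ext = torch.cat([torch.zeros(1, H), g], dim=0)
    a = p @ g_ext                    # [M, H]
    # Gated interpolation: f * g + (1 - f) * a.
    f = torch.sigmoid(W_f(torch.cat([g, a], dim=1)))
    return f * g + (1 - f) * a

for _ in range(2):                   # N = 2 iterations (second-order)
    g = refine(g)
\end{verbatim}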
The refined span representations allow the model to also iteratively refine the antecedent distributions $P_n(y_i)$: \begin{align} P_n(y_i) &= \frac{e^{s(\V{g}_i^n, \V{g}_{y_i}^n)}}{\sum_{y \in \mathcal{Y}(i)}e^{s(\V{g}_i^n, \V{g}_y^n)}} \end{align} where $s$ is the coreference scoring function of the baseline architecture. The scoring function uses the same parameters at every iteration, but it is given different span representations. At each iteration, we first compute the expected antecedent representation $\V{a}_i^n$ of each span $i$ by using the current antecedent distribution $P_{n}(y_i)$ as an attention mechanism: \begin{align} \V{a}_i^n &= \sum_{y_i \in \mathcal{Y}(i)}P_{n}(y_i) \cdot \V{g}_{y_i}^n \end{align} The current span representation $\V{g}_i^n$ is then updated via interpolation with its expected antecedent representation $\V{a}_i^n$: \begin{align} \V{f}_i^n &= \sigma(\M{W}{f}[\V{g}_i^n, \V{a}_i^n]) \\ \V{g}_i^{n+1}&= \V{f}_i^n \circ \V{g}_i^n + (\V{1} - \V{f}_i^n) \circ \V{a}_i^n \end{align} The learned gate vector $\V{f}_i^n$ determines for each dimension whether to keep the current span information or to integrate new information from its expected antecedent. At iteration $n$, $\V{g}_i^n$ is an element-wise weighted average of approximately $n$ span representations (assuming $P_n(y_i)$ is peaked), allowing $P_n(y_i)$ to softly condition on up to $n$ other spans in the predicted cluster. Span-ranking can be viewed as predicting latent antecedent trees~\cite{fernandes:2012,martschat:2015}, where the predicted antecedent is the parent of a span and each tree is a predicted cluster. Since the span representations and antecedent distributions are refined iteratively, another way to interpret this model is that the joint distribution $\prod_i P_N(y_i)$ implicitly models every directed path of up to length $N + 1$ in the latent antecedent tree. \section{Coarse-to-fine Inference} \label{sec:c2f} The model described above scales poorly to long documents. Despite heavy pruning of potential mentions, the space of possible antecedents for every surviving span is still too large to fully consider. The bottleneck is in the antecedent score $\ascore{i}{j}$, which requires computing a tensor of size $M \times M \times (3|\V{g}| + |\phi|)$. This computational challenge is even more problematic with the iterative inference from Section~\ref{sec:higher_order}, which requires recomputing this tensor at every iteration. \subsection{Heuristic antecedent pruning} To reduce computation, \newcite{e2e-coref} heuristically consider only the nearest $K$ antecedents of each span, resulting in a smaller input of size $M \times K \times (3|\V{g}| + |\phi|)$. The main drawback of this solution is that it imposes an a priori limit on the maximum distance of a coreference link. The previous work only considers up to $K = 250$ nearest mentions, whereas coreference links can reach much further in natural language discourse. \input{figures/pruning} \subsection{Coarse-to-fine antecedent pruning} We instead propose a coarse-to-fine approach that can be learned end-to-end and does not establish an a priori maximum coreference distance. The key component of this coarse-to-fine approach is an alternate bilinear scoring function: \begin{align} \bscore{i}{j} &= \V{g}_i^\top \M{W}{c}\;\V{g}_j \end{align} where $\M{W}{c}$ is a learned weight matrix. In contrast to the concatenation-based $\ascore{i}{j}$, the bilinear $\bscore{i}{j}$ is far less accurate.
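A minimal sketch of one refinement iteration (Section~\ref{sec:higher_order}) and of the coarse bilinear factor may make the contrast concrete. All shapes and names here are illustrative assumptions: the dummy antecedent is omitted, only preceding spans are treated as valid antecedents, and a synthetic uniform antecedent distribution drives the demo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
num_spans, dim = 5, 8
g = rng.normal(size=(num_spans, dim))        # g^n: current span representations
W_f = rng.normal(size=(dim, 2 * dim)) * 0.1  # gate parameters
W_c = rng.normal(size=(dim, dim)) * 0.1      # coarse bilinear weight matrix

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def refine(g, P):
    """One higher-order iteration.  P[i, j] = P_n(y_i = j), rows sum to 1
    over preceding spans (dummy antecedent omitted).  Returns g^{n+1}."""
    a = P @ g                                             # expected antecedent a_i^n
    f = sigmoid(np.concatenate([g, a], axis=1) @ W_f.T)   # gate f_i^n
    return f * g + (1.0 - f) * a                          # interpolation

# Coarse factor: all pairwise bilinear scores via two matrix products,
# instead of materializing an M x M x 3*dim concatenation tensor.
coarse_scores = g @ W_c @ g.T                # coarse_scores[i, j] = g_i^T W_c g_j
mask = np.tril(np.ones((num_spans, num_spans)), k=-1)  # only j < i are valid
masked = np.where(mask > 0, coarse_scores, -np.inf)

# Top-K pruning per span using only the cheap coarse scores (K = 2 here;
# span 0 has no valid antecedents, so its row is meaningless).
K = 2
top_k = np.argsort(-masked, axis=1)[:, :K]
print(top_k)

# Demo refinement with a synthetic uniform antecedent distribution.
P = np.where(mask > 0, 1.0, 0.0)
P = P / np.maximum(P.sum(axis=1, keepdims=True), 1.0)
print(refine(g, P).shape)                    # (num_spans, dim)
\end{verbatim}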
A direct replacement of $\ascore{i}{j}$ with $\bscore{i}{j}$ results in a performance loss of over 3 F1 in our experiments. However, $\bscore{i}{j}$ is much more efficient to compute. Computing $\bscore{i}{j}$ only requires manipulating matrices of size $M \times |\V{g}|$ and $M \times M$. \input{figures/test_results} Therefore, we instead propose to use $\bscore{i}{j}$ to compute a rough sketch of \emph{likely} antecedents. This is accomplished by including it as an additional factor in the model: \begin{align} \hspace{-10pt}\cscore{i}{j} &= \mscore{i} + \mscore{j} + \bscore{i}{j} + \ascore{i}{j}\hspace{-10pt} \end{align} Similar to the baseline model, we leverage this additional factor to perform an additional beam pruning step. The final inference procedure involves a three-stage beam search: \paragraph{First stage} Keep the top $M$ spans based on the mention score $\mscore{i}$ of each span. \paragraph{Second stage} Keep the top $K$ antecedents of each remaining span $i$ based on the first three factors, $\mscore{i} + \mscore{j} + \bscore{i}{j}$. \paragraph{Third stage} The overall coreference $\cscore{i}{j}$ is computed based on the remaining span pairs. The soft higher-order inference from Section~\ref{sec:higher_order} is computed in this final stage. While the maximum-likelihood objective is computed over only the span pairs from this final stage, this coarse-to-fine approach expands the set of coreference links that the model is capable of learning. It achieves better performance while using a much smaller $K$ (see Figure~\ref{fig:pruning}). \section{Experimental Setup} We use the English coreference resolution data from the CoNLL-2012 shared task~\cite{pradhan:2012} in our experiments. The code for replicating these results is publicly available.\footnote{\url{https://github.com/kentonl/e2e-coref}} Our models reuse the hyperparameters from~\newcite{e2e-coref}, with a few exceptions mentioned below. In our results, we report two improvements that are orthogonal to our contributions. \begin{itemize} \item We used embedding representations from a language model~\cite{elmo} at the input to the LSTMs (\texttt{ELMo} in the results). \item We changed several hyperparameters: \begin{enumerate} \item increasing the maximum span width from 10 to 30 words. \item using 3 highway LSTMs instead of 1. \item using GloVe word embeddings~\cite{glove} with a window size of 2 for the head word embeddings and a window size of 10 for the LSTM inputs. \end{enumerate} \end{itemize} The baseline model considers up to 250 antecedents per span. As shown in Figure~\ref{fig:pruning}, the coarse-to-fine model is quite insensitive to more aggressive pruning. Therefore, our final model considers only 50 antecedents per span. On the development set, the second-order model ($N=2$) outperforms the first-order model by 0.8 F1, but the third-order model only provides an additional 0.1 F1 improvement. Therefore, we only compute test results for the second-order model. \section{Results} We report the precision, recall, and F1 of the \muc, \bcubed, and \ceaf metrics using the official CoNLL-2012 evaluation scripts. The main evaluation is the average F1 of the three metrics. Results on the test set are shown in Table~\ref{tab:test_results}. We include the performance of systems proposed in the past 3 years for reference. The baseline relative to our contributions is the span-ranking model from~\newcite{e2e-coref} augmented with both \texttt{ELMo} and hyperparameter tuning, which achieves 72.3 F1.
Our full approach achieves 73.0 F1, setting a new state of the art for coreference resolution. Compared to the heuristic pruning with up to 250 antecedents, our coarse-to-fine model only computes the expensive scores $\ascore{i}{j}$ for 50 antecedents. Despite using far less computation, it outperforms the baseline because the coarse scores $\bscore{i}{j}$ can be computed for all antecedents, enabling the model to potentially predict a coreference link between any two spans in the document. As a result, we observe a much higher recall when adopting the coarse-to-fine approach. We also observe further improvement by including the second-order inference (Section~\ref{sec:higher_order}). The improvement is largely driven by the overall increase in precision, which is expected since the higher-order inference mainly serves to rule out inconsistent clusters. It is also consistent with findings from \newcite{martschat:2015}, who report mainly improvements in precision when modeling latent trees to achieve a similar goal. \section{Related Work} In addition to the end-to-end span-ranking model~\cite{e2e-coref} that our proposed model builds upon, there is a large body of literature on coreference resolvers that fundamentally rely on scoring span pairs~\cite{ng:2002,bengtson:2008,denis2008specialized,fernandes:2012,durrett:2013,wiseman:2015,clark:2016b}. Motivated by structural consistency issues discussed above, significant effort has also been devoted towards cluster-level modeling. Since global features are notoriously difficult to define~\cite{wiseman:2016}, they often depend heavily on existing pairwise features or architectures~\cite{bjorkelund:2014,clark:2015,clark:2016a}. We similarly use an existing pairwise span-ranking architecture as a building block for modeling more complex structures. In contrast to~\newcite{wiseman:2016}, who use highly expressive recurrent neural networks to model clusters, we show that the addition of a relatively lightweight gating mechanism is sufficient to effectively model higher-order structures. \section{Conclusion} We presented a state-of-the-art coreference resolution system that models higher-order interactions between spans in predicted clusters. Additionally, our proposed coarse-to-fine approach alleviates the additional computational cost of higher-order inference, while maintaining the end-to-end learnability of the entire model. \subsection*{Acknowledgements} The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364), gifts from Google and Tencent, and an Allen Distinguished Investigator Award. We also thank the UW NLP group for helpful conversations and comments on the work.
\section{Introduction} To select a decent model, a physicist has to account for a multitude of experimental data sets, registered under different beam conditions and in varying detector set-ups, and exhibiting vastly different statistical and systematical errors. For credible testing of theoretical models, the systematic uncertainties should be under control \cite{Hudson,Bityukov11,Knoet}. Frequentist analyses, based on the likelihood ratio and other methods, are widely used in particle physics \cite{Prosper,Erler15} and in high-energy astrophysics. When averages of different experimental results for the same quantity are computed, each one including both statistical and systematical errors, the combined error is usually quoted. Both sources of uncertainty constitute important pieces of information, since the statistical errors usually shrink with increasing sample size (typically as $1/\sqrt{N}$), while this is not the case for the systematical or theoretical sources of uncertainty. The systematical error cannot be reduced by simply increasing the statistical significance of separate experimental data points \cite{Prosper,Erler15}. Experimental measurements are still sometimes presented without inclusion of their systematical uncertainties, and it is not always obvious whether the quoted overall error bars include both the statistical and the systematical uncertainty. In fact, the actual background rates and shapes of the measured distributions are sensitive to a number of experimental quantities, such as calibration constants, detector geometries, poorly known material budgets within experiments, particle identification efficiencies, etc. What a high-energy physicist calls a ``systematical error'' usually corresponds to what a statistician calls a ``nuisance parameter''. Propagation of imperfect knowledge of nuisance parameters that cannot be constrained by the same data set leads to systematical uncertainties. The uncertainties that are purely related to the fit, on the other hand, are referred to as statistical uncertainties. The uncertainties arising in the calculations themselves, such as uncertainty propagation and the treatment of systematical effects, have to be accounted for as well, since conventional statistics does not guarantee their consistent treatment; rather, an {\it ad hoc} procedure is typically used \cite{DAug1,DAug3}. There are two fundamentally different ways of including statistical and systematical errors in the fitting procedure. The first one, mostly used in connection with differential cross sections, adds the statistical and systematical errors in quadrature: $\sigma_{tot}^2 = \sigma_{stat.}^2+ \sigma_{syst.}^2$. The second approach accounts for the basic property of systematical errors, i.e.\ the fact that, within a given set of experimental data, they have a common sign and a size proportional to the effect they influence. To account for these properties, extra normalization coefficients for the measured data are introduced in the fit. For simplicity, this normalization is often transferred into the model parametrization, while, in reality, it accounts for the unknown normalization of the experimental data. This method is often used by research collaborations to extract, for example, the parton distribution functions of nucleons \cite{Stump01,exmp1-26,exmp1} and nuclei \cite{EPPS16} in high energy accelerator experiments, or in astroparticle physics \cite{Koh15}.
There are a number of studies addressing how to include systematical errors in experimental measurements (see, for example, references \cite{Bityukov11,Fichet} and references therein). In these studies, a predefined region of allowed values is usually considered, in order to define the magnitude of the signal above a large background. This differs from the cases where a number of different experimental data sets are spread over intervals that are specific to each experiment. The systematic errors, in this case, will have many different contributions. For example, the TOTEM Collaboration presented eight different sources of systematic errors in their analysis in reference \cite{T7a}. The signs of these systematic effects may vary, but usually there is a single dominating systematic uncertainty present. At high-energy accelerators, it is often the machine luminosity error that plays the main role. The luminosity error has the same sign for the whole data set collected by an experiment. When the square-sum approach is used to evaluate the overall error in the fit, this sign constraint is lost. Due to this problem, several additional normalization coefficients are introduced to account for the systematic errors in accelerator-based physics \cite{Stump03,Sel-PRD15}, or in cosmology \cite{Koh15,Ankowski16}. Both methods can also be used simultaneously, by accounting for the bulk of the systematical errors with the square-sum method and, in addition, for the dominant one as a nuisance parameter in the fit. Sometimes a more complicated combination of the two methods is used \cite{Erler15,Ghosh17}. In reference \cite{Ge12}, for example, the total $\chi^2$ is separated into three parts, $\chi^2= \chi^2_{para}+\chi^2_{sys}+\chi^2_{stat}$, and each term is estimated separately. There are also systematic uncertainties of different origins to be addressed in theory computations \cite{Charles16}. Here, the experimental systematic uncertainties that have the same sign for a set of experimental data are considered. In the second part of the present analysis, the two ways of accounting for the statistical and systematical errors of different data sets are discussed. In the third part, the simplest linear model, similar to a toy model discussed in reference \cite{Barlow-17}, is analyzed. In the fourth and fifth parts, more complicated nonlinear models, tested against four separate simulated data sets, are addressed. In the sixth section, an analysis of five sets of actual experimental LHC data is presented. In the seventh part, some common practices of treating systematic errors are examined. Finally, the Conclusions summarize the results. \section{Error combination} The data sets provided by individual experiments are unique to each experimental set-up, and can be considered as statistically independent. These data sets are then used to fit a model with a number of parameters of interest; in addition, nuisance parameters may be introduced to account for possible uncertainties in the normalization of the data under varying experimental conditions. In the frequentist approach, the most widely used goodness-of-fit statistic in hypothesis testing is $\chi^2$, the value of which is determined by the residuals between the fitted model and the data, using no input from prior knowledge. Thus, $\chi^{2}_{min} = \chi^{2} (a_{j})$ represents the goodness-of-fit statistic at the minimum-$\chi^2$ solution for the parameters $a_{j}$. The likelihood can be written in a ``binwise'' form, i.e.
in the form that accounts for the choice of bin widths. The effect of choosing the bins can be modeled by shifting the signal and background templates up and down, corresponding to the degree of uncertainty, \begin{eqnarray} \mathcal{L}(\hat{E}\,|\,a_{j},\delta) = \prod_{i=1}^{n_{bins}} Pr(\hat{E}_{i}|F(a_{j},\delta)). \end{eqnarray} Here $\hat{E}_{i}$ is the observed event number and $F(a_{j},\delta)$ the value resulting from a version of the model where the model parameters $a_{j}$ and the nuisance parameter $\delta$ are used. In case a sufficiently large number of events is collected in each bin, a Gaussian distribution can be assumed for the experimental data, and the likelihood becomes \begin{eqnarray} \mathcal{L}(\hat{E}\,|\,a_{j},\delta) = \prod_{i=1}^{n_{bins}} \frac{1}{\sqrt{2\pi} \sigma_{i}} e^{-(\hat{E}_{i}-F(a_{j},\delta))^2/2\sigma_{i}^{2}}. \end{eqnarray} According to the frequentist approach, for a model with the correct dependence on its parameters of interest, moving the parameters to their ``true'' values means that the corresponding likelihood attains its maximum value. This procedure is equivalent to the minimization of the corresponding $\chi^2$, \begin{eqnarray} -2 \ln \mathcal{L}(\hat{E}_{i}; \mu, \sigma) = \sum_{i=1}^{n} \frac{(\hat{E}_{i} - \mu)^2}{\sigma_{i}^{2} } + n (\ln 2\pi + 2 \ln \sigma ), \end{eqnarray} where the last term does not impact the position of the minimum of $\chi^2$. This term, however, impacts the absolute size of $\chi^2$ at the location of the minimum. For determining the parameters of interest, only the location of the minimum of $\chi^2$ is required. Minimization of $-2 \ln \mathcal{L}(\hat{E}_{i}; \mu, \sigma)$ can proceed either analytically or numerically by finding the zeros of the first derivative with respect to $\mu$ and $\sigma^{2}$. The following maximum-likelihood estimates of $\mu$ and $\sigma^{2}$ are obtained: $\hat{\mu}=[\sum_{i=1}^{n}\hat{E}_{i}]/n$ and $\hat{\sigma}^{2} = [\sum_{i=1}^{n}(\hat{E}_{i} - \hat{\mu})^{2}]/n$. The maximum-likelihood estimate of $\sigma^{2}$ is biased, in the sense that its average value deviates from the true $\sigma^{2}$. In the following, for simplicity, all statistical errors are assumed to be of the same order of magnitude, $\sigma_{i}=\sigma_{st.}$. The systematic error is a nuisance parameter reflecting the detection efficiencies or the uncertainty in measuring the luminosity. This error can be accounted for as a bias within the model, with the corresponding error $\sigma_{syst.}$. Assuming a Gaussian prior for such a bias, the likelihood becomes \begin{eqnarray} \mathcal{L}(\hat{E}_{i}\,|\,a_{j}) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi} \sigma_{st.}} e^{-(\hat{E}_{i}-(F(a_{j})-\delta))^2 /2\sigma_{st.}^{2}} \frac{1}{\sqrt{2\pi} \sigma_{syst.}} e^{-\delta^2/2 \sigma_{syst.}^{2} } d \delta. \end{eqnarray} The integral has a standard closed form; for example, in reference \cite{DAug2} it is written as \begin{eqnarray} \mathcal{L}(\hat{E}_{i}\,|\,a_{j}) = \frac{1}{\sqrt{2\pi} \sqrt{\sigma_{st.}^{2}+\sigma_{syst.}^{2} } } e^{-(\hat{E}_{i}-F(a_{j}))^2 /2(\sigma_{st.}^{2}+\sigma_{syst.}^{2}) }. \end{eqnarray} The total error is now expressed as the quadratic sum $\sigma_{tot}=\sqrt{\sigma_{st.}^{2}+\sigma_{syst.}^{2}}$. It should be noted that this result assumes the Gaussian form for the bias. In this case, the systematical errors will also have their signs distributed according to the Gaussian form.
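The quadrature result above can be checked numerically. The following sketch (Python with NumPy/SciPy; the chosen values of $\sigma_{st.}$, $\sigma_{syst.}$, $\hat{E}_i$ and $F$ are arbitrary illustrations) integrates the Gaussian bias out of the Gaussian likelihood and compares the result with the single Gaussian of width $\sigma_{tot}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Marginalizing a Gaussian bias delta (width sigma_syst) out of a
# Gaussian likelihood (width sigma_st) should give one Gaussian whose
# width is the quadratic sum of the two.
sigma_st, sigma_syst = 0.3, 0.4
E_hat, F = 1.0, 0.8                    # illustrative data and model values

def integrand(delta):
    g1 = np.exp(-(E_hat - (F - delta))**2 / (2 * sigma_st**2)) \
         / (np.sqrt(2 * np.pi) * sigma_st)
    g2 = np.exp(-delta**2 / (2 * sigma_syst**2)) \
         / (np.sqrt(2 * np.pi) * sigma_syst)
    return g1 * g2

lhs, _ = quad(integrand, -np.inf, np.inf)

sigma_tot = np.hypot(sigma_st, sigma_syst)
rhs = np.exp(-(E_hat - F)**2 / (2 * sigma_tot**2)) \
      / (np.sqrt(2 * np.pi) * sigma_tot)

print(lhs, rhs)                        # agree to numerical precision
\end{verbatim}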
This Gaussian sign distribution, however, contradicts the assumption that systematic errors of a common origin share the same sign within a chosen set of experimental data. In the following, in order to compare the possible sizes of $\chi^2$ for the case of errors added in quadrature and for the case of fitting additional normalization coefficients, the statistical and systematic errors are assumed to be of equal size. In the case of the quadratically combined errors, $\chi^2$ can simply be written as \begin{eqnarray} \chi^{2}=\sum_{i=1}^{n} \frac{ ( \hat{E}_{i} - F_{i}(a_{j}) )^2 } {\sigma_{i-st.}^{2}+\sigma_{i-syst.}^2 }. \label{eq6} \end{eqnarray} Assuming that all the errors are of the same size and that $F_{i}=\bar{x}$, then $\sigma_{tot}^{2} =2 \sigma^{2}$ with $\sigma = 1/\sqrt{N}$, and $\chi^2$ becomes \begin{eqnarray} \chi^{2}=\sum_{i=1}^{n} \frac{ ( \hat{E}_{i} - \bar{x} )^2 } {2 \sigma^{2} } = \frac{1}{2 \sigma^{2} } \sum_{i=1}^{n} \hat{E}_{i}^{2} - \frac{n \bar{x}^2 }{ 2 \sigma^{2} } \\ \nonumber = \frac{N}{2 } \sum_{i=1}^{n} \hat{E}_{i}^{2} - \frac{n N \bar{x}^2 }{ 2 } = A_{1} - A_{2}. \label{eq7} \end{eqnarray} As a result, the difference of two terms appears in equation (7). When the systematic errors are taken into account through an additional normalization coefficient $k$, whose size is assumed to carry a standard error, $k=1\pm \sigma $, one has \begin{eqnarray} \chi^{2} &=&\sum_{i=1}^{n} \frac{ ( k \hat{E}_{i} - \bar{x} )^2 }{2 \sigma^{2} } + \frac{(1-f_k)^2}{\delta^{2}_{i(norm)}} \\ \nonumber &=& \frac{1\pm\sigma}{2 \sigma^{2} } \sum_{i=1}^{n} \hat{E}_{i}^{2} - \frac{n \bar{x}^2 }{ 2 \sigma^{2} } \pm\frac{(1-f_k)^2}{\sigma^{2}} = B_{1} - B_{2} \pm \Delta, \label{eq8} \end{eqnarray} where $f_k$ denotes the fitted value of the normalization coefficient. The last term, $(1-f_k)^{2} / \delta^{2}_{i(norm)}$, is small compared to the others. Although this term could be of significance in model fits to the data, it is neglected in the following. For large $N$, the difference in $\chi^2$ is \begin{eqnarray} \Delta \chi^{2}= (A_{1}-B_{1}) + (B_{2} - A_{2}), \label{eq9} \end{eqnarray} which can be written as \begin{eqnarray} \Delta_{1} \chi^{2}= \frac{1}{\sigma^2} \left(\frac{1}{2}-f_{k}^{2}\right) \sum_{i=1}^{n} \hat{E}_{i}^{2} \label{eq10} \end{eqnarray} and \begin{eqnarray} \Delta_{2} \chi^{2}= \frac{1}{\sigma^2} \left(1-\frac{1}{2}\right) n \bar{x}^2. \label{eq11} \end{eqnarray} If the set of experimental data has no bias, then $f_{k}=1$ and $\sum_{i=1}^{n} \hat{E}_{i}^{2} - n \bar{x}^2 \approx 0$. In case the set of data is biased, then $\sum_{i=1}^{n} \hat{E}_{i}^{2} - n \bar{x}^2 > 0$ and $\Delta \chi^2 < 0$. Hence, the $\chi^2$ for the quadratically combined errors will be smaller than in the case where the additional normalization of the set of experimental data is accounted for. This will be revisited below in examples where simulated data sets are used. \section{Model ab} To study the influence of additional normalization coefficients in model fitting, a simple model ($\rho(t) =a + b t$), also analyzed in Ref.~\cite{Barlow-17}, is considered for simulated ``experimental'' data, based on two sets of data. The first data set is constrained to the $t$-interval from $t =0.5$ to $12.5$ with $\Delta t=0.5$. The second data set is constrained to $t=8$ to $20$ with $\Delta t=0.5$; hence the two data sets together comprise $50$ points ($25$ each). As the initial values of the model parameters $a$ and $b$, $a = 0$ and $b = 1$ are chosen. To simulate the $50$ points of ``experimental'' data, a random procedure with $10\%$ statistical errors is used (see the Appendix). To account for possible systematical errors, the second set is shifted by $20\%$ with respect to the initial (simulated) values.
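A minimal sketch of this toy study is given below (Python with NumPy/SciPy; the random seed, the Gaussian form of the scatter, and the width of the normalization penalty are our illustrative assumptions, and the Appendix procedure of the paper may differ in detail). It fits the two data sets in the three ways compared in this section: statistical errors only, errors added in quadrature, and statistical errors plus a free normalization coefficient for the shifted set.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Two simulated data sets from rho(t) = a + b*t with a = 0, b = 1,
# 10% statistical scatter; the second set carries a +20% systematic shift.
t1 = np.arange(0.5, 12.51, 0.5)            # 25 points
t2 = np.arange(8.0, 20.01, 0.5)            # 25 points
y1 = t1 * (1 + 0.10 * rng.standard_normal(t1.size))
y2 = 1.20 * t2 * (1 + 0.10 * rng.standard_normal(t2.size))
s1, s2 = 0.10 * t1, 0.10 * t2              # statistical errors
syst2 = 0.20 * t2                          # systematic error of set 2

# (1) statistical errors only
res1 = least_squares(lambda p: np.concatenate(
    [(y1 - (p[0] + p[1] * t1)) / s1,
     (y2 - (p[0] + p[1] * t2)) / s2]), x0=[0, 1])

# (2) statistical and systematic errors added in quadrature for set 2
tot2 = np.hypot(s2, syst2)
res2 = least_squares(lambda p: np.concatenate(
    [(y1 - (p[0] + p[1] * t1)) / s1,
     (y2 - (p[0] + p[1] * t2)) / tot2]), x0=[0, 1])

# (3) statistical errors plus a free normalization coefficient k for
#     set 2, constrained by a penalty term (k - 1)^2 / 0.20^2
def resid3(p):
    a, b, k = p
    return np.concatenate([(y1 - (a + b * t1)) / s1,
                           (k * y2 - (a + b * t2)) / s2,
                           [(k - 1.0) / 0.20]])

res3 = least_squares(resid3, x0=[0, 1, 1])

for name, r in [("stat only", res1), ("quadrature", res2),
                ("normalization", res3)]:
    print(name, r.x[:2], "chi2 =", 2 * r.cost)   # cost = 0.5*sum(res^2)
\end{verbatim}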
As a result of this procedure, two variants of the ``experimental'' data are obtained: the first one with zero systematic errors, and the second one with the $+20\%$ systematic error. A fit to the two data sets is then performed to determine the model parameters on the basis of the experimental data. The model variant for the experimental data set without systematical errors is considered first. The results are summarized in the first row of Table 1. The $\chi^2$ value obtained is small, and the size of parameter $a$ remains practically zero. Parameter $b$ has a value close to its ``true'' value. Next, the simulated data with the assumed $+20\%$ systematic errors are considered. The fitting procedure is carried out for the following three cases: (1) accounting for the statistical errors only; (2) the errors are assumed to have the form $\sigma_{tot}^2 = \sigma_{stat.}^2+ \sigma_{syst.}^2$; (3) $\sigma_{tot}^2 = \sigma_{stat.}^2$, where the systematic errors are included by fitting the extra normalization coefficients. The second, third and fourth rows of Table 1 list the results of the case assuming $+20\%$ systematical errors. The minimum $\chi^2$ is obtained for the case with the statistical and systematic errors added in quadrature. The magnitudes of the model parameters have, however, sizable deviations from their true values and large errors. In Figure 1, it can be seen that the best fit is obtained for the case where an additional normalization is included in the fit. \begin{table} \caption{Description of Model ab, $\rho(t) = a+ b t$ ($\sigma_{syst.} = \sigma_{stat.}$). The first row corresponds to the unbiased data set, the remaining rows to the data set with the $+20\%$ shift.} \label{Table-1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $a$ & $b$ & $n_i$ \\ \hline $\sigma_{st.}^2$ & $38.65$ & $-0.0056\pm0.04$ & $0.968\pm0.016$ & $1.;1_{fix.}$ \\ \hline $\sigma_{st.}^2$ & $81.1$ & $-0.115\pm0.04$ & $1.08\pm0.02$ & $1.;1_{fix.}$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$3.9$ & $-0.17\pm0.4$ & $1.22 \pm0.09$ & $1.;1_{fix.}$ \\ $\sigma_{st.}^2$ & $27.4$ & $-0.006\pm0.03$ & $1.02\pm0.02$ & $0.945;1.19$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \includegraphics[width=.8\textwidth]{ab20pc.ps} \vspace{1cm} \caption{ Linear fit, $\rho(t)= a+b t$, to the simulated data with $+ 20\%$ systematic errors (the second set of simulated ``experimental'' data). (1) The dash-dotted line indicates the calculation accounting for the statistical errors only; (2) the long-dashed line indicates the calculation with $\sigma_{tot}^2 = \sigma_{stat.}^2+ \sigma_{syst.}^2$; (3) the short-dashed line indicates the calculation where $\sigma_{tot}^2 = \sigma_{stat.}^2$ and extra normalization coefficients are used; (4) the solid line indicates the exact calculation $\rho(t) = t$. } \end{figure} \section{Model A-Gd-1} Next, experimental data are emulated by using the familiar expression \begin{eqnarray} dS_{0}/dt= 1/(1 + \sqrt{t}/0.71). \label{eq12} \end{eqnarray} In Equation (12), the parameters underlying our ``experimental'' data are exactly known. The following calculations are restricted to the $t$-region of $0 < t < 20$ for the $200$ simulated experimental data points with an assumed bin width of $\Delta t=0.1$. The simulated experimental points are calculated for four $t$-intervals, $t=0-5$, $5-10$, $10-15$ and $15-20$. The statistical and systematical errors are assumed to be $1\%$, $2\%$, $4\%$, and $8\%$ for the four $t$-intervals, respectively. A random procedure (see the Appendix) is then applied that accounts for the statistical errors.
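The data-generation step just described can be sketched as follows (assuming Gaussian statistical scatter; the paper's Appendix procedure may differ in detail, and the bias factors anticipate the modified data set introduced below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(0.1, 20.0 + 1e-9, 0.1)             # 200 points on (0, 20]
dS0 = 1.0 / (1.0 + np.sqrt(t) / 0.71)            # exact curve, Eq. (12)

# per-interval relative errors: 1%, 2%, 4%, 8% on the four t-intervals
rel_err = np.select([t < 5, t < 10, t < 15], [0.01, 0.02, 0.04],
                    default=0.08)
dS1 = dS0 * (1.0 + rel_err * rng.standard_normal(t.size))  # unbiased set

# biased variant: a common normalization shift n_i per t-interval
n = np.select([t < 5, t < 10, t < 15], [1.01, 0.98, 1.04], default=0.92)
dSn = n * dS1
\end{verbatim}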
As a result, an unbiased simulated data set $dS_{1}/dt$ is obtained. The standard fit to the simulated data was done by using the FUMILI program \cite{FUMILI,FUMILY}. This is preferred over the commonly used MINUIT \cite{MINUIT}, which includes three separate minimization methods and may lead to results that have an intrinsic dependence on the different representations used in simulating the experimental data. Next, the following model parametrization with free parameters is used to fit the simulated data: \begin{eqnarray} dS/dt= h/(1 + t^{\alpha}/L). \label{M1d} \end{eqnarray} The results are listed in Table 2. It is clear that, despite the large difference of the $\chi^2$ values, the fit parameters attain the same sizes in both cases, where either only statistical errors or the quadratic sum of the systematical and statistical errors are considered. The sizes of the fit parameters are very close to the parameter values used in the calculation of the simulated data. A bias is then introduced for each data interval separately, by assigning the normalization shifts $n_{i}=1.01, 0.98, 1.04, 0.92$: \begin{eqnarray} dS_{i}/dt= n_{i} h/(1 + t^{\alpha}/L). \label{M1dbias} \end{eqnarray} As a result, a modified simulated data set $dS_{n}/dt$ is defined, having a different bias in each data interval. Obviously, the sign of the systematic error is the same for every point of each $t$-interval. \begin{table}[b] \caption{Description of Model A-Gd: $dS_{1}/dt= h/(1 + t^{\alpha}/L)$ ($\sigma_{syst.} = \sigma_{stat.}$) with shift $n_i=1$. } \label{Table-2} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ \\ \hline & & & & \\ $\sigma_{st.}^2$ & $297.7$ & $0.966\pm0.02$ & $0.521\pm0.006$ & $0.771\pm0.03$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$148.9$ & $0.966\pm0.026$ & $0.52 \pm0.008$ & $0.771\pm0.04$ \\ & & & & \\ \hline \end{tabular} \end{center} \end{table} In the first two cases, in Tables 2 and 3, a symmetric distribution of the signs of the systematic errors is assumed, with the signs freely distributed according to a Poisson or Gaussian form; a non-symmetric distribution of the signs of the systematic errors is considered afterwards. The model fit was done for three different cases, where: \\ a) only statistical errors were taken into account, $\sigma_{tot}^2 = \sigma_{st.}^2$; \\ b) the systematical and statistical errors were added in quadrature: $\sigma_{tot}^2 = \sigma_{st.}^2 + \sigma_{syst.}^2$; \\ c) $\sigma_{tot}^2 = \sigma_{st.}^2$, and the $n_i$ were taken into account as nuisance parameters in the fit. The results are presented in Table 3. The $\chi^2$ value is smallest in the case of the quadratically combined errors, $\sigma_{tot}^2= \sigma_{st.}^2 + \sigma_{syst.}^2 $. It is about four times smaller when compared to case c), where only statistical errors, $\sigma_{tot}^2= \sigma_{st.}^2 $, are accounted for and extra normalization coefficients are used as free parameters. \begin{table} \caption{Description of Model A-Gd: $dS_{n}/dt= n_{i} h/(1
+ t^{\alpha}/L)$ ($\sigma_{syst.} = \sigma_{stat.}$) with shift $n_i=1.01; 0.98; 1.04; 0.92$. } \label{Table-3} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $320.$ & $0.968\pm0.02$ & $0.52\pm0.006$ & $0.767\pm0.03$ & $n_{i}=1.$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$37.6$ & $0.94\pm0.07$ & $0.53 \pm0.02$ & $0.808\pm0.1$ & $n_{i}=1.$ \\ $\sigma_{st.}^2$ & $154.9$ & $0.97\pm0.04$ & $0.49 \pm0.01$ & $0.69\pm0.001$ & $n_{i}$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} The basic objective in this analysis is not to reach the maximum likelihood of the fit, but to determine the true sizes of the model parameters. Obviously, the third case, with its extra free nuisance parameters introduced for normalization, would give the best technical fit result. It should be noted that the constant $h$ stays practically the same for the first and third cases. In fact, this results from the assumption of a symmetric distribution of the signs of the systematical errors. Consider next the asymmetric case. For this, the bias for the separate sets of simulated data is assumed to be given as $n_{i}= 1.01, 0.98, 1.04$ and $1.08$. The fit results for this case are shown in Table 4. The $\chi^2$ of the quadratically combined errors is smaller than in the previous symmetric case. However, the sizes of the obtained parameters deviate more from their true values. It is interesting to note that for the last model variant, with the extra free normalization parameters, the resulting parameter values are clearly closer to their true values when compared to the ones obtained in the symmetric case. The results do not change significantly when the statistical and systematical errors are increased and allowed to grow faster with increasing $t$ (for example, errors of $4\%$, $8\%$, $12\%$, $16\%$ for the four $t$-intervals). The statistical and systematical errors are here assumed to have the same values. The results are shown in Table 5 for the symmetric case, and in Table 6 for the asymmetric case. Note that in these cases, the $\chi^2$ values for the quadratically combined errors and for the case with free normalization parameters are very close to each other. However, the parameter values appear to be closer to their true values for the last model variant, in both the symmetric and asymmetric cases. \begin{table} \caption{Description of Model A-Gd: $dS/dt= h/(1 + t^{\alpha}/L)$ ($\sigma_{syst.} = \sigma_{stat.}$) in the non-symmetric case (with bias $n_i=1.01; 0.98; 1.04; 1.08$). } \label{Table-4} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $299.$ & $1.041\pm0.02$ & $0.495\pm0.006$ & $0.673\pm0.03$ & $n_{i}=1.$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$30.2$ & $1.12\pm0.07$ & $0.47 \pm0.02$ & $0.59\pm0.1$ & $n_{i}=1.$ \\ $\sigma_{st.}^2$ & $139.1$ & $1.015\pm0.03$ & $0.493 \pm0.001$ & $0.686\pm0.002$ &$n_{i}$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Description of Model A-Gd: $dS/dt= h/(1
+ t^{\alpha}/L)$ ($\sigma_{syst.} = \sigma_{stat.}$) with the shift $n_i=1.04; 0.92; 1.08; 0.84$. } \label{Table-5} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $356.4$ & $0.83\pm0.04$ & $0.63\pm0.02$ & $1.12\pm0.11$ & $n_{i}=1.$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$178.2$ & $0.83\pm0.06$ & $0.63 \pm0.03$ & $1.12\pm0.16$ &$n_{i}=1.$ \\ $\sigma_{st.}^2$ & $177.2$ & $0.99\pm0.2$ & $0.47 \pm0.03$ & $0.61\pm0.09$ & $n_{i}$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{ Description of Model A-Gd-Up: $dS/dt= h/(1 + t^{\alpha}/L)$ ($\sigma_{syst.} = \sigma_{stat.}$) with the shift $n_i=1.04; 0.92; 1.12; 1.16$. } \label{Table-6} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $322.8$ & $1.23\pm0.12$ & $0.476\pm0.02$ & $0.535\pm0.08$ & $n_{i}=1.$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$161.4$ & $1.23\pm0.17$ & $0.475 \pm0.03$ & $0.535\pm0.12$ & $n_{i}=1.$ \\ $\sigma_{st.}^2$ & $158.2$ & $1.06\pm0.13$ & $0.46 \pm0.04$ & $0.57\pm0.14$ &$n_{i}$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \section{Model A-Gd8-1} A more complicated model is examined below. For this, experimental data are simulated by using the following formula, which is close to the observed differential cross sections and is proportional to the fourth power of the proton dipole form factor: \begin{eqnarray} dS/dt= h/(1 + t/L)^{8}. \label{M8d} \end{eqnarray} Here the parameters are chosen as $h = 100$ and $L = 0.71$. As in the previous cases, the entire $t$-interval from $t = 0 - 20$ is considered and $200$ ``experimental'' points separated into four intervals are generated. The statistical errors are assumed to be $2\%$, $4\%$, $8\%$, and $12\%$; the systematic errors $4\%$, $8\%$, $16\%$, and $24\%$. A random procedure is then used to generate four sets of simulated experimental data. Supposing that the true form of the data is not known, an exponential model is adopted to describe the generated data, in terms of combined exponentials: \begin{eqnarray} dS/dt= h_{1} \exp(-\alpha_{1} t) +h_{2} \exp(-\alpha_{2} t). \label{M2e} \end{eqnarray} To determine the optimum parameters for the model (16), the simulated data are first assigned small $1\%$ statistical errors and zero systematical errors. A fit with these relatively small errors yields an excellent $\chi^2$ for the dipole model (15), and a very large $\chi^2$ for the four-parameter exponential model (16) (see Table 7). The parameters obtained for the dipole model coincide well with the parameters used in simulating the ``experimental'' data sample. For the exponential model, the parameters determined by the fit can be considered as the best description achievable with this particular choice of the model. In the following, the simulated data are assigned both statistical and systematical errors and, as in the previous simple cases, the symmetric and asymmetric cases are investigated separately. The bias in the last $t$-interval is assumed to be $\mp 24\%$. The results for the symmetric case are shown in Table 8. Again, $\chi^2$ is smallest for the model variant with quadratically combined errors. The parameter values found are far off the ones obtained when fitting with the assumed $1\%$ errors (Table 7, second row).
Despite the larger $\chi^2$ values, the parameters (Table 8, third row) obtained when including the extra normalization coefficients coincide better with the true parameter values (Table 7, second row). In the asymmetric case, the approach based on quadratically combined errors yields better results (see Table 9), and the difference between the two approaches (the one based on quadratically combined errors and the one based on extra normalization coefficients) is less important when compared to the symmetric case. Hence, in case the model is sufficiently far from reality, it is not obvious which model variant to choose. The model variant with the extra normalization parameters, however, wins over the quadrature case. In the following, the simulated data are analyzed by assuming statistical errors of $2\%$, $4\%$, $8\%$, $16\%$ and systematic errors of $4\%$, $8\%$, $16\%$, $24\%$, in terms of the true model form. The fit is based on the model with three parameters \begin{eqnarray} dS/dt= h/(1 + t/L)^{\alpha}. \label{M8fit} \end{eqnarray} \begin{table} \caption{ Description of Model A-Gd-8 $dS/dt= h/(1 + t/L)^{8}$ by $h_{1} \exp(-\alpha_{1} t) + h_{2} \exp(-\alpha_{2} t)$ ($\sigma_{st.} = 1 \% $), $n_i=1.; 1.; 1.; 1.$ } \label{Table-7} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h_{1}$ & $\alpha_{1}$ & $h_{2}$ & $\alpha_{2}$ \\ \hline & & & & & \\ Dipole & $0.33$ & $99.99\pm1.1$ & $8.001\pm0.01$ & $0.71\pm0.002$ & $ $ \\ 2 exp. &$7027.$ & ${\bf 74.04}\pm0.22$ & ${\bf8.64} \pm0.01$ & ${\bf2.68}\pm0.02$ & ${\bf3.61}\pm0.004$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Description of Model A-Gd-8 $dS/dt= h/(1 + t/L)^{8}$ by $h_{1} \exp(-\alpha_{1} t) + h_{2} \exp(-\alpha_{2} t)$ ($\sigma_{syst.} = \sigma_{stat.}$), symmetric case with bias $n_i=1.04; 0.92; 1.16; 0.76$. } \label{Table-8} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h_{1}$ & $\alpha_{1}$ & $h_{2}$ & $\alpha_{2}$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $1277 $ & $87.3\pm0.6$ & $9.82\pm0.05$ & $6.24\pm0.18$ & $4.3\pm0.02 $ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$227 $ & $87.3\pm1.3$ & $9.65 \pm0.1 $ & $5.2\pm0.3 $ & $4.1\pm0.06$ \\ $\sigma_{st.}^2$ & $862 $ & $84.5 \pm3. $ & $10.3 \pm0.1$ & $8.7\pm0.5$ & $4.4\pm0.1$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Description of Model A-Gd-8 $dS/dt= h/(1 + t/L)^{8}$ by $h_{1} \exp(-\alpha_{1} t) + h_{2} \exp(-\alpha_{2} t)$ ($\sigma_{syst.} = \sigma_{stat.}$), asymmetric case with bias $n_i=1.04; 0.92; 1.16; 1.24$. } \label{Table-9} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $\sum_{N} \chi^{2}$ & $h_{1}$ & $\alpha_{1}$ & $h_{2}$ & $\alpha_{2}$ \\ \hline & & & & & \\ $\sigma_{st.}^2$ & $990$ & $86.5\pm0.6$ & $9.14\pm0.05$ & $2.97\pm0.09$ & $3.6\pm0.03$ \\ $\sigma_{st.}^2 + \sigma_{syst.}^2$ &$198$ & $86.5\pm1.3$ & $9.24 \pm0.1$ & $2.1\pm0.2$ & $3.4\pm0.1$ \\ $\sigma_{st.}^2$ & $836$ & $88.4\pm2.6$ & $10.2 \pm0.1$ & $8.3\pm0.5$ & $4.4\pm0.1$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Description of Model A-Gd-8 by $dS/dt= h/(1
+ t/L)^{\alpha}$ ($\sigma_{syst.} = \sigma_{stat.}$), symmetric case with bias $n_i=1.04; 0.92; 1.16; 0.76$. } \label{Table-10} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\sum_{N} \chi^{2}$ & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & \\ $1211$ ($\sigma_{st.}^2$) & $103.8\pm0.8$ & $8.6\pm0.06$ & $0.77\pm0.1$ & $n_{i}=1.$ \\ $242$ ($\sigma_{st.+syst.}^2$) & $103.8\pm16$ & $8.6 \pm0.14$ & $0.77\pm0.2$ &$n_{i}=1.$ \\ $616$ ($\sigma_{st.}^2$) & $101.9\pm3.6$ & $7.8 \pm0.12$ & $0.69\pm0.02$ & $n_{i}$ \\ & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Description of Model A-Gd-8 by $dS/dt= h/(1 + t/L)^{\alpha}$ ($\sigma_{syst.} = \sigma_{stat.}$), asymmetric case with bias $n_i=1.04; 0.92; 1.16; 1.24$. } \label{Table-11} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\sum_{N} \chi^{2}$ (err.) & $h$ & $\alpha$ & $L$ & $n_i$ \\ \hline & & & & \\ $1030$ ($\sigma_{st.}^2$) & $109.4\pm0.8$ & $7.6\pm0.05$ & $0.65\pm0.01$ & $n_{i}=1.$ \\ $206$ ($\sigma_{st.+syst.}^2$) & $109.4\pm 2$ & $7.6 \pm0.11$ & $0.65\pm0.02$ & $n_{i}=1.$ \\ $576$ ($\sigma_{st.}^2$) & $102.6\pm3.6$ & $7.8 \pm0.1$ & $0.69\pm0.02$ &$n_{i}$ \\ & & & & \\ \hline \end{tabular} \end{center} \end{table} The results for the symmetric and asymmetric cases are shown in Tables 10 and 11. It is seen that, for the model variant based on quadratically combined errors, $\chi^2$ is smaller in the symmetric case. Despite the minimal $\chi^2$, the parameter values are far off the initial parameters defined for the model. Contrary to this, the model variant using extra free normalization coefficients yields fit parameter values close to the true ones. \section{Elastic cross sections at the LHC} For testing the model hypotheses further, the data collected by the TOTEM and ATLAS Collaborations at $7$ and $8$ TeV are used below. At small four-momentum transfers, $-t$, in proton-proton elastic scattering processes close to the diffraction peak region, there are five sets of experimental data used for measurements of the differential cross sections: two of them at $7$ TeV center-of-mass energy, $\sqrt{s}$, and three at $8$ TeV. The data sets come from different $t$-regions. The usual $\ln^{2}(s)$ dependence of the total $pp$ cross section on the c.m.s.\ energy is assumed here. At the very high LHC energies, the pre-asymptotic terms in the standard representation of the total cross sections can be safely neglected. For the description of the hadronic part of the differential cross sections, the standard exponential form of the hadron elastic scattering amplitude is taken, \begin{eqnarray} F_{h}(s,t)= i h \ln^{2}(s)\, (1 - i\rho )\, e^{B_{1} t/2 + B_{2} t^{2}/2}\, G^2(t), \label{Fh} \end{eqnarray} with the form factor \begin{eqnarray} G(t) = \frac{4 m_p^2 - \mu t }{4 m_p^2-t}\frac{\Lambda^2}{(\Lambda - t)^2}, \label{emff} \end{eqnarray} where $m_p$ is the proton rest mass, $\Lambda=0.71$ GeV$^2$ and $\mu=2.79$. In the calculations of the differential cross sections, the five helicity electromagnetic amplitudes and the Coulomb-hadron phase factor are taken into account (see, for example, \cite{Sel-rho,HEGS0}). At $7$ TeV, the TOTEM measurements \cite{T7a} in the $t$-region of $0.00515 <|t|<0.371$ GeV$^2$ and the ATLAS measurements \cite{ATL7} in the region $0.0062 <|t|<0.35$ GeV$^2$ are used.
At $8$ TeV, the data published by the TOTEM Collaboration \cite{T8a} in the $t$-regions of $0.0285 <|t|<0.19$ GeV$^2$ and $0.000741 <|t|<0.191$ GeV$^2$, and the data by the ATLAS Collaboration \cite{ATL8} in the region of $0.0105 <|t|<0.363$ GeV$^2$, are used. On the whole, these data sets contain $225$ data points. Some discrepancies exist between the total cross sections measured by the two Collaborations. From the separate analysis of each data set, the TOTEM Collaboration finds for the $pp$ total cross section $\sigma_{tot} = 98.0 - 99.1 \pm 3 $ mb at $7$ TeV and $\sigma_{tot} = 101.7 \pm 2.9 $ mb at $8$ TeV \cite{T7a,T8a}. The ATLAS Collaboration obtained somewhat smaller values: $\sigma_{tot} = 95.35 \pm 1.34 $ mb at $7$ TeV and $\sigma_{tot} = 96.07 \pm 1.34 $ mb at $8$ TeV \cite{ATL7,ATL8}. All the above five data sets are analyzed below simultaneously. The results of the analysis are listed in Table 12. In the first row of Table 12, the result with only the statistical errors and with the additional normalization coefficients fixed to unity is shown. In the second row, the results of the same fitting procedure are shown, but with the statistical and systematic errors combined in quadrature ($\sigma^2 = \sigma_{st.}^2+\sigma_{syst.}^2$). Comparing these two results, it can be seen that the parameters of interest are practically the same, despite the enormous difference in the overall $\chi^2$. The total cross sections coincide with the ATLAS measurements. If the statistical errors are considered alone, but the extra normalization coefficients are included, the total $\chi^2$ decreases with respect to the first case (Table 12, third row). The normalized TOTEM data lie above the ATLAS results. Note that this result is also obtained within the framework of the high-energy general structure (HEGS) model \cite{HEGS1,Diff16}. Here, a simple model parametrization of the hadronic amplitude is used. Different forms of the amplitude, and their dependence on energy and four-momentum transfer, should also be examined when setting up the fit procedure. It is observed that, when using the approach based on quadratically combined systematical and statistical errors, no new results are obtained. Contrary to this, including the systematic errors in the model fitting through extra normalization coefficients allows new results to be reached. The same conclusions were obtained above, in connection with the model testing using the simulated ``experimental'' data samples. \begin{table} \caption{Description of $d\sigma/dt$ at LHC energies. } \label{Table-12} \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|} \hline $\sum_{N} \chi^{2}$ (err.) & $h$ & $B_{1}$ & $B_{2}$ &$\rho$ &$\sigma_{tot}$ & $n_i$ \\ & & & & &$7$ {\small TeV}/$8$ {\small TeV} & T;A;|T;T;A \\ \hline & & & & & & \\ $48337$ ($\sigma_{st.}^2$) & $0.30$ & $0.55$ & $-0.39$ &$0._{b}$ &$95.3/98.2$ & $1.;1.;|1.;1.;1.$ \\ $421$ ($\sigma_{st.+syst.}^2$) & $0.30$ & $ 0.55$ & $-0.45$ &$0._{b}$& $95.1/98.0$ & $1.;1.;|1.;1.;1.$ \\ \hline & & & & & & \\ $1812$ ($7$ {\small TeV}) & $0.31$ & $0.58$ & $-0.26$ &$0._{b}$ &$96.7$ & $1.03;0.98;|$ \\ ($\sigma_{st.}^2$) ($8$ {\small TeV}) & & & & & $99.7$ & $1.06;1.06;0.94$ \\ & & & & & & \\ \hline \end{tabular} \end{center} \end{table} \section{Notes concerning additional normalization of data} Besides the standard use of systematic errors, either with the statistical and systematic errors added in quadrature or with additional normalization coefficients, other approaches have been recently introduced in error analysis \cite{un-pdf,un-sig}. In Ref.
\cite{un-sig}, the following expression was used for $\chi^2=\chi^2_{stat} +\chi^2_{scale}$: \begin{eqnarray} \chi^{2}_{stat} &=&\sum_{k=1}^{L} \sum_{i_{k}} \frac{ ( \omega_{k} \sigma_{inv,i_{k}} - \sigma_{inv} (C,{\cal{T}})_{i_{k}})^2 } {\omega^{2}_{k} \sigma^{2}_{i_{k}} }. \label{primer1} \end{eqnarray} The authors note: ``$ \sigma_{inv,i_{k}}$ is the $i_{k}$ data point for invariant cross section having total uncertainty $\sigma_{i_{k}}$, which is taken as the quadratic sum of statistical and systematical uncertainties of each data point if both are stated separately.'' All the experimental errors are, therefore, considered in the analysis. However, the authors state in addition: ``For each data set we allow a re-scaling by a constant factor $\omega_{k}$''. The size of the scale factor was chosen as ``the average size of the systematic uncertainties''. Unfortunately, such a procedure leads to double counting of the systematic errors. The authors use the normalization factor in the denominator when calculating the total error. However, the normalization factor $\omega_{k}$ centers around unity: when it is less than unity the total error decreases, and vice versa, when it is above unity the total error increases and the $\chi^2$ value tends to decrease. The additional term in $\chi^2$, expressed as \begin{eqnarray} \chi^{2}_{scale} &=&\sum_{k=1}^{L} \frac{ ( \omega_{k}-1)^2 }{ \sigma^{2}_{scale,k} }, \label{primer2} \end{eqnarray} is independent of the sign of the term $ (\omega_{k}-1)$, so the common-sign character of the systematic shifts is not captured. The approach to parameter fitting adopted in reference \cite{un-sig} is therefore flawed. \section{Conclusion} All experimental data are associated with finite systematical errors. Reliably determining their sizes is of essential importance, and great care should be exercised in evaluating them. Erroneous treatment of the systematic errors can lead to fundamentally faulty conclusions when extracting model parameters through a fit. Different approaches to addressing the systematic errors can lead to either right or wrong determination of the ``true'' model parameters, thereby influencing the choice of a valid ``true'' model. Complications in error calculation include the propagation of uncertainties and the treatment of systematic effects; conventional statistical analyses do not usually involve consistent methods for these, but only {\it ad hoc} prescriptions \cite{DAug1}. The present analysis shows that in model fitting, particularly in cases where the uncorrelated systematical errors exceed the statistical errors, additional normalization coefficients need to be introduced. In fact, when additional normalization coefficients are introduced in the fitting procedure, the $\chi^2$ values reached can end up being larger than with the quadrature method. However, the parameters of interest of the tested model will be closer to their ``true'' values, allowing better validation of the correct model description of the experimental data. \vspace*{0.5cm} {\bf Acknowledgments} {\it The authors would like to thank J.-R. Cudell for fruitful discussions concerning the paper.}
\section{Introduction} Melting of two-dimensional solids has been heavily discussed since it was proven that long-wavelength thermal fluctuations prevent long-range positional order in 2D systems~\cite{Strandburg1988,Dash1999,Grasser2009}. A significant attempt to reach a general description of 2D melting was the Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) theory, which predicts a new phase of matter, i.e., the hexatic phase, which has quasi-long-range orientational order and short-range positional order. The hexatic phase is predicted to be interposed between the usual solid and fluid phases. Therefore, melting in 2D was predicted to undergo two continuous transitions, from solid to hexatic and from hexatic to fluid, respectively~\cite{kt,hn2,hn,young,bkt}. However, the KTHNY theory does not rule out a first-order transition due to other effects~\cite{binder2002}, e.g., grain-boundary induced melting~\cite{chiu1982,saito1982}. In arguably the simplest benchmarking model system in 2D, i.e., the system of monodisperse hard disks, the physics of the melting transition has long been debated~\cite{zahn1999,karn2000,han2008,RICE20091,murray1987,marcus1996,maret2004,keim2007,stuart2008}. It was recently settled that the melting of solids in the monodisperse hard-disk system undergoes a two-stage process consisting of a continuous solid-hexatic transition followed by a first-order hexatic-fluid transition~\cite{hdprl,hdpre}, and the shape and softness of particles also play important roles in 2D melting~\cite{krauth2015,glotzer2017,massimo2018}. It was found that pinning a small fraction of particles can change the melting transition in hard-disk systems significantly~\cite{lowen2013,weikai2015}. Moreover, simulations of binary hard-disk mixtures showed that the presence of tiny amounts of small particles can eliminate the stability of the hexatic phase~\cite{russo2017}. These findings highlight that the melting transition in 2D is subtle. A recent experiment with colloidal hard spheres in 2D~\cite{dullens2017} appears to support the two-stage melting found in simulation~\cite{hdprl,hdpre}. However, most experimental systems possess a certain degree of (continuous) particle size polydispersity, and polydisperse hard disks have also been widely employed as a model system to investigate the glass transition~\cite{tanaka2007,tanaka2011,tanaka2015}. Yet the effect of polydispersity on the nature of phase transitions in 2D remains unknown. To this end, we investigate the equilibrium phase behaviour of a 2D polydisperse hard-disk system (PHDS) with Gaussian-like particle size polydispersity. We find that with increasing polydispersity, the first-order hexatic-fluid transition becomes weaker and eventually becomes continuous, following the KTHNY scenario, in PHDS with around $7\%$ size polydispersity. Concurrently, the packing fraction range for the stable hexatic phase increases significantly, by one order of magnitude compared to that in monodisperse hard-disk systems. More surprisingly, in PHDS with slightly higher polydispersity, we observe re-entrant solid-hexatic and hexatic-fluid transitions at high density, which were proven impossible in 3D polydisperse hard-sphere systems~\cite{sollich2003prl}.
\begin{figure*} \centering \includegraphics[width=\textwidth]{FIG1.pdf} \caption{{\textbf{Phase behaviour of polydisperse hard disks.}} (a): Equation of state (EOS) for polydisperse hard disk systems (PHDS) with polydispersity parameters $\nu/\sigma_0 = 0.005$ to $0.08$ in the representation of $(P-P^*)\sigma_0^2/ k_B T$ vs $(\rho^{-1} - \rho_{hex}^{-1})\sigma_0^{-2}$, where $P^*$ and $\rho_{hex}$ are the pressure and density of the hexatic phase at the fluid-hexatic transition, respectively, and the solid lines are fits of the EOS using 5th-order polynomials. (b): $\left | \langle \Psi_6 \rangle \right |$ as a function of $(\rho^{-1} - \rho_{hex}^{-1})\sigma_0^{-2}$ for systems with $\nu/\sigma_0 = 0.07$ and 0.08. $P$ and $\rho=N/V$ are the pressure of the system and the density of particles in the system, respectively. (c): EOSs for PHDS of various numbers of particles $N = 64^2 \sim 512^2$ at $\nu/\sigma_0 = 0.07$. The solid symbols are the obtained fluid-hexatic transition points. (d): Phase diagram of the PHDS in the representation of $\rho \sigma_0^2$ and $s/\langle \sigma \rangle$, in which the dashed lines are the interpolated phase boundaries for re-entrant melting of solid and hexatic phases. The phase boundaries are obtained from $NVT-\Delta \mu$ MC simulations for PHDS with $\nu/\sigma_0 = 0.005$ to 0.0835. Inset: the enlarged view of the region of the phase diagram at $0 \le s / \langle \sigma \rangle \le 0.02$. The dotted lines connect the re-entrant transitions at $\nu/\sigma_0 = 0.08,0.0805,0.081$, $0.082, 0.083$, and 0.0835 from left to right.} \label{fig1} \end{figure*} \section{Results} \subsection{Model} To model the effect of polydispersity, we consider a 2D system of volume $V$ containing $N$ polydisperse hard disks based on the semigrand canonical ensemble, in which the chemical potential difference between particles of different sizes is fixed~\cite{kofke1988,bolhuis1996,kofke1999,frenkel2004}. In this work, we use the following function for the distribution of the chemical potential difference \begin{equation}\label{eq1} \Delta \mu (\sigma) = -k_B T \frac{(\sigma - \sigma_0)^2}{2 \nu^2}, \end{equation} where $\sigma$ is the particle diameter, ranging from 0 to $\infty$, and $k_B$ and $T$ are the Boltzmann constant and the temperature of the system, respectively. $\nu$ is the polydispersity parameter, and in the ideal gas limit Eq.~(\ref{eq1}) gives a Gaussian-like particle size distribution centered around $\sigma_0$ with the standard deviation $\nu$. This models PHDS in equilibrium with a dilute reservoir having a Gaussian-like particle size distribution, e.g., the very top region in sedimentation experiments~\cite{dullens2017}. \begin{figure}[tb] \centering \includegraphics[width=0.4\textwidth]{FIG2.pdf} \caption{ \textbf{Stabilization of hexatic phase.} (a): The fractions of topological defects and dislocations in the hexatic phase as functions of $(\rho - \rho_{hex})\sigma_0^2$ for systems of various polydispersity parameters $\nu$. Here $\rho$ and $\rho_{hex}$ are the density of the system and the lowest density of the stable hexatic phase, respectively. The vertical dashed lines indicate the hexatic-solid transition points. (b): Low-pressure phase diagram of polydisperse hard disks in the representation of $\phi$ vs $s/\langle \sigma \rangle$, where $\phi$ is the packing fraction of the system with $\sigma_i$ the diameter of particle $i$.
The state points obtained from simulations at each $\nu/\sigma_0$ from 0.005 to 0.08 are shown as symbols, which are color coded with $\langle \sigma \rangle / \sigma_0$. The error bars are smaller than the symbols. } \label{fig2} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{FIG3.pdf} \caption{ \textbf{Re-entrant melting transitions.} (a,b): The pair correlation function $g(x,0)-1$ along the major axis of the system (a) and the six-fold orientational correlation function $g_6(r)$ (b) for polydisperse hard-disk systems (PHDS) with $\nu/\sigma_0 = 0.08$ at various densities. (c): $\left |\langle \Psi_6 \rangle \right| $ and $s/\langle \sigma \rangle$ as functions of density $\rho \sigma_0^2$ for PHDS with $\nu/\sigma_0 = 0.082$, 0.083, and 0.0835. (d): Contour plot of the probability density distribution of particle size $\sigma/\sigma_0$ for systems of different densities at $\nu/\sigma_0 = 0.0835$. (e): Mean square displacement $\langle \Delta r^2 \rangle / \sigma_0^2$ in PHDS of $\nu/\sigma_0 = 0.0835$ at densities $\rho \sigma_0^2 = 1.592$, 2.292, and 3.565 as marked in the inset, where $\tau = \sigma_0 \sqrt{m/k_BT}$ is the time unit of the molecular dynamics simulations with $m$ the mass of the particles. Inset: the diffusion coefficient $D$ as a function of density in PHDS with $\nu/\sigma_0 = 0.0835$. } \label{fig3} \end{figure*} \subsection{Phase diagram} By performing Monte Carlo (MC) simulations, we calculate the equation of state (EOS) for a system of $256^2=65,536$ hard disks with various $\nu$ from 0.005 to $0.08\sigma_0$. As shown in Fig.~\ref{fig1}a, when the polydispersity parameter is small, i.e., $\nu/\sigma_0 \le 0.05$, there is clearly a Mayer-Wood loop in the EOS~\cite{mayerloop}, implying a first-order transition from the fluid as the density of the system increases. Similar to the monodisperse hard-disk system, i.e., $\nu = 0$, the phase coexisting with the fluid in PHDS is a hexatic phase~\cite{hdprl,hdpre,frenkel2004}. To characterize the structural difference between the fluid and hexatic phases, we calculate the six-fold bond-orientational order parameter $\langle \Psi_6 \rangle = \left \langle \frac{1}{N} \sum_{k=1}^N \psi_6(\mathbf{r}_k) \right \rangle$ with $\psi_6(\mathbf{r}_k) = \frac{1}{N_k} \sum_{j=1}^{N_k} \exp(i6 \theta_{kj})$, where $\theta_{kj}$ is the angle between the vector connecting particle $k$ with its neighbour $j$ and a chosen fixed reference vector, and $N_k$ is the number of first neighbours of particle $k$ based on the Voronoi tessellation of the system. As shown in Fig.~\ref{fig1}a, the density difference between the coexisting hexatic and fluid phases becomes smaller with increasing $\nu$, and when $\nu/\sigma_0 \ge 0.07$, the density jump from fluid to hexatic phase disappears, while $\left |\langle \Psi_6 \rangle \right|$ increases significantly at $\rho_{hex}$ (Fig.~\ref{fig1}b). Here we note that $\left | \langle \Psi_6 \rangle \right |$ in an infinitely large system should be zero for the hexatic phase, while in our large but finite system it can be a positive number indicating the existence of some orientational order. However, we do not use $\left | \langle \Psi_6 \rangle \right |$ to identify the hexatic phase; instead, we always confirm its existence by checking the change of the correlation functions. 
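To make this construction concrete, the following minimal Python sketch (an illustration added for this presentation, not the authors' analysis code) computes the per-particle $\psi_6$ from a set of positions, taking first neighbours from a scipy Voronoi tessellation; it assumes open (non-periodic) boundaries and uses the $x$-axis as the fixed reference vector.
\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi

def psi6(points):
    # points: (N, 2) array of disk centres; neighbours are pairs of
    # particles sharing a Voronoi edge (boundary particles are
    # inaccurate without periodic images).
    vor = Voronoi(points)
    psi = np.zeros(len(points), dtype=complex)
    n_neigh = np.zeros(len(points), dtype=int)
    for k, j in vor.ridge_points:
        dx, dy = points[j] - points[k]
        theta = np.arctan2(dy, dx)
        # exp(6i(theta + pi)) = exp(6i theta), so both bond
        # directions contribute the same phase
        psi[k] += np.exp(6j * theta)
        psi[j] += np.exp(6j * theta)
        n_neigh[k] += 1
        n_neigh[j] += 1
    return psi / np.maximum(n_neigh, 1)

# |<Psi_6>| for a random (disordered) configuration -- close to zero
rng = np.random.default_rng(0)
print(abs(psi6(rng.random((4096, 2)) * 64.0).mean()))
\end{verbatim}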
This suggests that the first-order fluid-hexatic transition becomes weaker with increasing $\nu$ and becomes continuous at high polydispersity, i.e., $\nu/\sigma_0 \ge 0.07$, following the celebrated KTHNY scenario~\cite{kt,hn2,hn,young}. This is, to some extent, similar to the melting of soft spheres~\cite{krauth2015}: with increasing particle size polydispersity, the distribution of distances between nearest neighbours becomes wider, which resembles introducing a ``soft'' repulsion between particles, and in the melting of soft spheres in 2D it has been found that the transition type can switch from first order to the continuous KTHNY scenario with increasing ``softness'' of the potential~\cite{krauth2015}. \subsection{Finite size effect on the melting in polydisperse hard disks} It was shown that finite-size effects are important in studying 2D melting~\cite{hdprl,schilling2011}. Therefore, we perform MC simulations for PHDS of various numbers of particles from $N=64^2$ to $512^2$ at $\nu/\sigma_0 = 0.07$, and the EOSs for different system sizes are shown in Fig.~\ref{fig1}c. When the system size is small, i.e., $N=64^2=4096$, there is a pronounced Mayer-Wood loop in the EOS, signalling a first-order fluid-hexatic transition consistent with Ref.~\cite{frenkel2004}. Interestingly, with increasing system size, the first-order fluid-hexatic transition becomes weaker and becomes continuous at $N \ge 256^2$. Further increasing $N$ does not change the EOS significantly, which suggests that finite-size effects are negligible in our system of $N = 256^2$ particles. The change from a first-order-like transition to a continuous one with increasing system size can be interpreted as follows. A finite 2D system with periodic boundary conditions can be seen as a system wrapped on a torus, and this increases the cooperation between neighbouring particles as a result of extra ``communication'' via paths encircling the torus, which was shown to be able to change the transition type from continuous to first-order-like in small systems~\cite{fisher1969}. By contrast, this effect does not appear in systems with open free boundaries, although in real experiments the open boundary walls can induce order in the fluid. Moreover, for $\nu/\sigma_0 < 0.07$, we verify that most of the coexisting densities do not change significantly upon further increasing the system size to $N=512^2$; the only exception is the system at $\nu/\sigma_0 = 0.05$ (see Supplementary Figure 1), for which the exact boundary is out of reach with our present computational capability. \subsection{Stabilization of the hexatic phase} With increasing the density of the system, the hexatic phase solidifies, and the hexatic-solid transition point can be obtained by checking the pair correlation function $g(x,0) - 1$ along the major axis of the system, which switches from an exponential decay to a power-law decay upon solidification~\cite{hdprl}. The resulting phase diagram is plotted in the representation of $s/\langle \sigma \rangle$ vs $\rho \sigma_0^2$ in Fig.~\ref{fig1}d, where $s = \sqrt{\langle \sigma^2 \rangle - \langle \sigma \rangle^2 }$ is the actual particle size polydispersity in the system. One can see that with increasing the polydispersity of the system, the density range of the stable hexatic phase increases substantially for $s/\langle \sigma \rangle$ above 0.07. 
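As an illustration of this criterion (a schematic sketch with synthetic data, not the analysis pipeline used in the paper), one can compare least-squares fits of the peak heights $h(x) = g(x,0)-1$ in semi-logarithmic and double-logarithmic coordinates, accepting the power law as evidence of a solid only when the fitted exponent satisfies the KTHNY bound $\alpha \le 1/3$:
\begin{verbatim}
import numpy as np

def classify_decay(x, h, alpha_max=1/3):
    # x: peak positions; h: positive peak heights of g(x,0) - 1
    logh = np.log(h)
    pe = np.polyfit(x, logh, 1)           # exponential: log h ~ -x/xi
    pp = np.polyfit(np.log(x), logh, 1)   # power law: log h ~ -alpha log x
    res_e = np.sum((np.polyval(pe, x) - logh) ** 2)
    res_p = np.sum((np.polyval(pp, np.log(x)) - logh) ** 2)
    alpha = -pp[0]
    if res_p < res_e and alpha <= alpha_max:
        return f"solid (power law, alpha = {alpha:.3f})"
    return "hexatic or fluid (exponential decay)"

x = np.linspace(2.0, 60.0, 50)
print(classify_decay(x, 0.9 * x ** -0.25))        # -> solid
print(classify_decay(x, 0.9 * np.exp(-x / 8.0)))  # -> hexatic or fluid
\end{verbatim}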
To understand the physics behind this enhanced stability of the hexatic phase in PHDS, we calculate the fraction of topological defects in the system using the method of Ref.~\cite{glotzer2017}. Particle $k$ is identified as a topological defect if its disclination charge $q_k = N_k - 6 \ne 0$. These topological defects form clusters, which can be classified based on their total disclination charge and Burgers vector~\cite{glotzer2017}. Defect clusters having nonzero Burgers vectors and zero disclination charges are called dislocations, and the dislocations as well as the defect clusters with nonzero disclination charges can destroy the bond-orientational order in 2D solids. Although the total amount of topological defects is not directly related to the stability of the hexatic phase, the presence of dislocations along with few defect clusters of nonzero disclination charge is evidence for the hexatic phase~\cite{glotzer2017}. In the hexatic phase found in our simulations, the fraction of particles in defect clusters with nonzero disclination charge is much smaller, i.e., more than one order of magnitude smaller, than that of dislocations, although the trends of these two types of defects with density are very similar (see Supplementary Figure 2). The calculated fractions of topological defects and dislocations in our simulations with various $\nu/\sigma_0$ are plotted as functions of $(\rho - \rho_{hex})\sigma_0^2$ in Fig.~\ref{fig2}a, where $\rho_{hex}$ is the lowest density of the stable hexatic phase, and the vertical dashed lines locate the hexatic-solid transitions. One can see that the system typically solidifies when the fraction of dislocations decreases to below about $1 \sim 1.5\%$. When the polydispersity is small, i.e., $\nu/\sigma_0 \le 0.05$, the density dependence of the fractions of both defects and dislocations does not change significantly with increasing $\nu$, and the density ranges of the stable hexatic phase below the dashed lines in Fig.~\ref{fig2}a are almost the same. However, at $\nu/\sigma_0 \ge 0.07$, the fraction of topological defects increases to around $10\%$ at $\rho_{hex}$, and the fraction of dislocations increases to about $4.5\%$ at $\rho_{hex}$. With increasing density at $\nu/\sigma_0 = 0.07$ and 0.08, the fraction of dislocations drops below the threshold for solidification at much higher density compared with the systems of $\nu/\sigma_0 \le 0.05$. The fraction of defect clusters with nonzero disclination charges changes similarly in systems of different polydispersity (Supplementary Figure 2). This suggests that the size polydispersity of hard disks creates more topological defects, which could subsequently increase the fraction of dislocations as well as of the defect clusters with nonzero disclination charges in the system. This destroys the quasi-long-range positional correlation in the solid, driving the formation of the hexatic phase. As the chemical potential difference $\Delta \mu (\sigma)$ is fixed in our simulations, the average particle size in the system decreases with increasing density. Therefore, the packing fraction of the system $\phi = \frac{\pi}{4} \langle \sum_{i} \sigma_i^2/V \rangle$ does not increase linearly with density, while the actual particle size distribution remains very close to a Gaussian-like distribution centered around $\langle\sigma\rangle$ (see Supplementary Figure 3). 
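A minimal sketch of the first step of this analysis (counting disclination charges from the Voronoi coordination number; the full classification of clusters by total charge and Burgers vector in Ref.~\cite{glotzer2017} is beyond this illustration) could look as follows:
\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi

def defect_fraction(points):
    # Fraction of particles with disclination charge q_k = N_k - 6 != 0,
    # where N_k is the Voronoi coordination number. Boundary particles
    # are miscounted unless periodic images are added.
    vor = Voronoi(points)
    n_neigh = np.zeros(len(points), dtype=int)
    for k, j in vor.ridge_points:
        n_neigh[k] += 1
        n_neigh[j] += 1
    return np.mean((n_neigh - 6) != 0)
\end{verbatim}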
At very high density, when the polydispersity is high enough, e.g., $\nu/\sigma_0 \simeq 0.08$, the packing fraction of the system can even decrease with increasing density in the $\mu VT$ ensemble, which differs from the behaviour in the $NVT$ ensemble~\cite{santen200}. This decreasing packing fraction at high density signals a re-entrant melting transition, which we explain later. In experiments, one of the most relevant parameters is the packing fraction of the system. Thus we plot the low-pressure phase diagram of PHDS in the representation of $\phi$ vs $s/\langle \sigma \rangle$ in Fig.~\ref{fig2}b. Here ``low pressure'' means the range of pressures close to the fluid$\rightarrow$hexatic and hexatic$\rightarrow$solid transitions below the re-entrant transitions. One can see that in systems of larger size polydispersity, the average particle size is smaller, and the actual packing fraction range of the stable hexatic phase increases from $0.2 \sim 0.3\%$ in the monodisperse hard-disk system to about $2 \sim 3\%$ in the PHDS with $s/\langle \sigma \rangle \simeq 0.07$. Moreover, at fixed $\nu$, increasing the density of the system along the fluid-hexatic and hexatic-solid transitions decreases the actual polydispersity $s/\langle \sigma \rangle$ in the system, which is due to the formation of more ordered structures suppressing the particle size fluctuations in the system. \subsection{Re-entrant melting} Another interesting feature of the phase diagram in Fig.~\ref{fig1}d is that when the size polydispersity of hard disks is around 0.08 to 0.1, increasing the density at very high density triggers re-entrant melting transitions from solid to hexatic and from hexatic to fluid. As shown in Fig.~\ref{fig3}a, for PHDS with $\nu/\sigma_0 = 0.08$, with increasing the density from $\rho\sigma_0^2 = 1.6$ to 1.6424, the system transforms from a hexatic phase with short-range positional correlations, i.e., an exponential decay $g(x,0)-1 \sim \exp(-x)$, to a solid with quasi-long-range positional order, i.e., a power-law decay $g(x,0)-1 \sim x^{-\alpha}$ with $\alpha \le 1/3$, while the six-fold orientational correlation function $g_6(r) = \langle \psi_6^*(\mathbf{r}'+\mathbf{r})\psi_6(\mathbf{r}')\rangle$~\cite{weikai2014} remains almost the same (Fig.~\ref{fig3}b). At much higher density, upon increasing the density from $\rho\sigma_0^2 = 3.64$ to 3.67, $g(x,0)-1$ changes back from a power-law decay to an exponential decay with almost the same $g_6(r)$ (Fig.~\ref{fig3}b), which suggests a re-entrant transition from solid to hexatic phase. For hard-disk systems with higher polydispersity, i.e., $\nu/\sigma_0 = 0.082 \sim 0.0835$, we plot $\left | \langle \Psi_6 \rangle \right |$ as a function of $\rho \sigma_0^2 $ in Fig.~\ref{fig3}c. One can see that with increasing $\rho$, $\left | \langle \Psi_6 \rangle \right |$ first increases from 0 to around 0.8, indicating the formation of an ordered phase, and further increasing $\rho$ leads to a drop of $\left | \langle \Psi_6 \rangle \right |$ back to 0, implying a re-entrant melting transition into a disordered phase. By checking the scaling of $g(x,0)-1$ in the system, we confirm that the ordered phase formed is a hexatic phase. To understand the mechanism behind the re-entrant melting of the hexatic phase, we plot $s/\langle \sigma \rangle$ as a function of $\rho \sigma_0^2$ in the lower panel of Fig.~\ref{fig3}c, from which one can see a clear correlation between the changes of $\left | \langle \Psi_6 \rangle \right |$ and $s/\langle \sigma \rangle$. 
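For reference, a naive $O(N^2)$ estimator of $g_6(r)$ from the per-particle $\psi_6$ values (a sketch added here for illustration; a production analysis would use cell lists and periodic minimum-image distances) is:
\begin{verbatim}
import numpy as np

def g6_of_r(points, psi, r_max, nbins=100):
    # Bond-orientational correlation g6(r) = <psi6*(r') psi6(r'+r)>,
    # binned over all pair separations below r_max.
    edges = np.linspace(0.0, r_max, nbins + 1)
    acc, cnt = np.zeros(nbins), np.zeros(nbins)
    for k in range(len(points) - 1):
        d = np.linalg.norm(points[k + 1:] - points[k], axis=1)
        c = np.real(psi[k + 1:] * np.conj(psi[k]))
        idx = np.searchsorted(edges, d) - 1
        ok = (idx >= 0) & (idx < nbins)
        np.add.at(acc, idx[ok], c[ok])
        np.add.at(cnt, idx[ok], 1.0)
    return 0.5 * (edges[1:] + edges[:-1]), acc / np.maximum(cnt, 1.0)
\end{verbatim}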
Here we ensure that our simulations at high density are well equilibrated and independent of the initial configurations (Supplementary Figure 5). In fact, it has been shown that equilibration in systems of continuously polydisperse particles is much faster than in monodisperse systems~\cite{ludovicprl2016,ludovicprx2017,ikeda2017}. At high density, during the decrease of $\left | \langle \Psi_6 \rangle \right |$, $s/\langle \sigma \rangle$ increases significantly, to values even higher than in the low-density fluid. This suggests that by increasing the density of the system, the standard deviation of the particle size increases, and the combinatorial entropy associated with the variation of particle size increases, which stabilizes amorphous structures against ordered ones in the system. Moreover, along with the decrease of $\left | \langle \Psi_6 \rangle \right |$ with density at high pressure, we do not observe any Mayer-Wood loop in the EOS (Supplementary Figure 4). This implies that the re-entrant melting of the hexatic phase at high density is continuous, similar to that in soft-particle systems~\cite{zuprl2016,ryzhov}. Furthermore, it was shown that strong particle size fractionation eliminates the possibility of re-entrant melting in 3D polydisperse hard-sphere solids~\cite{sollich2003prl}. Therefore, in Fig.~\ref{fig3}d, we plot the probability density distribution of particle size $p(\sigma)$ for PHDS of various densities with $\nu/\sigma_0 = 0.0835$. One can see that within the whole density range there is always a single peak in $p(\sigma)$ at fixed $\rho$, indicating no particle size fractionation in the system. This also suggests that re-entrant melting can indeed exist in 2D polydisperse particle systems if there is no particle size fractionation~\cite{bartlett1999}. Here we note that although all our simulations starting with different particle size distributions converge to the same result at fixed polydispersity and density, there still exists a possibility that the system may possess an equilibrium multimodal particle size distribution that is not accessible with direct simulations~\cite{bartlett1999}. In addition, to check whether the amorphous phase formed at high density is a fluid or a glass, we perform event-driven molecular dynamics (EDMD) simulations starting from the equilibrated configurations obtained from our MC simulations at $\nu/\sigma_0 = 0.0835$. In the EDMD simulations, the system evolves via a time-ordered sequence of elastic collision events, following Newton's equations of motion. The particles move at constant velocities between collisions, and the velocities of the colliding particles are updated when a collision occurs. All collisions are elastic and preserve energy and momentum~\cite{rapaportmd}. The calculated mean square displacements (MSD) for various densities are shown in Fig.~\ref{fig3}e. One can see that not only does the diffusion coefficient increase by nearly two orders of magnitude along with the re-entrant melting of the hexatic phase (Fig.~\ref{fig3}e inset), but the plateau in the MSD also almost disappears. This suggests that the high-density amorphous phase is a diffusive fluid, and that in the equilibrium sedimentation of such a PHDS with Gaussian-like particle size distribution, there should be a ``floating hexatic phase'' sandwiched between two fluids. 
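For completeness, a minimal sketch of the MSD and diffusion-coefficient analysis (illustrative only; it measures displacements from a single time origin, whereas production analyses average over many time origins) is:
\begin{verbatim}
import numpy as np

def msd(traj):
    # traj: (T, N, 2) array of unwrapped positions sampled at
    # uniform time intervals; returns MSD(t) relative to frame 0
    disp = traj - traj[0]
    return np.mean(np.sum(disp ** 2, axis=-1), axis=1)

def diffusion_coefficient(t, m):
    # 2D Einstein relation MSD = 4 D t, fitted over the diffusive
    # (late-time) half of the data
    half = len(t) // 2
    return np.polyfit(t[half:], m[half:], 1)[0] / 4.0
\end{verbatim}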
For PHDS with $\nu/\sigma_0 \ge 0.084$, we do not observe any ordered phase with increasing density in our simulations. \section{Discussion} By performing large-scale computer simulations, we investigate the effect of particle size polydispersity on the phase behaviour of hard-disk systems. We find that with increasing the size polydispersity of hard disks from zero, the first-order melting transition from the hexatic phase to the fluid becomes weaker and becomes fully continuous at $\nu/\sigma_0 \ge 0.07$, following the celebrated KTHNY scenario. Simultaneously, the density range of the stable hexatic phase is substantially enlarged. Compared with monodisperse hard-disk systems, where the hexatic phase is only stable in a very small range of packing fractions of $0.2 \sim 0.3\%$~\cite{hdprl}, the packing fraction range of the stable hexatic phase in PHDS with $s/\langle \sigma \rangle \simeq 0.07$ is about $2 \sim 3\%$. This suggests new directions in searching for the hexatic phase in 2D polydisperse particle systems. More interestingly, in PHDS with even higher polydispersity, i.e., $0.08 \le \nu/\sigma_0 \lesssim 0.0835$, we find that at very high density, increasing the density of the system can trigger re-entrant transitions from solid to hexatic and from hexatic to fluid phases, depending on $\nu$. In polydisperse systems, re-entrant transitions, especially re-entrant melting, were originally predicted theoretically for 3D hard-sphere crystals~\cite{bartlett1999}, but proven impossible due to the strong fractionation that was not taken into account in the theory~\cite{sollich2003prl}. We find that the absence of strong fractionation in 2D polydisperse systems can indeed allow re-entrant transitions at high density. This reveals a new difference between phase transitions in polydisperse hard-particle systems in 2D and 3D, and suggests a new direction for investigating phase transitions in 2D systems by considering the particle size polydispersity, which was overlooked in the past. \section{Methods} We perform Monte Carlo simulations in the semigrand canonical ensemble ($NVT-\Delta \mu$) using a square simulation box with periodic boundary conditions in both directions, in which each particle can randomly change its diameter $\sigma$ under the control of Eq.~(\ref{eq1}). To accelerate the equilibration and sampling, we implement the rejection-free event-chain MC algorithm, with which the pressure $P$ in the system can be calculated from the mean excess chain displacement~\cite{manon2013} \begin{equation} P = k_B T \rho \left \langle \frac{x_{final} - x_{initial}}{L_c}\right \rangle_{chains}, \end{equation} where $x_{initial}$ and $x_{final}$ are the initial and final positions of each event chain along the direction of the chain, respectively, $L_c$ and $\rho$ are the chain length and the number density of particles, respectively, and $\left \langle \cdot \right \rangle_{chains}$ denotes the average over all event chains. For each simulation, we perform about $10^8 - 10^9$ MC sweeps, where each MC sweep on average consists of 1 event chain with chain length $L_c = 100\sigma_0$ and 500 trials of randomly changing the diameter of a randomly selected particle. \section{Author Contributions} R.N. conceived the research. P.S.R. performed the event-chain Monte Carlo simulations. Q.-L.L. performed the event-driven molecular dynamics simulations. All authors analysed the results and wrote the manuscript. \section{Competing interests} The authors declare no competing interests. 
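As a minimal illustration of the pressure estimator quoted in the Methods section above (a sketch of the bookkeeping only, not the authors' production code; the event-chain moves themselves are omitted):
\begin{verbatim}
def ecmc_pressure(chain_displacements, rho, L_c, kT=1.0):
    # P = kT * rho * < (x_final - x_initial) / L_c >_chains, where each
    # entry of chain_displacements is x_final - x_initial for one chain,
    # measured along the direction of that chain.
    mean_excess = sum(chain_displacements) / len(chain_displacements)
    return kT * rho * mean_excess / L_c
\end{verbatim}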
\section{Code availability} Codes used for the numerical simulations are available on request from R. Ni at Nanyang Technological University, Singapore. \section{Data availability} All data that support the findings of this study are available from the corresponding author on reasonable request. \begin{acknowledgments} We thank Dr. Saurish Chakrabarty for helpful discussions. This work is supported by the Nanyang Technological University Start-Up Grant (NTU-SUG: M4081781.120), the Academic Research Fund from the Singapore Ministry of Education (M4011616.120 and M4011873.120), and the Advanced Manufacturing and Engineering Young Individual Research Grant (A1784C0018) by the Science and Engineering Research Council of the Agency for Science, Technology and Research, Singapore. We are grateful to the National Supercomputing Centre (NSCC) of Singapore for supporting the numerical calculations. \end{acknowledgments}
{ "timestamp": "2019-05-29T02:09:02", "yymm": "1804", "arxiv_id": "1804.05582", "language": "en", "url": "https://arxiv.org/abs/1804.05582" }
\section{Introduction} With machine learning algorithms increasingly being deployed in real-world settings, it is crucial that we understand how these algorithms can interact and the dynamics that can arise from their interactions. In recent years, there has been a resurgence of research on multi-agent learning and learning in games. The recent interest in adversarial learning techniques also serves to show how game-theoretic tools can be used to \emph{robustify} and improve the performance of machine learning algorithms. Despite this activity, however, machine learning algorithms are still being treated as black-box approaches and na\"{i}vely deployed in settings where other algorithms are actively changing the environment. In general, outside of highly structured settings, there exist no guarantees on the performance or limiting behaviors of learning algorithms in such settings. Indeed, previous work on understanding the collective behavior of coupled learning algorithms, in either competitive or cooperative settings, has mainly looked at games where the global structure is well understood, such as bilinear games~\cite{GradDyn,hommes:2012aa,mertikopoulos:2018aa,leslie:2005aa}, convex games~\cite{Mertikopoulos2019,Rosen1965}, or potential games~\cite{monderer:1996aa}, among many others. Such games are more conducive to the statement of global convergence guarantees since the assumed global structure can be exploited. In games with fewer assumptions on the players' costs, however, there is still a lack of understanding of the dynamics and limiting behaviors of learning algorithms. Such settings are becoming increasingly prevalent as deep learning is increasingly used in game-theoretic settings~\cite{goodfellow:2014aa,foerster:2017aa,abdallah:2008aa,zhang:2010aa}. Gradient-based learning algorithms are extremely popular in a variety of these multi-agent settings due to their versatility, ease of implementation, and dependence on only local information. There are numerous recent papers in multi-agent reinforcement learning that employ gradient-based methods (see, e.g., \cite{abdallah:2008aa, foerster:2017aa,zhang:2010aa}), yet even within this well-studied class of learning algorithms, a thorough understanding of their convergence and limiting behaviors in general continuous games is still lacking. Generally speaking, in both the game theory and machine learning communities, two of the central questions when analyzing the dynamics of learning in games are the following: \begin{description}[leftmargin=20pt] \item[Q1.] \emph{Are all attractors of the learning algorithms employed by agents equilibria relevant to the underlying game?} \item[Q2.] \emph{Are all equilibria relevant to the game also attractors of the learning algorithms agents employ?} \end{description} In this paper, we provide some answers to the above questions for the class of gradient-based learning algorithms by analyzing their limiting behavior in general continuous games. In particular, we leverage the continuous-time limit of the naturally discrete-time multi-agent learning algorithms. This allows us to draw on the extensive theory of dynamical systems and stochastic approximation to make statements about the limiting behaviors of these algorithms in both deterministic and stochastic settings. The latter is particularly relevant since it is common for stochastic gradient methods to be used in multi-agent machine learning contexts. 
Analyzing gradient-based algorithms through the lens of dynamical systems theory has recently yielded new insights into their behavior in the classical optimization setting~\cite{Wilson2016,Bach,lee:2016aa}. We show that a similar type of analysis can also help understand the limiting behaviors of gradient-based algorithms in games. We remark, however, that there is a \emph{fundamental difference} between the dynamics analyzed in much of the single-agent, gradient-based learning and optimization literature and the ones we analyze in the competitive multi-agent case: the combined dynamics of gradient-based learning schemes in games \emph{do not necessarily correspond to a gradient flow}. This may seem a subtle point, but it turns out to be extremely important. Gradient flows admit desirable convergence guarantees---e.g., almost sure convergence to local minimizers---due to the fact that they preclude flows with the \emph{worst geometries}~\cite{pemantle:2007aa}. In particular, they do not exhibit non-equilibrium limiting behavior such as periodic orbits. Gradient-based learning in games, on the other hand, does not preclude such behavior. Moreover, as we show, asymmetry in the dynamics of gradient-play in games can lead to surprising behaviors, such as game-irrelevant limit points being attracting under the flow of the game dynamics, and relevant limit points---such as a subset of the Nash equilibria---being almost surely avoided. \subsection{Related Work} The study of continuous games is quite extensive (see, e.g., \cite{BasarOlsder, osborne:1994aa}), though in large part the focus has been on games admitting a fair amount of structure. The behavior of learning algorithms in games is also well studied (see, e.g., \cite{fudenberg:1998aa}). In this section, we comment on the most relevant prior work and defer a more comprehensive discussion of our results in the context of prior work to Section~\ref{sec:discussion}. As noted above, previous work on learning in games, in both the game theory literature and more recently the machine learning community, has largely focused on addressing (\textbf{Q1}) whether all attractors of the learning dynamics are game-relevant equilibria, and (\textbf{Q2}) whether all game-relevant equilibria are also attractors of the learning dynamics. The primary type of game-relevant equilibrium considered in the investigation of these two questions is the Nash equilibrium. The majority of the existing work has focused on \textbf{Q1}. In fact, a large body of prior work focuses on games with structures that preclude the existence of non-Nash equilibria. Consequently, answering \textbf{Q1} reduces to analyzing the convergence of various learning algorithms (including gradient-play) to the unique Nash equilibrium or the set of Nash equilibria. This is often shown by exploiting the game structure. Examples of classes of games where gradient-play has been well studied are potential games~\cite{monderer:1996aa}, concave or monotone games~\cite{Rosen1965,bravo2018bandit,Mertikopoulos2019}, and two-player finite-action bilinear games, where gradient-play over the space of stochastic policies has been analyzed~\cite{GradDyn}. In the latter setting, other gradient-like algorithms such as multiplicative weights have also been studied fairly extensively~\cite{hommes:2012aa} and have been shown to converge to cycling behaviors. Some works have also attempted to address \textbf{Q1} in the context of gradient-play in two-player zero-sum games. 
Concurrently with this paper, it was shown that for a general class of ``sufficiently smooth'' two-player zero-sum games there exist stationary points of gradient-play that are non-Nash \cite{Daskalakis}\footnote{This paper was under review at the time that \cite{Daskalakis} became publicly available. Our results show the existence of these non-Nash equilibria and attracting cycles in both general-sum and zero-sum games.}. In such games, it has also been shown that gradient-play can converge to cycles (see, e.g.,~\cite{mertikopoulos:2018aa, Wesson2016, hommes:2012aa}). There is also related work in more general games on the analysis of when Nash equilibria are attracting for gradient-based approaches (i.e., \textbf{Q2}). Sufficient conditions for this to occur are the conditions for stable differential Nash equilibria introduced in \cite{ratliff:2013aa,ratliff:2014aa,ratliff:2016aa} and the condition for variational stability later analyzed in \cite{Mertikopoulos2019}. We remark that these conditions are equivalent for the classes of games we consider. Neither of these lines of work gives conditions under which Nash equilibria are avoided by gradient-play or comments on other attracting behaviors. Expanding on this rich body of literature (only the most relevant of which is covered in our short review), in this paper we provide answers to \textbf{Q1}, without imposing structure on the game beyond regularity conditions on the cost functions, by exploiting the observation that gradient-based learning dynamics are not gradient flows. We also provide answers to \textbf{Q2} by demonstrating that a non-trivial set of games admit Nash equilibria that are almost surely avoided by gradient-play, and we give explicit conditions for when this occurs. Using similar analysis tools, we also provide new insights into the behavior of gradient-based learning in structured classes of games such as zero-sum and potential games. \subsection{Contributions and Organization} We present a general framework for modeling competitive gradient-based learning that applies to a broad swath of learning algorithms. In Section~\ref{sec:connections}, we draw connections between the limiting behavior of this class of algorithms and game-theoretic and dynamical systems notions of equilibria. In particular, we construct general-sum and zero-sum games that admit non-Nash attracting equilibria of the gradient dynamics. Such points are attracting under the learning dynamics, yet at least one player---\emph{and potentially all of them}---has a direction in which they could unilaterally deviate to decrease their cost. Thus, these non-Nash equilibria are of questionable game-theoretic relevance and can be seen as artifacts of the players' algorithms. In Section~\ref{sec:results}, we show that policy-gradient multi-agent reinforcement learning (MARL), generative adversarial networks (GANs), and gradient-based multi-agent multi-armed bandits, among several other common multi-agent learning settings, conform to this framework. The framework is amenable to tools for analysis from dynamical systems theory. Also in Section~\ref{sec:results}, we show that a subset of the local Nash equilibria in general-sum games and potential games is avoided almost surely when each player employs a gradient-based algorithm. 
We show that this holds in two broad settings: a full-information setting, in which each player has oracle access to their gradient but randomly initializes their first action, and a partial-information setting, in which each player has access to an unbiased estimate of their gradient. Thus, we provide a negative answer to both \textbf{Q1} and \textbf{Q2} for $n$--player general-sum games, and highlight the nuances present in zero-sum and potential games. We also show that the dynamics formed from the individual gradients of the agents' costs are \emph{not gradient flows}. This in turn implies that competitive gradient-based learning in general-sum games may converge to periodic orbits and other non-trivial limiting behaviors that arise in, e.g., chaotic systems. To support the theoretical results, we present empirical results in Section~\ref{sec:LQR} showing that policy-gradient algorithms avoid global Nash equilibria in a large number of linear quadratic (LQ) dynamic games, a benchmark for MARL. We conclude in Section~\ref{sec:discussion} with a discussion of the implications of our results, some links with prior work, and comments on future directions. \section{Preliminaries} \label{sec:prelims} Consider $n$ agents indexed by $\mathcal{I}=\{1, \ldots, n\}$. Each agent $i \in \mathcal{I}$ has their own decision variable $x_i \in X_i$, where $X_i$ is their finite-dimensional strategy space of dimension $m_i$. Define $X=X_1\times \cdots \times X_n$ to be the finite-dimensional joint strategy space with dimension $m=\sum_{i\in \mathcal{I}}m_i$. Each agent is endowed with a cost function $f_i\in C^s(X, \mathbb{R})$ with $s\geq 2$ such that $f_i:(\pxone{i},\pxone{-i})\mapsto f_i(x_i,x_{-i})$, where we use the notation $x=(x_i,x_{-i})$ to make explicit the dependence on the action $x_i$ of agent $i$ and the actions $\pxone{-i}=(\pxone{1}, \ldots, \pxone{i-1}, \pxone{i+1}, \ldots, \pxone{n})$ of all agents excluding agent $i$. The agents seek to minimize their own cost, but only have control over their own decision variable $x_i$. In this setup, the agents' costs are not necessarily aligned with one another, meaning the agents may be in competition. Given the game $\mathcal{G}=(f_1,\ldots, f_n)$, agents are assumed to update their strategies simultaneously according to a gradient-based learning algorithm of the form \begin{equation} x_{i,t+1}=x_{i,t}-\gamma_{i,t}h_i(x_{i,t}, x_{-i,t}), \label{eq:gradbasedlearn} \end{equation} where $\gamma_{i,t}$ is agent $i$'s step-size at iteration $t$. We analyze the following two settings: \begin{enumerate} \item Agents have \emph{oracle access} to the gradient of their cost with respect to their own choice variable---i.e.,~$h_i(x_{i,t},x_{-i,t})= D_if_i(x_{i,t},x_{-i,t})$, where $D_if_i\equiv \partial f_i/\partial x_i$ denotes the derivative of $f_i$ with respect to $x_i$. \item Agents have an \emph{unbiased estimator} of their gradient---i.e.,~$h_i(x_{i,t},x_{-i,t})=D_if_i(x_{i,t},x_{-i,t})+w_{i,t+1}$, where $\{w_{i,t}\}$ is a zero-mean, finite-variance stochastic process. \end{enumerate} We refer to the former setting as \emph{deterministic} gradient-based learning and to the latter as \emph{stochastic} gradient-based learning. Assuming that all agents employ such algorithms, we aim to analyze the limiting behavior of the agents' strategies. To do so, we leverage the following game-theoretic notion of a Nash equilibrium. 
\begin{definition} \label{def:SLNE} A strategy $x\in X$ is a {local Nash equilibrium} for the game $(f_1, \ldots, f_n)$ if, for each $i\in\mathcal{I}$, there exists an open set $W_i\subset X_i$ such that $\pxone{i}\in W_i$ and $f_i(x_i,x_{-i})\leq f_i(x_i',x_{-i})$ for all $\pxone{i}'\in W_i$. If the above inequalities are strict for all $\pxone{i}'\in W_i\setminus\{\pxone{i}\}$, then we say $x$ is a {strict local Nash equilibrium}. \end{definition} The focus on \emph{local} Nash equilibria is due to our lack of assumptions on the agents' cost functions. If $W_i=X_i$ for each $i$, then a local Nash equilibrium $x$ is a {global Nash equilibrium}. This holds in, e.g., the bimatrix games and the linear quadratic games we analyze in Section~\ref{sec:LQR}. Depending on the agents' costs, a game $(f_1, \ldots, f_n)$ may admit anywhere from one to a continuum of local or global Nash equilibria, or none at all. \section{Linking Games and Dynamical Systems} \label{sec:connections} In this section, we draw links between the limiting behavior of dynamical systems and game-theoretic notions of equilibria in three broad classes of continuous games. For brevity, the proofs of the propositions in this section are supplied in Appendix~\ref{app:proofs}. A high-level summary of the links we draw is shown in Figure~\ref{fig:eqtype}. Define $\omega(x)=(D_1f_1(x), \ldots, D_n f_n(x))$ to be the vector of the players' derivatives of their own cost functions with respect to their own choice variables. When each player employs a gradient-based learning algorithm, the joint strategy of the players (in the limit as the agents' step-sizes go to zero) follows the differential equation \begin{equation} \dot x=-\omega(x). \label{eq:sys} \end{equation} A point $x\in X$ is said to be an equilibrium, critical point, or stationary point of the dynamics if $\omega(x)=0$. Stationary points of $\dot x=-\omega(x)$ are joint strategies from which, under gradient-play, the agents do not move. We note that $\omega(x)=0$ is a necessary condition for a point $x\in X$ to be a local Nash equilibrium~\cite{ratliff:2016aa}. Hence, all local Nash equilibria are critical points of the joint dynamics $\dot x=-\omega(x)$. Central to dynamical systems theory is the study of limiting behavior and its stability properties. A classical result in dynamical systems theory allows us to characterize the stability properties of an equilibrium $x^*$ by analyzing the Jacobian of the dynamics at $x^*$. The Jacobian of $\omega$ is defined by \[D\omega(x)=\bmat{D_{1}^2f_1(x) & \cdots & D_{n 1}f_1(x)\\ \vdots & \ddots & \vdots \\ D_{1n}f_n(x) & \cdots & D_{n}^2f_n(x)}.\] Since $D\omega$ is a matrix of second derivatives, it is sometimes referred to as the `game Hessian'. Similar to the Hessian matrix of a gradient flow, $D\omega$ allows us to further characterize the critical points of $\omega$ by their properties under the flow of $\dot x=-\omega(x)$. Let $\lambda_i(x)\in \mathrm{spec}(D\omega(x))$ for $i\in \{1, \ldots, m\}$ denote the eigenvalues of $D\omega$ at $x$, ordered so that $\text{Re}(\lambda_1(x))\leq \cdots \leq \text{Re}(\lambda_m(x))$---that is, $\lambda_1(x)$ is the eigenvalue with the smallest real part. Of particular interest are asymptotically stable equilibria. \begin{definition} A point $x\in X$ is a {locally asymptotically stable equilibrium} of the continuous-time dynamics $\dot x=-\omega(x)$ if $\omega(x)=0$ and $\mathrm{Re}(\lambda)>0$ for all $\lambda\in \mathrm{spec}(D\omega(x))$. \end{definition} Locally asymptotically stable equilibria have two properties of interest. 
First, they are isolated, meaning that there exists a neighborhood around them in which no other equilibria exist. Second, they are exponentially attracting under the flow of $\dot x=-\omega(x)$: if agents initialize in a neighborhood of a locally asymptotically stable equilibrium $x^\ast$ and follow the dynamics $\dot x=-\omega(x)$, they converge to $x^\ast$ exponentially fast \cite{sastry:1999aa}. This, in turn, implies that a discretized version of $\dot x=-\omega(x)$, namely \begin{equation}\pxt{t+1}=\pxt{t}-\gamma\omega(\pxt{t}),\label{eq:update-x}\end{equation} converges locally for an appropriately selected step size $\gamma$ at a rate of $O(1/t)$. Such results motivate the study of the continuous-time dynamical system $\dot{x}=-\omega(x)$ in order to understand the convergence properties of gradient-based learning algorithms of the form \eqref{eq:gradbasedlearn}. Another important class of critical points of a dynamical system are saddle points. \begin{definition} A point $x\in X$ is a {saddle point} of the dynamics $\dot x=-\omega(x)$ if $\omega(x)=0$ and $\lambda_1(x)\in \mathrm{spec}(D\omega(x))$ is such that $\mathrm{Re}(\lambda_1(x))\leq 0$. A saddle point such that $\mathrm{Re}(\lambda_i)<0$ for $i\in \{1, \ldots, \ell\}$ and $\mathrm{Re}(\lambda_j)>0$ for $j\in\{\ell+1, \ldots, m\}$ with $0<\ell<m$ is a {strict saddle point} of the continuous-time dynamics $\dot x=-\omega(x)$. \end{definition} Strict saddle points are especially relevant to our analysis since their neighborhoods are characterized by stable and unstable manifolds \cite{sastry:1999aa}. When the agents evolve solely on the stable manifold, they converge exponentially fast to the critical point; when they evolve solely on the unstable manifold, they diverge from the equilibrium exponentially fast. Agents whose strategies do not lie on the stable manifold asymptotically avoid the equilibrium. We make use of this general fact in Section~\ref{sec:fullinfo}. To better understand the links between the critical points of the gradient dynamics and the Nash equilibria of the game, we make use of an equivalent characterization of strict local Nash equilibria that leverages first- and second-order conditions on the players' cost functions. This makes them simpler objects to link to the various dynamical systems notions of equilibria than local Nash equilibria. \begin{definition}[\cite{ratliff:2013aa, ratliff:2016aa}] A point $x\in X$ is a differential Nash equilibrium for the game defined by $(f_1,\ldots, f_n)$ if $\omega(x)=0$ and $D_{i}^2f_i(x) \succ 0$ for each $i\in \mathcal{I}$. \end{definition} In \cite{ratliff:2014aa}, it was shown that local Nash equilibria are generically differential Nash equilibria with $\det(D\omega(x))\neq 0$ (i.e., $D\omega$ is non-degenerate). Thus, in the space of games where the agents' costs are at least twice differentiable, the set of games that admit local Nash equilibria that are not non-degenerate differential Nash equilibria is of measure zero \cite{ratliff:2014aa}. In \cite{ratliff:2014aa} it was also shown that non-degenerate differential Nash equilibria are structurally stable, meaning that small perturbations to the agents' cost functions do not change the fundamental nature of the equilibrium. This also implies that gradient-play with slightly biased estimators of the gradient will not have vastly different behaviors in neighborhoods of equilibria. 
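To make these notions concrete, the following minimal numeric sketch (an illustration added here, not taken from the original paper) classifies a critical point of $\dot x=-\omega(x)$ from the spectrum of the game Jacobian, and checks the second-order differential Nash conditions, for a two-player quadratic game of the kind analyzed in the next subsection:
\begin{verbatim}
import numpy as np

def classify(J):
    # J = D omega(x*) at a critical point x* of xdot = -omega(x)
    re = np.real(np.linalg.eigvals(J))
    if np.all(re > 0):
        return "locally asymptotically stable (LASE)"
    if re.min() < 0 < re.max():
        return "strict saddle"
    return "non-hyperbolic / indeterminate"

# f1 = (a/2) x1^2 + b x1 x2,  f2 = (d/2) x2^2 + c x1 x2
a, b, c, d = 1.0, 3.0, -2.0, -0.5
J = np.array([[a, b],
              [c, d]])              # constant game Jacobian here
print(classify(J))                  # -> LASE: trace > 0, det > 0
print("differential Nash:", a > 0 and d > 0)   # False: not Nash
\end{verbatim}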
Given these different equilibrium notions for the learning dynamics and the underlying game, let us define the following sets, which will be useful in stating the results of the following sections. For a game $\mathcal{G}=(f_1,\ldots, f_n)$, denote the sets of strict saddle points and locally asymptotically stable equilibria of the gradient dynamics $\dot x=-\omega(x)$ by ${\tt{SSP}}(\omega)$ and ${\tt{LASE}}(\omega)$, respectively, where we recall that $\omega(x)=(D_1f_1(x), \ldots, D_n f_n(x))$. Similarly, denote the sets of local Nash equilibria, differential Nash equilibria, and non-degenerate differential Nash equilibria of $\mathcal{G}$ by ${\tt{LNE}}(\mathcal{G})$, ${\tt{DNE}}(\mathcal{G})$, and ${\tt{NDDNE}}(\mathcal{G})$, respectively. As previously mentioned, ${\tt{NDDNE}}(\mathcal{G})={\tt{LNE}}(\mathcal{G})$ in almost all continuous games. The key takeaways of this section are summarized in Figure~\ref{fig:eqtype}. \begin{figure}[h] \center \includegraphics[width=0.8\linewidth]{figs/Eqtype.png} \caption{Links between the equilibria of generic continuous games $\mathcal{G}$ and their properties under the gradient dynamics $\dot x=-\omega(x)$.} \label{fig:eqtype} \end{figure} \subsection{General-sum games} We first analyze the properties of local Nash equilibria under the joint gradient dynamics in $n$-player general-sum games. \begin{proposition} A non-degenerate differential Nash equilibrium is either a locally asymptotically stable equilibrium or a strict saddle point of $\dot x=-\omega(x)$---i.e., ${\tt{NDDNE}}(\mathcal{G})\subset {\tt{SSP}}(\omega)\cup {\tt{LASE}}(\omega)$. \label{lem:nddne} \end{proposition} Locally asymptotically stable differential Nash equilibria satisfy the notion of variational stability introduced in \cite{Mertikopoulos2019}. In fact, a simple analysis shows that the definitions of variationally stable equilibria and locally asymptotically stable differential Nash equilibria~\cite{ratliff:2013aa} are equivalent in the games we consider---i.e., games where each player's cost is at least twice continuously differentiable. We remark that, from the definition of asymptotic stability, the gradient dynamics have an $O(1/t)$ convergence rate in the neighborhood of such equilibria. An important point is that not every locally asymptotically stable equilibrium of $\dot x=-\omega(x)$ is a non-degenerate differential Nash equilibrium. Indeed, the following proposition provides an entire class of games whose corresponding gradient dynamics admit locally asymptotically stable equilibria that are not local Nash equilibria. \begin{proposition} In the class of general-sum continuous games, there exists a continuum of games $\mathcal{G}$ such that ${\tt{LASE}}(\omega)\not\subset {\tt{NDDNE}}(\mathcal{G})$, and moreover, ${\tt{LASE}}(\omega)\not\subset{\tt{LNE}}(\mathcal{G})$. \label{prop:gsg} \end{proposition} \begin{proof} Consider a two-player game $\mathcal{G}=(f_1,f_2)$ on $\mathbb{R}^2$ where \begin{align*} f_1(x_1,x_2)= \frac{a}{2}x^2_1 + bx_1x_2,\ \ \text{and}\ \ f_2(x_1,x_2)= \frac{d}{2}x^2_2 + cx_1x_2 \end{align*} for constants $a,b,c,d \in \mathbb{R}$. The Jacobian of $\omega$ is given by \begin{align} \label{eq:domeg} D\omega(x_1,x_2)=\bmat{a & b \\ c & d}, \ \ \forall (x_1,x_2)\in \mathbb{R}^2. \end{align} If $a>0$ and $d<0$, then the unique stationary point $x=(0,0)$ is neither a differential Nash nor a local Nash equilibrium since the necessary conditions are violated (i.e., $d<0$). 
However, if $a>-d$ and $ad>cb$, the eigenvalues of $D\omega$ have positive real parts and $(0,0)$ is asymptotically stable. Further, this clearly holds for a continuum of games. Thus, the set of locally asymptotically stable equilibria that are {not Nash equilibria} may be arbitrarily large. \end{proof} The preceding proposition shows that there exist attracting critical points of the gradient dynamics in general-sum continuous games that are not Nash equilibria and may not even be relevant to the game. Thus, it provides a negative answer to \textbf{Q1} (whether all attractors of the learning dynamics are game-relevant equilibria). \begin{remark} We note that, by definition, the non-Nash locally asymptotically stable equilibria (or non-Nash equilibria) do not satisfy the second-order conditions for Nash equilibria. Thus, at these joint strategies, at least one player---and maybe all of them---has a direction in which they would unilaterally deviate if they were not using gradient descent. As such, we view convergence to these points as undesirable. \end{remark} \subsection{Zero-sum games} Let us now restrict our attention to two-player zero-sum games, which often arise when training GANs, in adversarial learning, and in MARL \cite{goodfellow:2014aa,omidshafiei:2017aa,chivukula:2017aa}. In such games, one player can be seen as minimizing $f$ with respect to their decision variable and the other as minimizing $-f$ with respect to theirs. The following proposition shows that all differential Nash equilibria in two-player zero-sum games are locally asymptotically stable equilibria under the flow of $\dot x=-\omega(x)$. \begin{proposition} \label{prop:zsg} For an arbitrary two-player zero-sum game $(f,-f)$ on $\mathbb{R}^m$, if $x$ is a differential Nash equilibrium, then $x$ is both a non-degenerate differential Nash equilibrium and a locally asymptotically stable equilibrium of $\dot x=-\omega(x)$---that is, ${\tt{DNE}}(\mathcal{G})\equiv {\tt{NDDNE}}(\mathcal{G})\subset {\tt{LASE}}(\omega)$. \end{proposition} This result guarantees that the differential Nash equilibria of zero-sum games are isolated and exponentially attracting under the flow of $\dot x=-\omega(x)$. This in turn guarantees that simultaneous gradient-play has a local linear rate of convergence to all such local Nash equilibria in zero-sum continuous games. Thus, the answer to \textbf{Q2} in the context of zero-sum games is ``yes'', since all (differential) Nash equilibria are attracting for the gradient dynamics. The converse of the preceding proposition, however, is not true: not every locally asymptotically stable equilibrium in a two-player zero-sum game is a non-degenerate differential Nash equilibrium. Indeed, there may be many locally asymptotically stable equilibria in a zero-sum game that are not local Nash equilibria. The following proposition highlights this fact. \begin{proposition} In the class of zero-sum continuous games, there exists a continuum of games such that for each game $\mathcal{G}$, ${\tt{LASE}}(\omega)\not\subset {\tt{DNE}}(\mathcal{G})\subset {\tt{LNE}}(\mathcal{G})$. \label{prop:zsg2} \end{proposition} \begin{proof} Consider the two-player zero-sum game $(f, -f)$ on $\mathbb{R}^2$ where \begin{align*} f(x_1,x_2)= \frac{a}{2}x^2_1 + bx_1x_2+\frac{c}{2}x^2_2 \end{align*} and $a,b,c \in \mathbb{R}$. 
The Jacobian of $\omega$ is given by \[D\omega(x_1,x_2)=\bmat{\ \ a &\ \ b \\ -b & -c}, \ \ \forall \ (x_1,x_2)\in \mathbb{R}^2.\] If $a>c>0$ and $b^2>ac$, then $D\omega(x_1,x_2)$ has eigenvalues with strictly positive real part, but the unique stationary point is not a differential Nash equilibrium---since $-c<0$---and, in fact, is not even a Nash equilibrium. Indeed, \begin{equation*} -f(0,0)>-f(0,x_2)=-\frac{c}{2}x_2^2, \ \ \forall\ x_2\neq 0.\end{equation*} Thus, there exists a continuum of zero-sum games with a large set of locally asymptotically stable equilibria of the corresponding dynamics $\dot{x}=-\omega(x)$ that are not differential Nash equilibria. \end{proof} The preceding proposition again shows that there exist non-Nash equilibria of the gradient dynamics in zero-sum continuous games. Thus, it also provides a negative answer to \textbf{Q1} in the context of zero-sum games. \subsection{Potential Games} One last class of games with interesting connections between the Nash equilibria and the critical points of the gradient dynamics is the class known as \emph{potential games}. This particularly nice class consists of games for which $\omega$ corresponds to a gradient flow under a coordinate transformation---that is, there exists a function $\phi$ (commonly referred to as the potential function) such that for each $i\in\mathcal{I}$, $D_if_i\equiv D_i\phi$. We remark that, due to this equivalence, such games are sometimes referred to as \emph{exact} potential games. Note that a necessary and sufficient condition for $(f_1,\ldots, f_n)$ to be a potential game is that $D\omega$ is \emph{symmetric}~\cite{monderer:1996aa}---that is, $D_{ij}f_j\equiv D_{ji}f_i$. This gives potential games the desirable property that the only locally asymptotically stable equilibria of the gradient dynamics are local Nash equilibria. \begin{proposition} \label{prop:potentialgame} For an arbitrary potential game $\mathcal{G}=(f_1,\ldots,f_n)$ on $\mathbb{R}^m$, if $x$ is a locally asymptotically stable equilibrium of $\dot x=-\omega(x)$ (i.e., $x\in{\tt{LASE}}(\omega)$), then $x$ is a non-degenerate differential Nash equilibrium (i.e., $x\in {\tt{NDDNE}}(\mathcal{G})$). \end{proposition} The full proof of Proposition~\ref{prop:potentialgame} is supplied in Appendix~\ref{app:proofs}. The preceding proposition rules out non-Nash locally asymptotically stable equilibria of the gradient dynamics in potential games, and implies that every local minimum of the potential function must be a local Nash equilibrium. Thus, in potential games, unlike in general-sum and zero-sum games, the answer to \textbf{Q1} is positive. However, the following proposition shows that the existence of a potential function is not enough to rule out local Nash equilibria that are saddle points of the dynamics. \begin{proposition} In the class of continuous games, there exists a continuum of potential games $\mathcal{G}$ that admit Nash equilibria which are saddle points of the dynamics $\dot{x}=-\omega(x)$---i.e., $\exists\ \mathcal{G}$ such that for some $x\in {\tt{LNE}}(\mathcal{G})$, $x\in {\tt{SSP}}(\omega)$. \label{ex:potssp} \end{proposition} \begin{proof} Consider the game $(f, f)$ on $X=\mathbb{R}^2$ described by \[ f(x_1,x_2)=\frac{a}{2}x_1^2 + bx_1x_2+\frac{c}{2}x_2^2\] where $a,b,c \in \mathbb{R}$. The Jacobian of $\omega$ is given by \[D\omega(x_1,x_2)=\bmat{a & b \\ b & c}, \ \ \forall \ (x_1,x_2)\in \mathbb{R}^2.\] If $a,c>0$, then $x=(0,0)$ is a local Nash equilibrium. 
However, if $ac<b^2$, then $D\omega(x)$ has one positive and one negative eigenvalue and $(0,0)$ is a saddle point of the gradient dynamics. Thus, there exists a continuum of potential games in which a large set of differential Nash equilibria are strict saddle points of $\dot{x}=-\omega(x)$. \end{proof} Proposition~\ref{ex:potssp} demonstrates a surprising fact about potential games: even though all minimizers of the potential function must be local Nash equilibria, \emph{not all local Nash equilibria are minimizers of the potential function}. \subsection{Main Takeaways} The main takeaways of this section are summarized in Figure~\ref{fig:eqtype}. We note that for zero-sum games, Proposition~\ref{prop:zsg2} shows that ${\tt{LNE}}(\mathcal{G}) \subset {\tt{LASE}}(\omega)$. Since the inclusion is strict, the answer to \textbf{Q1} in such games is ``no''. For general-sum games, Proposition~\ref{prop:gsg} allows us to conclude that there do exist attracting non-Nash equilibria; thus, the answer to \textbf{Q1} is also ``no''. In potential games, since ${\tt{LASE}}(\omega) \subset {\tt{LNE}}(\mathcal{G})$, the answer is ``yes''. In the following sections, we provide answers to \textbf{Q2} by showing that all local Nash equilibria in ${\tt{LNE}}(\mathcal{G}) \cap {\tt{SSP}}(\omega)$ are avoided almost surely by gradient-based algorithms in both the deterministic and stochastic settings. In particular, since ${\tt{LNE}}(\mathcal{G}) \cap {\tt{SSP}}(\omega) \ne \emptyset$ in potential and general-sum games, one cannot give a positive answer to \textbf{Q2} in either of these classes of games. \section{Convergence of Gradient-Based Learning} \label{sec:results} In this section, we provide convergence and non-convergence results for gradient-based algorithms. We also include a high-level overview of well-known algorithms that fit into the class of learning algorithms we consider; more detail can be found in Appendix~\ref{sec:examples}. \subsection{Deterministic Setting} \label{sec:fullinfo} We first address convergence to equilibria in the \emph{deterministic} setting, in which agents have oracle access to their gradients at each time step. This includes the case where agents know their own cost functions $f_i$ and observe their own actions as well as their competitors' actions---and hence can compute the gradient of their cost with respect to their own choice variable. Since we have assumed that each agent $i\in \mathcal{I}$ has their own \emph{learning rate} (i.e., step-size $\gamma_i$), the joint dynamics of all the players are given by \begin{equation} \pxt{t+1}=g(\pxt{t}) \end{equation} where $g:x\mapsto x-\gamma \odot \omega(x)$ with $\gamma=(\gamma_i)_{i\in \mathcal{I}}$ and $\gamma>0$ element-wise. By a slight abuse of notation, $\gamma\odot \omega(\pxt{t})$ is defined to be the element-wise multiplication of $\gamma$ and $\omega(\cdot)$, where $\gamma_1$ is multiplied by the first $m_1$ components of $\omega(\cdot)$, $\gamma_2$ by the next $m_2$ components, and so on. We remark that this update rule immediately distinguishes gradient-based learning in games from gradient descent. By definition, the dynamics of gradient descent in single-agent settings always correspond to gradient flows---i.e., $x$ evolves according to an ordinary differential equation of the form $\dot x=-\nabla \phi(x)$ for some function $\phi:\mathbb{R}^d \rightarrow \mathbb{R}$. 
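To see concretely what can go wrong without this structure, consider the following numeric sketch (a standard bilinear zero-sum example, added here for illustration and not taken from the propositions above): in the game $(f,-f)$ with $f(x_1,x_2)=x_1 x_2$, the vector field $\omega(x)=(x_2,-x_1)$ is a pure rotation, and simultaneous gradient-play spirals away from the unique critical point at the origin rather than converging, behavior that a gradient flow cannot exhibit.
\begin{verbatim}
import numpy as np

gamma = 0.05                          # common step size
x = np.array([1.0, 0.0])
for t in range(1000):
    omega = np.array([x[1], -x[0]])   # (D1 f, D2 (-f))
    x = x - gamma * omega             # simultaneous gradient-play
    # each step multiplies ||x|| by sqrt(1 + gamma^2) > 1
print(np.linalg.norm(x))              # > 1: the iterates spiral outward
\end{verbatim}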
Outside of the class of \emph{exact} potential games defined in Section~\ref{sec:connections}, the dynamics of the players' actions in games are not afforded this luxury---indeed, $D\omega$ is not in general symmetric (symmetry being a necessary condition for a gradient flow). This makes the potential limiting behaviors of $\dot{x}=-\omega(x)$ highly non-trivial to characterize in general-sum games. The structure present in a gradient flow implies strong properties of the limiting behaviors of $x$. In particular, it precludes the existence of limit cycles or periodic orbits (limiting behaviors of dynamical systems where the state of the system cycles indefinitely through a set of states with a finite period) and chaos (an attribute of nonlinear dynamical systems where the system's behavior can vary drastically due to slight changes in the initial condition) \cite{sastry:1999aa}. We note that both of these behaviors can occur in the dynamics of gradient-based learning algorithms in games\footnote{The Van der Pol oscillator and the Lorenz system (see, e.g., \cite{sastry:1999aa}) can be seen as the resulting gradient dynamics in a 2-player and a 3-player general-sum game, respectively. The first is a classic example of a system where players converge to cycles, and the second is an example of a chaotic system.}. Despite the wide breadth of behaviors that gradient dynamics can exhibit in competitive settings, we can still make statements about convergence (and non-convergence) to certain types of equilibria. To do so, we first make the following standard assumptions on the smoothness of the cost functions $f_i$ and the magnitude of the agents' learning rates $\gamma_i$. \begin{assumption} For each $i\in \mathcal{I}$, $f_i\in C^s({X}, \mathbb{R})$ with $s\geq 2$, $\sup_{x\in X}\|D\omega(x)\|_2\leq L<\infty$, and $0<\gamma_i<1/L$, where $\|\cdot\|_2$ is the induced $2$-norm. \label{ass:ell} \end{assumption} Given these assumptions, the following result rules out convergence to strict saddle points. \begin{theorem} Let $f_i:{X}\rightarrow \mathbb{R}$ and $\gamma$ satisfy Assumption~\ref{ass:ell}. Suppose that ${X}=X_1\times \cdots \times X_n \subseteq\mathbb{R}^m$ is open and convex. If $g({X})\subset {X}$, the set of initial conditions $x\in {X}$ from which competitive gradient-based learning converges to strict saddle points is of measure zero. \label{thm:fullinfo} \end{theorem} We remark that the above theorem holds in particular for $X=X_1\times \cdots \times X_n=\mathbb{R}^m$, since $g(X)\subset X$ holds trivially in this case. It is also important to note that, as we point out in Section~\ref{sec:connections}, local Nash equilibria can be strict saddle points. Thus, all local Nash equilibria that are strict saddle points of $\dot{x}=-\omega(x)$ are avoided almost surely by gradient-play, even with oracle gradient access and random initializations. This holds even when players initialize uniformly at random in an arbitrarily small ball around such Nash equilibria. In Section~\ref{sec:LQR}, we show that many linear quadratic dynamic games have a strict saddle point as their global Nash equilibrium. For brevity, we provide the proof of Theorem~\ref{thm:fullinfo} in Appendix~\ref{app:proofs} and give a proof sketch below. \begin{proof}[Proof sketch of Theorem~\ref{thm:fullinfo}] The core of the proof is the celebrated stable manifold theorem from dynamical systems theory, presented in Theorem~\ref{thm:centerstable}. 
We construct the set of initial positions from which gradient-play will converge to strict saddle points and then use the stable manifold theorem to show that the set must have measure zero in the players' joint strategy space. Therefore, with a random initialization, players will almost surely not evolve solely on the stable manifold of strict saddles, and they will consequently diverge from such equilibria.
To be able to invoke the stable manifold theorem, we first show that the mapping $g: \mathbb{R}^m\rightarrow \mathbb{R}^m$ is a diffeomorphism, which is non-trivial due to the fact that we have allowed each agent to have their own learning rate $\gamma_i$ and $D\omega$ is not symmetric. We then iteratively construct the set of initializations that will converge to strict saddle points under the game dynamics. By the stable manifold theorem, and the fact that $g$ is a diffeomorphism, the stable manifold of a strict saddle point must be measure zero. Then, by induction we show that the set of all initial points that converge to a strict saddle point must also be measure zero.
\end{proof}
In potential games we can strengthen the above non-convergence result and give convergence guarantees.
\begin{corollary}
Consider a potential game $(f_1,\ldots, f_n)$ on an open, convex $X=X_1\times \cdots \times X_n\subseteq \mathbb{R}^m$, where each $f_i\in C^s(X, \mathbb{R})$ for $s\geq 3$. Let $\nu$ be a prior measure with support $X$ that is absolutely continuous with respect to the Lebesgue measure, and assume $\lim_{t\rightarrow \infty} g^t(x)$ exists. Then, under Assumption~\ref{ass:ell}, competitive gradient-based learning converges to non-degenerate differential Nash equilibria almost surely. Moreover, the non-degenerate differential Nash equilibrium to which it converges is generically a local Nash equilibrium.
\label{cor:msfinite}
\end{corollary}
Corollary~\ref{cor:msfinite} guarantees that in potential games, gradient-play will converge to a differential Nash equilibrium. Combining this with Theorem~\ref{thm:fullinfo} guarantees that the differential Nash equilibrium it converges to is a local minimizer of the potential function. A simple implication of this result is that gradient-based learning in potential games cannot exhibit limit cycles or chaos. Of note is the fact that the agents \emph{do not} need to be performing gradient-based learning on $\phi$ to converge to Nash almost surely. That is, they do not need to know the function $\phi$; they simply need to follow the derivative of their own cost with respect to their own choice variable, and they are guaranteed to converge to a local Nash equilibrium that is a local minimizer of the potential function.
We note that convergence to Nash equilibria is a known characteristic of gradient-play in potential games. However, our analysis also highlights that gradient-play will avoid a subset of the Nash equilibria of the game. This is surprising given the particularly strong structural properties of such games. The proof for Corollary~\ref{cor:msfinite} is provided in Appendix~\ref{app:proofs} and follows from Proposition~\ref{prop:potentialgame}, Theorem~\ref{thm:fullinfo}, and the fact that $D\omega$ is symmetric in potential games.
\subsubsection{Implications and Interpretation of Convergence Analysis}
\label{sec:implications}
Both Theorem~\ref{thm:fullinfo} and Corollary~\ref{cor:msfinite} show that gradient-play in multi-agent settings avoids strict saddles almost surely even in the deterministic setting.
Combined with the analysis in Section~\ref{sec:connections}, which shows that (local) Nash equilibria can be strict saddles of the dynamics for general-sum games, this implies that a subset of the Nash equilibria are almost surely avoided by individual gradient-play, a potentially undesirable outcome in view of \textbf{Q1} (whether all Nash equilibria are attracting for the learning dynamics). In Section~\ref{sec:LQR}, we show that the global Nash equilibrium is a saddle point of the gradient dynamics in a large number of randomly sampled LQ dynamic games. This suggests that policy gradient algorithms may fail to converge in such games, which is highly undesirable. This is in stark contrast to the single-agent setting where policy gradient has been shown to converge to the unique solution of LQR problems \cite{Fazel2018GlobalCO}.
In Section~\ref{sec:connections}, we also showed that local Nash equilibria of potential games can be strict saddle points of the potential function. Non-convergence to such points in potential games is not necessarily a bad result since it in turn implies convergence to local minimizers of the potential function (as shown in~\cite{lee:2016aa, panageas:2016aa}), which are guaranteed to be local Nash equilibria of the game. However, these results do imply that \emph{one cannot answer ``yes'' to \textbf{Q1} in potential games} since some of the Nash equilibria are not attracting under gradient-play.
In zero-sum games, where local Nash equilibria cannot be strict saddle points of the gradient dynamics, our result suggests that \emph{eventually} gradient-based learning algorithms will escape saddle points of the dynamics. The almost sure avoidance of all equilibria that are saddle points of the dynamics further implies that if \eqref{eq:sys} converges to a critical point $x$, then $x\in {\tt{LASE}}(\omega)$---i.e., $x$ is locally asymptotically stable for $\dot{x}=-\omega(x)$. This may not be a desired property, however, since we showed in Section~\ref{sec:connections} that zero-sum and general-sum games both admit non-Nash LASE.
Since gradient-play in games generally does not result in a gradient flow, other types of limiting behaviors such as limit cycles can occur in gradient-based learning dynamics. Theorem~\ref{thm:fullinfo} says nothing about convergence to other limiting behaviors. In the following sections we prove that the results described in this section extend to the stochastic gradient setting. We also formally define periodic orbits in the context of dynamical systems and state stronger results on avoidance of some more complex limiting behaviors like linearly unstable limit cycles.
\ifzerosum
Let us consider another important sub-class of games, namely two-player zero-sum games, in which agents are direct competitors.
\begin{corollary}
Assume the conditions of Theorem~\ref{thm:fullinfo} hold. Gradient-based learning algorithms for two-player zero-sum games---i.e.~$(f,-f)$---converge to local Nash equilibria with the strict saddle property on a set of measure zero.
\label{cor:zerosum}
\end{corollary}
Not all local Nash equilibria are saddle points for continuous zero-sum games; however, a large class of these games admit saddle point equilibria. Hence, the above result implies that for a large class of zero-sum games, such local Nash equilibria cannot be reached.
\begin{example}
Consider a two-player game $(f(x_1,x_2), -f(x_1,x_2))$ with $X_i=\mathbb{R}$.
The game Hessian, i.e.~$D\omega$, is of the form
\[D\omega(x_1,x_2)=\bmat{\ \ D_{11}^2f &\ \ D_{21}^2f\\ -D_{12}^2f & -D_{22}^2f}=\bmat{\ \ a_{11} & \ \ a_{12}\\ -a_{12} & -a_{22}}.\]
The eigenvalues of this matrix are $\{ \frac{1}{2}( a_{11}-a_{22}\pm\sqrt{(a_{11}+a_{22})^2-4a_{12}^2} ) \}$. Hence, there is a continuum of games in this class admitting equilibria with the strict saddle point property.
\end{example}
\fi
\subsection{Stochastic Setting}
\label{sec:gradientfree}
We now analyze the stochastic case in which agents are assumed to have an unbiased estimator for their gradient. The results in this section allow us to extend the results from the deterministic setting to a setting where each agent builds an estimate of the gradient of their loss at the current set of strategies from potentially noisy observations of the environment. Thus, we are able to analyze the limiting behavior of a class of commonly used machine learning algorithms for competitive, multi-agent settings. In particular, we show that agents will almost surely not converge to strict saddle points. In Appendix~\ref{app:repel}, we show that the gradient dynamics will actually avoid more general limiting behaviors called linearly unstable cycles, which we define formally there.
To perform our analysis, we make use of tools and ideas from the literature on stochastic approximations (see, e.g., \cite{BorkarStoch}). We note that the convergence of stochastic gradient schemes in the single-agent setting has been extensively studied \cite{robbin:1971aa,pemantle:1990aa,BottouSGD,MertReviewer}. We extend this analysis to the behavior of stochastic gradient algorithms in games. We assume that each agent updates their strategy using the update rule
\begin{equation}
\pxtwo{i}{t+1}=x_{i,t}-\gamma_{i,t}(D_if_i(x_{i,t},x_{-i,t})+w_{i,t+1})
\label{eq:sa}
\end{equation}
for some zero-mean, finite-variance stochastic process $\{w_{i,t}\}$. Before presenting the results for the stochastic case, let us comment on the different learning algorithms that fit into this framework.
\subsubsection{Examples of Stochastic Gradient-Based Learning}
\label{subsec:gradientalgs}
{\setlength{\tabcolsep}{0.2em}\def\arraystretch{0.9}
\begin{table}[t]
\centering
\begin{tabular}{|c||c|}
\hline\textbf{Class} & \textbf{Gradient Learning Rule} \\
\hline\hline
\multirow{2}{*}{Gradient-Play} & \multirow{2}{*}{$x_{i}^+=x_{i}-\gamma_iD_if_i(x_{i}, x_{-i})$}\\
& \\\hline
\multirow{2}{*}{GANs} & $\theta^{+}\ =\theta-\gamma \mathbb{E}[D_{\theta}L(\theta,w)]\ $\\
& $w^+=w+\gamma \mathbb{E}[D_{w}L(\theta,w)]$ \\\hline
\multirow{2}{*}{MA Policy Gradient} &\multirow{2}{*}{$x_{i}^+=x_{i}-\gamma_i\mathbb{E}[{D_iJ_i}(x_{i}, x_{-i})]$}\\
& \\ \hline
\multirow{2}{*}{Individual Q-learning} & \multirow{2}{*}{$q_i^+(u_i)=q_i(u_i)+\gamma_i(r_i(u_i, \pi_{-i}(q_i,q_{-i}))-q_i(u_i))$} \\
& \\\hline
\multirow{1}{*}{MA Gradient Bandits} &$x_{i,\ell}^+=x_{i,\ell}+\gamma_i\mathbb{E}[\beta_iR_i(u_i,u_{-i})|u_i=\ell]$, $\ell=1,\ldots, m_i$\\
\hline
\multirow{1}{*}{MA Experts} & $x_{i,\ell}^+=x_{i,\ell}+\gamma_i\mathbb{E}[R_i(u_i,u_{-i})|u_i=\ell]$, $\ell=1,\ldots, m_i$\\\hline
\end{tabular}
\caption{Example problem classes that fit into competitive gradient-based learning rules. Details on the derivation of these update rules as gradient-based learning schemes are provided in Appendix~\ref{sec:examples}.}
\label{tab:examples}
\end{table}}
The stochastic gradient-based learning setting we study is general enough to include a variety of commonly used multi-agent learning algorithms.
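As a concrete, hedged instance of the abstract recursion \eqref{eq:sa}, the sketch below runs noisy gradient-play on a toy general-sum game of our own choosing (it is not one of the examples analyzed in this paper), with the Robbins--Monro step sizes $\gamma_t=1/(t+1)$ so that $\sum_t\gamma_t=\infty$ and $\sum_t\gamma_t^2<\infty$.
\begin{verbatim}
import numpy as np

# Toy general-sum game (illustrative): f1 = x1^2 + x1*x2, f2 = x2^2 - x1*x2.
# omega(x) = (2*x1 + x2, 2*x2 - x1); D(omega) has eigenvalues 2 +/- i, so the
# origin is a non-degenerate differential Nash equilibrium and an LASE.
def omega(x):
    return np.array([2.0 * x[0] + x[1], 2.0 * x[1] - x[0]])

rng = np.random.default_rng(1)
x = np.array([3.0, -2.0])
for t in range(20000):
    gamma_t = 1.0 / (t + 1)           # sum gamma_t = inf, sum gamma_t^2 < inf
    w = rng.normal(0.0, 1.0, size=2)  # zero-mean, finite-variance gradient noise
    x = x - gamma_t * (omega(x) + w)  # x_{t+1} = x_t - gamma_t (omega(x_t) + w_{t+1})

print(x)                              # close to the Nash equilibrium (0, 0)
\end{verbatim}
Each row of Table~\ref{tab:examples} can be read as a particular way of producing the noisy gradient $D_if_i+w_{i,t+1}$ inside this loop.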
The classes of algorithms we include are hardly an exhaustive list, and indeed many extensions and altogether different algorithms exist that can be considered members of this class. In Table~\ref{tab:examples}, we provide the gradient-based update rule for six different example classes of learning problems: (i) gradient-play in non-cooperative continuous games, (ii) GANs, (iii) multi-agent policy gradient, (iv) individual Q-learning, (v) multi-agent gradient bandits, and (vi) multi-agent experts. We provide a detailed analysis of these different algorithms, including the derivation of the gradient-based update rules along with some interesting numerical examples, in Appendix~\ref{sec:examples}.
In each of these cases, one can view an agent employing the given algorithm as building an unbiased estimate of their gradient from their observation of the environment. For example, in multi-agent policy gradient (see, e.g.,~\cite[Chapter~13]{sutton:2017aa}), agents' costs are defined as functions of a parameter vector $x_i$ that parameterizes their policies $\pi_{i}(x_i)$. The parameters $x_i$ are agent $i$'s choice variable. By following the gradient of their loss function, they aim to tune the parameters in order to converge to an \emph{optimal} policy $\pi_i$. Perhaps surprisingly, it is not necessary for agent $i$ to have access to $\pi_{{-i}}(x_{-i})$ or even $x_{-i}$ in order for them to construct an unbiased estimate of the gradient of their loss with respect to their own choice variable $x_i$, as long as they observe the sequence of actions, say $u_{-i,t}$, generated by all other agents. These actions are implicitly determined by the other agents' policies $\pi_{-i}(x_{-i})(\cdot)$. Hence, in this case if agent $i$ observes $\{(r_{j,t},u_{j,t},s_{j,t}),\ \forall j\in \mathcal{I}\}$, where $(r_j,u_j,s_j)$ are the reward, action, and state of agent $j$, then this is enough to construct an unbiased estimate of their gradient. We provide further details on multi-agent policy gradient in Appendix~\ref{sec:examples}.
\subsubsection{Stochastic Gradient Results}
Returning to the analysis of \eqref{eq:sa}, we make the following standard assumptions on the noise processes \cite{robbin:1971aa,robbins:1985aa}.
\begin{assumption}
The stochastic process $\{w_{i,t+1}\}$ satisfies $\mathbb{E}[w_{i,t+1}|\ \mathcal{F}_{i,t}]=0$ and $\mathbb{E}[\|w_{i,t+1}\|^2|\ \mathcal{F}_{i,t}]\leq \sigma^2<\infty$ a.s., for $t\geq 0$, where $\mathcal{F}_{i,t}$ is an increasing family of $\sigma$-fields---i.e.~a filtration, or history generated by the sequence of random variables---given by $\mathcal{F}_{i,t}=\sigma(\pxtwo{i}{k},w_{i,k},\ k\leq t)$, $t\geq 0$.
\label{ass:estmartin}
\end{assumption}
We also make assumptions on the players' step-sizes. These are standard assumptions in the stochastic approximation literature and are needed to ensure that the noise processes are asymptotically controlled.
\begin{assumption}
For each $i\in\mathcal{I}$, $f_i\in C^s(X,\mathbb{R})$ with $s\geq 2$, $D_{i}f_i$ is $L_i$-Lipschitz with $0<L_i<\infty$, the step-sizes satisfy $\gamma_{i,t}\equiv \gamma_t$ for all $i\in \mathcal{I}$ with $\sum_t \gamma_t=\infty$ and $\sum_t (\gamma_t)^2<\infty$, and $\sup_t\|\pxt{t}\|<\infty$ a.s.
\label{ass:others}
\end{assumption}
Let $(a)^+=\max\{a, 0\}$ and let $a\cdot b$ denote the inner product. The following theorem extends the results of Theorem~\ref{thm:fullinfo} to the stochastic gradient dynamics in games.
\begin{theorem}
Consider a game $(f_1,\ldots, f_n)$ on $X=X_1\times \cdots \times X_n=\mathbb{R}^m$. Suppose each agent $i\in \mathcal{I}$ adopts a stochastic gradient algorithm that satisfies Assumptions~\ref{ass:estmartin} and \ref{ass:others}. Further, suppose that for each $i\in \mathcal{I}$, there exists a constant $b_i>0$ such that $\mathbb{E}[(w_{i,t}\cdot v)^+|\mathcal{F}_{i,t}]\geq b_i$ for every unit vector $v\in \mathbb{R}^{m_i}$. Then, competitive stochastic gradient-based learning converges to strict saddle points of the game on a set of measure zero.
\label{thm:gradfree}
\end{theorem}
The proof follows directly from showing that \eqref{eq:sa} satisfies Theorem~\ref{thm:pementle}, provided the assumptions of the theorem hold. The assumption that $\mathbb{E}[(w_{i,t}\cdot v)^+|\mathcal{F}_{i,t}]\geq b_i$ rules out degenerate cases where the noise forces the stochastic dynamics onto the stable manifold of strict saddle points.
Theorem~\ref{thm:gradfree} implies that the dynamics of stochastic gradient-based learning defined in \eqref{eq:sa} have the same limiting properties as the deterministic dynamics vis-\`a-vis saddle points. Thus, the implications described in Section~\ref{sec:implications} extend to the stochastic gradient setting. In particular, stochastic gradient-based algorithms will avoid a non-negligible subset of the Nash equilibria in general-sum and potential games. Further, in zero-sum and general-sum games, if the players do converge to a critical point, that point may be a non-Nash equilibrium.
\subsubsection{Further Convergence Results for Stochastic Gradient-Play in Games}
As we demonstrated in Section~\ref{sec:fullinfo}, outside of potential games, the dynamics of gradient-based learning algorithms in games are not gradient flows. As such, the players' actions can converge to more complex sets than simple equilibria. A particularly prominent class of limiting behaviors for dynamical systems are known as limit cycles (see, e.g., \cite{sastry:1999aa}). Limit cycles (or periodic orbits) are sets of states $\mathcal{S}$ such that each state $x \in \mathcal{S}$ is visited at periodic intervals \emph{ad infinitum} under the dynamics. Thus, if the gradient-based algorithms converge to a limit cycle they will cycle infinitely through the same sequence of actions. Like equilibria, limit cycles can be stable or unstable under the dynamics $\dot x=-\omega(x)$, meaning that the dynamics can either converge to or diverge from them depending on the initialization.
We remark that the existence of oscillatory behaviors and limit cycles has been observed in the dynamics of gradient-based learning in various settings like the training of Generative Adversarial Networks \cite{daskalakis:2017aa} and multiplicative weights in finite-action games \cite{mertikopoulos:2018aa}. We simply emphasize that the existence of such limiting behaviors is due to the fact that the dynamics are no longer gradient flows. This fact also allows for other complex limiting behaviors like chaos\footnote{A general term used to characterize dynamical systems where arbitrarily small perturbations in the initial conditions lead to drastically different solutions to the differential equations.} to exist in the dynamics of gradient-based learning in games. We also show in Appendix~\ref{app:repel} that gradient-based learning avoids some limit cycles. In Appendix~\ref{app:repel}, we formalize the notion of a limit cycle and its stability in the stochastic setting.
Using these concepts, we then provide an analogous theorem to Theorem~\ref{thm:gradfree}, which states that competitive stochastic gradient-based learning converges to linearly unstable limit cycles---a parallel notion to strict saddle points but pertaining to more general limit sets---on a set of measure zero, provided that analogous assumptions to those in the statement of Theorem~\ref{thm:gradfree} hold. Providing such guarantees requires a bit more mathematical formalism, and as such we leave the details of these results to Appendix~\ref{app:ms}.
In pursuit of a more general class of games with desirable convergence properties, in Appendix~\ref{app:msms} we also introduce a generalization of potential games, namely Morse-Smale games, for which the combined gradient dynamics correspond to a Morse-Smale vector field~\cite{hirsch:1976aa,palis:1970aa}. In such games players are guaranteed to converge to only (linearly stable) cycles or equilibria. Even so, players may still converge to non-Nash equilibria and avoid a subset of the Nash equilibria.
\ifms
As we have noted, games not admitting potential functions may lead to limit cycles. Hence, we use the expanded theory in~\cite{benaim:1996aa,benaim:1995aa} to show that stochastic gradient-based learning algorithms avoid repelling sets. To do so, we need further assumptions on our underlying space---namely, we need the underlying decision spaces of each agent---i.e.~$X_i$ for each $i\in \mathcal{I}$---to be \emph{smooth, compact manifolds without boundary}. As in~\cite{benaim:1995aa}, the stochastic process $\{x_n\}$ which follows \eqref{eq:sa} is \emph{defined on} $X$---that is, $x_n\in X$ for all $n\geq 0$. As before, it is natural to compare sample points $\{x_n\}$ to solutions of $\dot{x}=-\omega(x)$, where we think of \eqref{eq:sa} as a noisy approximation. The asymptotic behavior of $\{x_n\}$ can indeed be described by the asymptotic behavior of the flow generated by $\omega$.
We also need a formal notion of \emph{cycles}. A non-stationary periodic orbit of $\omega$ is called a \emph{cycle}. Let $\xi\subset X$ be a cycle of period $T>0$. Denote by $\Phi_T$ the flow corresponding to $\omega$. For any $x\in \xi$, $\mathrm{spec}(D\Phi_T(x))=\{1\}\cup C(\xi)$ where $C(\xi)$ is the set of characteristic multipliers. We say $\xi$ is \emph{hyperbolic} if no element of $C(\xi)$ is on the complex unit circle. Further, if $C(\xi)$ is strictly inside the unit circle, $\xi$ is called \emph{linearly stable} and, on the other hand, if $C(\xi)$ has at least one element outside of the unit circle---that is, $D\Phi_T(x)$ for $x\in \xi$ has an eigenvalue of modulus strictly greater than $1$---then $\xi$ is called \emph{linearly unstable}. The latter is the analog of strict saddle points in the context of periodic orbits.
We denote by $\{x_t\}$ sample paths of the process \eqref{eq:sa}, and by $L(\{x_t\})$ the \emph{limit set} of a sequence $\{x_t\}_{t\geq 0}$, which is defined in the usual way as the set of all $p\in X$ such that $\lim_{k\rightarrow \infty} x_{t_k}=p$ for some sequence $t_k\rightarrow \infty$. It was shown in~\cite{benaim:1996aa} that under less restrictive assumptions than Assumptions~\ref{ass:estmartin} and \ref{ass:others}, $L(\{x_t\})$ is contained in the \emph{chain recurrent set} of $\omega$ and $L(\{x_t\})$ is a non-empty, compact and connected set invariant under the flow of $\omega$.
\begin{theorem}
Consider a game $(f_1,\ldots, f_n)$ where each $X_i$ is a smooth, compact manifold without boundary.
Suppose each agent $i\in \mathcal{I}$ adopts a stochastic gradient-based learning algorithm that satisfies Assumptions~\ref{ass:estmartin} and \ref{ass:others} and is such that sample points $x_t\in X$ for all $t\geq 0$. Further, suppose that for each $i\in \mathcal{I}$, there exists a constant $b_i>0$ such that $\mathbb{E}[(w_{i,t}\cdot v)^+|\mathcal{F}_{i,t}]\geq b_i$ for every unit vector $v\in \mathbb{R}^{m_i}$. Then competitive stochastic gradient-based learning converges to linearly unstable cycles on a set of measure zero---i.e.~$P(L(\{x_t\})=\xi)=0$ for any linearly unstable cycle $\xi$, where $\{x_t\}$ is a sample path.
\label{thm:gradfreecycle}
\end{theorem}
As we noted, periodic orbits are not necessarily excluded from the limiting behavior of gradient-based learning in games. We leave out the proof of Theorem~\ref{thm:gradfreecycle} since, after some algebraic manipulation, it is a direct application of~\cite[Theorem~2.1]{benaim:1995aa}, which is provided in Theorem~\ref{thm:benaim} in Appendix~\ref{app:proofs}. The above theorem guarantees that competitive stochastic gradient-based learning avoids linearly unstable cycles almost surely.
We can state stronger results for a more restrictive class of games admitting \emph{gradient-like} vector fields. Specifically, analogous to~\cite{benaim:1995aa}, we can consider Morse-Smale vector fields. We introduce a new class of games, which we call \emph{Morse-Smale games}, that are a generalization of potential games. These are a very important class of games since $\omega$ corresponds to a Morse-Smale vector field; such vector fields are known to be generic in $\mathbb{R}^2$ and are otherwise structurally stable~\cite{hirsch:1976aa,palis:1970aa}.
\begin{definition}
A game $(f_1,\ldots, f_n)$ with $f_i\in C^r$ for some $r\geq 3$, and where each strategy space $X_i$ is a smooth, compact manifold without boundary for each $i\in \mathcal{I}$, is a Morse-Smale game if the vector field corresponding to the differential $\omega$ is Morse-Smale---that is, the following hold: (i) all periodic orbits $\xi$ (i.e.~equilibria and cycles) are hyperbolic and $W^s(\xi)\pitchfork W^u(\xi)$ (i.e.~the stable and unstable manifolds of $\xi$ intersect transversally), (ii) every forward and backward omega limit set is a periodic orbit, and (iii) $\omega$ has a global attractor.
\end{definition}
The Morse-Smale conditions in the above definition ensure that there are only finitely many periodic orbits. The dynamics of games with more general vector fields, on the other hand, can admit chaos (e.g.~the classic Lorenz attractor can be cast as gradient-play in a 3-player game). Hyperbolic equilibria and periodic orbits are the only types of limiting behavior that have been shown to correspond to strategies relevant to the underlying game~\cite{benaim:1997ab}. The simplest example of a Morse-Smale vector field is a gradient flow. However, not all Morse-Smale vector fields are gradient flows and hence, not all Morse-Smale games are potential games.
\begin{example}
Consider the $n$-player game with $X_i=\mathbb{R}$ for each $i\in \mathcal{I}$ and $f_n(x)=x_n(x_1^2-1)$, $f_i(x)=x_ix_{i+1}$ for all $i\in \mathcal{I}\setminus\{n\}$. This is a Morse-Smale game that is not a potential game. Indeed, $\dot{x}=-\omega(x)$ where $\omega(x)=[x_2, x_3, \ldots, x_n, x_1^2-1]^\top$ is a dynamical system with a Morse-Smale vector field that is not a gradient vector field~\cite{conley:1978aa}.
\end{example}
Essentially, in a neighborhood of a critical point for a Morse-Smale game, the game behavior can be described by a Morse function $\phi$ such that near critical points $\omega$ can be written as $D\phi$ and away from critical points $\omega$ points in the same direction as $D\phi$---i.e.~$\omega\cdot D\phi>0$. Specializing to the class of Morse-Smale games, we have stronger convergence guarantees.
\begin{theorem}
Consider a Morse-Smale game $(f_1,\ldots, f_n)$ on a smooth, compact, boundaryless manifold $X$. Suppose Assumptions~\ref{ass:estmartin} and \ref{ass:others} hold and that $\{x_t\}$ is defined on $X$. Let $\{\xi_i, \ i=1, \ldots, l\}$ denote the set of periodic orbits in $X$. Then $\sum_{i=1}^l P(L(\{x_t\})=\xi_i)=1$ and $P(L(\{x_t\})=\xi_i)>0$ implies $\xi_i$ is linearly stable. Moreover, if the periodic orbit $\xi_i$ with $P(L(\{x_t\})=\xi_i)>0$ is an equilibrium, then it is either a non-degenerate differential Nash equilibrium---which is generically a local Nash---or a non-Nash locally asymptotically stable equilibrium.
\label{thm:Morsesmale}
\end{theorem}
The proof of Theorem~\ref{thm:Morsesmale} follows by invoking Corollary~\ref{cor:benaim} in Appendix~\ref{app:proofs}.
\fi
\section{Saddle Point LNE in LQ Dynamic Games}
\label{sec:LQR}
In this section, we present empirical results that show that a non-negligible subset of two-player LQ games have local Nash equilibria that are strict saddle points of the gradient dynamics. LQ games serve as good benchmarks for analyzing the limiting behavior of gradient-play in a non-trivial setting since they are known to admit global Nash equilibria that can be found by solving a coupled set of Riccati equations \cite{BasarOlsder}. LQ games can also be cast as multi-agent reinforcement learning problems where each agent has a policy that is a linear function of the state and a quadratic reward function. Gradient-play in LQ games can therefore be seen as a form of policy gradient. The empirical results we now present imply that, even in the relatively straightforward case of linear dynamics, linear feedback policies, and quadratic costs, policy gradient multi-agent reinforcement learning would be unable to find the global Nash equilibrium in a non-negligible subset of problems.
\paragraph{LQ game setup}
For simplicity, we consider two-player LQ games in $\mathbb{R}^2$. Consider a discrete-time dynamical system defined by
\begin{align}
z(t+1)=Az(t)+B_1u_1(t)+B_2u_2(t)
\label{eq:updateLQR}
\end{align}
where $z(t) \in \mathbb{R}^2$ is the state at time $t$, $u_1(t)$ and $u_2(t)$ are the control inputs of players $1$ and $2$, respectively, and $A$, $B_1$, and $B_2$ are the system matrices. We assume that player $i$ searches for a linear feedback policy of the form $u_i(t)=-K_iz(t)$ that minimizes their loss, which is given by
\[\textstyle f_i(z_0,u_1,u_2)=\sum_{t=0}^\infty \big(z(t)^TQ_iz(t)+u_{i}(t)^TR_iu_{i}(t)\big)\]
where $Q_i \succ 0$ and $R_i \succ 0$ are the cost matrices on the state and input, respectively. We note that the two players are coupled through the dynamics since $z(t)$ is constrained to obey the update equation \eqref{eq:updateLQR}. The vector of player derivatives is given by $\omega(K_1,K_2)=(D_1f_1(K_1,K_2),D_2f_2(K_1,K_2))$ where
\[D_if_i(K_1,K_2) \textstyle=(R_{i}K_i+B_i^TP_i(B_1K_1+B_2K_2)-B_i^TP_iA)\sum_{t=0}^\infty z(t)z(t)^T, \ i\in\{1,2\}.\]
Note that there is a slight abuse of notation here as we are treating $D_if_i$ as a matrix and as the vectorization of a matrix.
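As a hedged illustration of this computation (assuming NumPy and SciPy's \texttt{solve\_discrete\_lyapunov}; every numerical value below is a placeholder of our own choosing, not one of the sampled games reported later), the sketch below evaluates $\omega(K_1,K_2)$ for a candidate stabilizing policy pair by solving the closed-loop Lyapunov equations for $P_1$, $P_2$ (recalled next) and the state-covariance sum $\Sigma=\sum_{t=0}^\infty z(t)z(t)^T$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Placeholder two-player LQ game; the closed loop below is assumed stable.
A  = np.array([[0.5, 0.2], [0.1, 0.3]])
B1 = np.array([[1.0], [1.0]]); B2 = np.array([[0.0], [1.0]])
Q1 = np.diag([0.01, 1.0]);     Q2 = np.diag([1.0, 0.01])
R1 = np.array([[0.01]]);       R2 = np.array([[0.1]])
z0 = np.array([[1.0], [1.0]])

def player_gradients(K1, K2):
    Abar = A - B1 @ K1 - B2 @ K2  # closed-loop dynamics matrix
    # P_i solves P_i = Abar' P_i Abar + K_i' R_i K_i + Q_i
    P1 = solve_discrete_lyapunov(Abar.T, K1.T @ R1 @ K1 + Q1)
    P2 = solve_discrete_lyapunov(Abar.T, K2.T @ R2 @ K2 + Q2)
    # Sigma = sum_t z(t) z(t)' solves Sigma = Abar Sigma Abar' + z0 z0'
    Sigma = solve_discrete_lyapunov(Abar, z0 @ z0.T)
    BK = B1 @ K1 + B2 @ K2
    D1 = (R1 @ K1 + B1.T @ P1 @ BK - B1.T @ P1 @ A) @ Sigma
    D2 = (R2 @ K2 + B2.T @ P2 @ BK - B2.T @ P2 @ A) @ Sigma
    return D1, D2

K1 = np.array([[0.2, 0.1]]); K2 = np.array([[0.1, 0.2]])
print(player_gradients(K1, K2))  # omega(K1, K2); vanishes at a critical point
\end{verbatim}
At a candidate equilibrium, one can then finite-difference (or auto-differentiate) this map to approximate $D\omega(K_1^*,K_2^*)$ and inspect its eigenvalues for the strict saddle property.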
The matrices $P_1$ and $P_2$ can be found by solving the Riccati equations
\begin{align*}
P_i & = (A-B_1K_1-B_2K_2)^TP_i(A-B_1K_1-B_2K_2)+ K_i^TR_iK_i +Q_i, \ \ i\in\{1,2\},
\end{align*}
for a given $(K_1,K_2)$. As shown in \cite{BasarOlsder}, global Nash equilibria of LQ games can be found by solving coupled Riccati equations. Under the following assumption, this can be done using an analogue of the method of Lyapunov iterations outlined in \cite{LyapIterCitation} for continuous-time LQ games.
\begin{assumption}
Either $(A,B_1,\sqrt{Q_1})$ or $(A,B_2,\sqrt{Q_2})$ is stabilizable-detectable.
\end{assumption}
Further information on the uniqueness of Nash equilibria in LQ games and the method of Lyapunov iterations can be found in \cite{BasarOlsder} and \cite{LyapIterCitation}, respectively.
\paragraph{Generating LQ games with strict saddle point Nash equilibria}
Without loss of generality, we assume $(A,B_1,\sqrt{Q_1})$ is stabilizable-detectable. Given that we have a method of finding the global Nash equilibrium of the LQ game, we now present our experimental setup. We fix $B_1$, $B_2$, $Q_1$, and $R_1$ and parametrize $Q_2$ and $R_2$ by $q$ and $r$, respectively. The shared dynamics matrix $A$ has entries that are sampled from the uniform distribution supported on $(0,1)$. For each value of the parameters $q$ and $r$, we randomly sample $1000$ different $A$ matrices. Then, for each LQ game defined in terms of each of the sets of parameters, we find the optimal feedback matrices $(K_1^*,K_2^*)$ using the method of Lyapunov iterations, and we numerically approximate $D\omega(K_1^*,K_2^*)$ using auto-differentiation tools and check its eigenvalues. The exact values of the matrices are defined as follows: $A \in \mathbb{R}^{2\times 2}$ with each of the entries $a_{ij}$ sampled from the uniform distribution on $(0,1)$,
\begin{align*}
B_1=\begin{bmatrix}1\\1\end{bmatrix}, \ B_2=\begin{bmatrix}0\\1\end{bmatrix}, \ Q_1=\begin{bmatrix}0.01 & 0\\0 & 1\end{bmatrix}, \ Q_2=\begin{bmatrix}1 & 0\\0 & q \end{bmatrix}, \ R_1=0.01, \ R_2=r.
\end{align*}
The results for various combinations of the parameters $q$ and $r$ are shown in Figure~\ref{fig:lqr}. For all of the different parameter configurations considered, we found that in anywhere from $0\%$ to $25\%$ of the randomly sampled LQ games, there was a global Nash equilibrium that was a strict saddle point of the gradient dynamics. Of particular interest is the fact that for all values of $q$ and $r$ we tested, at least $5\%$ of the LQ games had a global Nash equilibrium with the strict saddle property. In the worst case, around $25\%$ of the LQ games for the given values of $q$ and $r$ admitted such Nash equilibria.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{figs/FreqPlot.png}
\caption{Frequency (out of 1000) of randomly sampled LQ games with global Nash equilibria that are avoided by policy-gradient. The experiment was run 10 times and the average frequency is shown by the solid line. The shaded region demarcates the $95\%$ confidence interval of the experiment. (left) $r$ is varied in $(0,1)$, $q=0.01$. (right) $q$ is varied in $(0,1)$, $r=0.1$.}
\label{fig:lqr}
\end{figure}
\begin{remark}
These empirical observations imply that multi-agent policy gradient, even in the relatively straightforward setting of linear dynamics, linear policies, and quadratic costs, has no guarantees of convergence to the global Nash equilibria in a non-negligible number of games.
Further investigation is warranted to validate this fact theoretically. This in turn supports the idea that for more complicated cost functions, policy classes, and dynamics, local Nash equilibria with the strict saddle property are likely to be very common.
\end{remark}
\section{Discussion and Future Directions}
\label{sec:discussion}
In this paper we provided answers to the following two questions for classes of gradient-based learning algorithms:
\begin{description}[leftmargin=20pt]
\item[Q1.] \emph{Are all attractors of the learning algorithms employed by agents equilibria relevant to the underlying game?}
\item[Q2.] \emph{Are all equilibria relevant to the game also attractors of the learning algorithms agents employ?}
\end{description}
We answered these questions in general-sum, zero-sum, and potential games, without imposing structure on the game beyond regularity conditions on the cost functions, by exploiting the observation that gradient-based learning dynamics are not gradient flows. Our analysis was shown in Appendix~\ref{sec:examples} to apply to a number of commonly used methods in multi-agent learning.
\subsection{Links with Prior Work}
As we noted, previous work on learning in games in both the game theory literature, and more recently from the machine learning community, has largely focused on \textbf{Q1}, though some recent work has analyzed \textbf{Q2} in the setting of zero-sum games. In the seminal work by Rosen \cite{Rosen1965}, $n$-player concave or monotone games are shown to either admit a unique Nash equilibrium or a continuum of Nash equilibria, all of which are attracting under gradient-play. The structure present in these games rules out the existence of non-Nash equilibria. Two-player, finite-action bilinear games have also been extensively studied. In \cite{GradDyn}, the authors investigate the convergence of the gradient dynamics in such games. Additionally, the dynamics of other (non gradient-based) algorithms like multiplicative weights have been studied in \cite{hommes:2012aa} among many others. In such settings, the structure guarantees that there exists a unique global Nash equilibrium and no other critical points of the gradient dynamics. As such, non-Nash equilibria cannot exist.
In the study of learning dynamics in the class of zero-sum games, it has been shown that cycles can be attractors of the dynamics (see, e.g.,~\cite{mertikopoulos:2018aa, Wesson2016, hommes:2012aa}). Concurrently with our results, \cite{Daskalakis} also showed the existence of non-Nash attracting equilibria in this setting.
In more general settings, there has been some analysis of the limiting behavior of gradient-play, though the focus has been, for the most part, on giving sufficient conditions under which Nash equilibria are attracting under gradient-play. For example, \cite{ratliff:2013aa,ratliff:2014aa,ratliff:2016aa} introduced the notion of a differential Nash equilibrium, which is characterized by first- and second-order conditions on the players' individual cost functions and which we made extensive use of. Following this body of work, \cite{Mertikopoulos2019} also investigated the local convergence of gradient-play in continuous games. They showed that if a Nash equilibrium satisfies a property known as \emph{variational stability}, the equilibrium is attracting under gradient-play. In twice continuously differentiable games, this condition coincides exactly with the definition of stable differential Nash equilibria.
Though these works analyze a general class of games, the focus of the analysis is solely on the local characterization and computation (via gradient-play) of local Nash equilibria. As such, the issues of non-convergence that we show in this paper were not discussed.
\subsection{Open Questions}
Our results suggest that gradient-play in multi-agent settings has fundamental problems. Depending on the players' costs, in general games and even potential games, which have a particularly \emph{nice} structure, a subset of the Nash equilibria will be almost surely avoided by gradient-based learning when the agents randomly initialize their first action. In zero-sum and general-sum games, even if the algorithms do converge, they may have converged to a point that has no game-theoretic relevance, namely a non-Nash locally asymptotically stable equilibrium. Lastly, these results show that limit cycles persist even under a stochastic update scheme. This explains the empirical observations of limit cycles in gradient dynamics presented in~\cite{daskalakis:2017aa,leslie:2005aa,hommes:2012aa}. It also implies that gradient-based learning in multi-agent reinforcement learning, multi-armed bandits, generative adversarial networks, and online optimization all admit limit cycles under certain loss functions. Our empirical results show that these problems are not merely of theoretical interest, but also have great relevance in practice.
Which classes of games have all Nash equilibria attracting under gradient-play, and which classes preclude the existence of non-Nash equilibria, remains an open and particularly interesting question. Further, the question of whether gradient-based algorithms can be constructed for which only game-theoretically relevant equilibria are attracting is of particular importance as gradient-based learning is increasingly implemented in game-theoretic settings. Indeed, more generally, as learning algorithms are increasingly deployed in markets and other competitive environments, understanding and dealing with such theoretical issues will become increasingly important.
{ "timestamp": "2020-02-21T02:18:19", "yymm": "1804", "arxiv_id": "1804.05464", "language": "en", "url": "https://arxiv.org/abs/1804.05464" }
\section{INTRODUCTION}
The development of device, sensing, and communication technologies enables a wide range of applications of wireless sensor networks. After the pioneering work on event-based sensor data scheduling proposed in~\cite{astrom2002comparison}, a variety of studies have been conducted to balance the estimation performance and the communication overhead~\cite{imer2005optimal,leong2017sensor,ren2017infinite}.
A large number of works on sensor scheduling have focused on remote estimation of a linear time-invariant (LTI) dynamic process. There are also some other works addressing static processes and nonlinear models. However, static models~\cite{gao2018optimal,vasconcelos2018optimal} are special cases of LTI systems, and works on nonlinear models either involve approximation of a linear system~\cite{lin2009energy,shuman2010measurement} or require numerically solving a partially observable Markov decision process, which is computationally inefficient~\cite{krishnamurthy2002algorithms,he2004sensor,krishnamurthy2007structured}. A few works~\cite{molin2014price,gatsis2015opportunistic} considered control problems with transmission constraints, which can also be transformed into sensor scheduling problems as they prove the separation between optimal controls and optimal transmissions.
Sensor scheduling problems have been modeled in different frameworks. A number of works modeled them as a Markov decision process (MDP), which is a framework for optimal stochastic control problems. Obtaining an optimal solution of an MDP involves stochastic dynamic programming-based numerical algorithms such as value iteration and policy iteration, which prohibits solving large-scale problems due to the curse of dimensionality. Therefore, most works only use MDPs to deal with a single process~\cite{nayyar2013optimal,akyol2014controlled,leong2017sensor}. When there is only one dynamic process, an approximation of the optimal sensor scheduling policy can also be obtained by analyzing a modified algebraic Riccati equation (MARE), which characterizes the dynamics of the remote estimation error. Zhao et al.~\cite{zhao2014optimal} studied the asymptotic behavior of the MARE and showed that the optimal policy can be approximated by a periodic one. Orihuela et al.~\cite{orihuela2014periodicity} further showed that a periodic policy is optimal under a myopic criterion. Some other works modeled the sensor scheduling problem as a static sensor selection problem, resulting in an optimization problem in a Euclidean space with integer constraints. They either found a convex approximation of the original problem~\cite{mo2011sensor} or used greedy-based heuristics to find a suboptimal policy with a theoretical performance bound~\cite{asghar2017complete}. Although efficient algorithms can be developed from approximated models, the gap between the approximated policy and the optimal policy can be significant.
The framework for a sensor scheduling problem depends on the information available for scheduling. If there is only offline information, such as system parameters, open-loop scheduling suffices. The sensors transmit data based on the system clock and predetermined timing. The aforementioned periodic policies~\cite{zhao2014optimal,orihuela2014periodicity} and static sensor selection~\cite{mo2011sensor,asghar2017complete} fall into this category. Besides offline scheduling, a large number of works were devoted to optimal online scheduling.
Since additional online information is available, an online scheduling policy may yield better performance than an offline one. Nevertheless, analyzing and designing an online policy is nontrivial. Online information can be further categorized into two classes: system state information and holding time information.
System state information refers to the actual system state if the state is fully observable, or the innovation of the measurements if the state observation is noisy. Once the size of the system state exceeds a threshold value, a sensor is scheduled to transmit data. Therefore, these scheduling policies are also termed data-driven or event-based. Works on data-driven scheduling mostly focus on the single-sensor case~\cite{molin2012optimality,wu2013event,han2015stochastic,shi2016event}. Scheduling multiple sensors based on the system state poses significant challenges in light of coordination. Xia et al.~\cite{xia2016networked} showed that, if no coordination of the sensor transmissions is considered, the potential transmission collisions will cause an online policy to perform worse than an offline policy. Molin and Hirche~\cite{molin2014price} considered LQG control with fully observable states of multiple systems under a communication rate budget, which is inapplicable if the number of allowable channels is limited at every time step. Gatsis et al.~\cite{gatsis2015opportunistic} considered transmission power minimization under a system stability constraint. This cannot be applied if we aim to minimize the estimation error.
Holding time information is the time elapsed since the remote estimator last received data from the sensors. In the telecommunications community, this concept has attracted growing interest and is termed the age of information (AoI)~\cite{kadota2018optimizing}. In this work, we shall see that there is a one-to-one correspondence between the holding time information and system performance if the sensors are able to conduct local computations. This facilitates design and analysis as the holding time only takes values in the set of nonnegative integers. Leong et al.~\cite{leong2017sensor} utilized this property to study the optimal scheduling for one dynamic system over a lossy channel. If there is no packet dropout in the communication channel, the holding time becomes offline information as the packet arrival sequence is available before actual transmissions. In this case, the online problem reduces to the offline one.
In this work, we consider multiple-sensor scheduling using online holding time information of multiple dynamic processes, which is an extension of previous works~\cite{shi2012scheduling,han2017optimal}. In these works, only unstable processes over a reliable channel were considered. We generalize the results to a setup where both stable and unstable processes exist over lossy channels. We formulate the problem as an MDP. Although this framework has been studied, the existing analysis fails for stable processes, as mentioned in~\cite{han2017optimal}. If there are no packet dropouts, the state space can be restricted to be finite, as done by~\cite{han2017optimal}. If the channel is lossy, however, the existing approach of~\cite{han2017optimal} no longer works. In addition, we take the costs of communication into consideration, which have not been addressed previously since they make the one-stage cost more complicated. We show the optimality of a monotone deterministic stationary policy.
Furthermore, we use the celebrated Whittle's index~\cite{whittle1988restless} to develop a heuristic policy, which can be written in closed form and is asymptotically optimal.
The contributions of our work are threefold.
(1) We develop an algorithm-based sufficient condition for the existence of a deterministic stationary optimal policy, which generalizes the approaches in previous works (e.g.,~\cite{shi2012scheduling,han2017optimal}). We formulate the multi-sensor scheduling problem as an average-cost Markov decision process (MDP) over an infinite horizon. As the communication channel is lossy, the state space of the MDP is infinite and there may not be an optimal policy in the class of deterministic stationary policies. We develop \textbf{Algorithm~\ref{alg:feasibility}} and show that deterministic stationary optimal policies indeed exist if the output of the algorithm is not greater than the number of available channels.
(2) We show the optimality of monotone policies (\textbf{Theorem~\ref{theorem: monotonicity for multiple processes}}), which sheds light on the structure of optimal policies. In particular, if it is optimal to schedule a sensor in one state, it is also optimal to schedule this sensor when the state of this sensor increases while the others remain unchanged. Although dynamic programming can be used as a general approach to tackle MDPs, only numerical solutions can be obtained and no design insights into an optimal policy can be acquired. The monotone structure seems intuitive, but its proof is not straightforward.
(3) We use the Whittle's index~\cite{whittle1988restless} to develop an index-based heuristic for the scheduling policy (\textbf{Theorem~\ref{theorem: whittle's index}}) instead of solving the problem via brute-force numerical algorithms. The index-based policy is asymptotically optimal. Although such heuristics have been adopted in several problems in an MDP setup, e.g.,~\cite{liu2010indexability,larranaga2015stochastic,borkar2018opportunistic,kadota2018optimizing}, computing the Whittle index generally requires an iterative algorithm. We derive analytic expressions of these indices in this work, which reduces computation overhead significantly and facilitates online implementation.
The remainder of this paper is organized as follows. In Section~\Rmnum{2}, we present the mathematical formulation of the problem of interest. In Section~\Rmnum{3}, we present the MDP formulation and the optimality of a monotone deterministic stationary policy. In Section~\Rmnum{4}, we construct a Whittle's index-based suboptimal heuristic. The numerical examples in Section~\Rmnum{5} demonstrate the monotone policies and the performance of the index-based policy. We summarize the paper in Section~\Rmnum{6} and leave all proofs to the Appendix.
\emph{Notation}: Denote $\mathbb{N}$ and $\mathbb{R}$ as the sets of nonnegative integers and real numbers, respectively. The symbol $\mathbb{X}^n$ stands for the $n$-th order Cartesian product of a set $\mathbb{X}$. Inequalities (i.e., $<,>,\leq,\geq$) between two vectors are interpreted element-wise. For a matrix $X$, let $\Tr(X)$, $X^{\top}$ and $\rho(X)$ represent the trace, the transpose and the spectral radius of $X$, respectively. The symbol $I$ stands for an identity matrix of appropriate size.
Let $\mathtt{Pr}(\cdot)$ and $\mathtt{Pr}(\cdot|\cdot)$ stand for the probability and conditional probability of certain events. Denote $\mathbb{E}[\cdot]$ as the expectation of a random variable. The composition of two mappings $f$ and $g$ is denoted by $g\circ f$, and the composition of a mapping $f$ with itself $t$ times is denoted by $f^t:=\underbrace{f \circ f \circ \cdots \circ f}_t$ with $f^0$ being the identity mapping. The Lyapunov operator is defined as $h_i(X):= A_iXA_i^{\top}+Q_i$.
\section{SYSTEM SETUP AND PROBLEM FORMULATION}
\subsection{System Setup}
Consider the remote estimation system in Fig.~\ref{fig:architecture}. We illustrate each component as follows.
\emph{Processes}. There are $n$ independent discrete-time linear dynamic processes whose states are measured by $n$ sensors, respectively. This type of system configuration can be implemented with the \emph{Wireless}HART protocol in industrial applications~\cite{song2008wirelesshart}. The dynamics of the $i$-th process and its measurement are as follows:
$$x^{(i)}_{k+1} = A_ix^{(i)}_{k} + w^{(i)}_k,~y^{(i)}_k = C_i{x}^{(i)}_{k} + {v}^{(i)}_k,$$
where $i \in \{1,\ldots,n\}$, ${x}_k^{(i)}\in\mathbb{R}^{n_i}$ is the state of the $i$-th system at time $k$ and ${y}^{(i)}_k\in\mathbb{R}^{m_i}$ is the noisy measurement taken by the $i$-th sensor. For all processes and $k\geq0$, the state disturbance noise ${w}_k^{(i)}$, the measurement noise ${v}_k^{(i)}$ and the initial state ${x}_0^{(i)}$ are mutually independent Gaussian random variables with ${w}_k^{(i)}\sim\mathcal{N}({0},{Q}_i)$, ${v}_k^{(i)}\sim\mathcal{N}({0},{R}_i)$ and ${x}_0^{(i)}\sim\mathcal{N}({0}, {\Sigma}^x_i)$. We assume that ${Q}_i$ and ${\Sigma}^x_i$ are positive semidefinite, and ${R}_i$ is positive definite. We assume that, for every $i \in \{1,\ldots,n\}$, the pair $({A}_i,{C}_i)$ is detectable and the pair $({A}_i,\sqrt{Q_i})$ is stabilizable.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{digram.eps}
\caption{Architecture of the remote estimation system.}
\label{fig:architecture}
\end{figure}
\emph{Sensors}. Each sensor is assumed to be equipped with a computation unit and memory. After taking the measurement, the sensor computes $\hat{{x}}^{(i)}_{local,k}$, the local minimum mean squared error estimate of the state ${x}^{(i)}_k$, at each time step based on the Kalman filter~\cite{kalman1960new}. After computation, the sensor transmits its local state estimate if the remote estimator delivers a transmission order to it through a feedback channel.
\emph{Communication channels}. The communication bandwidth is considered to be limited. At each time step, the remote estimator can only receive data from $m$ out of the $n$ sensors through a forward channel. Let $a^{(i)}_k \in \{0, 1\}$ denote whether the $i$-th sensor is scheduled to transmit data at time $k$. This command is sent from the remote estimator to the sensor through the feedback channel. If the remote estimator decides to ask for data of sensor $i$ at time $k$, $a_k^{(i)}=1$; otherwise, $a_k^{(i)}=0$. We also consider the unreliability of the channel. Let $\eta_k^{(i)} \in \{0, 1\}$ denote whether the packet is successfully received by the remote estimator through the forward channel. Let $\eta^{(i)}_k=1$ stand for successful transmission, and $\eta^{(i)}_k=0$ for failure. Similar to the setting in~\cite{ren2018attack}, the channel conditions are assumed to be independently distributed with $\mathbb{E}[\eta^{(i)}_k]=\lambda_i$ for any $k\geq0$.
For the feedback channel, similar to other references in the literature~\cite{mo2014detecting}, the transmission is assumed to be reliable since the remote estimator is typically able to transmit signals with greater power.
\emph{Remote estimator}. Let the random variable $\xi^{(i)}_k=a^{(i)}_k\eta_k^{(i)}$ denote whether a local estimate of sensor $i$ is received by the remote estimator. According to~\cite{anderson1979optimal}, since each $({A}_i,{C}_i)$ is detectable and each $({A}_i,\sqrt{{Q}_i})$ is stabilizable, the \emph{a posteriori} estimation error covariance ${P}_{local,k}^{(i)}$ converges exponentially fast to a steady state $\overline{{P}}^{(i)}$, usually in a few steps. We assume that the system operates in the steady state. Based on this fact, the optimal estimate of each process for the remote estimator is as follows:
\begin{align*}
\hat{{x}}_k^{(i)}=
\begin{cases}
\hat{{x}}^{(i)}_{local,k}, &\text{if~} \xi_k^{(i)}=1,\\
{A}_i \hat{{x}}^{(i)}_{k-1}, &\text{if~} \xi_k^{(i)}=0.
\end{cases}
\end{align*}
Define the time elapsed since the last received packet of the $i$-th sensor at time $k$:
\begin{align}\label{eq:definition of tau}
\tau^{(i)}_k=\min_{t}\{0\leq t\leq k:\xi^{(i)}_{k-t}=1\}.
\end{align}
The estimation error covariance matrices at the remote estimator are thus as follows:
\begin{align*}
{P}_k^{(i)}=
\begin{cases}\overline{{P}}^{(i)}, &\text{if~} \xi_k^{(i)}=1,\\h_i({P}_{k-1}^{(i)}), &\text{if~} \xi_k^{(i)}=0.\end{cases}
\end{align*}
The estimation error covariance of the remote estimator can be compactly written as
\begin{align}
P_k^{(i)}=h_i^{\tau_k^{(i)}}(\overline{P}^{(i)}).\label{eq:remote P}
\end{align}
According to~\cite[Lemma 3.1]{shi2012scheduling}, the operator $h_i^{\ell}({X})$ is monotonically increasing with respect to $\ell$, i.e., for all $i \in \{1,\ldots,n\}$, if $\ell_1\leq\ell_2$ for $\ell_1,~\ell_2 \in \mathbb{N}$, then $h_i^{\ell_1}(\overline{{P}}^{(i)})\leq h_i^{\ell_2}(\overline{{P}}^{(i)})$. Moreover, $\forall \ell \in \mathbb{Z}_+$, $\Tr(\overline{{P}}^{(i)})<\Tr(h_i(\overline{{P}}^{(i)}))<\cdots<\Tr(h_i^{\ell}(\overline{{P}}^{(i)}))$.
\subsection{Problem Formulation}
From~\eqref{eq:remote P}, the expected estimation error covariance is a function of ${\tau^{(i)}_k}$ and is independent of the realization of $\hat{x}^{(i)}_{local,k}$. As the remote estimation error covariance now has a one-to-one correspondence with $\tau_k^{(i)}$, we denote the cost associated with the remote estimation error as
\begin{align*}
c_e^{(i)}(\tau^{(i)}_k)=\Tr(P_k^{(i)}).
\end{align*}
We also take the energy consumption of the sensors into consideration. If sensor $i$ transmits data, an energy cost $c_c^{(i)}$ is incurred. Our objective is to find a scheduling policy $\{a_k^{(i)}:i=1,2,\dots,n;\;k=0,1,2,\dots\}$ to minimize the expected time-averaged trace of the remote estimation error plus the normalized energy cost over all sensors, as follows.
\begin{problem}\label{prb:problem1}
\begin{align*}
\min_{\{a_k^{(i)}\}} \quad &\lim_{T\to \infty} \frac{1}{T+1} \sum_{k=0}^{T} \sum_{i=1}^{n} \mathbb{E} [c_e^{(i)}(\tau_k^{(i)})+ c_c^{(i)}a_k^{(i)}]\\
\text{s.t.} \quad &\sum_{i=1}^n a_k^{(i)} \leq m, ~\forall k\geq0.
\end{align*}
\end{problem}
The feasibility of Problem \ref{prb:problem1} requires that there exists a policy such that the objective function is bounded. A necessary condition is imposed as follows.
\begin{assumption}\label{assumption:neccessary for stability}
$\max_i \rho^2(A_i)(1-\lambda_i) < 1.$
\end{assumption}
This assumption ensures that the estimation error covariance of each process is bounded if every sensor is allowed to transmit simultaneously at each time step. This assumption is only a necessary condition for the existence of a solution to the problem, as the constraint on the number of simultaneous sensor transmissions is neglected. We develop a sufficient condition in Theorem~\ref{thm:existence} in the next section.
\section{Structural Properties of an Optimal Policy}
In this section, we first formulate Problem~\ref{prb:problem1} as a Markov decision process (MDP) with average cost over an infinite horizon. We then present an algorithm-based sufficient condition to guarantee the existence of a deterministic stationary optimal policy for the MDP. We show that there exist monotone structures in an optimal stationary policy, which extends the threshold structure of single-sensor scheduling to the multiple-sensor case.
\subsection{MDP Formulation}
\textbf{Problem~\ref{prb:problem1}} can be cast as an MDP with an infinite-horizon time-averaged cost, which consists of a quadruple $(\mathbb{S},\mathbb{A},\mathtt{Pr}(\cdot|\cdot,\cdot),c(\cdot,\cdot))$. Each element is explained as follows.
1) The state space $\mathbb{S}$ contains all possible states ${s} := [\tau^{(1)},\ldots,\tau^{(n)}]^\top \in \mathbb{N}^n$, where $\tau^{(i)}$ is a shorthand notation for $\tau^{(i)}_k$ defined in~\eqref{eq:definition of tau} by omitting the time index $k$. This can be done because we are going to discuss the transition between two successive time steps, where the time index $k$ is not necessary.
2) The action space $\mathbb{A}$ contains all allowable scheduling actions, i.e., $\mathbb{A} := \{a = [a^{(1)},\dots,a^{(n)}]\in\{0,1\}^n:\sum_{i=1}^na^{(i)}\leq m\}$, where $a^{(i)}=1$ stands for scheduling the $i$-th sensor and $0$ otherwise.
3) At time $k$, suppose the state is $s_k=s$. After taking action $a_k=a$, the state transits to another state $s_+$ in the next time step by following a time-homogeneous transition law:
\begin{align}\label{eq:transition law}
\mathtt{Pr}(s_+|s,a)=\prod_{i=1}^n \mathtt{Pr}^{(i)}(\tau^{(i)}_+|\tau^{(i)},a^{(i)}),
\end{align}
where
\begin{align}\label{eq:single process transisiton}
\mathtt{Pr}^{(i)}(\tau^{(i)}_+|\tau^{(i)},a^{(i)})=
\begin{cases}
\lambda_i, &\text{if~} \tau^{(i)}_+=0,a^{(i)}=1,\\
1-\lambda_i, &\text{if~} \tau^{(i)}_+=\tau^{(i)}+1,a^{(i)}=1,\\
1, &\text{if~} \tau^{(i)}_+=\tau^{(i)}+1,a^{(i)}=0,\\
0, &\text{otherwise.}
\end{cases}
\end{align}
4) The one-stage cost is defined as $c({s},{a}) := \sum_{i=1}^n \big(c_e^{(i)}(\tau^{(i)})+ c_c^{(i)}a^{(i)}\big)$.
Let $(s_{0:k},a_{0:k-1})=({s}_0,{a}_0,\dots,{s}_{k-1},{a}_{k-1},{s}_k)$ stand for the history up to time $k$. A policy is a sequence of mappings from the history to a probability distribution over the scheduling actions, i.e., $\{\pi_k\}_{k=0}^\infty$, where $\pi_k:(s_{0:k},a_{0:k-1}) \mapsto \mathtt{Pr}(a_k)$. Let $\Pi$ denote the set of all feasible policies. The goal of the MDP is to minimize the expectation of the time-averaged cost over an infinite horizon:
\begin{align*}
\min_{\{\pi_k\}_{k=0}^\infty\in\Pi} \lim_{T\to \infty} \frac{1}{T+1} \sum_{k=0}^{T} \sum_{i=1}^{n} \mathbb{E} [c_e^{(i)}(\tau_k^{(i)})+ c_c^{(i)}a_k^{(i)}].
\subsection{Existence of a Deterministic Stationary Policy} The general policy class $\Pi$ requires the information of the whole history and could be randomized, which hinders practical scheduling implementations. In this work, we consider deterministic stationary policies of the form \begin{align*} a_k = \pi(s_k), \end{align*} where $\pi=\pi_k$ for all $k\geq0$. Such policies are more desirable, as the actions are deterministic and the mapping is stationary (independent of the time $k$). We introduce Algorithm~\ref{alg:feasibility}, whose output determines whether an optimal policy can be found in the set of deterministic stationary ones. Let $G^{(u)}$ denote the set of indices of all unstable processes, i.e., those $i$ with $\rho(A_i)\geq 1$, and let $G^{(u)}[j]$ denote its $j$-th element. Given the necessary condition (Assumption~\ref{assumption:neccessary for stability}), Algorithm~\ref{alg:feasibility} outputs the least number of channels such that all the processes can be stabilized. \begin{algorithm} \caption{Feasibility of Multiple Sensor Scheduling} \label{alg:feasibility} \begin{algorithmic}[1] \State Initialize the group counter $\Bbbk\leftarrow 1$ and the first group $G_1\leftarrow\{G^{(u)}[1]\}$ \For{each process $i\in\{G^{(u)}[2],\ldots,G^{(u)}[|G^{(u)}|]\}$} \For{$j=1:\Bbbk$} \If{process $i$ and the processes in group $G_j$ satisfy \begin{align*} \max_{i'\in G_j \bigcup\{i\}}\rho^2(A_{i'})\max_{j'\in G_j \bigcup\{i\}}(1-\lambda_{j'})<1 \end{align*} \hspace{2.5em}} \State $G_j\leftarrow G_j \bigcup\{i\}$ and \textbf{break} \EndIf \EndFor \If{process $i$ has not been put in any group} \State{$\Bbbk\leftarrow \Bbbk+1,~G_{\Bbbk}\leftarrow\{i\}$} \EndIf \EndFor \State Output $\Bbbk$ \end{algorithmic} \end{algorithm} The following theorem characterizes a sufficient condition for the existence of a deterministic stationary optimal policy for the MDP formulation. \begin{theorem}\label{thm:existence} If the output of Algorithm~\ref{alg:feasibility} is less than or equal to $m$, there exist a constant $\mathcal{J}^\star$, a function $V^\star(s)$, and a deterministic stationary policy $\pi^\star:\mathbb{S}\mapsto\mathbb{A}$ that satisfy the Bellman optimality equation \begin{align} \mathcal{J}^\star + V^\star(s) = \min_{a\in\mathbb{A}} \Bigg[ c(s,a) + \sum_{s_+\in\mathbb{S}} V^\star(s_+)\mathtt{Pr}(s_+|s,a) \Bigg] \label{eq:acoe} \end{align} and \begin{align*} \mathcal{J}^\star + V^\star(s) = \Bigg[ c(s,\pi^\star(s)) + \sum_{s_+\in\mathbb{S}} V^\star(s_+)\mathtt{Pr}(s_+|s,\pi^\star(s)) \Bigg]. \end{align*} In addition, \begin{align*} J(\pi^\star) = \min_{\pi\in\Pi} J(\pi) = \mathcal{J}^\star. \end{align*} \end{theorem} Establishing the existence of such a regular optimal policy for the multiple-sensor scheduling problem is nontrivial when packet dropouts occur. Roughly speaking, if the channel bandwidth is sufficient, a deterministic stationary optimal policy exists. Previous works~\cite{shi2012scheduling,han2017optimal} on scheduling multiple linear dynamic processes assume a perfect channel, whereas our problem considers a lossy channel. As a result, the number of feasible consecutive packet losses cannot be restricted to a finite value as was done in~\cite{han2017optimal}, which makes proving the existence of a deterministic stationary policy challenging. Furthermore, our result holds in the presence of stable processes, extending~\cite{han2017optimal}, which only considered unstable processes and does not carry over to stable ones.
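The grouping test in Algorithm~\ref{alg:feasibility} is easy to implement; the following is a minimal, unoptimized sketch (a direct reading of the pseudocode, with illustrative names):
\begin{verbatim}
import numpy as np

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

def least_channels(As, lambdas):
    # Greedy grouping of unstable processes (Algorithm 1): a group is
    # feasible if max rho^2(A) * max (1 - lambda) < 1 over its members.
    unstable = [i for i, A in enumerate(As) if spectral_radius(A) >= 1]
    groups = []
    for i in unstable:
        placed = False
        for g in groups:
            members = g + [i]
            rho2 = max(spectral_radius(As[j]) ** 2 for j in members)
            loss = max(1 - lambdas[j] for j in members)
            if rho2 * loss < 1:
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return len(groups)  # compare with m to check the sufficient condition
\end{verbatim}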
\subsection{Structure of an Optimal Policy} One can directly obtain an optimal policy through relative value iteration or policy iteration for~\eqref{eq:acoe}. This, however, provides little insight into the structure of the problem. One can observe that the one-stage cost $c(s,a)$ and the state transition law possess a certain monotone structure, which leads to the optimality of monotone policies. \begin{theorem}\label{theorem: monotonicity for multiple processes} There exists an optimal deterministic stationary policy $\pi^\star$ with a monotone structure. In particular, if states $s$ and $s'$ satisfy $\tau^{(i)} \leq \tau'^{(i)}$ and $\tau^{(j)}=\tau'^{(j)}$ for $j \neq i$, and the $i$-th component of $\pi^\star(s)$ is one, then the $i$-th component of $\pi^\star(s')$ is also one. \end{theorem} This result shows that, if it is optimal to schedule sensor $i$ at state $s$, it is also optimal to schedule sensor $i$ at any state $s'$ with $\tau'^{(i)}\geq\tau^{(i)}$ and $\tau'^{(j)}=\tau^{(j)}$ for $j\neq i$. In particular, if $m=1$ and $n=2$, there exists a switching curve in the state space between scheduling and not scheduling a sensor. Examples can be found in the numerical example section. The benefits of the monotone structure of the optimal policy are two-fold. Firstly, the structured policy reduces the storage space needed for online implementation: after obtaining the optimal scheduling policy, only the boundary states need to be stored. Secondly, by leveraging the monotone structure, we can reduce the computational overhead of solving~\eqref{eq:acoe} compared with brute-force numerical schemes such as relative value iteration or policy iteration. Following the idea in~\cite{zhou2017optimal}, the standard relative value iteration can be revised as follows. The original relative value iteration alternates between the two updates \begin{align} V_{k+1}(s) &= \min_{a\in\mathbb{A}} \Bigg[ c(s,a) + \sum_{s_+\in\mathbb{S}} V_k(s_+)\mathtt{Pr}(s_+|s,a) \Bigg],\label{eq:value update}\\ V_{k+1}(s) &= V_{k+1}(s) - V_{k+1}(s_o)\nonumber, \end{align} where $s_o\in\mathbb{S}$ is a fixed state. For each $k$, we can associate an optimal policy by letting \begin{align} \pi^{\star}_k(s)=\argmin_{a\in\mathbb{A}} \Bigg[ c(s,a) + \sum_{s_+\in\mathbb{S}} V_k(s_+)\mathtt{Pr}(s_+|s,a) \Bigg]\label{eq:policy improvement} \end{align} for each state $s$. In the revised version, before computing~\eqref{eq:value update}, instead of minimizing for every state $s\in\mathbb{S}$, we first check whether there are $s'\leq s$ (componentwise) and $a\in\mathbb{A}$ such that $\pi^{\star}_k(s')=a$, and then let \begin{align*} \pi^{\star}_{k+1}(s) &= a,\\ V_{k+1}(s) &= c(s,a) + \sum_{s_+\in\mathbb{S}} V_k(s_+)\mathtt{Pr}(s_+|s,a) \end{align*} for the state $s$, if such $s'$ and $a$ exist. If no such $s'$ and $a$ exist, we execute the original update~\eqref{eq:value update} for $s$ and calculate $\pi^{\star}_{k+1}(s)$ via~\eqref{eq:policy improvement}. This revision removes the brute-force search over $\mathbb{A}$ in~\eqref{eq:policy improvement} by leveraging the monotone structure. According to Theorem~\ref{theorem: monotonicity for multiple processes}, the revised algorithm converges to the same policy as the original one. A similar revision can be made for policy iteration; details can be found in~\cite{zhou2017optimal}.
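For reference, here is a minimal sketch of the plain relative value iteration \eqref{eq:value update}--\eqref{eq:policy improvement} on a truncated state space; the monotone shortcut described above would additionally skip the minimization for any state whose action is implied by an already-solved smaller state. All parameters are illustrative, and the truncation level \texttt{T} is an assumption of the sketch.
\begin{verbatim}
import itertools

def q_value(s, a, V, c_e, c_c, lam, T, n):
    # One-stage cost plus expected next value under the product transition law.
    val = sum(c_e[i][s[i]] + c_c[i] * a[i] for i in range(n))
    succ = [(0, s[i] + 1) if a[i] else (s[i] + 1,) for i in range(n)]
    for nxt in itertools.product(*succ):
        p = 1.0
        for i in range(n):
            if a[i]:
                p *= lam[i] if nxt[i] == 0 else 1.0 - lam[i]
        nxt = tuple(min(x, T - 1) for x in nxt)  # keep within the truncation
        val += p * V[nxt]
    return val

def relative_value_iteration(c_e, c_c, lam, m, T, iters=500):
    # States: tau-vectors in {0,...,T-1}^n; actions: schedules with <= m ones.
    n = len(lam)
    states = list(itertools.product(range(T), repeat=n))
    actions = [a for a in itertools.product((0, 1), repeat=n) if sum(a) <= m]
    V = {s: 0.0 for s in states}
    s0 = states[0]
    for _ in range(iters):
        newV = {s: min(q_value(s, a, V, c_e, c_c, lam, T, n) for a in actions)
                for s in states}
        off = newV[s0]                       # relative-value normalization
        V = {s: v - off for s, v in newV.items()}
    policy = {s: min(actions, key=lambda a: q_value(s, a, V, c_e, c_c, lam, T, n))
              for s in states}
    return V, policy
\end{verbatim}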
Scheduling multiple sensors is complex by nature. When $n$ is large, storing the switching boundaries in $n$ dimensions is still demanding. Moreover, although the search space of relative value iteration and policy iteration has been reduced, the computational complexity is still exponential in $n$. In the next section, we present an index-based heuristic for the scheduling policy to further reduce the computational overhead and to simplify the scheduling decisions. \section{Index-Based Heuristics} To obtain the optimal solution of the MDP, one needs to resort to a dynamic-programming-based numerical algorithm. Suppose that each process is approximated by $N$ states. There are $N^n$ states in total, which grows exponentially as $n$ increases. Meanwhile, the size of the action space is $\sum_{i=0}^m\binom{n}{i}$. The large state and action spaces make brute-force numerical methods prohibitive. We construct an index-type heuristic based on Whittle's index~\cite{whittle1988restless} to obtain a suboptimal scheduling policy. The index policy maps each state of a sensor to a real number and determines which sensors transmit based on the order of these real numbers. The mapping is calculated for each sensor separately, which significantly reduces the computational overhead. As mentioned in Whittle's seminal paper~\cite{whittle1988restless}, certain conditions, known collectively as indexability, are needed to ensure that the index policy can be constructed. Indexability requires case-by-case analysis, and, in general, computing the indices raises a significant challenge; researchers use ad hoc approaches to tackle specific problems. We show that the index for the sensor scheduling problem in this model can be written in closed form, which makes the index easy to compute and facilitates online implementation. In addition, this suboptimal policy is asymptotically optimal as the number of sensors and channels go to infinity. \subsection{Overview of the Index Policy} The derivation of the Whittle index is based on a relaxation of the hard constraint on simultaneous transmissions at each time step, which leads to decoupled sensor scheduling problems. We schedule the sensors with the top $m$ largest indices if these indices are positive; the actual index policy therefore still meets the hard constraint. We start the analysis by transforming the hard constraint in Problem~\ref{prb:problem1} \begin{align*} \sum_{i=1}^n a_k^{(i)} \leq m, ~\forall k\geq0 \end{align*} into the relaxed time-averaged form \begin{align}\label{eq: relaxed constraint} \lim_{T\to \infty} \frac{1}{T+1} \sum_{k=0}^{T} \sum_{i=1}^{n} \mathbb{E} [a_k^{(i)}] \leq m. \end{align} We transform Problem~\ref{prb:problem1} into an unconstrained one by incorporating the relaxed constraint into the objective function via an extra transmission penalty $w$, i.e., \begin{align*} \min_\pi \lim_{T\to \infty} \frac{1}{T+1} \sum_{k=0}^{T} \sum_{i=1}^{n} \mathbb{E} [c_e^{(i)}(\tau_k^{(i)})+ c_c^{(i)}a_k^{(i)} + w a_k^{(i)}]. \end{align*} This problem has a separable structure and can be further decoupled into $n$ independent scheduling problems \begin{align}\label{eq:decoupled mdp} \min_{\pi_i} \lim_{T\to \infty} \frac{1}{T+1} \sum_{k=0}^{T} \mathbb{E} [c_e^{(i)}(\tau_k^{(i)})+ c_c^{(i)}a_k^{(i)} + w_i a_k^{(i)}] \end{align} for each $i$. This leads to $n$ \emph{decoupled MDPs}. Note that we further relax $w$ to a sensor-specific $w_i$ for each $i$.
By using the MDP framework of the last section, we have $n$ independent MDPs $(\mathbb{S}_i,\mathbb{A}_i,\mathtt{Pr}(\cdot|\cdot,\cdot),c^{(i)}(\tau^{(i)},a^{(i)}))$ with $c^{(i)}(\tau^{(i)},a^{(i)})=c_e^{(i)}(\tau^{(i)}) + c_c^{(i)}a^{(i)} + w_i a^{(i)}$, and the optimal policy for each $i$ can be characterized by the following Bellman optimality equation \begin{align} \mathcal{J}_i^\star + &V_i^\star(\tau^{(i)}) = \min_{a^{(i)}\in\mathbb{A}_i} \Bigg[ c_e^{(i)}(\tau^{(i)}) + c_c^{(i)}a^{(i)} + w_i a^{(i)}\nonumber\\ &+ \sum_{\tau^{(i)}_+\in\mathbb{S}_i} V_i^\star(\tau^{(i)}_+)\mathtt{Pr}^{(i)}(\tau^{(i)}_+|\tau^{(i)},a^{(i)}) \Bigg]\label{eq:acoe for each}. \end{align} An optimal policy determines whether $a^{(i)}=1$ or $a^{(i)}=0$ for each state $\tau^{(i)}$ and varies with $w_i$. For each given state $\tau^{(i)}$, there exists a $w_i(\tau^{(i)})$ such that both $a^{(i)}=1$ and $a^{(i)}=0$ minimize the term inside the brackets on the right-hand side of~\eqref{eq:acoe for each}. We can thus interpret $w_i(\tau^{(i)})$ as the importance of the state $\tau^{(i)}$; Whittle calls these values indices. Whittle's original index policy runs as follows. Suppose that, for each $i$, the corresponding process is \emph{indexable} (see more details below). At each time step, we first \emph{sort} the indices of the sensors, evaluated at their current states $\tau^{(i)}$, and then \emph{schedule} the $m$ sensors with the largest indices. \subsection{Derivation of the Index Policy} The key component of adopting the index policy is computing Whittle's index. In general, this is computationally intensive, as the index $w_i(\tau^{(i)})$ is coupled in the Bellman optimality equation, which must be solved for each state. In our problem, however, it turns out that we can obtain a closed-form expression for $w_i(\tau^{(i)})$, which greatly reduces the computational overhead. Before proceeding to the computation, we verify that our problem indeed meets the assumption made by Whittle. The applicability of Whittle's index policy requires that each decoupled MDP in~\eqref{eq:decoupled mdp} be indexable. Denote by $\mathbb{U}_i(w):=\{t:\pi^\star_i(t)=1,w_i=w\}$ the set of states at which transmission is optimal when the extra penalty is $w$. \begin{definition} A decoupled MDP is indexable if $\mathbb{U}_i(w)$ monotonically decreases from the whole state space $\mathbb{S}_i$ to the empty set as the extra penalty $w_i$ increases from $-\infty$ to $+\infty$. \end{definition} The sensor scheduling problem is indeed indexable, which follows from the optimality of threshold policies and the monotonicity of the threshold with respect to $w_i$. \begin{lemma}\label{lemma: indexability} \begin{enumerate} \item There exists a constant $\theta_i^\star(w_i)$ depending on $w_i$ such that the threshold policy of the form \begin{align*} \pi^\star_i(t) = \begin{cases} 1,&~\text{if}~t\geq\theta_i^\star(w_i),\\0,&~\text{if}~t<\theta_i^\star(w_i)\end{cases} \end{align*} achieves the minimization in~\eqref{eq:acoe for each} with penalty $w_i$. \item The thresholds satisfy $\theta_i^\star(w_i)\leq\theta_i^\star(w_i')$ if $w_i\leq w'_i$. \end{enumerate} \end{lemma} We conclude from Lemma~\ref{lemma: indexability} that the indexability condition indeed holds. As a threshold policy is optimal, we obtain $\mathbb{U}_i(w_i)=\{t:t \geq \theta_i^\star(w_i)\}$. From the monotonicity of the threshold, we further obtain $\mathbb{U}_i(w_i)\subseteq \mathbb{U}_i(w'_i)$ if $w_i \geq w'_i$.
Moreover, since $w_i\to-\infty$ and $w_i\to+\infty$ lead to $\mathbb{U}_i(w_i)=\mathbb{S}_i$ and $\mathbb{U}_i(w_i)=\emptyset$, respectively, we verify that the decoupled MDP for sensor $i$ is indexable. Before proceeding to the closed-form expression for the Whittle index, we need the following lemma to compute the time-averaged costs under a threshold policy. \begin{lemma}\label{lemma: closed form expression for average costs} The time-averaged communication rate under a threshold policy with threshold $\tau^{(i)}$ is \begin{align*} \lim_{T\to\infty} \frac{1}{T+1} \mathbb{E}\Big[\sum_{k=0}^T a_k^{(i)}\Big] = \frac{1}{\lambda_i\tau^{(i)}+1}. \end{align*} The time-averaged estimation error $J_e^{(i)}(\tau^{(i)})$ under the same threshold policy is \begin{align*} &J_e^{(i)}(\tau^{(i)}) = \\ &\begin{cases} \lambda_i \Tr(S_{\overline{P}^{(i)}}) + (1-\lambda_i) \Tr(S_{Q_i}), &~\text{if}~\tau^{(i)}=0,\\ \Big[ \Tr(S_{h_i^{\tau^{(i)}}(\overline{P}^{(i)})}) + \frac{1-\lambda_i}{\lambda_i} \Tr(S_{Q_i}) \\ \quad + \sum_{t=0}^{\tau^{(i)}-1}c_e^{(i)}(t)\Big] \cdot \frac{\lambda_i}{\lambda_i\tau^{(i)}+1},&~\text{if}~\tau^{(i)}>0, \end{cases} \end{align*} where $S_{\overline{P}^{(i)}}$ and $S_{Q_i}$ are the solutions of \begin{align*} S = (1-\lambda_i)A_i S A_i^\top + \overline{P}^{(i)} \end{align*} and \begin{align*} S = (1-\lambda_i)A_i S A_i^\top + Q_i, \end{align*} respectively. \end{lemma} This lemma implies that, under a threshold policy, the time-averaged communication rate and estimation error can be computed efficiently for each sensor $i$, which allows us to develop an analytic expression for the Whittle indices below. \begin{theorem}\label{theorem: whittle's index} The Whittle index, as a function of the time elapsed since the last successful transmission from sensor $i$, is \begin{multline} w_i(\tau^{(i)})=\frac{\lambda_i(\lambda_i\tau^{(i)}+1)}{1-\lambda_i} \\ \cdot\Big[ (\tau^{(i)}+1)J_e^{(i)}(\tau^{(i)}) - \sum_{t=0}^{\tau^{(i)}}c_e^{(i)}(t) \Big] - c_c^{(i)},\label{eq:form of whittle index} \end{multline} where $J_e^{(i)}(\tau^{(i)})$ is the expected time-averaged estimation error of sensor $i$ under a threshold policy with threshold $\tau^{(i)}$. \end{theorem} Theorem~\ref{theorem: whittle's index} gives an analytic expression for the Whittle indices. The computational overhead is thus significantly reduced compared with numerical algorithms such as value iteration and policy iteration, and online implementation is facilitated. It is worth noting that, apart from the extra penalty determined by the Whittle index, every transmission incurs an energy cost $c_c^{(i)}$; therefore, the Whittle index can be negative. We revise Whittle's index policy as follows: at each time step, we first pick the $m$ sensors with the largest Whittle indices, and then schedule only those among them whose indices are positive. Weber and Weiss~\cite{weber1990index} proved that, if some conditions hold\footnote{The asymptotic optimality holds if the fluid approximation to the index policy has a globally asymptotically stable equilibrium point. The authors claim that examples violating these conditions are extremely rare and that the suboptimality is expected to be minuscule.}, the Whittle index policy is asymptotically optimal.
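Before turning to the performance bounds, we note that the closed form \eqref{eq:form of whittle index} is straightforward to evaluate numerically; a minimal sketch follows, solving the two Stein-type equations of Lemma~\ref{lemma: closed form expression for average costs} by vectorization, again assuming $h_i(X)=A_iXA_i^\top+Q_i$, and reading $S_{h_i^{\tau}(\overline{P}^{(i)})}$ as the Stein solution with constant term $h_i^{\tau}(\overline{P}^{(i)})$.
\begin{verbatim}
import numpy as np

def stein(A, lam, M):
    # Solve S = (1 - lam) A S A^T + M by vectorization;
    # well-posed when (1 - lam) rho(A)^2 < 1 (cf. Assumption 1).
    d = A.shape[0]
    K = np.eye(d * d) - (1 - lam) * np.kron(A, A)
    return np.linalg.solve(K, M.reshape(-1)).reshape(d, d)

def h_pow(tau, P, A, Q):
    # tau-fold application of the assumed update h(X) = A X A^T + Q.
    for _ in range(tau):
        P = A @ P @ A.T + Q
    return P

def J_e(tau, A, Q, P_bar, lam):
    # Time-averaged estimation error under a threshold policy (the lemma above).
    S_Q = stein(A, lam, Q)
    if tau == 0:
        return lam * np.trace(stein(A, lam, P_bar)) + (1 - lam) * np.trace(S_Q)
    c_e = [np.trace(h_pow(t, P_bar, A, Q)) for t in range(tau)]
    head = (np.trace(stein(A, lam, h_pow(tau, P_bar, A, Q)))
            + (1 - lam) / lam * np.trace(S_Q) + sum(c_e))
    return head * lam / (lam * tau + 1)

def whittle_index(tau, A, Q, P_bar, lam, c_c):
    # Closed form of the theorem above.
    c_e = [np.trace(h_pow(t, P_bar, A, Q)) for t in range(tau + 1)]
    bracket = (tau + 1) * J_e(tau, A, Q, P_bar, lam) - sum(c_e)
    return lam * (lam * tau + 1) / (1 - lam) * bracket - c_c
\end{verbatim}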
The cost of the original MDP is lower bounded by the minimal average cost under a time-averaged constraint on its actions. As shown in~\eqref{eq: relaxed constraint}, the time-averaged constrained MDP is a relaxation of the original MDP, in which at most $m$ out of $n$ sensors are scheduled at each time step. Meanwhile, as the Whittle index policy meets the original constraint, it yields an upper bound on the performance of the original MDP. These bounds can be written as $C^{relax} \leq C^\star \leq C^{W}$, where $C^{relax}$ stands for the minimal cost of the relaxed MDP, $C^\star$ for the minimal cost of the original MDP, and $C^W$ for the time-averaged cost under the Whittle index policy. Weber and Weiss showed that $C^W$ asymptotically coincides with $C^{relax}$ as $m$ and $n$ go to infinity with the ratio $m/n$ fixed. Because $C^W$ asymptotically reaches $C^{relax}$, it also asymptotically reaches $C^\star$. In our numerical examples, the Whittle index policy outperforms two other celebrated heuristics. \section{NUMERICAL EXAMPLE} In this section, we present numerical examples to illustrate the theoretical results. The first example shows the optimality of monotone policies (Theorem~\ref{theorem: monotonicity for multiple processes}); the second shows the performance of the Whittle index policy. We first consider the case $n=2$. The two processes and their parameters are as follows: \begin{gather*} {A}_1=\begin{bmatrix}1.1 &1\\0 &1\end{bmatrix}, ~{C}_1=\begin{bmatrix}2 &0\\0 &1\end{bmatrix}, ~{Q}_1=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}, ~{R}_1=\begin{bmatrix}1 &0\\0 &1\end{bmatrix};\\ {A}_2=\begin{bmatrix}1 &1\\0 &1.2\end{bmatrix}, ~{C}_2=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}, ~{Q}_2=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}, ~{R}_2=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}. \end{gather*} Moreover, the packet arrival rates of the two channels are $\lambda_1=0.8$ and $\lambda_2=0.9$, respectively. We consider two scenarios, with zero and with positive transmission costs. For the positive costs, we let $c_c^{(1)}=20$ and $c_c^{(2)}=10$. We use relative value iteration to compute an optimal policy. The monotone structure of the optimal policy is shown in Fig.~\ref{fig:threshold_cost}. Sub-figure (a) shows an optimal policy when $c_c^{(1)}= c_c^{(2)}=0$, and Sub-figure (b) shows an optimal policy when $c_c^{(1)}=20$ and $c_c^{(2)}=10$. The horizontal and vertical axes represent the consecutive packet drops of sensors $1$ and $2$, respectively. It is clear that there exists a boundary splitting the $(\tau^{(1)},\tau^{(2)})$ plane into two regions: the states in the upper left corner correspond to scheduling sensor $2$, while the states in the lower right corner correspond to scheduling sensor $1$. In addition, when there are extra transmission costs, it may be optimal not to schedule any sensor when the $\tau^{(i)}$ are small. \begin{figure}[t] \subfigure[No transmission costs.]{ \centering \includegraphics[width=0.22\textwidth]{threshold.eps} } \hspace{0.01em} \subfigure[With transmission costs.]{ \centering \includegraphics[width=0.22\textwidth]{threshold_cost.eps} } \caption{Visualization of the monotone policy when $n=2$ and $m=1$.} \label{fig:threshold_cost} \end{figure} \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{switchingsurface_cost.eps} \caption{Visualization of the switching surface policy when $n=3$, $m=2$ and communication costs $c_c^{(1)}=50$, $c_c^{(2)}=30$, $c_c^{(3)}=40$.} \label{fig:swithcingsurface_cost} \end{figure*} When $n>2$, the monotone structure is hard to depict. We consider a case with $n=3$ and $m=2$.
The LTI process dynamics are as follows: \begin{gather*} {A}_1=\begin{bmatrix}1.1 &1\\0 &1\end{bmatrix}, ~{C}_1=\begin{bmatrix}1 &0\end{bmatrix}, ~{Q}_1=\begin{bmatrix}1 &0\\0 &4\end{bmatrix}, ~{R}_1=1;\\ {A}_2=\begin{bmatrix}1.2 &1\\0 &1\end{bmatrix}, ~{C}_2=\begin{bmatrix}1 &0\end{bmatrix}, ~{Q}_2=\begin{bmatrix}1 &0\\0 &2\end{bmatrix}, ~{R}_2=1;\\ {A}_3=\begin{bmatrix}1.1 &1\\0 &1.3\end{bmatrix}, ~{C}_3=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}, ~{Q}_3=\begin{bmatrix}1 &0\\0 &1\end{bmatrix}, ~{R}_3=I, \end{gather*} where $I = \begin{bmatrix}1 &0\\0 &1\end{bmatrix}$. The packet arrival rates are set to $\lambda_i=0.9$ for $i=1,2,3$. Let the communication costs be $c_c^{(1)}=50$, $c_c^{(2)}=30$, $c_c^{(3)}=40$. There are seven feasible actions. \begin{enumerate} \item Schedule no sensor; \item Schedule one sensor: sensor 1, sensor 2, or sensor 3; \item Schedule two sensors: sensors 1 and 2, sensors 1 and 3, or sensors 2 and 3. \end{enumerate} By following the same procedure as for $n=2$, we obtain an optimal policy. We plot the optimal action for each sensor at different states in Fig.~\ref{fig:swithcingsurface_cost}; the scheduling region of each sensor is shown in the corresponding sub-figure. We observe that there exists a switching surface between scheduling and not scheduling each particular sensor. As there are extra communication costs, it is optimal to schedule no sensor when the $\tau^{(i)}$ are small. Finally, we present the performance of the Whittle index policy. For comparison, we also simulate scheduling under two celebrated heuristics, the \emph{maximum-error-first} policy and the \emph{maximum-delay-first} policy. In the former, we choose the $m$ sensors whose expected errors $\Tr(h_i^{\tau^{(i)}_k}(\overline{P}^{(i)}))$ are largest at time $k$; in the latter, we choose the $m$ sensors whose delays $\tau^{(i)}_k$ are largest. Since there are transmission costs, the Whittle index may not be positive; we therefore consider two versions of the Whittle index policy, the original one and the revised one discussed at the end of the last section. We randomly generate $40$ first-order LTI systems: \begin{align*} x^{(i)}_{k+1} = Ax^{(i)}_{k} + w^{(i)}_{k}, \quad y^{(i)}_{k} = Cx^{(i)}_{k} + v^{(i)}_{k}, \end{align*} with system gains $A$ drawn from a standard normal distribution, observation gains $C$ drawn from a uniform distribution on the closed interval $[1,10]$, and the state disturbance covariances $\mathbb{E}[w_k^{(i)}\cdot w_k^{(i)}]$ and observation disturbance covariances $\mathbb{E}[v_k^{(i)}\cdot v_k^{(i)}]$ drawn from a uniform distribution on the closed interval $[0, 100]$. The transmission costs are randomly drawn from the closed interval $[5,15]$. We simulate five scenarios: $n=20$ with $m=8$, $n=25$ with $m=10$, $n=30$ with $m=12$, $n=35$ with $m=14$, and $n=40$ with $m=16$; the ratio $\frac{m}{n}=0.4$ in all scenarios. In each scenario, we run Monte Carlo simulations of the scheduling process under the four heuristics over a time horizon of length $1000$, repeated $100$ times. We compute the averaged total cost of each heuristic, which consists of the averaged estimation error and the averaged transmission cost.
The performance of each heuristic is shown in Fig.~\ref{fig:performance comparison}, where ``MaxError'' refers to the maximum-error-first policy, ``MaxDelay'' refers to the maximum-delay-first policy, and ``Index'' and ``cIndex'' refer to the original and the revised Whittle index policy, respectively. We observe that the two Whittle index policies outperform the other two heuristics. The revised policy performs better than the original one in most cases, as the transmission costs are also taken into account. The average percentage of active sensor nodes under the revised policy is reported in Fig.~\ref{fig:activepercent}. Note that this percentage is always one for the other three policies, as they always schedule $m$ sensors simultaneously. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{whittlecompare_cost.eps} \caption{Performance comparison of heuristic policies.} \label{fig:performance comparison} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{activepercent.eps} \caption{The ratio between the average number of active sensors and the allowed simultaneous transmissions $m$.} \label{fig:activepercent} \end{figure} \section{CONCLUSION} We formulated the multiple-sensor scheduling problem as a Markov decision process (MDP) with an average cost over an infinite horizon. An algorithm (Algorithm~\ref{alg:feasibility}) was proposed to check the existence of a deterministic stationary optimal policy. We proved the optimality of monotone policies; the monotone structure reduces the computational effort of finding an optimal policy and facilitates online implementation. We leveraged the structure of the problem to prove that each process is indexable in the sense of Whittle. We adopted Whittle's index to construct an index heuristic with a closed-form expression, which greatly reduces the computational effort and facilitates online implementation. Numerical examples showed that the proposed index policy empirically outperforms two other common heuristics. The current setup assumes that the channel condition is time-invariant and known beforehand. It would be a challenging problem if the channel condition followed a time-varying model with unknown parameters; in this case, a learning-based method such as $Q$-learning could be used. In this work, centralized scheduling is considered. Another future direction involves a distributed design: if some information exchange among the sensors is available, the scheduling policy can be implemented in a distributed manner. \bibliographystyle{IEEEtran}
{ "timestamp": "2020-01-10T02:10:02", "yymm": "1804", "arxiv_id": "1804.05618", "language": "en", "url": "https://arxiv.org/abs/1804.05618" }
\section{Introduction} When people listen to music, they can determine many of its features, such as the genre and the composer. The genre of music is easy to determine without prior knowledge, but the composer is not, even with some knowledge; the difficulty depends on what is to be estimated. There are some existing studies for such estimation, which are based on machine learning~\cite{Dannenberg1997, Sawada2000}. The contribution of~\cite{Sawada2000} implies that the feature reflecting the composer is the short note sequence. Since a compression program is a kind of program that captures frequent sequences in data, it may not be surprising that a compression program can be used to estimate the composer. Indeed, there is an interesting study~\cite{Anan2012} that uses compression programs for composer estimation, based on the formula called NCD (normalized compression distance). We focus on a similar but different similarity measure called the compression-based dissimilarity measure (CDM)~\cite{Keogh2004}, which has been tested on a wide range of data, not limited to music. Both the CDM and the NCD are based on the same principle, which was recently well presented in~\cite{Louboutin2016}. \IEEEpubidadjcol Although a compression program is easy to use, the result depends on the compression program, and its behavior is difficult to analyze. Moreover, since the compression is carried out with every known musical score, the amount of computation becomes enormous when we determine the degree of similarity for a new musical score. In this study, we propose a novel method that is well formalized. The proposed method achieves scalability with respect to a large number of training data by pre-processing each group of training data. Finally, the precision of the proposed method was verified to be better than that of the method in which the value of the CDM is determined by compressed file sizes. \IEEEpubidadjcol \section{Baseline Method} In this section, we describe the baseline CDM method~\cite{Takamoto2016} for estimating composers. That work focuses on improving the CDM; the authors conducted experiments on a very simple CDM-based system, which nevertheless performs well on the composer estimation task, so that the effect of improvements can be analyzed. We follow this work because we are also interested in the CDM, although we aim at replacing the CDM with the proposed method rather than simply improving it. In the baseline method, the musical scores are first converted into a string representation that expresses whether each sound is `on' or `off'. The string representation is a long sequence of the character `0' for `off' and the character `1' for `on'. The position of each character corresponds to a piano key number $key$ and a timing number $time$, where $position = 88 \times time + key$; the number 88 is the number of keys on a piano. Second, the method uses the CDM proposed by Keogh~\cite{Keogh2004} for a pair of musical scores. The CDM is defined as follows: \begin{eqnarray} {\rm CDM}(x,y) &=& \frac{C(xy)}{C(x)+C(y)} \label{eq_cdm} \end{eqnarray} where $C(x)$ is the compressed file size of string $x$, and $C(xy)$ is the compressed file size of the concatenation of $x$ and $y$. The value of the CDM expresses the dissimilarity between the two strings: the more patterns the two strings share, the smaller their CDM value.
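A minimal sketch of the CDM in \eqnref{eq_cdm}, using Python's built-in bzip2 bindings as the compressor $C$ (the baseline work uses bzip2; any off-the-shelf compressor could be substituted), with toy strings standing in for score representations:
\begin{verbatim}
import bz2

def C(s: bytes) -> int:
    # Compressed size, used as a proxy for information content.
    return len(bz2.compress(s))

def cdm(x: bytes, y: bytes) -> float:
    # CDM(x, y) = C(xy) / (C(x) + C(y)); smaller means more shared structure.
    return C(x + y) / (C(x) + C(y))

x1 = b"0100010001" * 500   # toy '0'/'1' string representations of scores
x2 = b"0100010001" * 400
x3 = b"1110001110" * 450
print(cdm(x1, x2), cdm(x1, x3))  # the first pair shares patterns: smaller CDM
\end{verbatim}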
The CDM is based on the principle that two strings share more patterns, such as repetitions, if the compressed file size of their concatenation is smaller, together with the assumption that if a composer has specific phrases, then he/she uses them in other musical scores as well. The estimation of the composer is based on this concept, following the study in~\cite{Takamoto2016}. It is interesting that the CDM, which is a simple function of compressed file sizes, can estimate the composer of musical scores. However, the CDM has a scalability issue. \figref{fig_comp_cdm} illustrates this issue, where $x$ is the string representation of a musical score by an unknown composer, and $a_1$ to $a_{15}$ are musical scores of a composer $A$. The CDM is defined as a measure between two musical scores. In a previous study of composer estimation, an unknown musical score was compared with all the known musical scores, and then the $k$-nearest neighbor method ($k$-NN) was applied~\cite{NLP-a} to the result. In general, when an application uses the relationship between two scores, an unknown musical score needs to be compared with all the known musical scores. The larger the number of known scores, the more computation time is required for one new musical score. This method therefore cannot be scaled up to a large number of known musical scores. The study in~\cite{Takamoto2016} also argues that the compressed file size of a string $x$ is an approximation of its information quantity, and proposes to use an offsetted compressed file size, where the value of the offset is obtained by observing the behavior of a specific compression program. This method was reported to improve the number of correct estimations significantly. However, the problems of dependency on the compression program and of scalability remain the same as with the CDM method. \section{Proposed Method} In this study, we form a group of musical scores of the same composer to address the scalability issue. Then, we compute the information quantity based on the probabilities of substrings of a large string; this large string corresponds to the group. \figref{fig_comp_1} shows how the groups are used. As in \figref{fig_comp_cdm}, in \figref{fig_comp_1}, $x$ is the string representation of a musical score by an unknown composer, and $a_1$ to $a_{15}$ are musical scores of a composer $A$. The box shows that these scores form a group, and there is one long string representation for one group. The information quantity is then computed using the probabilities in $a_1 , a_2, ..., a_{15}$, and not the probabilities in $x$. We compute the information quantity of an unknown musical score using the method described in the next section. The same process is carried out for the musical scores of the other four composers: we compute the information quantity of the unknown musical score $x$ with the group of each composer, obtain five information quantities, and determine the composer of $x$ to be the one whose group yields the least information quantity. With this pre-processing, the computation time of the information quantity of one musical score does not depend on the number of musical scores in a group; it depends only on the length of the musical score to be judged. Therefore, the number of computations for one unknown musical score is proportional to the number of composers rather than to the number of known musical scores. \fig[width=0.65\columnwidth]{ Baseline system and other compression-based approaches.
When a method uses the relationship between two scores, an unknown musical score needs to be compared with all the known musical scores. }{fig_comp_cdm}{fig/compareCDM.png} \fig[width=0.65\columnwidth]{ Proposed, scalable method. By pre-processing all the musical scores in a group, the computation time for one musical score does not depend on the number of scores in the group. }{fig_comp_1}{fig/compare1.png} \section{Information Quantity} In general, we calculate the information quantity of a string from the probabilities of the characters in the string. However, we can make a good guess that specific substrings, such as words, emerge repeatedly in real strings. Therefore, in this study, the information quantity of a string is calculated using the occurrence probabilities of all substrings. First, we consider the information quantity of one character. In general, the information quantity of a certain event depends on its occurrence probability. Let the occurrence probability of a certain character $c$ be $P(c)$; then the information quantity of $c$ is expressed using self-information~\cite{NLP-b} as follows: \begin{eqnarray} I_c(c)=-\log_2P(c) \label{eq_logp} \end{eqnarray} Let us consider the information quantity for the case where we treat the character sequence as a string. Let the $i$-th character of a string $S$ with length $N$ be $c_i$, and assume the characters $c_i$ in $S$ are independent of each other. The information quantity $I_c(S)$ of $S$ based on the characters is expressed as \eqnref{eq_str_info_c}, which indicates that the information quantity of the string is equal to the sum of the information quantities of its characters. \begin{eqnarray} I_c(S) &=& - \log_2(\prod_{i = 1}^N P(c_i)) \nonumber\\ &=& - \sum_{i = 1}^N \log_2 P(c_i) \label{eq_str_info_c} \end{eqnarray} For the string representation of a musical score, specific substrings, such as motifs, may emerge repeatedly. Thus, if we assume that a string consists of some subsequences, then the information quantity $I_s$ is expressed as \eqnref{eq_str_info_S}. \begin{eqnarray} I_s(S) = \mathop{\rm min}\limits_{\pi_k \in \pi (S)} \Bigl( -\sum_{t \in \pi _k} \log_2P(t) \Bigr) \label{eq_str_info_S} \end{eqnarray} where $\pi(S)$ is the set of all possible ways to divide $S$, which contains $2^{N-1}$ elements, and $t$ ranges over the pieces of a division (substrings). More precisely, we divide the string into finer substrings and calculate the information quantity as the sum of the information quantities of the divided substrings. The information quantity varies depending on the partition; we take the minimum, because the more substrings are considered, the smaller the information quantity of the string can become. The number of partitions is $2^{N-1}$, where $N$ is the length of the string. Although this is a large number, the minimum value is easily obtained in $\mathcal{O}(N^2)$ time when dynamic programming is used. To implement a program that obtains $I_s(S)$, we require a module that computes $P(t)$, where $t$ can be any substring of the given large string. An efficient data structure called the suffix array can be used to obtain the frequency of any substring~\cite{Manber1993}. Using this data structure, whose size is proportional to the size of the large string, we can obtain the frequency of a substring $t$ in the large string efficiently. We used a suffix array in the program implemented in this study.
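The minimization in \eqnref{eq_str_info_S} is a shortest-path problem over the $N+1$ cut positions; the following sketch implements the dynamic program with a naive substring counter standing in for the suffix array, a plain relative-frequency estimate of $P(t)$ (the stabilized frequency-minus-one estimate mentioned below is omitted for brevity), and a cap \texttt{max\_len} on substring length for tractability, which is an assumption of this sketch (the full minimization allows arbitrary lengths).
\begin{verbatim}
import math
from collections import Counter

def substring_probs(group: str, max_len: int):
    # P(t): relative frequency of t among substrings of length |t| in the group.
    counts = Counter(group[i:i + L] for L in range(1, max_len + 1)
                     for i in range(len(group) - L + 1))
    totals = {L: len(group) - L + 1 for L in range(1, max_len + 1)}
    return lambda t: counts[t] / totals[len(t)]

def information_quantity(s: str, P, max_len: int) -> float:
    # I[j] = min over cuts i < j of I[i] - log2 P(s[i:j]); I[N] realizes I_s(S).
    N = len(s)
    I = [0.0] + [math.inf] * N
    for j in range(1, N + 1):
        for i in range(max(0, j - max_len), j):
            p = P(s[i:j])
            if p > 0.0:
                I[j] = min(I[j], I[i] - math.log2(p))
    return I[N]

P = substring_probs("0100010001" * 200, max_len=12)
print(information_quantity("01000100", P, max_len=12))
\end{verbatim}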
There is also a more efficient data structure called the suffix tree~\cite{Stringology}. Furthermore, there is a good algorithm that constructs the suffix tree in $\mathcal{O}(N)$ time, with which the frequency of a substring can then be obtained in $\mathcal{O}(1)$ time. The maximum likelihood estimator (MLE) is usually used to estimate the probability from the frequency. We use the MLE, but with $frequency - 1$ rather than $frequency$, in order to make the computed value stable. \section{Computational Complexity} Let us examine by how much the computational complexity is reduced by the proposed method compared with the existing method. Let $l$ be the average length of the string representation of a musical score, $c$ the number of composers, $g$ the average number of musical scores in one group, and $n$ the number of unknown musical scores. First, the computational complexity of compressing the string representation of a musical score is proportional to the length of the string. Thus: \begin{eqnarray} T_{compression} = \mathcal{O}(l) \end{eqnarray} To estimate the composer of one musical score using the CDM, we need to perform $g \times c$ compressions: \begin{eqnarray} T_{CDM-ONE} = \mathcal{O}(l \times g \times c) \end{eqnarray} When there are many musical scores, we need to repeat the above operation for each of the $n$ musical scores: \begin{eqnarray} T_{CDM} = \mathcal{O}(n \times l \times g \times c) \end{eqnarray} To compute one information quantity in \figref{fig_comp_1}, we need two computations: the pre-processing of the group and the minimization over the considered partitions: \begin{eqnarray} T_{information-quantity} = \mathcal{O}(g \times l + l^2) \end{eqnarray} To estimate the composer of one musical score using the proposed method, we need to compute the information quantity $c$ times: \begin{eqnarray} T_{Proposed-ONE} = \mathcal{O}(c \times g \times l + c \times l^2) \end{eqnarray} When there are many musical scores, only one pre-processing operation is required; this is the reason why the proposed method is scalable: \begin{eqnarray} T_{Proposed} = \mathcal{O}(c \times g \times l + n \times c \times l^2) \end{eqnarray} Both $T_{CDM}$ and $T_{Proposed}$ are proportional to $c$; there is no difference with respect to $c$. When $n$ is large, the computational complexity of the proposed method is independent of $g$, while that of the CDM is multiplied by $g$. This means that when the number of musical scores for each composer increases, the computational complexity of the proposed method becomes smaller than that of the CDM method. The proposed method has a complexity that is proportional to the square of the length of the unknown musical score, while that of the CDM method is proportional to the length; this is because the proposed method considers all substrings, while a compression program considers only some subset of the substrings. As a result, the proposed method requires a large value of $g$ to be advantageous when $l$ is large. \section{Evaluation} For the evaluation, the result can be much better than it should be if the same musical score as $x$ is included among the known musical scores. Therefore, we have to change our setting from \figref{fig_comp_1} to \figref{fig_comp_2} to measure the correctness of the methods: the musical score in question is intentionally excluded from the set of known musical scores. In the CDM method, the CDM between identical musical scores was likewise not computed, following the leave-one-out method. \figref{fig_comp_2} corresponds to this approach.
In doing so, we need to perform the pre-processing many times; however, this is required only for the evaluation, not for the actual estimation. In \figref{fig_comp_2}, A, B, C, D, and E indicate the composers, and $a_1, \cdots, a_{15}$ denote the musical scores of composer A. When we need to estimate the composer of $a_1$, we remove $a_1$ from the group of composer A and create new group data from the remaining musical scores. Then, we calculate the information quantities of $a_1$ with each of the five group data and estimate the composer of $a_1$ to be the one whose group attains the least information quantity. Finally, we judge the estimate against the fact that the composer of $a_1$ is A; this information is used only for determining the correctness of the estimation. \fig[width=0.65\columnwidth]{ For the evaluation, the result can be much better than it should be if the musical score in question is included among the known musical scores. Therefore, it is intentionally excluded from the set of known musical scores. This corresponds to the leave-one-out method. }{fig_comp_2}{fig/compare2.png} A summary of the total correct results is presented in \tabref{tab_summary}. Out of the estimations for 75 musical scores, the proposed method yielded 55 correct results. Since the task was to select one composer out of five, a random choice would achieve 20\% correct answers; our method achieved more than 70\%. This suggests that the proposed method can estimate the composer. Unlike the CDM, the proposed method is formalized as the estimation of an information quantity and does not depend on a particular compression program. Therefore, reproducing the results should be much easier than with the CDM. \input{tab/_summary} As presented in \tabref{tab_summary}, the proposed method yielded more correct results than the previous methods. We performed McNemar's test comparing the proposed method with the original CDM and with the offsetted CDM~\cite{Takamoto2016}. As presented in \tabref{tab_mcnemar_CDM}, the proposed method performed better than the CDM with significance $\alpha< 0.01$, although statistical significance was not achieved in \tabref{tab_mcnemar_offset}. Since the offsetted CDM~\cite{Takamoto2016} aimed to obtain a more precise value of the information quantity rather than the compression itself, the behavior of the proposed method is expected to be similar to that of the offsetted CDM~\cite{Takamoto2016}. However, the proposed method is independent of the implementation of any particular compression program, while~\cite{Takamoto2016} depends on a specific compression program, bzip2. \tabref{tab_bach} to \tabref{tab_satie} present the detailed results of applying the proposed method to 15 pieces by each of the five composers: Bach, Chopin, Debussy, Mozart, and Satie. The column labeled ``Music'' contains identifiers starting with the composer and ending with an identification number. The column bearing the name of each composer contains the information quantities computed with that composer's group; the values are truncated to the nearest integer. For each given score, the least information quantity is underlined, and the composer corresponding to the underlined value is the estimate of the proposed method. The column ``Result'' indicates whether the estimation is correct, where ``1'' is correct and ``0'' is incorrect.
The column ``CDM'' is the result of the baseline method, which follows the technique in~\cite{Keogh2004}, where $C(x)$ is the size of the file compressed with bzip2. The column ``offset'' is the result of the previous study~\cite{Takamoto2016}, where $C(x)$ is the offsetted size of the compressed file. \input{tab/_McNemar} \input{tab/_McNemar2} \section{Discussion} Methods with smaller computational complexity can be slower in practice when the amount of data is not large enough; currently, this is the case for the proposed method. Since each group consists of only 15 musical scores, the proposed method was slower than the CDM method under the current conditions. There are several reasons for this inefficiency. The most important one is that the proposed method requires computation time proportional to the square of the length of the string, while the CDM (or the compression program) requires computation time proportional to the length. Furthermore, the string representation usually consists of more than 10000 characters; this length could be the reason for the inefficiency. We may improve the computation time by limiting the set of substrings used to compute the information quantity and concentrating computational resources on the substrings that are likely to be effective. Regarding the efficiency of compression programs, a compression program may not consider all substrings; thus, we may need heuristic techniques similar to those used in compression programs. There may be other viewpoints on the contributions of this work. The application of the CDM is not limited to this task; it applies to various types of tasks. We may search for an appropriate task in which there are many samples per class and the length of the data to consider is small. There is another method that calculates an information quantity for estimating similarities, called the normalized compression distance (NCD)~\cite{Cilibrasi2005}. Some studies have applied this method in the field of biological information~\cite{Li2003} and in the field of musical information~\cite{Cilibrasi2004, Ahonen2011}; the order of computational complexity of this method is the same as that of the CDM. It seems useful to select data patterns or subsequences that emerge repeatedly in the known data and to improve the compression program so that the information quantity of unknown data is calculated with the selected data. However, some compression programs have a limit on the number of words registered in their dictionary. Since our proposed method considers all the substrings of the given string, it does not suffer from this limitation, and we may state that it uses a larger dictionary than any other method. \section{Conclusion} We proposed a novel method that can replace the CDM method for the composer estimation task. The main feature of the proposed method is the pre-processing of the grouped data of each composer. We showed that the computational complexity in terms of the number of known musical scores is smaller than that of the CDM; this means that the proposed method is scalable. We also verified that the number of correct estimations obtained was 55 out of 75. This result is better than the estimation result of the CDM method. Moreover, the computational complexity of judging a new score is smaller than that of the CDM method.
Based on the number of correct results and the order of computational complexity, we can conclude that computing the information quantity with grouping is effective. \bibliographystyle{IEEEtran}
{ "timestamp": "2018-04-17T02:13:05", "yymm": "1804", "arxiv_id": "1804.05486", "language": "en", "url": "https://arxiv.org/abs/1804.05486" }
\section{Introduction} The ZX-calculus \cite{CD1,CD2} is a universal graphical language for qubit theory, which comes equipped with simple rewriting rules that enable one to transform a diagram representing one quantum process into another. More broadly, it is part of categorical quantum mechanics, which aims for a high-level formulation of quantum theory \cite{AC1,CKbook}. It has found applications both in quantum foundations \cite{CDKZ,CDKZ2,MiriamSpek} and in quantum computation \cite{DP2,Clare,DomError,de2017zx}, and is subject to automation thanks to the Quantomatic software \cite{quanto-cade}. Recently the ZX-calculus was completed by Ng and Wang \cite{ng2017universal}, that is, provided with sufficient additional rules so that any equation between matrices in Hilbert space can be derived in the ZX-calculus. This followed earlier completions by Backens for stabiliser theory \cite{Backens} and for one-qubit Clifford+T circuits \cite{Backens2}, and by Jeandel, Perdrix and Vilmart for general Clifford+T theory \cite{jeandel2017complete}. In Section \ref{sec:ZX-rules} we present Backens' two theorems. This paper concerns a sufficient set of ZX-rules for establishing all equations between 2-qubit Clifford+T quantum circuits, which again can be seen as a completeness result. We were motivated in two ways to seek this result: \begin{itemize} \item Firstly, we wish to understand the utility of the ZX-rules. In the case of the full completion \cite{ng2017universal,jeandel2018diagrammatic} the rules were added using a purely theoretical methodology, which consisted of translating Hilbert space structure into diagrams, passing via another graphical calculus \cite{Amar,hadzihasanovic2017algebra}. However, a natural question concerns the actual practical use of each of these rules, as well as of other rules derived from them. As an example, one of the key ZX-rules: \[ \tikzfig{diagrams/b2s} \] is equivalent to the following well-known circuit equation \cite{CD2}: \[ \tikzfig{diagrams/strongcomplementary1CNOT} \] involving CNOT gates (green $\simeq$ control). In this paper we are concerned with all such equations for 2-qubit Clifford+T quantum circuits. \item Secondly, in quantum computing, algorithms are converted into circuits of elementary gates, and these circuits then have to be implemented on a device. Currently the most considered universal set of elementary gates is the Clifford+T gate set. The high cost of implementing these gates makes any simplification of a circuit (cf.~having fewer CNOT-gates and/or fewer T-gates) highly desirable. We expect our result to be an important stepping stone towards the efficient simplification of arbitrary $n$-qubit Clifford+T circuits, and we expect that the Quantomatic software will be a crucial part of this. The fact that a small set of rules suffices for us here raises the hope that general circuit simplification could already be done with a small set of ZX-rules. \end{itemize} Selinger and Bian derived a complete set of circuit equations for 2-qubit Clifford+T circuits \cite{ptbian}. However, these circuit equations are very large and rigid, and their method for producing them does not scale beyond two qubits. On the other hand, in the case of the ZX-calculus we already have an overarching completeness result that carries over to circuits on any number of qubits. So the main question then concerns the rules needed specifically for efficient circuit rewriting. The advantage of ZX-rules is that they are not constrained by unitarity.
Also, in a ZX computation, at intermediate stages phase gates may not even be within Clifford+T, although their actual values play no role, that is, they can be treated as variables. Note that going beyond the constraints of the formalism about which one aims to prove something is standard practice in mathematics, e.g.~complex analysis. \section{Background 1: ZX-calculus language} A pedestrian introduction is \cite{coecke2012tutorial}. There are two ways to present the ZX-calculus, either as \emph{diagrams} or as a \emph{category}. Following \cite{CKbook}, the `language' of the ZX-calculus consists of certain special \emph{processes} or \emph{boxes}: \[ \tikzfig{diagrams//box} \] which can be wired together to form \emph{diagrams}: \[ \tikzfig{diagrams//compound-process-capscups} \] All the diagrams should be read from top to bottom. Note that wiring inputs to inputs and outputs to outputs, as well as feedback loops, is admitted. Equivalently, following \cite{CD2}, it consists of certain morphisms in a compact closed category, which has the natural numbers $0, 1, 2, \ldots$ as objects, with the addition of numbers as the tensor: \[ m \otimes n = m+n \] In diagrams, $n$ corresponds to $n$ wires side-by-side. The special processes/boxes/morphisms that we are concerned with in this paper are \emph{spiders} of two \emph{colours}: \[ \tikzfig{diagrams//spider_green_alpha}\qquad\tikzfig{diagrams//spider_red_alpha} \] where $\alpha\in[0, 2\pi)$. Equivalently, one can consider only spiders of one colour together with a colour changer (cf.~rule (H2) below): \[ \tikzfig{diagrams//HadaDecomSingleslt} \] The ZX-calculus can also be seen as a calculus of graphs, provided that one introduces special input and output nodes. Sometimes it is useful to also think of the wires appearing in a diagram as boxes, which can take the following forms: \[ \tikzfig{diagrams//Id}\qquad\quad\tikzfig{diagrams//swap}\qquad\quad\tikzfig{diagrams//cup}\qquad\quad \tikzfig{diagrams//cap} \] In particular, the full specification of what `wiring boxes together' actually means can then be reduced to what it means to put boxes side-by-side and to connect the output of one box to the input of another: \[ \tikzfig{diagrams//box_par}\qquad\raisebox{1mm}{\tikzfig{diagrams//box_seq}} \] The following key property uses this fact: \begin{theorem}\cite{ContPhys,CD2} The ZX language is \emph{universal} for qubit quantum computing, when given the following interpretation: \[ \left\llbracket \tikzfig{diagrams//generator_spider_alpha} \right\rrbracket=\ket{0}^{\otimes m}\bra{0}^{\otimes n}+e^{i\alpha}\ket{1}^{\otimes m}\bra{1}^{\otimes n} \qquad \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \] \[ \left\llbracket\tikzfig{diagrams//Id}\right\rrbracket= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \left\llbracket\tikzfig{diagrams//swap}\right\rrbracket= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad \left\llbracket\tikzfig{diagrams//cap}\right\rrbracket= \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \\ \end{pmatrix} \qquad \left\llbracket\tikzfig{diagrams//cup}\right\rrbracket= \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix} \] \[ \left\llbracket \tikzfig{diagrams//box_par} \right\rrbracket = \left\llbracket \tikzfig{diagrams//box_f} \right\rrbracket \otimes \left\llbracket \tikzfig{diagrams//box_g} \right\rrbracket \qquad \left\llbracket \raisebox{1mm}{\tikzfig{diagrams//box_seq}} \right\rrbracket =
\left\llbracket \tikzfig{diagrams//box_f} \right\rrbracket \circ \left\llbracket \tikzfig{diagrams//box_g} \right\rrbracket \] That is, every linear map of type $\mathbb{C}^{2^n}\to\mathbb{C}^{2^m}$ can be written down as a ZX-diagram, and consequently, every qubit process can be written down as a ZX-diagram. \end{theorem} \section{Background 2: ZX-calculus rules}\label{sec:ZX-rules} Above we specified the ingredients of the ZX-calculus as linear maps. Now, in quantum theory, linear maps only matter up to a non-zero scalar multiple, where diagrammatically a scalar is a diagram with no inputs nor outputs. We will work up to scalars here too, since this makes the rules of the ZX-calculus appear much simpler (see e.g.~\cite{backens2017towards} for a presentation of the ZX-calculus rules with explicit scalars that make the equations hold on the nose). Due to the diagrammatic underpinning, in addition to the rules given below, there is one meta-rule that the ZX-calculus obeys, namely: \begin{center} \fbox{\it Only connectedness matters!} \end{center} One could do without it by adding a few more rules, but it is entirely within the spirit of diagrammatic reasoning that it should all boil down to connectedness. We now give an overview of the ZX-rule sets that have been considered. \emph{Stabiliser ZX-calculus} is the restriction of the ZX-calculus to $\alpha\in\{{n\pi\over 2}\mid n\in\mathbb{N}\}$. As shown in \cite{Backens}, the following rules make the ZX-calculus complete for this fragment of quantum theory: \[ \begin{tabular}{ccccc} \tikzfig{diagrams/spider-bis}&\quad(S1)&$\qquad$&\tikzfig{diagrams/greenredidentity}&\quad(S2)\\ &&\ \ && \\ \tikzfig{diagrams/b1}&\quad(B1)&&\tikzfig{diagrams/b2s}&\quad(B2)\\ &&\ \ && \\ \tikzfig{diagrams/hadamdecom}&\quad(H1)&&\tikzfig{diagrams/clchge}&\quad(H2) \end{tabular} \] That is, any equation between stabiliser ZX-diagrams that can be proven using matrices can also be proved using these rules. The `only connectedness matters' rule means that we also have \cite{backens2017towards}: \[ \tikzfig{diagrams/induced_compact_structure}\hfill(S2') \] Some other derivable rules that we will use are: \[ \tikzfig{diagrams/Hopfx} \quad(Hf) \qquad \tikzfig{diagrams//hexagon2}\quad(Hex) \qquad \tikzfig{diagrams/Cy} \quad (Cy) \] where the dots in (Cy) denote zero or more wires. The first and last rules are derived in \cite{CD2} and the middle one in \cite{DP1}. We also use the following variant of (B2), to which we also refer as (B2): \[ \tikzfig{diagrams/b2var} \hfill(B2) \] The rules (S1) and (H2) apply to spiders with an arbitrary number of input and output wires, including none, so they appear to constitute an infinite set of rules. Firstly, these rules do have algebraic counterparts as Frobenius algebras, which constitute a finite set. Secondly, using the concept of \emph{bang-boxes} \cite{kissinger2016tensors}, even in their present form these rules can be notationally reduced to a single rule, and the Quantomatic software accounts for rules in this form. Allowing for bang-boxes, one can also merge rules (B1) and (B2) into a single rule: \[ \tikzfig{diagrams/strongcomplementaryn} \] hence reducing the number of equations to be memorised to six. \emph{Single-qubit Clifford+T ZX-calculus} is the restriction of the ZX-calculus to spiders with exactly one input and one output, and $\alpha\in\{{n\pi\over 4}\mid n\in\mathbb{N}\}$. As shown in \cite{Backens2}, the rules (S1), (S2), (H1) and (H2) together with the rule: \[ \tikzfig{diagrams/k2}\hfill(N) \] make the ZX-calculus complete for this fragment of quantum theory.
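All of the rule sets above are sound for the interpretation $\llbracket\cdot\rrbracket$ of the universality theorem, so candidate equalities can be spot-checked numerically. The following minimal sketch builds the spider matrices and verifies one instance of spider fusion (S1) together with the colour changer (H2), under the usual Kronecker-product conventions:
\begin{verbatim}
import numpy as np

def z_spider(alpha, m, n):
    # [[Z-spider]] = |0...0><0...0| + e^{i alpha}|1...1><1...1|,
    # with m outputs and n inputs.
    M = np.zeros((2 ** m, 2 ** n), dtype=complex)
    M[0, 0] = 1.0
    M[-1, -1] = np.exp(1j * alpha)
    return M

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_power(M, k):
    out = np.eye(1)
    for _ in range(k):
        out = np.kron(out, M)
    return out

def x_spider(alpha, m, n):
    # Colour change (H2): conjugate a Z-spider by Hadamards on every leg.
    return kron_power(H, m) @ z_spider(alpha, m, n) @ kron_power(H, n)

a, b = 0.7, 1.9
# Spider fusion (S1): two Z-spiders joined by one wire fuse, adding phases.
lhs = z_spider(b, 2, 1) @ z_spider(a, 1, 2)
assert np.allclose(lhs, z_spider(a + b, 2, 2))
\end{verbatim}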
We will also use the following special form of the (N) rule, to which we again refer as (N): \[ \tikzfig{diagrams/nvar} \hfill(N) \] As single-qubit circuits can be seen as a restriction of 2-qubit circuits, simply by leaving the 2nd qubit unaltered, our result can also be seen as a completeness result for single-qubit Clifford+T ZX-calculus. However, it is weaker than Backens' as we employ more rules. \section{Result: ZX rules vs.~circuit equations.} Recall that in this paper the ZX-rules hold up to a non-zero scalar. \begin{theorem}\label{eq:mainthm} The rules (S1), (S2), (B1), (B2), (H1), (H2), (N) and (P) depicted below make ZX-calculus complete for 2-qubit Clifford+T circuits: \[ \begin{tabular}{|ccccc|} \hline \tikzfig{diagrams/spider-bis}&\quad(S1)&$\qquad$&\tikzfig{diagrams/greenredidentity}&\quad(S2)\\ &&\ \ && \\ \tikzfig{diagrams/b1}&\quad(B1)&&\tikzfig{diagrams/b2s}&\quad(B2)\\ &&\ \ && \\ \tikzfig{diagrams/hadamdecom}&\quad(H1)&&\tikzfig{diagrams/clchge}&\quad(H2)\\ &&\ \ && \\ \tikzfig{diagrams/k2}&\quad(N)&& \tikzfig{diagrams/zxztoxzx2}&\quad(P)\vspace{0.5mm} \\ \hline \end{tabular} \] where $\alpha_2=\gamma_2$ if $\alpha_1=\gamma_1$, and $\alpha_2=\pi+\gamma_2$ if $\alpha_1=-\gamma_1$; the equality (*) should be read as follows: for every diagram on the LHS there exist $\alpha_2, \beta_2$ and $\gamma_2$ such that LHS=RHS (and vice versa, conjugating by the Hadamard gate). In what follows we will see that we actually don't need to know the precise values of $\alpha_2, \beta_2$ and $\gamma_2$. \end{theorem} So, compared to the rules that we saw in the previous section, there is only one additional rule here, the (P) rule. This is a new rule that was not present as such in any previous presentation of the ZX-calculus. Of course, as the rules presented in \cite{ng2017universal} yield universal completeness, one should be able to derive it from these: \begin{lemma}\label{zxztoxzxcr} For $\alpha_1, \beta_1, \gamma_1 \in (0, ~2\pi)$ we have: \begin{equation}\label{zxztoxzxcreq} \tikzfig{diagrams//zxztoxzx}\qquad\mbox{with}\quad \left\{ \begin{array}{l} \alpha_2=\arg z+\arg z'\\ \beta_2=2\arg (|\frac{z}{z'}|+i)\\ \gamma_2=\arg z-\arg z' \end{array} \right. \end{equation} where: \[ \begin{array}{l} z=\cos\frac{\beta_1}{2}\cos\frac{\alpha_1+\gamma_1}{2}+i\sin\frac{\beta_1}{2}\cos\frac{\alpha_1-\gamma_1}{2} \qquad z'=\cos\frac{\beta_1}{2}\sin\frac{\alpha_1+\gamma_1}{2}-i\sin\frac{\beta_1}{2}\sin\frac{\alpha_1-\gamma_1}{2} \end{array} \] So if $\alpha_1=\gamma_1$, then $\alpha_2=\gamma_2$, and if $\alpha_1=-\gamma_1$, then $\alpha_2=\pi+\gamma_2$. \end{lemma} This Lemma is restated as Corollary \ref{zxztoxzxcr} and proved in the appendix, where a more general analytic solution of this `colour-swapping' property for arbitrary generalised phases is given. The idea that a rule of this kind is needed was first suggested by Schr\"oder de Witt and Zamdzhiev \cite{VladComp}. As already indicated in the introduction, it is also clear that this rule takes one out of the Clifford+T realm, in the sense that the values of the angles in the RHS of (\ref{zxztoxzxcreq}) usually go beyond Clifford+T even if the LHS is inside it. The proof of Theorem \ref{eq:mainthm} draws on Selinger and Bian's \cite{ptbian} set of circuit equations, which is complete for 2-qubit circuits.
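As a quick numerical sanity check of Lemma \ref{zxztoxzxcr} (ours, not part of the argument), note that the one-wire green and red phase spiders are, up to a global phase, the rotations $R_z$ and $R_x$; for those rotations the stated conversion holds exactly, so the following Python sketch verifies the angle formulas on random angles:
\begin{verbatim}
import numpy as np

# One-wire phase spiders equal Rz / Rx up to a global phase, so checking the
# rotation-gate identity exactly verifies the diagram equation up to scalar.
def Rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def Rx(b):
    c, s = np.cos(b / 2), -1j * np.sin(b / 2)
    return np.array([[c, s], [s, c]])

rng = np.random.default_rng(1)
for _ in range(1000):
    a1, b1, g1 = rng.uniform(0, 2 * np.pi, 3)   # generic angles, so z' != 0
    z = np.cos(b1/2)*np.cos((a1+g1)/2) + 1j*np.sin(b1/2)*np.cos((a1-g1)/2)
    zp = np.cos(b1/2)*np.sin((a1+g1)/2) - 1j*np.sin(b1/2)*np.sin((a1-g1)/2)
    a2 = np.angle(z) + np.angle(zp)
    b2 = 2 * np.angle(np.abs(z / zp) + 1j)
    g2 = np.angle(z) - np.angle(zp)
    assert np.allclose(Rz(a1) @ Rx(b1) @ Rz(g1),
                       Rx(a2) @ Rz(b2) @ Rx(g2), atol=1e-9)
\end{verbatim}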
Here we rely on the universality of the ZX-language to write down these circuits; in particular, besides CNOT-gates these also involve symmetric CZ-gates: \[ \tikzfig{diagrams//CZ} \] In the statement of the following theorem we adopt the more usual left-to-right reading of circuits, although we still express it as ZX diagrams. \begin{theorem}\label{ptbianthm}\cite{ptbian} The following equations are complete for 2-qubit Clifford+T circuits: \begin{equation}\label{cmrns-2} \tikzfig{diagrams//completerelationlist-2} \end{equation}\vspace{-1.5mm} \begin{equation}\label{cmrns-1} \tikzfig{diagrams//completerelationlist-1} \end{equation}\vspace{-0.5mm} \begin{equation}\label{cmrns0} \tikzfig{diagrams//completerelationlist0} \end{equation}\vspace{0.0mm} \begin{equation}\label{cmrns1} \tikzfig{diagrams//completerelationlist1} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns2} \tikzfig{diagrams//completerelationlist2} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns3} \tikzfig{diagrams//completerelationlist3} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns4} \tikzfig{diagrams//completerelationlist4} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns5} \tikzfig{diagrams//completerelationlist5} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns6} \tikzfig{diagrams//completerelationlist6} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns7} \tikzfig{diagrams//completerelationlist7} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns8} \tikzfig{diagrams//completerelationlist8} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns9} \tikzfig{diagrams//completerelationlist9} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns10} \tikzfig{diagrams//completerelationlist10} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns11} \tikzfig{diagrams//completerelationlist11} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns12} \left(\tikzfig{diagrams//pbct1}\right)^2=\tikzfig{diagrams//completerelationlist12} \end{equation}\vspace{1.5mm} \begin{equation}\label{cmrns13} \left(\tikzfig{diagrams//pbctb}\right)^2=\tikzfig{diagrams//completerelationlist12} \end{equation}\vspace{-1.5mm} \begin{eqnarray*} &&\tikzfig{diagrams//completerelationlist141}\vspace{1.5mm}\\ &&\tikzfig{diagrams//completerelationlist142}\vspace{1.5mm}\\ &&\ =\tikzfig{diagrams//completerelationlist12} \end{eqnarray*} \vspace{-13mm} \begin{equation}\label{cmrns14} \end{equation} \end{theorem} Not only does this Theorem serve as a stepping stone, it is also the main point of comparison for our result. The ZX-rules are clearly much simpler than the circuit equations, which, to say the least, are virtually impossible to memorise, let alone apply. \section{Proof.} We need to show that the equations in Theorem \ref{ptbianthm} can be derived from those in Theorem \ref{eq:mainthm}. Doing so is a straightforward calculation for the first 14 equations. However, this is not the case for the remaining circuit relations (\ref{cmrns12}), (\ref{cmrns13}) and (\ref{cmrns14}), each of which we prove as a lemma. \begin{lemma} Let $A=$ \[ \tikzfig{diagrams//pbct1} \] then $A^2=I.$ \end{lemma} \begin{proof} First we have $A=$ \begin{equation} \tikzfig{diagrams//pbct22new} \end{equation} By the rule (P), we can assume that \begin{equation}\label{pbct32} \tikzfig{diagrams//pbct32new} \end{equation} Since $e^{i\frac{-\pi}{4}} e^{i\frac{\pi}{4}}=1$, we may take $\gamma=\alpha+\pi$.
Also note that \begin{equation} \tikzfig{diagrams//pbct421new}=\left(\tikzfig{diagrams//pbct422new}\right)^{-1} \end{equation} Thus: \begin{equation}\label{pbct52} \tikzfig{diagrams//pbct52new} \end{equation} Therefore, $A=$ $$ \tikzfig{diagrams//pbct62} $$ Finally, $A^2=$ $$ \tikzfig{diagrams//pbct722} $$ \end{proof} \begin{lemma} Let $B=$ \[ \tikzfig{diagrams//pbctb} \] then $B^2=I$. \end{lemma} \begin{proof} Firstly we have: \[ \tikzfig{diagrams//pbctb22new} \] By the rule (P), we can assume that: \begin{equation}\label{pbctb32} \tikzfig{diagrams//pbctb32new} \end{equation} Since $e^{i\frac{-\pi}{4}} e^{i\frac{\pi}{4}}=1$, we may take $\gamma=\alpha+\pi$. Also note that: \[ \tikzfig{diagrams//pbctb421new}=\left(\tikzfig{diagrams//pbctb422new}\right)^{-1} \] Thus: \begin{equation}\label{pbctb52} \tikzfig{diagrams//pbctb52new} \end{equation} Using again the same technique as earlier we obtain: $$ \tikzfig{diagrams//pbctb62-reduced} $$ Finally, again following the previous lemma, $B^2=$ \[ \tikzfig{diagrams//pbctb72new-reduced} \] \end{proof} \begin{lemma} Let $C=$ \[ \tikzfig{diagrams//pbctc1} \] and $D=$ \[ \tikzfig{diagrams//pbctc2} \] then $D\circ C=I$. \end{lemma} \begin{proof} Firstly we simplify the circuit $C$ as follows: \[ \tikzfig{diagrams//pbctc3new} \] By the rule (P), we can assume that: \begin{equation}\label{circuitceq} \tikzfig{diagrams//pbctc32new} \end{equation} Then we have for $C$: \begin{equation}\label{circuitceq4} \tikzfig{diagrams//pbctc34new} \end{equation} Secondly, we simplify the circuit $D$ as follows: \begin{equation*} \tikzfig{diagrams//pbctc4new} \end{equation*} By the rule (P), we have \begin{equation}\label{circuitceq2} \tikzfig{diagrams//pbctc33new} \end{equation} Therefore we have for $D$: \begin{equation}\label{circuitceq5} \tikzfig{diagrams//pbctc35new} \end{equation} Then we obtain the composition $D\circ C=$ \begin{equation}\label{circuitceq6} \tikzfig{diagrams//pbctc36new} \end{equation} By the rule (P), we can assume that: \begin{equation}\label{circuitceq7} \tikzfig{diagrams//pbctc37new} \end{equation} Then for its inverse, we have \begin{equation}\label{circuitceq8} \tikzfig{diagrams//pbctc38new} \end{equation} We can also obtain: \begin{equation}\label{circuitceq9} \tikzfig{diagrams//pbctc39new} \end{equation} As a consequence, taking inverses of both sides of (\ref{circuitceq9}) yields: \begin{equation}\label{circuitceq10} \tikzfig{diagrams//pbctc310new} \end{equation} Now we can rewrite $D\circ C$ as: \begin{equation}\label{circuitceq11} \tikzfig{diagrams//pbctc311new} \end{equation} We can depict the dashed part of (\ref{circuitceq11}) in the form of connected octagons, and to deal with these octagons we use (Hex): \begin{equation}\label{octagon2checkeq} \tikzfig{diagrams//octagon2checknew-part1} \end{equation} \ \tikzfig{diagrams//octagon2checknew-part2} \ By the (P) rule, we have: \begin{equation}\label{octasim2eq} \tikzfig{diagrams//octasim2new} \end{equation} where $z=x+\pi$.
Then we take inverses of each side of (\ref{octasim2eq}) and obtain: \begin{equation}\label{octasim3eq} \tikzfig{diagrams//octasim3new} \end{equation} By rearranging the phases on both sides of (\ref{octasim2eq}), we have: \begin{equation}\label{octasim4eq} \tikzfig{diagrams//octasim4new} \end{equation} Thus: \begin{equation}\label{octasim5eq} \tikzfig{diagrams//octasim5new} \end{equation} Therefore: \begin{equation}\label{octasim6eq} \tikzfig{diagrams//octasim6new} \end{equation} It then follows that: \begin{equation}\label{octasim7eq} \tikzfig{diagrams//octasim7new} \end{equation} Taking the inverse of the left-hand side of (\ref{octasim7eq}), we have: \begin{equation}\label{octasim8eq} \tikzfig{diagrams//octasim8new} \end{equation} Now we can further simplify the final diagram in (\ref{octagon2checkeq}) as follows: \begin{equation}\label{octasim9eq} \tikzfig{diagrams//octasim9new} \end{equation} Finally, the composite circuit $D\circ C$ can be simplified as follows: \begin{equation}\label{octasim10eq} \tikzfig{diagrams//octasim10new} \end{equation} where we used the following property: \begin{equation}\label{octasim11eq} \tikzfig{diagrams//octasim11new} \end{equation} \end{proof} \section{Conclusion and further work} We gave a set of ZX-rules that allows one to establish all equations between 2-qubit circuits, and these ZX-rules are remarkably simpler than the relations between unitary gates from which they were derived. The key to this simplicity is: (i) abandoning unitarity at intermediate stages, and (ii) abandoning the T-restriction, which comes about when applying rule (P). In the case of the latter, it is important to stress again that the actual values of the phases in the RHS of (P) don't have to be known. Also, while the techniques used to establish the relations between two-qubit unitary gates don't scale to more than two qubits, the ZX-calculus, by being complete, already provides us with such a set of rules. It is just a matter of figuring out whether all of those rules are actually needed for the case of circuits. Moreover, automation is possible thanks to the Quantomatic software. Although we don't yet have a general strategy for simplifying quantum circuits by the ZX-calculus, it is possible at least in some cases. In fact, in ongoing work in collaboration with Niel de Beaudrap, using techniques similar to some of those in this paper, we have shown that using ZX-calculus we can outperform the state-of-the-art for quantum circuit simplification. A paper on this is forthcoming. We expect the new rule (P) to have many more uses within the domain of quantum computation and information. The same question remains for other rules that emerged as part of the completion of ZX-calculus. A natural challenge of interest to the Reversible Computing community is whether the classical fragment of ZX-calculus can be used for deriving similar completeness results for classical circuits. \section*{Acknowledgement} This work was sponsored by Cambridge Quantum Computing Inc.~for which we are grateful. QW also thanks Kang Feng Ng for useful discussions. \bibliographystyle{splncs03}
\section{Introduction} Online retail has been growing rapidly in recent years, and clothing shopping occupies a large proportion of it. Driven by the huge profit potential, intelligent clothing item retrieval is receiving a great deal of attention in the multimedia and computer vision literature. Meanwhile, online video streaming services are becoming increasingly popular. When watching idol dramas or TV shows, such as the Korean TV drama \textit{My Love From the Star}, in which actresses wear fashionable clothes, viewers are easily attracted by those clothes and stimulated to buy the identical items shown in the video. In this paper, we consider a new scenario for such online clothing shopping: finding the clothes identical to the ones worn by the actors while watching videos. We call this new search approach \textit{Video2Shop}. \begin{figure*}[tb] \centering \includegraphics[scale=0.45]{./images/FR.pdf} \caption{Framework of the proposed AsymNet. After clothing detection and tracking, deep visual features are generated by the image feature network (IFN) and the video feature network (VFN), respectively. These features are then fed into the similarity network to perform pair-wise matching. } \label{fig:FR} \vspace{-0.2in} \end{figure*} Although the street-to-shop clothing matching problem, which searches for online clothing using street fashion photos, has been explored recently \cite{ICCV15_CD, MM14_DS, ICCV15_Wheretobuy, ICMR16_Product}, finding the exact same items in online shops for clothes appearing in videos is not well studied yet. The diverse appearance of clothes, cluttered scenes, occlusion, varying lighting conditions and motion blur in videos make Video2Shop challenging. More specifically, the clothing items appearing in videos and on online shopping websites exhibit significant visual discrepancy. On one hand, in videos, the clothes are usually captured from different viewpoints (the front, the side or the back), or following the path of the actors, which leads to great variety in clothing appearance. The complex scenes and the common motion blur in videos make the situation even worse. On the other hand, online clothing images do not always have a clean background, since the clothes are often worn by fashion models in outdoor scenes to show their real wearing effect. The cluttered background imposes difficulties for clothing localization and analysis. These problems caused by the videos and the online clothing images make the \textit{Video2Shop} task more challenging than street-to-shop search. The architecture of the proposed deep neural network, AsymNet, is illustrated in Fig. \ref{fig:FR}. When users watch videos through web pages or set-top-box devices, the system retrieves the exactly matched clothing items from online shops and returns them to the users. A clothing detector is first deployed on both the video side and the image side to extract a set of proposals (clothing patches) identifying the potential clothing regions, limiting the impact of background regions and leading to more accurate clothing localization. For videos, a clothing tracker is then used to track the clothing patches and generate clothing trajectories, each of which contains the same clothing item appearing in consecutive frames. Intuitively, clothing patches with varying viewpoints are thus preserved.
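For illustration only, the detection-and-tracking front end just described can be organised as in the following sketch. Here \texttt{detect} and \texttt{make\_tracker} are hypothetical stand-ins for the Faster-RCNN detector and the KCF tracker introduced below, and the IoU-based grouping heuristic is our own simplification:
\begin{verbatim}
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def extract_trajectories(frames, detect, make_tracker, iou_min=0.5):
    """detect(frame) -> list of boxes; make_tracker(frame, box) -> tracker
    whose .update(frame) returns the new box, or None if the target is lost."""
    trajectories, active = [], []      # active: (tracker, trajectory) pairs
    for t, frame in enumerate(frames):
        alive = []
        for tracker, traj in active:   # advance the existing tracks
            box = tracker.update(frame)
            if box is not None:
                traj.append((t, box))
                alive.append((tracker, traj))
        active = alive
        for box in detect(frame):      # open a track for each new detection
            if all(iou(box, traj[-1][1]) < iou_min for _, traj in active):
                traj = [(t, box)]
                trajectories.append(traj)
                active.append((make_tracker(frame, box), traj))
    return trajectories
\end{verbatim}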
Due to their promising performance and stability, Faster-RCNN \cite{NIPS15_FasterR-CNN} and the Kernelized Correlation Filters (KCF) tracker \cite{TPAMI15_KCF} are adopted in this paper as the clothing detector and clothing tracker, respectively. Deep visual features are generated for clothing images in shops and clothing trajectories in videos, using the image feature network (IFN) and the video feature network (VFN), respectively. For videos, the deep visual features are further fed into a Long Short-Term Memory (LSTM) framework \cite{AR14_LSTM} for sequence modeling, which captures the temporal dynamics in videos. To take whole clothing trajectories into account, this problem is formulated as an asymmetric (multiple-to-single) matching problem, i.e., exactly matching a sequence of clothing regions appearing in a video to a single online shopping image. These features are then fed into the similarity network to perform pair-wise matching between clothing regions from videos and shopping images, in which a reconfigurable deep tree structure is proposed to automatically learn the fusion strategy. The top-ranked results are then returned to users. The main contributions of the proposed work are summarized as follows: \begin{itemize}[leftmargin=*] \item A novel deep network, AsymNet, is proposed for the cross-domain Video2Shop application, which is formulated as an asymmetric (multiple-to-single) matching problem. It mainly consists of two components: image and video feature representation, and similarity measure. \item To conduct exact matching, LSTM hidden states for clothing trajectories in videos and image features representing online shopping images are jointly modeled under the similarity network with a reconfigurable deep tree structure. \item To train AsymNet, an approximate training method is proposed to improve the training efficiency. The proposed method can handle large-scale online search. \item Experiments conducted on the first and largest Video2Shop dataset, which consists of 26,352 clothing trajectories in videos and 85,677 clothing images from shops, demonstrate the effectiveness of the proposed method. The proposed method outperforms the state-of-the-art approaches. \end{itemize} The rest of our paper is organized as follows: related works are first reviewed in Section \ref{sec:rw}. The details of the feature extraction networks and the similarity network are elaborated in Sections \ref{sec:Feature} and \ref{sec:SN}, respectively. The approximate training of the network is presented in Section \ref{sec:Train}. Finally, experiments are introduced in Section \ref{sec:ex}. \section{Related Work} \label{sec:rw} \subsection{Cross-Scenario Clothing Retrieval} Cross-scenario clothing retrieval has wide applicability in commercial systems. There have been extensive efforts on similar clothing retrieval \cite{TOG15_CS,ICCV15_CD,MM14_DS,TMM16_CoPars,ICMR13_Recognitionandsegmentation,CVPR12_Street-to-shop} and exact clothing retrieval \cite{ICCV15_Wheretobuy,ICMR16_Product}. For similar clothing retrieval, clothing recognition and segmentation techniques are used in \cite{TMM16_CoPars,ICMR13_Recognitionandsegmentation} to retrieve similar clothing. In order to tackle the domain discrepancy between street photos and shop photos, sparse representations are utilized in \cite{CVPR12_Street-to-shop}. With the adoption of deep learning, an attribute-aware fashion-related retrieval system is proposed in \cite{MM14_DS}.
A convolutional neural network using the contrastive loss is proposed in \cite{TOG15_CS}. Based on the Siamese network, a Dual Attribute-aware Ranking Network (DARN) is proposed in \cite{ICCV15_CD}. For exact clothing retrieval, exactly matching street clothing photos to online shops was first explored in \cite{ICCV15_Wheretobuy}. A robust deep feature representation is learned in \cite{ICMR16_Product} to bridge the domain gap between the street and shops. A new deep model, namely FashionNet, is proposed in \cite{CVPR16_DeepFashion}, which learns clothing features by jointly predicting clothing attributes and landmarks. Despite recent advances in exact street-to-shop retrieval, rather few studies have focused specifically on exactly matching clothes in videos to online shops. \subsection{Deep Similarity Learning} As deep convolutional neural networks are becoming ubiquitous, there has been growing interest in similarity learning with deep models. For image patch-matching, several convolutional neural networks are proposed in \cite{CVPR15_MatchNet,CVPR15_SIM,CVPR15_CSIM_lecun}. These techniques learn representations coupled either with pre-defined distance functions or with more generic learned multi-layer network similarity measures. For object retrieval, a neural network with a contrastive loss function is designed in \cite{TOG15_CS}. A novel \czq{Deep Fashion Network architecture is proposed in \cite{CVPR16_DeepFashion}} for efficient similarity retrieval. Inspired by these works, we propose a tree-structured similarity learning network to match clothes appearing in videos to the exact same items in online shops. \section{Representation Learning Networks} \label{sec:Feature} Once the clothing regions are detected in images and tracked into clothing trajectories for videos, feature extraction networks are applied to obtain the deep features. \subsection{Image Representation Learning Networks} \label{sec:IFN} The image feature network (IFN) is implemented based on VGG16 \cite{AR16_VGG16}. In VGG16, the input image patches are scaled to 256x256 and then cropped to a random 227x227 region. This requirement comes from the fact that the fully-connected layers of the network need an input of fixed, predefined dimension. In our Video2Shop matching task, Faster-RCNN \cite{NIPS15_FasterR-CNN} is adopted to detect clothing regions in the shopping images. Unfortunately, the detected clothing regions have arbitrary sizes, which does not meet this requirement on the input size. Enlightened by the idea of the recently proposed spatial pyramid pooling (SPP) architecture \cite{TPAMI_SPP}, which pools features in arbitrary regions to generate fixed-length representations, a spatial pyramid pooling layer is inserted between the convolutional layers and the fully-connected layers of VGG16, as shown in Fig. \ref{fig:N1}. It aggregates the features of the last convolutional layer through spatial pooling, so that the size of the pooled output is independent of the size of the input. \begin{figure}[tb] \centering \includegraphics[scale=0.6]{./images/F1.pdf} \caption{The Architecture of the Image Feature Network} \label{fig:N1} \vspace{-0.2in} \end{figure} \subsection{Video Representation Learning Networks} \label{sec:VFN} The video feature network (VFN) is illustrated in Fig. \ref{fig:FR}. For videos, the aforementioned image feature network (IFN) is also used to extract convolutional features.
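As an aside, the spatial pyramid pooling layer described above admits a very short implementation. The following PyTorch sketch is ours, and the pyramid levels (1, 2, 4) are an illustrative choice rather than the paper's configuration:
\begin{verbatim}
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """feat: (N, C, H, W) activations of the last conv layer, any H and W.
    Max-pools feat onto an l x l grid for each pyramid level and concatenates,
    giving a fixed-length (N, C * sum(l * l for l in levels)) vector."""
    n = feat.shape[0]
    pooled = [F.adaptive_max_pool2d(feat, l).reshape(n, -1) for l in levels]
    return torch.cat(pooled, dim=1)

# Region crops of different sizes map to vectors of the same length:
# spatial_pyramid_pool(torch.randn(1, 512, 13, 9)).shape == (1, 512 * 21)
\end{verbatim}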
Since temporal dynamics exist in videos, the traditional average-pooling strategy is no longer adequate. A recurrent neural network is a natural choice to solve this problem. Recently, due to its long short-term memory capability for modeling sequential data, Long Short-Term Memory (LSTM) \cite{AR14_LSTM} has been successfully applied to a variety of sequence modeling tasks. In this paper, it is chosen to characterize the clothing trajectories in videos. Following the LSTM unit proposed in \cite{AR14_Lstm_cell}, a typical LSTM unit consists of an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, as well as a candidate cell state $g_t$. The interaction between states and gates along the time dimension is defined as follows: \begin{align} \begin{pmatrix} \mathbf{i}_t\\ \mathbf{f}_t\\ \mathbf{o}_t\\ \mathbf{g}_t \end{pmatrix} &= \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \text{tanh} \end{pmatrix} M \begin{pmatrix} \mathbf{h}_{t-1}\\ \mathbf{m}_t \end{pmatrix},\nonumber \\ \mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t,\\ \mathbf{h}_t &= \mathbf{o}_t \odot \tanh\left(\mathbf{c}_t\right). \nonumber \end{align} Here, $c_t$ encodes the cell state, $h_t$ encodes the hidden state, and $m_t$ is the convolutional feature generated by the image feature network. The operator $\odot$ represents element-wise multiplication. Given the convolutional features $M=(m_1 ,\ldots,m_n)$ of a clothing trajectory in a video, a single LSTM computes a sequence of hidden states $(h_1 ,\ldots,h_n )$. Furthermore, we find that the temporal variety cannot be fully learned by a single LSTM layer, so we stack LSTM layers to further increase the discriminative ability of the network, using the hidden units from one layer as inputs for the next layer. After experimental validation, a two-level LSTM network is utilized in this work. \section{Similarity Learning Networks} \label{sec:SN} \subsection{Motivation} To conduct pair-wise similarity measurement between clothing trajectories from videos and shopping images, a similarity network is proposed. The inputs are several LSTM hidden states $(h_1, h_2,\ldots, h_n)$ from the video feature network and a convolutional feature $m_i$ from the image feature network. The output is a similarity score $Y$. This problem is formulated as an asymmetric (multiple-to-single) matching problem. Traditionally, such a problem is solved by conducting average or max pooling over the whole clothing trajectory to obtain a global similarity, or by directly selecting the similarity of the last element of the trajectory. More recently, a key volume detection method \cite{CVPR2016_Keyvolume} has also been proposed to solve a similar problem. However, these methods fail in our Video2Shop application due to the large variability and complexity of video data. The average or max values cannot fully represent a clothing trajectory. Although key volume detection is able to learn the most critical parts, it is still too simple for this task. In statistical learning theory \cite{NC91_ME1,NC94_ME2}, such learning problems are formulated as mixture estimation problems, which attack a complex problem by dividing it into simpler problems whose solutions can be combined to yield a solution to the complex problem. Enlightened by this idea, we extend the generalized mixture-of-experts model to Recurrent Neural Networks (RNNs), and modify the mixture estimation strategy to obtain a global similarity.
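As a preview of the structure formalised in the next subsection, the following schematic sketch (ours; the shapes, the grouping of frames into $J$ groups of $I$ members, and the average-pooled top-level features are illustrative assumptions) shows how per-frame similarities can be combined by two levels of softmax gating:
\begin{verbatim}
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def fused_similarity(y_hat, x, V_low, V_top):
    """y_hat: per-frame SSN similarities grouped per top node, shape (J, I);
    x: the corresponding fc1 features, shape (J, I, D);
    V_low: low-level gating weights, shape (J, I, D);
    V_top: top-level gating weights, shape (J, D)."""
    J = x.shape[0]
    group = np.empty(J)
    for j in range(J):
        g_low = softmax(np.einsum('id,id->i', V_low[j], x[j]))  # local gates
        group[j] = g_low @ y_hat[j]         # gated sum of SSN outputs
    x_top = x.mean(axis=1)                  # average-pooled features per group
    g_top = softmax(np.einsum('jd,jd->j', V_top, x_top))        # global gates
    return g_top @ group                    # global similarity Y
\end{verbatim}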
The proposed approach attempts to allocate fusion nodes to summarize the single similarities obtained at different viewpoints. \subsection{Network Structure} Because there are multiple inputs and only one output, a tree structure is proposed to automatically adjust the fusion strategy, as illustrated in Fig. \ref{fig:FR}. Two types of nodes are involved in the tree structure, i.e., single similarity network nodes (SSN) and fusion nodes (FN), corresponding to the leaves and the branches of the tree. The single similarity network (SSN) acts as the leaves of the tree; it calculates the similarity between a single LSTM hidden state $h_i$ and a convolutional feature $m_i$. These results are then passed to the fusion nodes (FN), each of which generates a scalar output controlling the weights of the similarity fusion. The outputs of the fusion nodes are passed layer by layer to fuse the intermediate results. In this work, a five-layer structure is adopted. Finally, a global similarity $Y$ is produced. Details of each substructure are given below. \paragraph{Single Similarity Network (SSN)} To facilitate understanding, we first introduce the one-to-one similarity measure between an LSTM hidden state $h_i$ and a convolutional feature \czq{$m_i$}. As indicated in \cite{ICCV15_Wheretobuy}, cosine similarity is too general to capture the underlying differences between features. Therefore, the similarity between $h_i$ and \czq{$m_i$} is modeled as a network with two fully-connected layers, denoted by the red dotted box shown in Fig. \ref{fig:FR}. Specifically, the two fully-connected layers have 256 (fc1) and 1 (fc2) outputs, respectively. The output of the last fully-connected layer is a real value $z$. On top of the network, logistic regression is used to generate the similarity between $h_i$ and \czq{$m_i$} as: \begin{equation} \label{eqn_1} \hat{y}=\frac{1}{1+e^{-z}} \end{equation} \paragraph{Fusion Node (FN)} The SSN is piecewise smooth, analogous to a generalized linear model (GLIM) \cite{ML1987_GLIM}. Once the individual SSNs are calculated, the fusion nodes (FN) at the lower level integrate the results of the SSNs and control their weights; they are defined as a generalized linear system \cite{ML1994_HME}. The intermediate variable $\mathbf{\varepsilon_{ij}}$ is defined as: \begin{equation} \label{eqn_Fn_1} \mathbf{\varepsilon_{ij}}=\mathbf{v_{ij}}^{T}\mathbf{x_{ij}} \end{equation} where \czq{the subscripts $i$ and $j$ denote the index of a fusion node, with $i$ and $j$ referring to the low-level and high-level FN nodes, respectively, as in Fig. \ref{fig:FR}}, $\mathbf{v_{ij}}$ is a weight vector, and $\mathbf{x_{ij}}$ is a feature \czq{vector} from the fc1 layer. \czq{The output of a lower-level fusion node is the product of $g_{i|j}$ (the output of Eqn. \ref{eqn_Fn_2}) and $\hat{y}$ (the output of the SSN). The gate $g_{i|j}$ is a scalar given by}: \begin{equation} \label{eqn_Fn_2} g_{i|j}= \frac{e^{\varepsilon_{ij}}}{\sum_{k}e^{\varepsilon_{kj}}} \end{equation} Note that the $g_{i|j}$ are positive and sum to one, so they can also be interpreted as providing a local fusion for each top-level fusion node. Since a hierarchical fusion strategy can obtain better performance \cite{ML1994_HME}, the fusion nodes are organized as a tree structure. Similarly, the intermediate variable $\varepsilon_{j}$ is defined, with a weight vector $\mathbf{v_{j}}$, as in Eqn. \ref{eqn_Fn_1}.
In particular, \czq{$\mathbf{x_{j}}$ is the average-pooled vector of the corresponding $\mathbf{x_{ij}}$}. The output $g_j$ of the top fusion node is also defined as in Eqn. \ref{eqn_Fn_2}. The $g_j$ are positive and sum to one, and can be interpreted as providing a global fusion function. With such a tree structure, for each mini-batch, we update the weights of the fusion nodes in the forward pass. Once the similarity network converges, the global similarity is obtained. \subsection{Learning Algorithm} In this subsection, we introduce the learning method of our similarity network. The learning is implemented as a two-step iteration, in which the single similarity network and the fusion nodes are mutually enhanced. The feature representation networks and the SSN are learnt first, and then the fusion nodes are learnt while the SSN is fixed. \paragraph{Learning of the Single Similarity Network.} The learning problem of the SSN is defined as minimizing a logarithmic loss. Suppose that we have $N$ convolutional features from the first fully-connected layer fc1, $X = \{x_1 ,x_2, \ldots, x_N \}$, and each has a label $y_i \in \{0,1\}$, where 0 means ``does not match'' while 1 means ``matches''. The loss is defined as: \begin{equation} \label{eqn:SN1} L(W)=-\frac{1}{N}\sum_{i=1}^{N}\left(y_i \log\left ( \hat{y_i} \right ) + \left ( 1-y_i \right ) \log\left ( 1-\hat{y_i} \right )\right)+\lambda \left \|W \right \|^{2} \end{equation} where $W$ denotes the parameters of the SSN, $y_i = 1$ for positive examples and $y_i = 0$ for negative examples, and $\hat{y_i}$ is the output of the single similarity network. \paragraph{Learning of the Fusion Nodes.} When the SSN is fixed, for a given mini-batch feature set $X = \{x_1 ,x_2,\ldots,x_N \}$ of the fc1 layer, the global similarity $Y$ can be defined as the mixture of the probabilities of generating $y_{ij}$ from each SSN: \begin{equation} \label{eqn:obj} P(Y|X,\theta) = \sum_{j}g_{j}(X,\mathbf{v_{j}})\sum_{i}g_{i|j}(X,\mathbf{v_{ij}})p(y_{ij}|X,W_{ij}) \end{equation} where $P(Y|X,\theta)$ and $p(y_{ij}|X,W_{ij})$ are the global and single similarities, and $g_{j}(X,\mathbf{v_{j}})$ and $g_{i|j}(X,\mathbf{v_{ij}})$ are the weights of the top and lower fusion nodes. $\theta$ collects $\mathbf{v_{j}}$, $\mathbf{v_{ij}}$ and $W_{ij}$, which are the weights of the top fusion nodes, the lower fusion nodes and the SSN, respectively. In order to derive the learning algorithm for Eqn. \ref{eqn:obj}, posterior probabilities of the fusion nodes are defined. The probabilities $g_j$ and $g_{i|j}$ are referred to as prior probabilities, because they are computed based only on the input $\mathbf{x_{i}}$ from the fc1 layer, as in Eqn. \ref{eqn_Fn_2}, without knowledge of the corresponding target output $y$ used in the SSN. With Bayes' rule, the posterior probabilities at the nodes of the tree are given by: \begin{equation} \label{eqn:h1} h_{j} = \frac{g_{j}\sum_{i}g_{i|j}P_{ij}(y)}{\sum_{j}g_{j}\sum_{i}g_{i|j}P_{ij}(y)} \end{equation} and \begin{equation} \label{eqn:h2} h_{ij} = \frac{g_{i|j}P_{ij}(y)}{\sum_{i}g_{i|j}P_{ij}(y)} \end{equation} With these posterior probabilities, a gradient ascent learning algorithm (on the log-likelihood) is developed for Eqn. \ref{eqn:obj}. The log-likelihood of a mini-batch dataset $X =\{x^{t},y^{t}\}_{1}^{N}$ is: \begin{equation} l(\theta ;X)= \sum_{t}\ln \sum_{j}g_{j}^{(t)}\sum_{i}g_{i|j}^{(t)}P_{ij}(y^{(t)}) \end{equation} By differentiating $l(\theta ;X)$ with respect to the parameters, the following gradient ascent learning rules for the weight vectors are obtained.
\begin{equation} \label{eqn:SN_2} \bigtriangledown \mathbf{v_{j}}= \alpha \sum_{t}(h_{j}^{(t)}-g_{j}^{(t)}) \mathbf{x_{j}}^{(t)} \end{equation} \begin{equation} \label{eqn:SN_3} \bigtriangledown \mathbf{v_{ij}}= \alpha \sum_{t}h_{j}^{(t)}(h_{ij}^{(t)}-g_{i|j}^{(t)}) \mathbf{x_{ij}}^{(t)} \end{equation} where $\alpha$ is a learning rate. These equations give a batch learning algorithm to train the fusion nodes (i.e. the tree structure). To form a deeper tree, each SSN is expanded recursively into a fusion node and a set of sub-SSN networks. In our experiment, we have a five-level deep tree structure and the numbers of fusion nodes in the levels are 32, 16, 8, 4 and 2, respectively. \floatname{algorithm}{Algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[t] \caption{Approximate Training Method.} \label{alg:2} \begin{algorithmic}[1] \REQUIRE An AsymNet containing IFN, VFN and SSN, L: LSTM hidden states, C: convolutional feature. \ENSURE AsymNet \STATE Sample $n$ clothing trajectories, each trajectory $u$ having $2\times S$ clothing images; \STATE L= net\_forward(VFN), C= net\_forward(IFN); \STATE \czq{Copy L $2\times S$ times as $\hat{L}$; send C and $\hat{L}$ to the SSN}; \STATE Train the SSN as Eqn. \ref{eqn:SN1} and compute $\bigtriangledown (SSN)$; \STATE Net\_forward(SSN) and compute $h_j$ and $h_{ij}$ as Eqns. \ref{eqn:h1}-\ref{eqn:h2}; \STATE Train the fusion nodes as Eqns. \ref{eqn:SN_2}-\ref{eqn:SN_3}; \STATE Net\_backward(IFN; $\bigtriangledown (SSN)$); \STATE Net\_backward(VFN; $\bigtriangledown(VFN_u)$) as Eqn. \ref{eqn:vfn}; \end{algorithmic} \end{algorithm} \section{Approximate Training} \label{sec:Train} Intuitively, to achieve good performance, different models should be trained independently for different clothing categories. To this end, a general AsymNet is first trained, followed by fine-tuning for each clothing category to obtain category-specific models. \czq{There are 14 models to be trained.} In this section, we introduce the approximate training of AsymNet. To train a robust model, millions of training samples are usually needed, and it is extremely time-consuming to train AsymNet using the traditional training strategy. Based on an intrinsic property of this application, namely that many positive and negative samples (i.e. shopping clothes) share the same clothing trajectories in the training stage, an efficient training method is proposed, which is summarized in Alg. \ref{alg:2}. Suppose that the training batch size is $n$, so $n$ trajectories in videos are sampled. Meanwhile, for \czq{a single} trajectory $u$, $2\times S$ shopping images are sampled (the numbers of positives and negatives are both equal to $S$). In total, we have $n$ clothing trajectories in videos and $2\times S \times n$ clothing images in shops in each batch. To accelerate training, the LSTM hidden states of the $n$ trajectories are copied $2\times S$ times and sent to the similarity network. In the backward pass, the gradient of each clothing trajectory can be approximated as \begin{equation} \label{eqn:vfn} \bigtriangledown (VFN_u)= \frac{1}{2 \times S}\bigtriangledown (SSN_u) \end{equation} and the gradients of the clothing images in shops can be back-propagated directly. \section{Experiment} \label{sec:ex} In this section, we evaluate the performance of the individual components of AsymNet, and compare the proposed method with state-of-the-art approaches.
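Throughout the experiments, retrieval quality is measured by top-$k$ accuracy, defined precisely in the next subsection; the following minimal sketch (our own phrasing of that criterion) shows the computation:
\begin{verbatim}
def topk_accuracy(ranked_results, ground_truth, k=20):
    """ranked_results: {query_id: [item ids ranked by similarity]};
    ground_truth: {query_id: set of exactly-matching item ids}.
    A query counts as correct if any exact match is in its top k."""
    hits = sum(
        any(item in ground_truth[q] for item in ranked[:k])
        for q, ranked in ranked_results.items()
    )
    return hits / len(ranked_results)
\end{verbatim}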
\subsection{Dataset and Metrics} Since no proper dataset is available for the Video2Shop application, we collected a new dataset to evaluate the performance of identical clothing retrieval from videos, which will be released later. To the best of our knowledge, this is the first and the largest dataset for the Video2Shop application. There are a number of online stores on the e-commerce websites Tmall.com and Taobao.com which sell the same styles of clothes as appear in movies, TV and variety shows. Accordingly, the videos and the corresponding online clothing images are also posted in these stores. We downloaded these videos from Tmall MagicBox, a set-top-box device from the Alibaba Group, and the frames containing the corresponding clothing were extracted manually as clothing trajectories. In total, there are 85,677 online clothing shopping images from 14 categories and 26,352 clothing trajectories extracted from 526 videos through Tmall MagicBox, yielding 39,479 exact matching pairs. We also collected similar matching pairs for the evaluation of similar-retrieval algorithms. The dataset information is listed in Table \ref{tab:performance}. In order to train the clothing detector, 14 categories of clothes were manually labeled, with 2000 positive samples collected per category from online images. Faster-RCNN \cite{NIPS15_FasterR-CNN} is utilized as the clothing detector, and the clothing trajectories are generated by the Kernelized Correlation Filters (KCF) tracker \cite{TPAMI15_KCF}. \czq{The parameters used in Faster-RCNN and KCF are the same as in the original versions.} Duplicate clothing trajectories are removed. The length of the clothing trajectories is roughly equal to 32. To maintain the temporal characteristics of clothing trajectories, a sliding window is used to unify the length of clothing trajectories to 32. Each clothing trajectory in our dataset is linked to exactly matched clothing images, manually verified by annotators, which form the ground truth. With an approximate ratio of 4:1, these exact matching video-to-shop pairs are split into two disjoint, non-overlapping sets (training and testing). Meanwhile, in order to reduce the impact of background and achieve more accurate clothing localization, Faster-RCNN is also used to extract a set of clothing proposals for the online shopping images. \textbf{Evaluation Measure:} Since the category is assumed to be known in advance, the experiments are performed within each category. Following the evaluation criterion of \cite{ICCV15_Wheretobuy,ICMR16_Product}, the retrieval performance is evaluated based on \emph{top-k accuracy}, which is the ratio of correct matches within the top $k$ returned results to the total number of searches. Note that once there is at least one exactly matching product to the query among the top $k$ results, it is regarded as a correct match in our setup. For simplicity, the weighted average is used for evaluation. \begin{figure}[tb] \centering \includegraphics[scale=0.23]{./images/F2.pdf} \caption{Performance Comparison of Representation Networks} \label{fig:F2} \vspace{-0.2in} \end{figure} \subsection{Performance of Representation Networks} In this subsection, we compare the performance of the representation networks with other baselines: 1) average pooling, 2) max pooling, 3) Fisher Vector \cite{CVPR07_Fishvector} and 4) VLAD \cite{CVPR10_Vlad}. We utilize 256 components for the Fisher vectors and 256 centers for VLAD, as is common practice \cite{CVPR10_Vlad,IJCV13_Fishvector}.
The PCA projections, GMM components of the Fisher vectors, and K-means centers of VLAD are learned from approximately 18,000 sampled clothing regions in the training set. For these baselines, average pooling and max pooling are applied directly to the CNN features of the clothing trajectories, while Fisher vectors and VLAD are used to encode the CNN features of the shopping images and clothing trajectories, respectively. The similarity is then estimated by the single similarity network. In addition, the impact of different numbers of LSTM levels (1, 3 and 4) is also investigated, denoted as LSTM1, LSTM3 and LSTM4, respectively. \czq{For LSTM-based networks, the final output from the similarity network is used as the final matching result.} The performance comparison is shown in Fig. \ref{fig:F2}. From Fig. \ref{fig:F2}, we can see that the general performance increases as $k$ becomes larger, since a query is treated as a correct match once there is at least one exactly matching item within the top $k$ returned results. But we also notice that the top-10 performance is still far from satisfactory, since it is still a challenging task to match clothes appearing in videos to online shopping images. There exists significant discrepancy between these cross-domain sources, including diverse visual appearance, cluttered backgrounds, occlusion, varying lighting conditions, motion blur in the video, and so on. The performance of average pooling is better than that of max pooling. Both Fisher Vector and VLAD perform better than the average pooling representation, and VLAD performs slightly better than Fisher Vector. Overall, all LSTM-based networks outperform the pooling-based methods. The proposed AsymNet achieves the best performance, significantly higher than the two pooling approaches. As the number of LSTM levels increases, the performance first improves and then drops once the number of levels exceeds two. Our AsymNet therefore adopts the two-level LSTM structure. \subsection{Structure Selection of Similarity Networks} \begin{figure}[tb] \centering \includegraphics[scale=0.19]{./images/F3.pdf} \caption{The top-20 retrieval accuracy (\%) of the proposed AsymNet with different structures.} \label{fig:F3} \vspace{-0.2in} \end{figure} To investigate the structure of the similarity network, we vary the number of levels and fusion nodes in the similarity network, while keeping all other common settings fixed. We evaluate two types of architectures: 1) homogeneous branches: all fusion nodes have the same number of branches; 2) varying branches: the number of branches is inconsistent across layers. For the homogeneous setting, architectures ranging from a one-level flat structure with 32 fusion nodes to a hierarchical structure with five levels (62 fusion nodes) are tested. For the varying branches, we compare six networks with branches in increasing order: 4-8, 2-4-4, 2-2-2-4, and decreasing order: 8-4, 4-4-2, 4-2-2-2, respectively. The performance of these architectures is shown in Fig. \ref{fig:F3}, in which a structure is represented in the form \#Levels:\#Branches in each level from the leaves to the root of the tree, connected with hyphens. From this figure, we can see that the overall performance is significantly improved as the number of epochs increases. As the training proceeds, the parameters in the fusion nodes begin to grow in magnitude, which means that the weights of the fusion nodes are becoming more and more reasonable.
Meanwhile, the performance improves significantly as the number of epochs increases. However, the improvement is not obvious after 4 epochs, since the weights of the fusion nodes tend to become stable; the weight adjustment becomes subtle because the overall weights are already well optimized. When the one-level flat structure is adopted, the tree only has leaves, and the entire similarity network reduces to a single averaged generalized linear model at the root of the tree. As the training proceeds, the parameters in the fusion nodes begin to grow in magnitude, and when the fusion nodes begin to take effect, the performance of the system is boosted. We also notice that the general performance increases when more levels of fusion nodes are involved. The boost is quite conspicuous for the first three layers, while the improvement becomes minor once a multi-level structure is formed. This indicates that the similarity network becomes stable when there are more than three levels of fusion nodes. \subsection{Performance of Similarity Learning Networks} In order to verify the effectiveness of our similarity network, we compare the performance of the proposed method with other methods in which fusion nodes are not included. These baselines include: the final matching result is determined by the average (Avg) or the maximum (Max) of all single similarity networks, or by the last (Last) single similarity network. In addition, the recent work KVM \cite{CVPR2016_Keyvolume} is also considered, in which the key volume proposal method used in KVM is directly utilized to fuse the fc1 features in the SSN. We formulate the similarity learning task as a binary classification problem, so that the same loss function as in KVM can still be used. The top-20 retrieval performance comparison is shown in Fig. \ref{fig:S1}. From this figure, we can see that the performance of Avg is better than that of Max, and Last performs better than both Avg and Max. The main reason is that the last hidden state learns the temporal information of the whole clothing trajectory, while the noise in clothing trajectories greatly affects the performance of Avg and Max. KVM assumes that discriminative information occurs sparsely in a few key volumes, while other volumes are irrelevant to the final result. Although KVM is able to learn the most critical parts from clothing trajectories, it is too simple to consider the whole trajectory, and the different local viewpoints within a trajectory are not well considered. The proposed AsymNet outperforms these baselines with significantly higher performance. \begin{figure}[tb] \centering \includegraphics[scale=0.23]{./images/S1.pdf} \caption{Performance of Similarity Learning Network} \label{fig:S1} \vspace{-0.2in} \end{figure} \begin{table*}[!t] \centering \caption{The top-20 retrieval accuracy (\%) of the proposed AsymNet compared with state-of-the-art approaches.
The notations represent the numbers of images (\# I), video trajectories (\# TJ), queries (\# Q) and their corresponding results (\# R).} \label{tab:performance} \begin{tabularx}{17.5cm}{Xcccccccccc} \hline\hline \textbf{Category} &\textbf{\# I} &\textbf{\# TJ} &\textbf{\# Q} & \textbf{\# R} &\textbf{AL \cite{NIPS12_AlexNet}} & \textbf{DS \cite{MM14_DS}} & \textbf{FT \cite{ICCV15_Wheretobuy}} &\textbf{CS \cite{TOG15_CS}} &\textbf{RC \cite{ICMR16_Product}} &\textbf{AsymNet} \\ \hline Outwear & 18,144 & 5,581 & 1,116 & 3,628 & 17.31 & 22.94 & 26.97 & 27.61 & 31.80 & \textbf{42.58}\\ Dress & 14,128 & 4,346 & 869 & 2,825 & 22.93 & 24.90 & 25.56 & 29.33 & 34.34 & \textbf{49.58}\\ Top & 7,155 & 2,201 & 440 & 1,431 & 17.45 & 24.83 & 25.26 & 29.14 & 32.94 & \textbf{35.12}\\ Mini skirt & 6,571 & 2,021 & 404 & 1,314 & 23.35 & 24.83 & 27.47 & 29.50 & 31.30 & \textbf{32.48}\\ Hat & 6,534 & 2,010 & 402 & 1,306 & 15.82 & 13.98 & 20.19 & 25.87 & 33.81 & \textbf{35.12}\\ Sunglass & 6,133 & 1,886 & 377 & 1,226 & 11.85 & 7.46 & 11.35 & 11.83 & \textbf{12.26} & 12.16\\ Bag & 5,257 & 1,617 & 323 & 1,051 & 23.78 & 27.63 & 27.47 & 25.67 & 25.48 & \textbf{36.82}\\ Skirt & 4,453 & 1,370 & 274 & 890 & 19.79 & 25.06 & 22.44 & 24.50 & 24.43 & \textbf{41.75}\\ Suit & 3,906 & 1,201 & 240 & 781 & 18.65 & 25.18 & 19.72 & 25.29 & 26.60 & \textbf{42.08}\\ Shoes & 3,358 & 1,033 & 206 & 671 & 11.45 & 24.10 & 23.92 & 25.03 & \textbf{27.58} & 26.95\\ Shorts & 3,249& 999 & 199 & 649 & 11.15 & 5.99 & 13.90 & 14.84 & \textbf{16.62} & 13.74\\ Pants & 2,738 & 842 & 168 & 547 & 17.57 & 22.54 & 25.77 & 29.49 & 28.36 & \textbf{32.13}\\ Breeches & 2,044& 628 & 125 & 408 & 23.45 & 22.99 & 25.03 & 28.52 & 28.76 & \textbf{48.28}\\ High shoots& 2,007 & 617 & 123 & 401 & 12.05 & 13.11 & 14.57 & 15.46 & \textbf{16.04} & 14.94\\ \hline Overall& 85,677 & 26,352 & 5,266 & 17,128 & 18.36 & 21.44 & 23.47 & 25.73 & 28.73 & \textbf{36.63}\\ \hline\hline \end{tabularx} \end{table*} \subsection{Comparison With State-of-the-art Approaches} To verify the effectiveness of the proposed AsymNet, we compare it with the following state-of-the-art approaches: 1) \textbf{AlexNet (AL)} \cite{NIPS12_AlexNet}: the activations of the fully-connected layer fc6 (4,096-d) are used to form the feature representation. 2) \textbf{Deep Search (DS)} \cite{MM14_DS}: an attribute-aware fashion-related retrieval system based on a convolutional neural network. 3) \textbf{F.T. Similarity (FT)} \cite{ICCV15_Wheretobuy}: category-specific two-layer neural networks are trained to predict whether two features extracted by AlexNet represent the same product item. 4) \textbf{Contrastive \& Softmax (CS)} \cite{TOG15_CS}: based on the Siamese network, where the traditional contrastive loss function and softmax loss function are used. 5) \textbf{Robust contrastive loss (RC)} \cite{ICMR16_Product}: multi-task fine-tuning is adopted, in which the loss is a combination of the contrastive and softmax losses. For clothing trajectories in videos, we calculate the average similarity to obtain the most similar shopping images. The cosine similarity is used in all these methods except FT. The detailed performance comparison is listed in Table \ref{tab:performance}. AsymNet achieves the highest top-20 retrieval accuracy, significantly outperforming AlexNet, whose performance it almost doubles.
The performance of AlexNet \cite{NIPS12_AlexNet} and Deep Search \cite{MM14_DS} is unsatisfactory, as they only use convolutional features to retrieve images and do not learn the underlying similarity. The two contrastive-loss-based methods (CS \cite{TOG15_CS} \& RC \cite{ICMR16_Product}) perform slightly better than FT \cite{ICCV15_Wheretobuy}, since the contrastive loss has a stronger capability to identify minor differences. RC performs better than CS because it exploits the category information of clothing. For some categories with no obvious differences within clothing trajectories, RC performs slightly better than AsymNet. Overall, our proposed approach shows clearly better performance than these approaches. This is mainly because AsymNet can handle the temporal dynamic variety existing in videos, and it integrates the discriminative information of video frames by automatically adjusting the fusion strategy. Three examples with the top-5 retrieval results of the proposed AsymNet are illustrated in Fig. \ref{fig:F4}, where the exact matches are marked with a green tick. It is relatively easy to retrieve visually similar clothes, but much more challenging to retrieve the identical item, especially when the query comes from videos. For the first two rows, the returned results are visually similar; however, some detailed decorative patterns are different, which are labelled with red boxes. In the last row, although the clothing style is the same, the color is different, so it is not treated as a correct match. \begin{figure}[tb] \centering \includegraphics[scale=0.33]{./images/F4.pdf} \caption{Examples with top-5 retrieval results of the proposed AsymNet. The differences in terms of detailed decorative patterns are labelled with red boxes.} \label{fig:F4} \vspace{-0.2in} \end{figure} \subsection{Efficiency} To investigate the efficiency of the approximate training method, we compare it with the traditional training procedure. All experiments are conducted on a server with 24 Intel(R) Xeon(R) E5-2630 2.30GHz CPUs, 64GB RAM and one NVIDIA Tesla K20 GPU. In our experiment, \czq{inference is performed one sample at a time}: the image feature network processes 200 images/sec, the video feature network processes 0.5 trajectories/sec, and the similarity network performs 345 pairs/sec. The computation can be further pipelined and distributed for large-scale applications. The approximate training costs only 1/25 of the training time of the traditional approach, while the effectiveness of AsymNet is not affected by the approximate training method. The training of our AsymNet model takes only around 12 hours to converge. \section{Conclusion} In this paper, a novel deep neural network, AsymNet, is proposed to exactly match clothes in videos to online shops. The challenge of this task lies in the discrepancy between the cross-domain sources, clothing trajectories in videos and online shopping images, and in the strict requirement of exact matching. This work is the first exploration of the Video2Shop application. In our future work, we will integrate clothing attributes to further improve the performance. {\small \bibliographystyle{ieee}
\section{Introduction} Mirror symmetry has made powerful and striking predictions in enumerative geometry. It has led to groundbreaking results in algebraic and differential geometry, number theory, gauge theory and other branches of mathematics. Strominger-Yau-Zaslow \cite{SYZ} proposed that mirror symmetry can be understood as torus duality. It conjectured a geometric construction of mirror manifolds and a canonical transformation to derive the homological mirror symmetry conjecture \cite{Kont-HMS}. There have been a lot of breakthroughs in SYZ mirror symmetry. The Gross-Siebert program \cite{GS07} gave a purely algebraic method to reconstruct the mirror manifolds. Auroux \cite{Auroux07,Auroux09} provided a symplectic approach to SYZ and the Gross-Siebert program. Moreover, Floer theory of wall-crossing was developed in Pascaleff-Tonkonog \cite{PT17} based on the work of Seidel \cite{Seidel-lect}. Furthermore, based on the works of Fukaya-Oh-Ohta-Ono \cite{FOOO,FOOO-T}, Seidel \cite{Seidel-g2} and Akaho-Joyce \cite{AJ}, deformation and moduli theory of Lagrangian immersions are being developed by Cho-Hong-Lau \cite{CHL,CHL2,HL}, which enhance and generalize the SYZ program. Floer theory of generic singular SYZ fibers and its relation with wall-crossing were understood in \cite{HKL18,ERT}. Finally, the family Floer theory initiated by Fukaya \cite{Fuk-famFl} and further developed by Tu \cite{Tu-reconstruction,Tu-FM} and Abouzaid \cite{Ab-famFl1,Ab-famFl2} provides a canonical functor which realizes the SYZ mirror transformation. In view of these recent developments, SYZ mirror symmetry can be understood via a local-to-global approach. First we need to understand the SYZ transformation for local geometries around singular Lagrangians. Second we need to glue the local mirrors using Floer-theoretical methods. Toric Calabi-Yau manifolds and their mirrors provide a rich source of local models. Wall-crossing and the SYZ mirror construction have been understood due to the works of Auroux \cite{Auroux07, Auroux09}, Chan-Lau-Leung \cite{CLL}, Abouzaid-Auroux-Katzarkov \cite{AAK} and Chan-Cho-Lau-Tseng \cite{CCLT2}. Using the local models, geometric transitions have been studied by Castano-Bernard and Matessi \cite{CBM3} and other groups \cite{L13,CPU,KL1,KL2,L18}. In this paper we study SYZ for the hyper-Kähler analog of toric manifolds. Analogous to toric manifolds, they are obtained as hyper-Kähler quotients of $T^*\mathbb{C}^n$. Typical examples of hypertoric manifolds include $T^*\mathbb{C}\mathbb{P}^n$ and crepant resolutions of $A_n$ singularities. We expect that they should provide useful local models to understand mirror symmetry for holomorphic symplectic manifolds. The structure of the paper is as follows. In Section \ref{review}, we review the definition and properties of hypertoric varieties. We construct Lagrangian fibrations on hypertoric manifolds in Section \ref{sec:fib}, using the techniques of Gross \cite{Gross-eg} and Goldstein \cite{Goldstein} via symplectic reduction, and of Abouzaid-Auroux-Katzarkov \cite{AAK} via a Moser argument. The Lagrangian fibrations have codimension-one amoeba-like discriminant loci. We carry out the SYZ mirror construction for hypertoric varieties in Section \ref{SYZ}, with a brief review of SYZ in Section \ref{sec:SYZ}. We first analyze the walls over which the Lagrangian torus fibers bound holomorphic discs of Maslov index $0$ (Section \ref{sec:wall}). The walls divide the base of a Lagrangian fibration into chambers (Section \ref{sec:chambers}).
We then find all the holomorphic discs of Maslov index $2$ bounded by a fiber in each chamber (Section \ref{sec:disc2}) and show their regularity (Section \ref{regularity}). As a result we obtain the generating functions of open Gromov--Witten invariants, which count these holomorphic discs (Section \ref{sec:GF}). We compactify the manifold in order to have sufficiently many boundary divisors (Section \ref{sec:cptfy}). Finally, in Section \ref{sec:mirror}, we construct an SYZ mirror variety as the spectrum of the ring of generating functions associated to boundary divisors. By construction the mirror we obtain is affine, and it is singular in general. It should be viewed as the affinization of a smooth mirror. A resolution is necessary to better understand the geometry. We glue together a resolution using local charts coming from the wall and chamber structure of the SYZ base. The gluing can be explained using Floer-theoretical techniques as in \cite{Seidel-lect,PT17,HL}, but we leave this to future work. The variety admits another resolution by a multiplicative hypertoric variety (Section \ref{sec:multiplicative}). In general these resolutions are topologically different. We conclude with the following theorems. \begin{theorem} Let $\mathfrak{M}_{u,\lambda}$ be a smooth hypertoric variety, and $D^-\subset\mathfrak{M}_{u,\lambda}$ a certain anti-canonical divisor (given by Equation (\ref{D-})). The SYZ mirror $\mathfrak{M}_{u,\lambda}^{\vee}$ of the pair $(\mathfrak{M}_{u,\lambda},D^-)$ is the affine variety \[ \mathfrak{M}_{u,\lambda}^{\vee}=\left\{((\bm{u}_1,\bm{v}_1,\ldots,\bm{u}_d,\bm{v}_d), (\bm{Z}_{1},\ldots,\bm{Z}_{d}))\in\mathbb{C}^{2d}\times (\mathbb{C}^{\times})^d \mid \bm{u}_i\bm{v}_i=\prod_{k\in \bm{j}}(1+\bm{Z}_k), i=1,\ldots,d\right\}, \] which admits a canonical resolution given by the wall and chamber structure of the SYZ base. \end{theorem} The notations are explained in Section \ref{sec:GF}. \begin{theorem} Let $\mathfrak{M}$ be a smooth hypertoric variety which is obtained as a hyper-Kähler quotient of $T^*\mathbb{C}^n$ by a sub-torus $K\subset T^n$. Its SYZ mirror is birational to the multiplicative hypertoric variety $\bm{\mu}^{-1}(q)//_{\chi} K_{\mathbb{C}}$, where $\bm{\mu}$ is the multiplicative moment map, $q\in K_\mathbb{C}$ is determined by the K\"ahler parameters of $\mathfrak{M}$, and $\chi\in\mathrm{Hom}(K_\mathbb{C},\mathbb{C}^\times)$ is a generic character. \end{theorem} Below we introduce some important related works and questions that we wish to understand in the future. Closed-string equivariant mirror symmetry for hypertoric manifolds was found by McBreen and Shenfeld \cite{MS}. They derived a presentation of the $T^d\times\mathbb{C}^\times$-equivariant quantum cohomology of a hypertoric manifold and related it to the Gauss-Manin connection of the mirror moduli. To understand the equivariant quantum cohomology from the SYZ perspective of this paper, we need to study equivariant Floer theory. In a recent preprint \cite{MW18}, McBreen and Webster showed that a category of equivariant coherent sheaves on a hypertoric variety is derived equivalent to the category of DQ-modules on the corresponding \textit{Dolbeault hypertoric variety}, establishing a version of homological mirror symmetry in the reverse direction. Dolbeault hypertoric varieties as defined in \cite{MW18} are analogs of hyper-Kähler quotients of the Ooguri-Vafa space and carry canonical special Lagrangian torus fibrations.
In a subsequent work \cite{GMW}, Gammage, McBreen and Webster proved homological mirror symmetry for multiplicative hypertoric varieties. Moreover, they conjectured that multiplicative hypertoric varieties are complements of certain anti-canonical divisors in additive hypertoric varieties $\mathfrak{M}_{u,\lambda}$ (Conjecture 1.7 of \cite{GMW}). It is an interesting direction to understand the relation with the anti-canonical divisor $D^-$ used in this paper, and the mirror transformation of objects from the SYZ perspective. Furthermore, we believe hypertoric varieties are useful for understanding mirror symmetry for cotangent bundles of smooth flag varieties. Toric degenerations of flag varieties were used to construct their mirrors by Nishinou-Nohara-Ueda \cite{NNU,NU}. It is reasonable to expect that mirrors of the total spaces of cotangent bundles of flag varieties are closely related to the mirrors of (singular) hypertoric varieties. \subsection*{Acknowledgment} The first named author is grateful to Conan Leung for bringing his interest to mirror symmetry for hypertoric varieties. The authors thank Yoosik Kim and Hansol Hong for useful discussions. The work of the first named author is partially supported by a Simons collaboration grant. \section{Review of hypertoric varieties} \label{review} In this section, we review the definition and basic properties of hypertoric varieties. We refer to \cite{BD,HS,Proudfoot} for a more detailed account of the subject. All material in this section, except Proposition \ref{prop:complement}, is from the existing literature. \subsection{Hypertoric varieties} \label{sec:hypertoric} Let $\mathfrak{t}^n$ and $\mathfrak{t}^d$ be real vector spaces of dimension $n$ and $d$, respectively. Let $\mathfrak{t}^n_{\mathbb{Z}}\subset\mathfrak{t}^n$ and $\mathfrak{t}^d_{\mathbb{Z}}\subset\mathfrak{t}^d$ be the integer lattices. Let $\{e_1,\ldots,e_n\}\subset\mathfrak{t}^n_{\mathbb{Z}}$ be an integer basis and let $\{\check{e}_1,\ldots,\check{e}_n\}\subset (\mathfrak{t}^n_{\mathbb{Z}})^*$ be the dual basis. Given a collection $u=\{u_1,\ldots,u_n\}\subset\mathfrak{t}^d_{\mathbb{Z}}$ of $n$ integer vectors that span $\mathfrak{t}^d_{\mathbb{Z}}$ over $\mathbb{Z}$, we define a map $\pi:\mathfrak{t}^n\to\mathfrak{t}^d$ by $\pi(e_i)=u_i$. We have the following exact sequences: \begin{equation} \label{ses1} 0\longrightarrow\mathfrak{k}\overset{\iota}{\longrightarrow}\mathfrak{t}^n\overset{\pi}{\longrightarrow}\mathfrak{t}^d\longrightarrow 0, \end{equation} \begin{equation} \label{ses2} 0\longleftarrow(\mathfrak{k})^*\overset{\iota^*}{\longleftarrow}(\mathfrak{t}^n)^*\overset{\pi^*}{\longleftarrow}(\mathfrak{t}^d)^*\longleftarrow 0, \end{equation} where $\mathfrak{k}=\ker{\pi}$, and (\ref{ses2}) is the dual sequence of (\ref{ses1}). Exponentiating (\ref{ses1}) gives an exact sequence of real tori \begin{equation} \label{ses3} 0\longrightarrow K\longrightarrow T^n\longrightarrow T^d\longrightarrow 0. \end{equation} Let $T^*\mathbb{C}^n$ be equipped with its standard hyper-Kähler structure. Let $(z,w)=(z_1,w_1,\ldots,z_n,w_n)$ be the standard coordinates on $T^*\mathbb{C}^n$. We consider $T^*\mathbb{C}^n$ equipped with the Kähler form $\omega_{\mathbb{R}}$ \[ \omega_{\mathbb{R}}=\frac{\sqrt{-1}}{2}\sum_{i=1}^n (dz_i\wedge d\bar{z}_i+dw_i\wedge d\bar{w}_i), \] and holomorphic symplectic form $\omega_{\mathbb{C}}$ \[ \omega_{\mathbb{C}}=\sum_{i=1}^n dz_i\wedge dw_i.
\] Let $\vec{t}=(t_1,\ldots,t_n)\in T^n$ act on $T^*\mathbb{C}^n$ by \[ \vec{t}\cdot(z,w)=(t_1z_1,t_1^{-1}w_1,\ldots,t_nz_n,t_n^{-1}w_n), \] preserving the hyper-Kähler structure. The hyper-Kähler moment map \[ (\mu_{\mathbb{R}},\mu_{\mathbb{C}}): T^*\mathbb{C}^n \to (\mathfrak{k})^*\oplus (\mathfrak{k}_{\mathbb{C}})^* \] for the restriction to $K$ of the $T^n$-action on $T^*\mathbb{C}^n$ is given by \[ \mu_{\mathbb{R}}(z,w)=\frac{1}{2}\sum_{i=1}^n(|z_i|^2-|w_i|^2)\iota^*\check{e}_i, \qquad \mu_{\mathbb{C}}(z,w)=\sum_{i=1}^n(z_iw_i)\iota^*_{\mathbb{C}}\check{e}_i. \] \begin{definition} \label{def:hypertoric} Given a collection of primitive integer vectors $u$ and parameters $\lambda=(\lambda_{\mathbb{R}},\lambda_{\mathbb{C}})\in(\mathfrak{k})^*\oplus (\mathfrak{k}_{\mathbb{C}})^*$, the hyper-Kähler quotient \[ \mathfrak{M}_{u,\lambda}=\left(\mu_{\mathbb{R}}^{-1}(\lambda_{\mathbb{R}})\cap\mu_{\mathbb{C}}^{-1}(\lambda_{\mathbb{C}})\right)/K \] is called a \textit{hypertoric variety}\footnote{A usual convention is setting $\lambda_{\mathbb{C}}=0$ in the definition. In this paper we work with a generic complex structure and do not make this assumption.}. \end{definition} Alternatively, $\mathfrak{M}_{u,\lambda}$ can be constructed as the GIT quotient \[ \mathfrak{M}_{u,\lambda}=\mu_{\mathbb{C}}^{-1}(\lambda_{\mathbb{C}})/\kern-0.2em/_{\lambda_{\mathbb{R}}} K_{\mathbb{C}}=\mathrm{Proj} \left(\bigoplus_{k=0}^{\infty}\mathbb{C}[\mu_{\mathbb{C}}^{-1}(\lambda_{\mathbb{C}})]^{\lambda_{\mathbb{R}}^k}\right), \] where $K_{\mathbb{C}}$ is the complexification of $K$, and $\lambda_{\mathbb{R}}\in(\mathfrak{k})^*$ is understood as a character $\lambda_{\mathbb{R}}:K_{\mathbb{C}}\to \mathbb{C}^{\times}$. The quotient torus $T^d=T^n/K$ acts on $\mathfrak{M}_{u,\lambda}$ with the hyper-Kähler moment map \[ (\bar{\mu}_{\mathbb{R}},\bar{\mu}_{\mathbb{C}}):\mathfrak{M}_{u,\lambda}\to (\mathfrak{t}^d)^*\oplus (\mathfrak{t}^d_{\mathbb{C}})^* \] given by \[ (\bar{\mu}_{\mathbb{R}},\bar{\mu}_{\mathbb{C}})[z,w]=\frac{1}{2}\sum_{i=1}^n(|z_i|^2-|w_i|^2+\hat{\lambda}_{\mathbb{R},i})\check{e}_i\oplus\sum_{i=1}^n(z_iw_i+\hat{\lambda}_{\mathbb{C},i})\check{e}_i\in\mathrm{Ker}{(\iota^*)}\oplus\mathrm{Ker}{(\iota^*_{\mathbb{C}})}=(\mathfrak{t}^d)^*\oplus (\mathfrak{t}^d_{\mathbb{C}})^*, \] where $((\hat{\lambda}_{\mathbb{R},1},\ldots,\hat{\lambda}_{\mathbb{R},n}),(\hat{\lambda}_{\mathbb{C},1},\ldots,\hat{\lambda}_{\mathbb{C},n}))\in(\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$ is a lift of $\lambda$. Note that this map is always surjective. \begin{example} Let $\{u_1,\ldots,u_d\}\subset\mathfrak{t}^d$ be a primitive integer basis. Define the map $\pi:\mathfrak{t}^{d+1}\to\mathfrak{t}^d$ by $\pi(e_i)=u_i$ for $i=1,\ldots,d$, and $\pi(e_{d+1})=u_{d+1}:=\sum_{j=1}^d (-u_j)$. $K\hookrightarrow T^{d+1}$ is then the diagonal sub-torus. If we set $\lambda_{\mathbb{R}}\in(\mathfrak{k})^*$ to be a regular value, and $\lambda_{\mathbb{C}}=0$, then the hypertoric variety $\mathfrak{M}_{u,\lambda}$ is $T^*\mathbb{P}^d$ (equipped with the standard complex structure). \end{example} \begin{example} Let $u_1\in\mathfrak{t}^1$ be a primitive integer vector. Define $\pi:\mathfrak{t}^{n+1}\to\mathfrak{t}^1$ by $\pi(e_i)=u_1$ for $i=1,\ldots,n+1$. $K\hookrightarrow T^{n+1}$ is then the subtorus \[ K=\{(t_1,\ldots,t_{n+1})\in T^{n+1}| \prod_{i=1}^{n+1} t_i=1\}.
\] For $\lambda_{\mathbb{R}}$ a regular value and $\lambda_{\mathbb{C}}=0$, the hypertoric variety $\mathfrak{M}_{u,\lambda}$ is $\widetilde{\mathbb{C}^2/\mathbb{Z}_{n+1}}$, the crepant resolution of the $A_{n}$ singularity $\mathbb{C}^2/\mathbb{Z}_{n+1}$. \end{example} \subsection{Hyperplane arrangements} Let $\mathfrak{M}_{u,\lambda}$ be a hypertoric variety. Denote by $\mathcal{H}_{\mathbb{R}}=\{H_{\mathbb{R},i}\}_{i=1}^n$ and $\mathcal{H}_{\mathbb{C}}=\{H_{\mathbb{C},i}\}_{i=1}^n$ the collections of hyperplanes \[ H_{\mathbb{R},i}=\{s\in(\mathfrak{t}^d)^*|\left<s,u_i\right>-\hat{\lambda}_{\mathbb{R},i}=0\}, \] and \[ H_{\mathbb{C},i}=\{v\in(\mathfrak{t}^d_{\mathbb{C}})^*|\left<v,u_i\right>-\hat{\lambda}_{\mathbb{C},i}=0\}. \] $\mathcal{H}_{\mathbb{R}}$ and $\mathcal{H}_{\mathbb{C}}$ are called the \textit{associated hyperplane arrangements} of $\mathfrak{M}_{u,\lambda}$. The hyperplane arrangements $\mathcal{H}_{\mathbb{R}}$ and $\mathcal{H}_{\mathbb{C}}$ are independent of the choice of the lift of $\lambda$ up to a translation, and they determine $\mathfrak{M}_{u,\lambda}$ up to a canonical isomorphism. The following definition is important for the smoothness of hypertoric varieties. \begin{definition} A hyperplane arrangement $\mathcal{H}_{\mathbb{R}}$ (resp. $\mathcal{H}_{\mathbb{C}}$) is called \textit{simple} if every subset of $k$ hyperplanes with nonempty intersection intersects in codimension $k$. $\mathcal{H}_{\mathbb{R}}$ (resp. $\mathcal{H}_{\mathbb{C}}$) is called \textit{unimodular} if every collection of $d$ linearly independent vectors $\{u_{i_1},\ldots,u_{i_d}\}$ spans $\mathfrak{t}^d_{\mathbb{Z}}$ over $\mathbb{Z}$. \end{definition} \begin{remark} \label{rmk:holfib} The holomorphic moment map $\bar{\mu}_{\mathbb{C}}:\mathfrak{M}_{u,\lambda}\to(\mathfrak{t}^d_{\mathbb{C}})^*$ is a holomorphic $(\mathbb{C}^{\times})^d$-fibration, i.e. generic fibers of $\bar{\mu}_{\mathbb{C}}$ are biholomorphic to $(\mathbb{C}^{\times})^d$. If $v_0\in (\mathfrak{t}^d_{\mathbb{C}})^*$ is a point such that $v_0\in\bigcap_{i\in I}H_{\mathbb{C},i}$ for some nonempty subset $I\subset\{1,\ldots,n\}$ and $v_0\notin H_{\mathbb{C},i}$ for $i\notin I$, then $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cong (\mathbb{C}\cup_0\mathbb{C})^{\min\{|I|,d\}}\times(\mathbb{C}^{\times})^{\max\{d-|I|,0\}}$, where $\mathbb{C}\cup_0\mathbb{C}$ denotes the union of two affine lines intersecting transversely at the origin. \end{remark} See Figure \ref{fig:hyperplane-arrangement} for examples of hyperplane arrangements. \begin{figure}[htb!] \includegraphics[scale=0.5]{hyperplane-arrangement.pdf} \caption{Examples of hyperplane arrangements. The left corresponds to the resolution $\widetilde{\mathbb{C}^2/\mathbb{Z}_{4}}$. The middle corresponds to $T^*\mathbb{C}\mathbb{P}^2$. The right corresponds to a hypertoric variety which contains both $T^*\mathbb{F}_1$ and $T^*\mathbb{C}\mathbb{P}^2$.} \label{fig:hyperplane-arrangement} \end{figure} \subsection{Geometry and topology of hypertoric varieties} Let $\mathcal{A}=\{A_i\}_{i=1}^n$ be the collection of affine subspaces $A_i=H_{\mathbb{R},i}\times H_{\mathbb{C},i}\subset (\mathfrak{t}^d)^*\oplus (\mathfrak{t}^d_{\mathbb{C}})^*$. We have the following necessary and sufficient conditions for $\mathfrak{M}_{u,\lambda}$ to be an orbifold or a smooth manifold: \begin{theorem}[{\cite[Theorem 3.2, 3.3]{BD}}] \label{thm:smooth} $\mathfrak{M}_{u,\lambda}$ is an orbifold with at worst Abelian quotient singularities if and only if every $d+1$ distinct elements in $\mathcal{A}$ have empty intersection.
It is a smooth manifold if and only if, in addition, whenever $d$ distinct elements $A_{i_1},\ldots,A_{i_d}$ have nonempty intersection, the set $\{u_{i_1},\ldots,u_{i_d}\}$ spans $\mathfrak{t}^d_{\mathbb{Z}}$ over $\mathbb{Z}$. \end{theorem} \begin{corollary} For $\lambda_{\mathbb{C}}=0$, $\mathfrak{M}_{u,\lambda}$ is a smooth manifold if and only if $\mathcal{H}_{\mathbb{R}}$ is both simple and unimodular. \end{corollary} \begin{remark} The expression for the SYZ mirror in Theorem \ref{thm:SYZmir} still makes sense even when the hyperplane arrangements are neither simple nor unimodular. We speculate that it is useful for the study of hypertoric degenerations. \end{remark} For a generic choice of $\lambda_{\mathbb{C}}$, $\mathfrak{M}_{u,\lambda}$ is simply an affine variety. \begin{theorem}[{\cite[Theorem 5.1]{BD}}] \label{thm:cplxstr} Let $\mathfrak{M}_{u,\lambda}$ be a hypertoric orbifold, and suppose $\mathcal{H}_{\mathbb{C}}$ is simple. Then, $\mathfrak{M}_{u,\lambda}$ equipped with the complex structure inherited from $T^*\mathbb{C}^n$ is biholomorphic to the affine variety $\mathrm{Spec}\left(\mathbb{C}[W]^{K_{\mathbb{C}}}\right)$, where $W\subset T^*\mathbb{C}^n\times\mathbb{C}^d$ is defined by the equations \[ z_iw_i=\left<v,u_i\right>-\hat{\lambda}_{\mathbb{C},i}, \quad i=1,\ldots,n, \] and $K_{\mathbb{C}}$ acts on $T^*\mathbb{C}^n\times\mathbb{C}^d$ by $\vec{t}\cdot(z,w,v)=(t_1 z_1,t_1^{-1}w_1,\ldots,t_nz_n,t_n^{-1}w_n,v)$. \end{theorem} In general it is difficult to write down an explicit hyper-Kähler metric. For a hypertoric variety, the K\"ahler metric descends from the standard metric on $T^*\mathbb{C}^n$ and has a simple expression. \begin{theorem}[{\cite[Theorem 8.3]{BD}}] Let $s_i=|z_i|^2-|w_i|^2$, $v_i=z_iw_i$, and $r_i=\sqrt{s_i^2+4v_i\bar{v}_i}$. Then, on the open dense subset of $\mathfrak{M}_{u,\lambda}$ where the $T^d$-action is free, the induced Kähler form $\omega$ is given by \begin{equation} \label{KP} \omega=\frac{1}{4}dd^c(2\bar{\mu}_{\mathbb{R}},\bar{\mu}_{\mathbb{C}})^*\left(\sum_{i=1}^n(r_i+2\hat{\lambda}_{\mathbb{R},i}\ln(s_i+r_i))\right), \end{equation} where $d^c=\sqrt{-1}(\bar{\partial}-\partial)$. \end{theorem} \subsection{Circuits and primitive curve classes} \label{sec:circuits} The SYZ mirrors that we are going to construct depend on K\"ahler parameters, which record the symplectic areas of primitive curve classes in $H_2(\mathfrak{M}_{u,\lambda};\mathbb{Z})$. The following definition is crucial for understanding primitive curve classes in hypertoric varieties. \begin{definition}[{\cite{MS}}] A \textit{circuit} $S\subset \{1,\ldots,n\}$ in $\mathcal{H}_{\mathbb{R}}$ is a minimal subset satisfying $$\bigcap_{i\in S} H_{\mathbb{R},i}=\emptyset.$$ \end{definition} A circuit $S$ admits a unique splitting $S=S^+\coprod S^-$ (up to swapping $S^+$ and $S^-$), which is characterized by the equation \[ \sum_{i\in S^+} u_i-\sum_{i\in S^- } u_i=0 \in \mathfrak{t}^d. \] For each circuit $S$, we fix the splitting $S=S^+\coprod S^-$ such that if we set \[ \beta_{S}=\sum_{i\in S^+} e_i - \sum_{i\in S^-} e_i, \] then $\hat{\lambda}_{\mathbb{R}}(\beta_{S})>0$ for any lift $\hat{\lambda}_{\mathbb{R}}\in (\mathfrak{t}^n)^*$ of $\lambda_{\mathbb{R}}$. $\beta_{S}$ is a primitive class in $\mathfrak{k}_{\mathbb{Z}}=H_2(\mathfrak{M}_{u,\lambda};\mathbb{Z})$. It can be understood as a curve class obtained from gluing holomorphic discs emanating from the hyperplanes indexed by $S$. We denote by $q^{\beta_{S}}$ the Kähler parameter associated to $\beta_{S}$.
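To illustrate these notions, consider the hypertoric variety $T^*\mathbb{P}^d$ from Section \ref{sec:hypertoric}; the following computation is included only as an illustration and is a direct check from the definitions. \begin{example} For $T^*\mathbb{P}^d$ we have $n=d+1$ and $u_{d+1}=-\sum_{j=1}^d u_j$. For a regular value $\lambda_{\mathbb{R}}$, any $d$ of the hyperplanes $H_{\mathbb{R},1},\ldots,H_{\mathbb{R},d+1}$ meet at a vertex of the moment simplex, while all $d+1$ of them have empty common intersection, so the unique circuit is $S=\{1,\ldots,d+1\}$. Since \[ \sum_{i=1}^{d+1}u_i=0\in\mathfrak{t}^d, \] the splitting is $S^+=S$ and $S^-=\emptyset$ (with the sign convention fixed so that $\hat{\lambda}_{\mathbb{R}}(\beta_S)>0$). The class $\beta_S=\sum_{i=1}^{d+1}e_i$ spans the diagonal $\mathfrak{k}_{\mathbb{Z}}\cong\mathbb{Z}\cong H_2(T^*\mathbb{P}^d;\mathbb{Z})$, which up to sign is the class of a line in the zero section $\mathbb{P}^d\subset T^*\mathbb{P}^d$, and $q^{\beta_S}$ records its symplectic area. \end{example}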
\subsection{Cotangent bundles of toric varieties in a hypertoric variety} Let $\mathcal{H}_{\mathbb{R}}$ be the real hyperplane arrangement of $\mathfrak{M}_{u,\lambda}$. Let $\Delta$ be a convex polytope in $(\mathfrak{t}^d)^*$ whose interior is a chamber in the complement of $\mathcal{H}_{\mathbb{R}}$. We will assume $\Delta$ is simple, which is the case when $\mathcal{H}_{\mathbb{R}}$ is simple. We further assume $\lambda_{\mathbb{C}}=0$, so that $\mathfrak{M}_{u,\lambda}$ is equipped with its canonical complex structure. Then, the cotangent bundle $T^*X_{\Delta}$ of the toric variety $X_{\Delta}$ naturally embeds into $\mathfrak{M}_{u,\lambda}$ as an open dense subset (Fig. \ref{fig:cotang-in-hypertoric}). \begin{theorem}[{\cite[Theorem 7.1]{BD}}] \label{thm:cotangent} $T^*X_{\Delta}$ with its canonical holomorphic-symplectic structure is $T^d$-equivariantly isomorphic to an open dense subset $U_{\Delta}$ of $\mathfrak{M}_{u,\lambda}$. The hyper-Kähler metric of $\mathfrak{M}_{u,\lambda}$ restricted to the zero section of $T^*X_{\Delta}$ is the Kähler metric on $X_{\Delta}$ determined by $\Delta$. \end{theorem} $T^*X_{\Delta}\subset \mathfrak{M}_{u,\lambda}$ was constructed in \cite{BD} as follows. For simplicity, let us fix a lift $$((\hat{\lambda}_{\mathbb{R},1},\ldots,\hat{\lambda}_{\mathbb{R},n}),(\hat{\lambda}_{\mathbb{C},1},\ldots,\hat{\lambda}_{\mathbb{C},n}))\in(\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$$ of $\lambda$ such that $\hat{\lambda}_{\mathbb{R},i}=0$ and $\hat{\lambda}_{\mathbb{C},i}=0$ for $i=1,\ldots,d$. Let $\{H_{\mathbb{R},i}^+\}_{i=1}^n$ and $\{H_{\mathbb{R},i}^-\}_{i=1}^n$ be the half-spaces \[ H_{\mathbb{R},i}^+=\{s\in (\mathfrak{t}^d)^*|\left<s,u_i\right>-\hat{\lambda}_{\mathbb{R},i}\ge 0\}, \] \[ H_{\mathbb{R},i}^-=\{s\in (\mathfrak{t}^d)^*|\left<s,u_i\right>-\hat{\lambda}_{\mathbb{R},i}\le 0\}. \] Let $\sigma:\{1,\ldots,n\}\to \{+,-\}$ be the sign vector such that \[ \Delta=\bigcap_{i=1}^n H_{\mathbb{R},i}^{\sigma(i)}, \] and let $\bar{\sigma}$ be the sign vector such that $\bar{\sigma}(i)\ne \sigma(i)$ for all $i$. Each face $F$ of $\Delta$ is cut out by an intersection of hyperplanes $\bigcap_{i\in I} H_{\mathbb{R},i}$ for some $I\subset\{1,\ldots,n\}$. For $F$ a face of $\Delta$, we define a subset $Y_F\subset T^*\mathbb{C}^n$ by \[ Y_F=\{(z,w)\in T^*\mathbb{C}^n|z_i=0\iff i\in I \text{ and }\sigma(i)=+; w_i=0\iff i\in I \text{ and }\sigma(i)=-\}. \] In particular, if $F$ is the codimension-$0$ face, we have $I=\emptyset$, and \[ Y_{F}=\{(z,w)\in T^*\mathbb{C}^n|z_i\neq 0 \text{ if }\sigma(i)=+; w_i\neq 0 \text{ if }\sigma(i)=-\}. \] We define a $T^n$-invariant subset $Y_{\Delta}\subset T^*\mathbb{C}^n$ to be the union \[ Y_{\Delta}=\bigcup_F Y_F, \] where the union is over all faces $F$ of $\Delta$. $T^*X_{\Delta}\subset\mathfrak{M}_{u,\lambda}$ is then constructed by restricting the hyper-Kähler quotient construction to $Y_{\Delta}$, \[ T^*X_{\Delta}=\left(Y_{\Delta}\cap\mu_{\mathbb{R}}^{-1}(\lambda_{\mathbb{R}})\cap\mu_{\mathbb{C}}^{-1}(0)\right)/K. \] We provide here an explicit description of the complement of $T^*X_{\Delta}$ in $\mathfrak{M}_{u,\lambda}$ in terms of its moment map image. Let $\mathfrak{J}$ be the collection of all subsets $J\subset\{1,\ldots,n\}$ such that the intersection $\bigcap_{j\in J} H_{\mathbb{R},j}$ is nonempty and does not define a face of $\Delta$. Denote by $\Delta_J$ the polytope \[ \Delta_J=\bigcap_{j\in J}H^{\bar{\sigma}(j)}_{\mathbb{R},j}. \] Notice that $\Delta_J$ is non-adjacent to $\Delta$.
\begin{prop} \label{prop:complement} The complement of $T^*X_{\Delta}$ in $\mathfrak{M}_{u,\lambda}$ is the union $V_{\Delta}=\bigcup_{J\in \mathfrak{J}} V_J$, where \[ V_J=(\bar{\mu}_{\mathbb{R}},\bar{\mu}_{\mathbb{C}})^{-1}\left(\Delta_J\times \bigcap_{j\in J}H_{\mathbb{C},j}\right). \] \end{prop} \begin{proof} Let $J\in\mathfrak{J}$, and denote by $Z_J\subset T^*\mathbb{C}^n\setminus Y_{\Delta}$ the subset \[ Z_J=\{(z,w)\in T^*\mathbb{C}^n|z_j=0\iff j\in J \text{ and }\sigma(j)=+; w_j=0\iff j\in J \text{ and }\sigma(j)=-\}. \] Restricting the hyper-Kähler quotient construction to $Z_J$ gives \[ V_J=\left(Z_J\cap\mu_{\mathbb{R}}^{-1}(\lambda_{\mathbb{R}})\cap\mu_{\mathbb{C}}^{-1}(0)\right)/K\subset\mathfrak{M}_{u,\lambda}\setminus T^*X_{\Delta}. \] By construction, we have $\mathfrak{M}_{u,\lambda}\setminus T^*X_{\Delta}=\bigcup_{J\in \mathfrak{J}} V_J$. The image of $V_J$ under the hyper-Kähler moment map $(\bar{\mu}_{\mathbb{R}},\bar{\mu}_{\mathbb{C}})$ is $\Delta_J\times \bigcap_{j\in J} H_{\mathbb{C},j}$. To see that it is disjoint from the image of $T^*X_{\Delta}$, suppose we have $[z,w]\in T^*X_{\Delta}$ with $\bar{\mu}_{\mathbb{C}}([z,w])\in\bigcap_{j\in J} H_{\mathbb{C},j}$. Since $J$ does not define a face of $\Delta$, we must have $z_j\ne 0$, $w_j= 0$ and $\sigma(j)=+$, or $z_j=0$, $w_j\ne 0$ and $\sigma(j)=-$, for some $j\in J$; but then $\bar{\mu}_{\mathbb{R}}([z,w])\notin H^{\bar{\sigma}(j)}_{\mathbb{R},j}\supset\Delta_J$. \end{proof} \begin{figure}[htb!] \includegraphics[scale=0.5]{cotang-in-hypertoric.pdf} \caption{A hypertoric manifold that contains both $T^*\mathbb{C}\mathbb{P}^2$ and $T^*\mathbb{F}_1$. The closure of the shaded region on the left (resp. right) corresponds to the image of the complement of $T^*\mathbb{C}\mathbb{P}^2$ (resp. $T^*\mathbb{F}_1$) under $\bar{\mu}_{\mathbb{R}}$.} \label{fig:cotang-in-hypertoric} \end{figure} In this paper we work with smooth hypertoric varieties. In addition to $\mathfrak{M}_{u,\lambda}$ being smooth, we shall assume $\mathcal{H}_{\mathbb{C}}$ to be simple for the rest of this paper. When $\mathcal{H}_{\mathbb{C}}$ is simple, under the unimodularity assumption, $\mathfrak{M}_{u,\lambda}$ is smooth for all choices of $\lambda_{\mathbb{R}}$ by Theorem \ref{thm:smooth}. We do not assume $\mathcal{H}_{\mathbb{R}}$ to be simple. \section{Lagrangian torus fibrations on hypertoric varieties} \label{sec:fib} In this section, we construct piecewise smooth Lagrangian torus fibrations on hypertoric varieties. It was first suggested by Joyce in \cite{Joyce-sing} that special Lagrangian fibrations should in general be piecewise smooth. In \cite{AAK}, Abouzaid, Auroux and Katzarkov constructed piecewise smooth Lagrangian torus fibrations on the anticanonical divisor complement $X^0$ of the blowup $X=\mathrm{Bl}_{H\times\{0\}}(V\times\mathbb{C})$, where $V$ is a toric variety and $H\subset V$ is a hypersurface, by pulling back Lagrangian torus fibrations on the symplectic reductions of $X^0$ (which are isomorphic to the open dense torus orbit $V^0\subset V$) and assembling them together. This construction is similar to those previously considered by Gross \cite{Gross-eg}, Goldstein \cite{Goldstein}, and Castaño-Bernard and Matessi \cite{CBM1,CBM2}. The additional technical input in \cite{AAK} was the use of Moser's trick to interpolate between the reduced (possibly singular) K\"ahler forms and the torus-invariant K\"ahler forms on $V^0$.
\subsection{Lagrangian torus fibrations on the reduced spaces} Denote by $s=(s_1,\ldots,s_n)$ the standard coordinates on $(\mathfrak{t}^n)^*$ rescaled by a factor of $2$, and $v=(v_1,\ldots,v_n)$ the standard complex coordinates on $(\mathfrak{t}^n_{\mathbb{C}})^*$. We first construct Lagrangian torus fibrations on the symplectic reductions $X_s$ of $\mathfrak{M}_{u,\lambda}$ at level $\frac{s}{2}\in (\mathfrak{t}^d)^*\subset (\mathfrak{t}^n)^*$. $X_{s}$ can be constructed as \[ X_{s}=\bar{\mu}_{\mathbb{R}}^{-1}\left(\frac{s}{2}\right)/(T^n/K). \] For simplicity, we will assume from now on that the vectors $u_1,\ldots,u_d$ are linearly independent, and write $u_{\ell}=\sum_{i=1}^d a_{\ell i}u_i$ for $\ell=d+1,\ldots,n$. The coefficients $a_{\ell i}$ are integers, since $\{u_1,\ldots,u_d\}$ spans $\mathfrak{t}^d_{\mathbb{Z}}$ over $\mathbb{Z}$. We also fix a lift $((\hat{\lambda}_{\mathbb{R},1},\ldots,\hat{\lambda}_{\mathbb{R},n}),(\hat{\lambda}_{\mathbb{C},1},\ldots,\hat{\lambda}_{\mathbb{C},n}))\in(\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$ of $\lambda$ such that $\hat{\lambda}_{\mathbb{R},i}=0$ and $\hat{\lambda}_{\mathbb{C},i}=0$ for $i=1,\ldots,d$. We can then identify the map $\bar{\mu}_{\mathbb{C}}:\mathfrak{M}_{u,\lambda}\to (\mathfrak{t}^d_{\mathbb{C}})^*$ with the map $(z_1w_1,\ldots,z_dw_d):\mathfrak{M}_{u,\lambda}\to \mathbb{C}^d$ via the projection to the first $d$ components. The restriction of $\bar{\mu}_{\mathbb{C}}$ to $\bar{\mu}_{\mathbb{R}}^{-1}(\frac{s}{2})$ descends to a biholomorphism $X_{s}\to (\mathfrak{t}^d_{\mathbb{C}})^*$. We can therefore identify the reduced spaces $X_{s}$ with $\mathbb{C}^d$ equipped with the complex coordinates $(v_1,\ldots,v_d)$. We will abuse notations and write \[ s_i=|z_i|^2-|w_i|^2, \quad v_i=z_iw_i, \] and set \[ r_i=\sqrt{s_i^2+4v_i\bar{v}_i} \] for $i=1,\ldots,n$. These can be viewed as functions on $X_{s}$. In particular, the $s_i$ are constants. The K\"ahler potential of the reduced K\"ahler form on $X_{s}$ has a simple expression in terms of $s_i$ and $r_i$. \begin{lemma} The K\"ahler potentials $K_{red,s}$ for the reduced K\"ahler forms $\omega_{red,s}$ on $X_{s}$ are given by \begin{equation} \label{kp1} K_{red,s}=\frac{1}{4}\sum_{i=1}^n\left(r_i-s_i\ln|s_i\pm r_i|\right), \end{equation} where the sign is $+$ if $s_i\ge 0$ and $-$ otherwise. \end{lemma} \begin{proof} Consider the action of $T^n$ and its complexification $(\mathbb{C}^{\times})^n$ restricted to the invariant subvariety $W=\mu_{\mathbb{C}}^{-1}(\lambda_{\mathbb{C}})\subset T^*\mathbb{C}^n$. $X_{s}$ can be obtained either as a symplectic reduction or a GIT quotient of $W$, \[ X_{s}=(\tilde{\mu}_{\mathbb{R}}|_{W})^{-1}\left(\frac{s}{2}\right)/T^n=W/\kern-0.2em/_{\frac{s}{2}} (\mathbb{C}^{\times})^n, \] where $\tilde{\mu}_{\mathbb{R}}$ is the moment map for the $T^n$-action on $T^*\mathbb{C}^n$ with respect to $\omega_{\mathbb{R}}$. For any $(z,w)\in W$, there exists a unique element $\vec{t}_{(z,w)}\in \exp(i\mathfrak{t}^n)$ such that $\vec{t}_{(z,w)}\cdot (z,w)\in (\tilde{\mu}_{\mathbb{R}}|_{W})^{-1}\left(\frac{s}{2}\right)$. Denote by $q:W\to (\tilde{\mu}_{\mathbb{R}}|_{W})^{-1}\left(\frac{s}{2}\right)$ the map $q(z,w)=\vec{t}_{(z,w)}\cdot (z,w)$, and by $p:(\tilde{\mu}_{\mathbb{R}}|_{W})^{-1}\left(\frac{s}{2}\right)\to X_{s}$ the quotient map. Let $\hat{\omega}$ be the pull-back of $\omega_{red,s}$ to $W$ via $p\circ q$. Let $\chi_{\frac{s}{2}}:(\mathbb{C}^{\times})^n\to\mathbb{C}^{\times}$ be the character given by $\frac{s}{2}$.
By \cite[Theorem 7]{BG}, we have $\hat{\omega}=dd^c\hat{K}$ for a $T^n$-invariant function $\hat{K}$ on $W$ defined as \begin{equation} \label{kp2} \hat{K}(z,w)=K_0(\vec{t}_{(z,w)}\cdot (z,w))+\frac{1}{4\pi}\ln|\chi_{\frac{s}{2}}(\vec{t}_{(z,w)})|^2, \end{equation} where $K_0$ is the K\"ahler potential $\frac{1}{4}\sum_{i=1}^n (|z_i|^2+|w_i|^2)$ restricted to $W$. We have \begin{equation} \label{eq1} K_0\left(\vec{t}_{(z,w)}\cdot (z,w)\right)=\frac{1}{4}\sum_{i=1}^n \sqrt{s_i^2+4v_i\bar{v}_i}, \end{equation} whereas \[ |\chi_{\frac{s}{2}}(\vec{t}_{(z,w)})|^2=\prod_{i=1}^n |\vec{t}_{(z,w),i}|^{-2\pi s_i}. \] Here $\vec{t}_{(z,w),i}$ is determined by \[ \left|\vec{t}_{(z,w),i}z_i\right|^2-\left|\vec{t}_{(z,w),i}^{-1}w_i\right|^2=s_i, \] which gives \[ |\vec{t}_{(z,w),i}|^2=\cfrac{s_i\pm \sqrt{s_i^2+4|z_i|^2|w_i|^2}}{2|z_i|^2}. \] Thus, \begin{equation} \label{eq2} \frac{1}{4\pi}\ln|\chi_{\frac{s}{2}}(\vec{t}_{(z,w)})|^2=\frac{1}{4}\sum_{i=1}^n \left(-s_i\ln\left|s_i\pm \sqrt{s_i^2+4|z_i|^2|w_i|^2}\right|+s_i\ln(2|z_i|^2)\right). \end{equation} Notice that the $s_i$ in (\ref{eq1}) and (\ref{eq2}) are constants. Denote by $\iota:(\tilde{\mu}_{\mathbb{R}}|_{W})^{-1}\left(\frac{s}{2}\right)\to W$ the inclusion map. Since $dd^c\ln(2|z_i|^2)=0$, the terms $\frac{1}{4}s_i\ln(2|z_i|^2)$ do not contribute to $\hat{\omega}$. Thus, we have \[ p^*\omega_{red,s}=\iota^*\hat{\omega}=\iota^*dd^c\left(\hat{K}(z,w)-\frac{1}{4}\sum_{i=1}^n s_i\ln(2|z_i|^2)\right)=\iota^*dd^c(p\circ q)^*K_{red,s}=p^*dd^c K_{red,s}. \] \end{proof} \begin{remark} If we view \[ F=\frac{1}{4}\sum_{i=1}^n \left(r_i-s_i\ln|s_i+r_i|\right) \] as a function on $(\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$, it is then the Legendre transform of the Kähler potential $\frac{1}{4}\sum_{i=1}^n(|z_i|^2+|w_i|^2)$ on $T^*\mathbb{C}^n$. In \cite{BD}, (\ref{KP}) was obtained as the Legendre transform of $F$ restricted to the subspace $(\mathfrak{t}^d)^*\oplus (\mathfrak{t}^d_{\mathbb{C}})^*\subset (\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$. We can alternatively derive (\ref{kp1}) as the Legendre transform of $F$ further restricted to the subspace $\{\frac{s}{2}\}\times(\mathfrak{t}^d_{\mathbb{C}})^*\subset (\mathfrak{t}^n)^*\oplus (\mathfrak{t}^n_{\mathbb{C}})^*$. \end{remark} The reduced K\"ahler forms $\omega_{red,s}$ are singular along the hyperplanes $H_{\mathbb{C},i}$ when $s\in H_{\mathbb{R},i}$. They are also not invariant under any obvious $T^d$-action on $X_{s}\cong\mathbb{C}^d$. These obstacles to constructing Lagrangian torus fibrations on the reduced spaces were also encountered in \cite{AAK}. We will use their strategy to construct Lagrangian torus fibrations on $X_{s}$. We first introduce an explicit family of smoothings $\omega_{sm,s}$ of $\omega_{red,s}$: \begin{equation} \label{kp3} \omega_{sm,s}=dd^c K_{sm,s}:=\frac{1}{4}dd^c \left(\sum_{i=1}^n\left(\sqrt{s_i^2+4v_i\bar{v}_i+\kappa^4}-s_i\ln\left|s_i+\sqrt{s_i^2+4v_i\bar{v}_i+\kappa^4}\right|\right)\right), \end{equation} where $\kappa>0$ is an arbitrarily small constant. $\omega_{sm,s}$ is Kähler by construction. Since $H^2(X_s;\mathbb{R})=0$, we have $[\omega_{sm,s}]=[\omega_{red,s}]$. We write $v_{\ell}=\sum_{i=1}^d a_{\ell i}v_i+b_{\ell}$ for $\ell=d+1,\ldots,n$, where $b_{\ell}\in\mathbb{C}$ are constants determined by $\lambda_{\mathbb{C}}$.
Notice that the terms \[ \frac{1}{4}dd^c \left(\sum_{\ell=d+1}^n\left(\sqrt{s_{\ell}^2+4v_{\ell}\bar{v}_{\ell}+\kappa^4}-s_{\ell}\ln\left|s_{\ell}+\sqrt{s_{\ell}^2+4v_{\ell}\bar{v}_{\ell}+\kappa^4}\right|\right)\right) \] in (\ref{kp3}) are not invariant under the standard $T^d$-action centered at a point in $\mathbb{C}^d$. To remedy this, let $c=(c_1,\ldots,c_d)\in\mathbb{C}^d$ be a point away from the hyperplanes in $\mathcal{H}_{\mathbb{C}}$, and let $T^d$ act on $\mathbb{C}^d$ by the standard action centered at $c$. We isotope $\omega_{sm,s}$ to the family of $T^d$-invariant Kähler forms $\omega_{inv,s}$ defined by averaging $\omega_{sm,s}$ over the $T^d$-action, \[ \omega_{inv,s}=\cfrac{1}{(2\pi)^d}\int_{g\in T^d} g^*\omega_{sm,s}\,dg. \] Since $\omega_{inv,s}$ is the exterior derivative of a $T^d$-invariant $1$-form (which is the $T^d$-average of $d^c K_{sm,s}$), its pullback to each $T^d$-orbit must vanish. This means the $T^d$-orbits in $X_{s}$ are Lagrangian with respect to $\omega_{inv,s}$. We now prove the following lemma. \begin{lemma} \label{lemma:moser} There exists a family of homeomorphisms $(\phi_{s})_{s\in (\mathfrak{t}^d)^*}$ of $X_{s}$ such that \begin{enumerate}[label=\textnormal{(\arabic*)}] \item $\phi_s$ is a diffeomorphism if $s\notin H_{\mathbb{R},i}$ for $i=1,\ldots,d$. It is a diffeomorphism away from $H_{\mathbb{C},i}$ if $s\in H_{\mathbb{R},i}$; \item $\phi_s$ intertwines the reduced (possibly singular) Kähler form $\omega_{red,s}$ and the $T^d$-invariant Kähler form $\omega_{inv,s}$; \item $\phi_{s}$ depends on $s$ continuously, and smoothly away from $\bigcup_{i=1}^n H_{\mathbb{R},i}$. \end{enumerate} \end{lemma} \begin{proof} We construct $\phi_{s}$ as the composition of $\phi_{sm,s}$ and $\phi_{inv,s}$, where $\phi_{sm,s}$ takes $\omega_{red,s}$ to $\omega_{sm,s}$ and $\phi_{inv,s}$ takes $\omega_{sm,s}$ to $\omega_{inv,s}$, each satisfying the desired properties. \textbf{Step 1.} We interpolate between $\omega_{red,s}$ and $\omega_{sm,s}$ via the family of Kähler forms $\omega_{t,s}$, $t\in[0,\kappa]$, defined by \begin{equation} \label{moser1} \omega_{t,s}=dd^c K_{t,s}:=\frac{1}{4}dd^c\left(\sum_{i=1}^n \left(r_{t,i}-s_i\ln|s_i+r_{t,i}|\right)\right), \end{equation} where $r_{t,i}=\sqrt{s_i^2+4v_i\bar{v_i}+t^4}$. We use Moser's trick and look for the vector field $V_{t,s}$ satisfying \[ \mathcal{L}_{V_{t,s}}\omega_{t,s}+\frac{d}{dt}\omega_{t,s}= \mathcal{L}_{V_{t,s}}\omega_{t,s}+dd^c\left(\cfrac{dK_{t,s}}{dt}\right)=0. \] By Cartan's formula, we have \[ d\iota_{V_{t,s}}\omega_{t,s}=-dd^c\left(\cfrac{dK_{t,s}}{dt}\right), \] from which we deduce \begin{equation*} \iota_{V_{t,s}}\omega_{t,s}=a_{t,s}:=-d^c\left(\cfrac{dK_{t,s}}{dt}\right)=-\frac{1}{2}d^c\left(\sum_{i=1}^n \cfrac{t^3}{s_i+r_{t,i}}\right). \end{equation*} We write $u_{\ell}=\sum_{i=1}^d a_{\ell i}u_i$ for $\ell=1,\ldots,n$, where $a_{\ell i}=\delta_{\ell i}$ for $\ell=1,\ldots,d$. We denote $\bm{i}=\sqrt{-1}$ so that it is not confused with the index $i$.
We have \begin{equation*} \omega_{t,s}=\sum_{1\le i,j\le d}\omega_{t,s,ij}dv_i\wedge d\bar{v}_j:=\bm{i}\sum_{1\le i,j\le d}\Bigg(\sum_{\ell=1}^n a_{\ell i}a_{\ell j}\left(\cfrac{(s_{\ell}+r_{t,\ell})r_{t,\ell}-2|v_{\ell}|^2}{(s_{\ell}+r_{t,\ell})^2r_{t,\ell}}\right)\Bigg)dv_i\wedge d\bar{v}_j, \end{equation*} and \begin{equation*} a_{t,s}=\sum_{i=1}^d \left(a_{t,s,i} d\bar{v}_i-\bar{a}_{t,s,i} dv_i\right) :=\bm{i} \sum_{i=1}^d\left[\left(\sum_{\ell=1}^n \cfrac{t^3 a_{\ell i}v_{\ell}}{(s_{\ell}+r_{t,\ell})^2r_{t,\ell}}\right)d\bar{v}_i-\left(\sum_{\ell=1}^n \cfrac{t^3 a_{\ell i}\bar{v}_{\ell}}{(s_{\ell}+r_{t,\ell})^2r_{t,\ell}}\right)dv_i\right]. \end{equation*} Denote by $A=(A_{ij})$ the matrix with entries $A_{ij}=\omega_{t,s,ij}$, and let $A^{-1}=(A^{ji})$ be its inverse. The vector field $V_{t,s}$ is then given by \[ V_{t,s}=\sum_{j=1}^d f_{t,s,j}\frac{\partial}{\partial v_j}+g_{t,s,j}\frac{\partial}{\partial \bar{v}_j}=\sum_{j=1}^d\left(\sum_{i=1}^d A^{ji} a_{t,s,i}\right)\frac{\partial}{\partial v_j}+\left(\sum_{i=1}^d A^{ji} \bar{a}_{t,s,i}\right)\frac{\partial}{\partial \bar{v}_j}. \] $V_{t,s}$ is smooth except when $t=0$ and $s\in H_{\mathbb{R},i}$, in which case it is singular along $H_{\mathbb{C},i}$. We will show that the flow of $V_{t,s}$ is well-defined and that $V_{t,s}$ is complete. Let $I\subset\{1,\ldots,n\}$ be a multi-index such that $\bigcap_{k\in I} H_{\mathbb{C},k}\ne\emptyset$. Let $s\in(\mathfrak{t}^d)^*$ be a point such that $s\in H_{\mathbb{R},k}$ if and only if $k \in I$, and let $v_0\in\bigcap_{k\in I} H_{\mathbb{C},k}$. To analyze the singularities of the functions $f_{t,s,j}$ (the analysis for $g_{t,s,j}$ is identical and hence omitted), we consider the following limits: \begin{equation*} \label{limit} \lim_{(t,v)\to (0,v_0)}f_{t,s,j}=\lim_{(t,v)\to (0,v_0)}\sum_{i=1}^d A^{ji}a_{t,s,i}=\lim_{(t,v)\to (0,v_0)}\sum_{i=1}^d\frac{C_{ji}}{\det A}a_{t,s,i}, \end{equation*} where $C_{ji}$ is the $(j,i)$-cofactor of $A$. Since $\bigcap_{k\in I} H_{\mathbb{C},k}\ne\emptyset$ and the hyperplane arrangement $\mathcal{H}_{\mathbb{C}}$ is simple, the vectors $\{u_k\}_{k\in I}$ are linearly independent. Thus we can assume $I\subset\{1,\ldots,d\}$ by rearranging the indices (notice that the coefficients in $u_{\ell}=\sum_{i=1}^d a_{\ell i}u_i$, $\ell=d+1,\ldots,n$, will change accordingly). Let $A_I$ be the matrix obtained from $A$ by removing the $k^{\mathrm{th}}$ row and column for $k\in I$. Denote by $A_{I,ij}$ the matrix obtained from $A_I$ by removing the $j^{\mathrm{th}}$ row and the $i^{\mathrm{th}}$ column. Note that $\det A_I\ne 0$ since $A_I$ is positive-definite, and $\det A_{I,ij}$ remains bounded as $(t,v)\to (0,v_0)$. As $(t,v)\to (0,v_0)$, $\det A$ is dominated by the term \[ \left(\prod_{k\in I}\cfrac{(s_k+r_{t,k})r_{t,k}-2|v_k|^2}{(s_k+r_{t,k})^2r_{t,k}}\right)\times\det A_I, \] while $C_{ji}$ is dominated by the term \[ \left(\prod_{k\in I\setminus\{i,j\}}\cfrac{(s_k+r_{t,k})r_{t,k}-2|v_k|^2}{(s_k+r_{t,k})^2r_{t,k}}\right)\times\det A_{I,ij}. \] Hence, as $(t,v)\to (0,v_0)$, $C_{ji}$ blows up to at most the same order as $\det A$, while $a_{t,s,i}$ vanishes. This shows that $V_{t,s}$ extends continuously to be zero along its singular loci. On the other hand, let $g_{t,s}$ be the Kähler metric determined by $\omega_{t,s}$. Since the Kähler metric on $\mathfrak{M}_{u,\lambda}$ is complete, $g_{t,s}$ is a complete metric whenever it is non-singular. Denote by $\norm{\cdot}_{t,s}$ the norm with respect to $g_{t,s}$.
Since $V_{t,s}$ is the dual vector field of $a_{t,s}$, we have \[ \norm{V_{t,s}}_{t,s}=\norm{a_{t,s}}_{t,s}\le 2\sum_{i=1}^d |a_{t,s,i}|\norm{dv_i}_{t,s}=O(|v|^{-\frac{3}{2}}) \] as $|v|\to\infty$. Thus, $V_{t,s}$ is uniformly bounded with respect to $g_{t_0,s}$ for $t_0>0$. We can therefore define $\phi_{sm,s}$ to be the time-$\kappa$ flow generated by $V_{t,s}$. \textbf{Step 2.} We interpolate between $\omega_{sm,s}$ and $\omega_{inv,s}$ via the family of Kähler forms $\omega'_{t,s}$, $t\in[0,1]$, defined by \begin{equation*} \omega'_{t,s}=t\omega_{inv,s}+(1-t)\omega_{sm,s}=dd^c \left(tK_{inv,s}+(1-t)K_{sm,s}\right), \end{equation*} where $K_{sm,s}$ is defined as in (\ref{kp3}), and \[ K_{inv,s}=\cfrac{1}{(2\pi)^d}\int_{g\in T^d} g^*K_{sm,s}\,dg. \] We again use Moser's trick and look for the vector field $V'_{t,s}$ satisfying \[ \mathcal{L}_{V'_{t,s}}\omega'_{t,s}+\frac{d}{dt}\omega'_{t,s}=0. \] By Cartan's formula, we have \[ d\iota_{V'_{t,s}}\omega'_{t,s}=\omega_{sm,s}-\omega_{inv,s}, \] from which we deduce \begin{equation*} \label{mt1} \iota_{V'_{t,s}}\omega'_{t,s}=a'_{t,s}=d^c\left(K_{sm,s}-K_{inv,s}\right). \end{equation*} Writing out the relevant terms explicitly, we have \begin{multline*} \omega'_{t,s}=\sum_{1\le i,j\le d}\omega'_{t,s,ij}dv_i\wedge d\bar{v}_j:=\bm{i}\sum_{1\le i,j\le d}\Bigg(\sum_{\ell=1}^n \cfrac{t a_{\ell i}a_{\ell j}}{(2\pi)^d}\int_{g\in T^d} g^*\left(\cfrac{\left((s_{\ell}+r_{\kappa,\ell})r_{\kappa,\ell}-2|v_{\ell}|^2\right)}{(s_{\ell}+r_{\kappa,\ell})^2r_{\kappa,\ell}}\right)dg \\ + (1-t)a_{\ell i}a_{\ell j}\left(\cfrac{\left((s_{\ell}+r_{\kappa,\ell})r_{\kappa,\ell}-2|v_{\ell}|^2\right)}{(s_{\ell}+r_{\kappa,\ell})^2r_{\kappa,\ell}}\right)\Bigg)dv_i\wedge d\bar{v}_j, \end{multline*} and \begin{multline*} a'_{t,s}=\sum_{i=1}^d \left(a'_{t,s,i} d\bar{v}_i-\bar{a}'_{t,s,i} dv_i\right) :=\bm{i}\sum_{i=1}^d\Bigg[\left(\sum_{\ell=1}^n\left(\cfrac{a_{\ell i}v_{\ell}}{s_{\ell}+r_{\kappa,\ell}}\right)-\cfrac{1}{(2\pi)^d}\int_{g\in T^d} g^*\left(\cfrac{a_{\ell i}v_{\ell}}{s_{\ell}+r_{\kappa,\ell}}\right)dg\right)d\bar{v}_i\\ -\left(\sum_{\ell=1}^n\left(\cfrac{a_{\ell i}\bar{v}_{\ell}}{s_{\ell}+r_{\kappa,\ell}}\right)-\cfrac{1}{(2\pi)^d}\int_{g\in T^d} g^*\left(\cfrac{a_{\ell i}\bar{v}_{\ell}}{s_{\ell}+r_{\kappa,\ell}}\right)dg\right)dv_i\Bigg]. \end{multline*} Let $g'_{t,s}$ be the complete Kähler metric determined by $\omega'_{t,s}$, and let $\norm{\cdot}'_{t,s}$ be the norm with respect to $g'_{t,s}$. Since $V'_{t,s}$ is the dual vector field of $a'_{t,s}$, we have \[ \norm{V'_{t,s}}'_{t,s}=\norm{a'_{t,s}}'_{t,s}\le 2\sum_{i=1}^d |a'_{t,s,i}|\norm{dv_i}'_{t,s}=O(|v|^{\frac{1}{2}}), \] as $|v|\to\infty$. Denote by $\rho:\mathbb{C}^d\to [0,\infty)$ the Riemannian distance function (from the origin) with respect to the metric $g'_{t,s}$. By \cite{GG}, the auxiliary metric $g$ defined by \[ g=\cfrac{g'_{t,s}}{L^2(\rho(v))} \] is complete, and $V'_{t,s}$ is uniformly bounded with respect to $g$. Moreover, the time-$1$ flow $\phi_{inv,s}$ generated by $V'_{t,s}$ intertwines $\omega_{sm,s}$ and $\omega_{inv,s}$, as desired. \end{proof} Denote by $\mathbb{T}$ the \textit{tropical semi-field} $\mathbb{T}=\mathbb{R}\cup\{-\infty\}$. Let $c=(c_1,\ldots,c_d)\in\mathbb{C}^d$ be the point away from the hyperplanes in $\mathcal{H}_{\mathbb{C}}$ previously chosen to be the center of the $T^d$-action. Recall that the $T^d$-orbits (which are the regular fibers of $(|v_1-c_1|,\ldots,|v_d-c_d|)$) are Lagrangian with respect to $\omega_{inv,s}$.
\begin{definition} \label{def:lagfib_red} Let $\mathrm{Log}_t:\mathbb{C}^d\to\mathbb{T}^d$ be the map defined by \[ \mathrm{Log}_t(v_1,\ldots,v_d)=\left(\log_{t}|v_1-c_1|,\ldots,\log_{t}|v_d-c_d|\right), \] where $t\gg0$ is a constant. Denote by $\pi_{s}:X_{s}\to \mathbb{T}^d$ the composition $\pi_{s}=\mathrm{Log}_t\circ\phi_{s}$. $\pi_{s}$ is our preferred Lagrangian torus fibration on $X_{s}$. \end{definition} \subsection{Lagrangian torus fibrations on hypertoric varieties and the discriminant loci} \begin{definition} \label{def:lagfib} We denote by $\pi:\mathfrak{M}_{u,\lambda}\to B=\mathbb{R}^d\times\mathbb{T}^d$ the map which sends a point $x\in \bar{\mu}_{\mathbb{R}}^{-1}(\frac{s}{2})$ to $\pi(x)=\left(s,\pi_{s}\left([x]\right)\right)$, where $[x]\in X_{s}$ is the $T^d$-orbit of $x$. $\pi$ is a piecewise smooth Lagrangian torus fibration. \end{definition} Let $b=(s,\tau)=(s_1,\ldots,s_d,\tau_1,\ldots,\tau_d)\in B$. For generic values of $b$, the fiber $\pi^{-1}(b)\cong T^{2d}$ is a smooth Lagrangian torus. When exactly $k$ components of $\tau$ are $-\infty$, the fiber $\pi^{-1}(b)$ degenerates to a torus $T^{2d-k}$. If $s\in H_{\mathbb{R},i}$ and $\tau\in\pi_{s}\left(H_{\mathbb{C},i}\right)$, the fiber $\pi^{-1}(b)$ is a \textit{pinched torus} (i.e. a product of immersed two-spheres and tori) of dimension $2d$. We denote by $\Sigma\subset B$ the set of all points over which the fibers of $\pi$ are singular: \[ \Sigma=\partial B\cup\left(\bigcup_{i=1}^n \{(s,\tau)\in B|s\in H_{\mathbb{R},i}\text{ and }\tau\in\pi_{s}\left(H_{\mathbb{C},i}\right)\}\right). \] We will call $\Sigma$ the \textit{discriminant locus} of $\pi$ (see Fig.\ \ref{fig:Lag-fib-HT}). Let $B^0=B\setminus\Sigma$. $\pi$ restricts to a $T^{2d}$-bundle over $B^0$, and induces an integral affine structure on $B^0$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{Lag-fib-HT.pdf} \caption{Lagrangian fibrations on $T^*\mathbb{C}\mathbb{P}^1$ and $T^*\mathbb{C}\mathbb{P}^2$, where the bases are $\mathbb{R}\times\mathbb{T}$ and $\mathbb{R}^2\times\mathbb{T}^2$, respectively. The complex hyperplanes are taken to be in general position.} \label{fig:Lag-fib-HT} \end{center} \end{figure} \section{SYZ mirror construction for hypertoric varieties} \label{SYZ} In this section, we carry out the SYZ mirror construction for smooth hypertoric varieties. We begin by reviewing the SYZ construction. \subsection{The SYZ mirror construction} \label{sec:SYZ} Let $\pi: X \to B$ be a proper Lagrangian torus fibration of a compact K\"ahler manifold $(X,\omega)$ of dimension $d$ such that the base $B$ is a compact manifold with corners, and the preimage of each codimension-one facet of $B$ is a smooth irreducible divisor denoted by $D_i$ for $1\le i \le m$. We assume that the regular Lagrangian fibers of $\pi$ are special with respect to a nowhere-vanishing meromorphic volume form $\Omega$ on $X$ whose pole divisor is the boundary divisor $D:=\sum_{i=1}^m D_i$ (and hence $D$ is an anti-canonical divisor). We denote by $B^0 \subset B$ the complement of the discriminant locus of $\pi$, and we assume that $B^0$ is connected\footnote{When the discriminant locus has codimension two, $B^0$ is automatically connected. Although the Lagrangian fibrations on hypertoric varieties that we constructed have codimension-one discriminant loci, $B^0$ is still connected.}. We denote by $L_b$ a fiber of $\pi$ over $b \in B^0$.
\begin{lemma}[Maslov index of disc classes {\cite[Lemma 3.1]{Auroux07}}] \label{MaslovIndex} For a disc class $\beta \in \pi_2(X,L_b)$ where $b \in B^0$, the Maslov index of $\beta$ is $\mu(\beta)=2[D]\cdot \beta$. \end{lemma} \begin{definition}[Wall \cite{CLL}] \label{def:wall} The \textit{wall} $\bm{W}$ of a Lagrangian fibration $\pi:X\to B$ is the set of points $b \in B^0$ such that the fiber $L_b$ bounds nonconstant holomorphic discs with Maslov index $0$. \end{definition} The complement of $\bm{W}\subset B^0$ consists of several connected components, which we call \textit{chambers}. Over different chambers the Lagrangian fibers behave differently in a Floer-theoretic sense. Away from the wall $\bm{W}$, the fibers are \textit{weakly unobstructed} and the one-pointed open Gromov--Witten invariants are well-defined using the machinery of Fukaya--Oh--Ohta--Ono \cite{FOOO}. \begin{definition}[Open Gromov--Witten invariants {\cite{FOOO}}] \label{def:oGW} For $b \in B^0 \setminus\bm{W}$ and $\beta \in \pi_2(X,L_b)$, let $\mathcal{M}_1(\beta;L_b)$ be the moduli space of stable discs with one boundary marked point of class $\beta$, and $[\mathcal{M}_1(\beta;L_b)]^{\mathrm{vir}}$ be the virtual fundamental class of $\mathcal{M}_1(\beta;L_b)$. The \textit{open Gromov--Witten invariant} associated to $\beta$ is $n_{\beta}:=\int_{[\mathcal{M}_1(\beta;L_b)]^{\mathrm{vir}}}\mathrm{ev}^*[\mathrm{pt}]^{\mathrm{PD}}$, where $\mathrm{ev}:\mathcal{M}_1(\beta;L_b) \to L_b$ is the evaluation map at the boundary marked point and $[\mathrm{pt}]^{\mathrm{PD}}$ is the Poincar\'e dual of the point class of $L_b$. \end{definition} We will restrict to disc classes which are transversal to the boundary divisor $D$ when we construct the mirror space (while for the mirror superpotential we need to consider all disc classes). \begin{definition}[Transversal disc class] \label{def:transversal} A disc class $\beta \in \pi_2(X,L_b)$ for $b \in B^0$ is said to be transversal to the boundary divisor $D$, denoted by $\beta \pitchfork D$, if it is represented by a map $u$ such that $\mathrm{Im}(u) \cap D$ is a finite set of points and the intersections are transversal. \end{definition} For dimension reasons, the open Gromov--Witten invariant $n_\beta$ is nonzero only when the Maslov index $\mu(\beta)=2$. When $\beta$ is transversal to $D$ or when $X$ is semi-Fano, namely $c_1(\alpha) = [D] \cdot \alpha \geq 0$ for all holomorphic sphere classes $\alpha$, the number $n_\beta$ is invariant under small deformations of the complex structure and under Lagrangian isotopies in which all Lagrangian submanifolds in the isotopy neither intersect $D$ nor bound nonconstant holomorphic discs of Maslov index less than $2$. The SYZ mirror construction can be realized as follows \cite{CLL}. First, the semi-flat mirror $X^\vee_0$ is defined as the space of pairs $(L_b,\nabla)$ where $b \in B^0$ and $\nabla$ is a flat $\mathrm{U}(1)$-connection on the trivial complex line bundle over $L_b$ up to gauge. There is a natural map $\pi^\vee:X^\vee_0\to B^0$ given by forgetting the second coordinate. The semi-flat mirror $X^\vee_0$ has a canonical complex structure \cite{Leung}, and the functions $\mathrm{e}^{-\int_{\beta}\omega}\mathrm{Hol}_{\nabla}(\partial \beta)$ on $X^\vee_0$ for disc classes $\beta \in \pi_2(X,L_b)$ are called semi-flat complex coordinates. Here $\mathrm{Hol}_{\nabla} (\partial \beta)$ denotes the holonomy of the flat $\mathrm{U}(1)$-connection $\nabla$ along $\partial \beta \in \pi_1(L_b)$.
Then the generating functions of transversal open Gromov--Witten invariants are defined by \begin{equation} \mathcal{I}_i(L_b,\nabla) := \sum_{\substack{\beta \in \pi_2(X,L_b) \\ \beta \cdot D_i = 1, \beta\pitchfork D}} n_\beta \exp\left(-\int_{\beta}\omega\right)\mathrm{Hol}_{\nabla}(\partial \beta), \label{eq:gen} \end{equation} for $1 \le i \le m$ and $(L_b, \nabla) \in (\pi^\vee)^{-1}(B^0\setminus \bm{W})$. They serve as quantum corrected complex coordinates. The functions $\mathcal{I}_i$ can be written in terms of the semi-flat complex coordinates, and hence they generate a subring $\mathbb{C}[\mathcal{I}_1, \ldots, \mathcal{I}_m]$ in the coordinate ring\footnote{In general we need to use the Novikov ring instead of $\mathbb{C}$ since $\mathcal{I}_i$ could be a formal Laurent series. In the cases that we study later, $\mathcal{I}_i$ are Laurent polynomials whose coefficients are convergent, and hence the Novikov ring is not necessary.} of $(\pi^\vee)^{-1}(B^0\setminus \bm{W})$. \begin{definition} \label{def:SYZ} An SYZ mirror of $X$ is the pair $(X^\vee,W)$ where $X^\vee:=\mathrm{Spec} \left(\mathbb{C}[\mathcal{I}_1,\ldots, \mathcal{I}_m] \right)$ and $$W := \sum_{\substack{\beta \in \pi_2(X,L_b)}} n_\beta \exp\left(-\int_{\beta}\omega\right)\mathrm{Hol}_{\nabla}(\partial \beta). $$ Moreover, $X^\vee$ is called an SYZ mirror of $X-D$. \end{definition} \begin{remark} \label{rmk:defect} In general the mirror space $X^\vee$ defined in this way, which only uses the generating functions of stable discs emanating from boundary divisors, is always affine and can be singular. The reason is that our construction ignores the local holomorphic functions living on the intermediate chambers in the base and only takes the coordinate functions into account. Indeed for most hypertoric varieties this is the case. A resolution is necessary, and this will be carried out in Section \ref{sec:mirror}. The derived category is expected to be independent of the choice of a resolution. On the other hand, the Lagrangian fibration $\pi$ on $X$ indeed canonically fixes the resolution if we look more closely into the Lagrangian Floer theory of the immersed fibers and glue in their formal deformation spaces. In this paper we will perform the resolution by assuming some combinatorial rules resulting from Lagrangian Floer theory. \end{remark} \begin{remark} Note that $W$ is a sum over all disc classes, which are not necessarily transversal. If $X$ is semi-Fano, then every stable holomorphic disc class of Maslov index $2$ is of the form $\beta+\alpha$ where $\beta$ is transversal with $\mu(\beta)=2$, and $\alpha \in H_2(X)$ with $c_1(\alpha)=0$. Hence it takes the form $W=\sum_{i=1}^m a_i \mathcal{I}_i$ where $a_i$ are certain series in K\"ahler parameters. If $X$ is not semi-Fano, then some algebraic manipulation is necessary to write $W$ as a series in $\mathcal{I}_i$ over the Novikov ring. In this paper we deal with $X-D$ and hence are not concerned with $W$. \end{remark} \subsection{Maslov index $0$ holomorphic discs and walls} \label{sec:wall} Let $c=(c_1,\ldots,c_d)\in (\mathfrak{t}_{\mathbb{C}}^d)^*$ be as in Definition \ref{def:lagfib_red}. Denote by $D^-_{i}$ the divisor \begin{equation} \label{D-} D^-_{i}=\{[z,w]\in\mathfrak{M}_{u,\lambda} |z_iw_i=c_i\}, \end{equation} and set $D^-=\sum_{i=1}^d D^-_{i}$. We will assume the isotopies $\phi_{s}$ in Lemma \ref{lemma:moser} preserve $D^-$. This can be achieved by modifying $\phi_{s}$ using the construction in \cite[Lemma B.2]{AAK}.
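Concretely (this reformulation is a direct unwinding of the definitions, using the identification of $\bar{\mu}_{\mathbb{C}}$ with the map $(z_1w_1,\ldots,z_dw_d)$ fixed in Section \ref{sec:fib}), $D^-$ is the preimage of the union of the affine hyperplanes through $c$ parallel to the coordinate hyperplanes: \[ D^-=\bar{\mu}_{\mathbb{C}}^{-1}\left(\bigcup_{i=1}^d\{v=(v_1,\ldots,v_d)\in\mathbb{C}^d \mid v_i=c_i\}\right). \] In particular, a fiber $L_b$ with $b\in B^0$ is disjoint from $D^-$, since the $i^{\mathrm{th}}$ component of its projection to $\mathbb{C}^d$ is a loop around $c_i$.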
\begin{lemma}[Maslov index formula] \label{lemma:maslov} Let $L_b=\pi^{-1}(b)$ be the fiber of $\pi:\mathfrak{M}_{u,\lambda}\to B$ over $b\in B^0$. For any disc class $\beta\in \pi_2(\mathfrak{M}_{u,\lambda}, L_b)$, the Maslov index $\mu(\beta)$ is twice the algebraic intersection number $\beta\cdot [D^-]$. \end{lemma} \begin{proof} Let $\Omega$ be the meromorphic volume form on $\mathfrak{M}_{u,\lambda}$ with pole divisor $D^-$ defined by \[ \Omega=\cfrac{\bigwedge_{i=1}^d dz_i\wedge dw_i}{\prod_{i=1}^d (z_iw_i-c_i)}. \] Let $b=(s,\tau)$. If $s\notin H_{\mathbb{R},i}$ for all $i$, the $T^n/K$-action on the level set $\bar{\mu}_{\mathbb{R}}^{-1}\left(\frac{s}{2}\right)$ containing $L_b$ is free, and hence $\bar{\mu}_{\mathbb{R}}^{-1}\left(\frac{s}{2}\right)$ is a trivial $T^d$-bundle over $\mathbb{C}^d$. From Lemma \ref{lemma:moser}, we have a one-parameter family $(\phi_{s,t})_{t\in[0,1+\kappa]}$ of homeomorphisms taking the projection $\bar{\mu}_{\mathbb{C}}(L_b)\subset\mathbb{C}^d$ of $L_b$ to a standard product torus centered at the point $c$. We can lift $(\phi_{s,t})_{t\in[0,1+\kappa]}$ to $\bar{\mu}_{\mathbb{R}}^{-1}(\frac{s}{2})$ by defining it to be fiber-wise constant, and extend it to a one-parameter family of homeomorphisms $(\Phi_{b,t})_{t\in[0,1+\kappa]}$ of $\mathfrak{M}_{u,\lambda}$. If $s\in H_{\mathbb{R},i}$, we can isotope $L_b$ to a nearby smooth fiber $L_{b'}$ contained in a level set $\bar{\mu}_{\mathbb{R}}^{-1}(\frac{s'}{2})$ with $s'\notin H_{\mathbb{R},i}$ for all $i$, and then define $(\Phi_{b,t})_{t\in[0,1+\kappa]}$ by pre-composing $(\Phi_{b',t})_{t\in[0,1+\kappa]}$ with this isotopy. The phase function $\arg(\Omega|_{\Phi_{b,1+\kappa}(L_b)}):\Phi_{b,1+\kappa}(L_b)\to \mathbb{S}^1$ is identically zero since $\Phi_{b,1+\kappa}(L_b)$ is a special Lagrangian in $\mathfrak{M}_{u,\lambda}\setminus D^-$. This means the map $\arg(\Omega|_{L_b})_*:\pi_1(L_b)\to \pi_1(\mathbb{S}^1)=\mathbb{Z}$ induced by $\arg(\Omega|_{L_b}):L_b\to \mathbb{S}^1$ is trivial, and hence the Maslov class of $L_b$ vanishes in $\mathfrak{M}_{u,\lambda}\setminus D^{-}$, i.e. $\arg(\Omega|_{L_b})$ lifts to a real-valued function. It is then a well-known fact (see \cite[Lemma 3.1]{Auroux07} and \cite{AAK}) that $\mu(\beta)=2\beta\cdot [D^{-}]$. \end{proof} \begin{prop} \label{prop:walls} The set of points $b\in B^0$ such that the fiber $L_b$ bounds nonconstant holomorphic discs of Maslov index $0$ is the union $\bigcup_{i=1}^n W_i$, where $W_i$ is defined by \[ W_i=\{(s,\tau)\in B^0|\tau\in \pi_{s}(H_{\mathbb{C},i})\}. \] \end{prop} We will refer to the $W_i$ as the walls of the Lagrangian torus fibration $\pi:\mathfrak{M}_{u,\lambda}\to B$. \begin{proof} Let $L_b$ be the fiber of $\pi$ over $b=(s,\tau)\in B^0$. Then, $L_b$ is contained in the level set $\bar{\mu}_{\mathbb{R}}^{-1}(\frac{s}{2})$. Let $u:(D^2,\partial D^2)\to (\mathfrak{M}_{u,\lambda},L_b)$ be a holomorphic disc with boundary in $L_b$ representing a disc class $\beta\in\pi_2(\mathfrak{M}_{u,\lambda},L_b)$ with $\mu(\beta)=0$. Denote by $L_{red}$ the projection of $L_b$ to $\mathbb{C}^d$ via $\bar{\mu}_{\mathbb{C}}$. $L_{red}$ is a Lagrangian torus with respect to $\omega_{red,s}$, and its projection to the $i^{\mathrm{th}}$ component is a loop around $c_i$. The image of the holomorphic disc $\bar{\mu}_{\mathbb{C}}\circ u:(D^2,\partial D^2)\to (\mathbb{C}^d,L_{red})$ is contained in $\mathbb{C}^d\setminus\{c\}$ by Lemma \ref{lemma:maslov}. By the maximum principle, $\bar{\mu}_{\mathbb{C}}\circ u$ is necessarily constant.
This means the image of $u$ is contained in a fiber $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)$ for some $v_0\in \mathbb{C}^d$. If $b\notin W_i$ for all $i$, then we have $v_0\notin H_{\mathbb{C},i}$ for all $i$. In this case, $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cong (\mathbb{C}^{\times})^d$, while $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cap L_b=T^d$ is a product torus in $(\mathbb{C}^{\times})^d$ centered at the origin. The maximum principle then implies that $u$ is necessarily constant. On the other hand, let $I\subset\{1,\ldots,n\}$ be the set of indices such that $b\in W_i$, and suppose $I\ne\emptyset$. Then $v_0$ may lie in $H_{\mathbb{C},i}$ for $i\in I'$, where $I'\subset I$ is a nonempty subset, in which case $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cong(\mathbb{C}\cup_0\mathbb{C})^{|I'|}\times(\mathbb{C}^{\times})^{d-|I'|}$. Here $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cap L_b=T^d$ is a product torus in $(\mathbb{C}\cup_0\mathbb{C})^{|I'|}\times(\mathbb{C}^{\times})^{d-|I'|}$ such that each $\mathbb{C}\cup_0\mathbb{C}$ factor contains an $\mathbb{S}^1$-component of $T^d$ in one of its irreducible components (depending on the signs of the corresponding components of $s$). It is then easy to see that $\bar{\mu}_{\mathbb{C}}^{-1}(v_0)\cap L_b$ bounds exactly $|I'|$ nonconstant holomorphic discs (and all their multiple covers) of Maslov index $0$. \end{proof} \begin{remark} The construction of $(\phi_{s})_{s\in (\mathfrak{t}^d)^*}$ in Lemma \ref{lemma:moser} gives us a one-parameter family of homeomorphisms of $B^0$ taking each $W_i$ to $(\mathbb{R}^d\setminus H_{\mathbb{R},i})\times\mathrm{Log}_t(H_{\mathbb{C},i})$, where $\mathrm{Log}_t(H_{\mathbb{C},i})$ is an amoeba that retracts to a tropical hyperplane in $\mathbb{T}^d$ as $t\to\infty$. Since we only need the wall and chamber structure on $B^0$ for the mirror construction, which is purely combinatorial, we will simply illustrate each $W_i$ as a tropical hyperplane in $\mathbb{T}^d$ (see Fig.\ \ref{fig:chambers}). \end{remark} \subsection{Chambers and simply connected affine charts.} \label{sec:chambers} Let $H$ be a tropical hyperplane in $\mathbb{T}^d$ defined by the tropical polynomial $\max\{\tau_{i_1},\ldots,\tau_{i_m},a\}$. $H$ divides $\mathbb{T}^d$ into tropical chambers, on each of which a single term of the defining tropical polynomial attains the maximum. We label the chamber where the constant $a$ attains the maximum by $0$, and the chamber where the monomial $\tau_{i}\in\{\tau_{i_1},\ldots,\tau_{i_m}\}$ attains the maximum by $i$. Using this convention, we can label the chambers given by a simple arrangement of tropical hyperplanes $\{H_i\}_{i=1}^n$ by $n$-tuples $\bm{h}=(h_1,\ldots,h_n)$, where $h_i\in\{0,\ldots,d\}$ indicates the position of the chamber relative to $H_i$. Let $\mathcal{H}=\{H_i\}_{i=1}^n$ be the arrangement of tropical hyperplanes $H_i$, where $H_i$ is the tropical limit of $\mathrm{Log}_t(H_{\mathbb{C},i})$. We can choose $\lambda_{\mathbb{C}}$ such that for $\ell=d+1,\ldots,n$, the $|b_{\ell}|$ (in the expression $v_{\ell}=\sum_{i=1}^d a_{\ell i}v_i+b_{\ell}$) are distinct powers of $t$, making $\mathcal{H}$ simple (i.e. every subset of $k$ tropical hyperplanes with nonempty intersection intersects in codimension $k$). We will denote by $\mathcal{C}_{\bm{h}}$ both the tropical chambers and their preimages in $B^0$. This shall not cause any confusion.
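As a minimal illustration of the labeling convention (this example is a direct unpacking of the convention above), take $d=2$ and a single tropical line $H\subset\mathbb{T}^2$ defined by $\max\{\tau_1,\tau_2,a\}$: it divides $\mathbb{T}^2$ into three chambers, namely the chamber labeled $0$ where $a>\max\{\tau_1,\tau_2\}$ (which contains the corner $(-\infty,-\infty)$), the chamber labeled $1$ where $\tau_1>\max\{\tau_2,a\}$, and the chamber labeled $2$ where $\tau_2>\max\{\tau_1,a\}$. For a simple arrangement $\{H_1,H_2\}$ of two such tropical lines, a chamber is then recorded by a pair $\bm{h}=(h_1,h_2)$ with $h_i\in\{0,1,2\}$; for instance, $\bm{h}=(1,0)$ denotes the chamber lying on the $\tau_1$-side of $H_1$ and on the constant side of $H_2$.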
Notice that the wall and chamber structure on $B^0$ depends on the choice of $\lambda_{\mathbb{C}}$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.75]{tropical_chambers2.pdf} \caption{Tropical hyperplane arrangement and chambers} \label{fig:chambers} \end{center} \end{figure} Let $\sigma$ be a sign vector. We cover $B^0$ by simply connected affine charts $B^0_{\sigma}$ defined by \[ B^0_{\sigma}=\{(s,\tau)\in B^0|s\in H_{\mathbb{R},i}^{\sigma(i)} \text{ if } \tau\in \pi_s(H_{\mathbb{C},i})\text{ and }s\in \mathbb{R}^d \text{ if } \tau\notin \pi_s(H_{\mathbb{C},i})\text{ for all }i\}. \] \subsection{Effective disc classes of Maslov index $2$.} \label{sec:disc2} Let $b\in B^0_{\sigma}$ and assume $b$ is inside a chamber $\mathcal{C}_{\bm{h}}$. In particular, this means $b\notin W_i$ for all $i$. Let $\beta^-_{1},\ldots,\beta^-_{d}\in\pi_2(\mathfrak{M}_{u,\lambda},L_{b})$ be disc classes given by primitive cycles $\gamma_{\sigma,1},\ldots,\gamma_{\sigma,d}\in H_1(L_b,\mathbb{Z})$ such that $\gamma_{\sigma,i}$ vanishes in the singular fibers over $D^-_{i}$, and let $\alpha_1,\ldots,\alpha_n\in\pi_2(\mathfrak{M}_{u,\lambda},L_{b})$ be disc classes given by primitive cycles $\gamma_{\sigma,d+1},\ldots,\gamma_{\sigma,d+n}\in H_1(L_b,\mathbb{Z})$ such that $\gamma_{\sigma,d+i}$ vanishes in the fibers over the interior discriminant locus $\{(s,\tau)\in B|s\in H_{\mathbb{R},i}\text{ and }\tau\in\pi_{s}\left(H_{\mathbb{C},i}\right)\}$. When $b\in W_i$, $\alpha_i$ is the Maslov index $0$ disc class described in Proposition \ref{prop:walls}. We now classify the effective disc classes $\beta\in\pi_2(\mathfrak{M}_{u,\lambda},L_{b})$ of Maslov index $2$. \begin{prop} \label{prop:maslov2} The effective disc classes $\beta\in\pi_2(\mathfrak{M}_{u,\lambda},L_b)$ of Maslov index $2$ are of the following form: \begin{equation} \label{effdisc} \beta=\beta^-_{j}+\delta_1\alpha_{j_1}+\ldots+\delta_N\alpha_{j_N}, \quad j=1,\ldots,d, \end{equation} where $\delta_k\in\{0,1\}$, and $\{j_1,\ldots,j_N\}\subset\{1,\ldots,n\}$ is the set of indices such that $h_{j_k}=j$. This means the projections of holomorphic discs of class $\beta$ in $B$ cross the walls $W_{j_1},\ldots,W_{j_N}$. \end{prop} \begin{proof} Let $u:(D^2,\partial D^2)\to (\mathfrak{M}_{u,\lambda},L_b)$ be a holomorphic disc of Maslov index $2$. For $i=1,\ldots,n$, denote by $\mathcal{Z}_i$ and $\mathcal{W}_i$ the divisors \[ \mathcal{Z}_i=\{[z,w] \in\mathfrak{M}_{u,\lambda}| z_i=0\}, \] and \[ \mathcal{W}_i=\{[z,w] \in\mathfrak{M}_{u,\lambda}| w_i=0\}. \] Let $\bm{h}(j)=\{j_1,\ldots,j_N\}$. By Lemma \ref{lemma:maslov} and positivity of intersection, $u$ intersects exactly one divisor $D^-_{j}$ with multiplicity $1$. Thus, $u$ cannot intersect both $\mathcal{Z}_i$ and $\mathcal{W}_i$ for $i\in\bm{h}(j)$ by a winding number argument. For a splitting $I^+\coprod I^-=\bm{h}(j)$ of $\bm{h}(j)$, we define an open subset $U_{(I^+,I^-)}\subset\mathfrak{M}_{u,\lambda}$ by \begin{equation} \label{conic_bundle} U_{(I^+,I^-)}=\{[z,w]\in\mathfrak{M}_{u,\lambda}|z_i\ne0 \text{ if } i\in \{1,\ldots,n\}\setminus I^-; w_i\ne 0\text{ if } i\in I^-\}. \end{equation} We have $L_b\subset U_{(I^+,I^-)}$ for all splittings $(I^+,I^-)$, and $u(D^2)\subset U_{(I^+,I^-)}$ for exactly one $(I^+,I^-)$ since $b\notin W_i$ for all $i$. Note that each $U_{(I^+,I^-)}$ is biholomorphic to the trivial $(\mathbb{C}^{\times})^d$-bundle over $\mathbb{C}^d$.
Let $(v_1,\ldots,v_d,\nu_1,\ldots,\nu_d)$ be the complex coordinates on $U_{(I^+,I^-)}$ with $v_i=z_iw_i$ the base coordinates and $\nu_i$ the fiber coordinates. Assume $u(D^2)\subset U_{(I^+,I^-)}$ and write $u:(D^2,\partial D^2)\to (U_{(I^+,I^-)},L_b)$ as \[ u(\zeta)=(v_1(\zeta),\ldots,v_d(\zeta),\nu_1(\zeta),\ldots,\nu_d(\zeta)). \] By the maximum principle, only the $v_j$-component of $u$ is nonconstant. The $v_j$-component of $u$ is unique up to reparametrization. This means all holomorphic discs $u$ of Maslov index $2$ with $u(D^2)\subset U_{(I^+,I^-)}$ for a splitting $(I^+,I^-)$ represent the same disc class in $\pi_2(\mathfrak{M}_{u,\lambda},L_{b})$, which we denote by $\beta_{(I^+,I^-)}$. For $i\in \bm{h}(j)$, set $\mathrm{sgn}(i)=+$ if $i\in I^+$ and $\mathrm{sgn}(i)=-$ if $i\in I^-$. We claim that $\beta_{(I^+,I^-)}=\beta^-_{j}+\delta_1\alpha_{j_1}+\ldots+\delta_N\alpha_{j_N}$, where $\delta_k=1$ if $\mathrm{sgn}(j_k)\ne \sigma(j_k)$, and $\delta_k=0$ if $\mathrm{sgn}(j_k)= \sigma(j_k)$. Since $\mathfrak{M}_{u,\lambda}$ is simply connected (see \cite[Theorem 6.7]{BD}), the following long exact sequence \[ \cdots\to\pi_2(L_b)=0\to\pi_2(\mathfrak{M}_{u,\lambda})\cong H_2(\mathfrak{M}_{u,\lambda};\mathbb{Z})=\mathfrak{k}_{\mathbb{Z}}\hookrightarrow\pi_2(\mathfrak{M}_{u,\lambda},L_b)\to\pi_1(L_b)\to\pi_1(\mathfrak{M}_{u,\lambda})=0\to\cdots \] shows that $\pi_2(\mathfrak{M}_{u,\lambda},L_b)$ is generated by the disc classes $\beta^-_{1},\ldots,\beta^-_{d},\alpha_1,\ldots,\alpha_n$. This, combined with the intersection numbers of these generators with the divisors, proves our claim. \end{proof} \subsection{Regularity and open Gromov--Witten invariants} \label{regularity} We now prove regularity of the disc classes in (\ref{effdisc}) and compute relevant open Gromov--Witten invariants necessary for the mirror construction. Our strategy of proof is similar to that of Lemma 7 and Corollary 8 in \cite{Auroux15}. Let $u:(D^2,\partial D^2)\to (\mathfrak{M}_{u,\lambda},L_b)$ be a holomorphic disc. Denote by $(\mathcal{E},\mathcal{F})$ the sheaf of holomorphic sections of $E=u^*T\mathfrak{M}_{u,\lambda}$ with boundary values in $F=(u|_{\partial D^2})^*TL_b$. Denote by $\mathcal{A}^0(E,F)$ the sheaf of smooth sections of $E$ with boundary values in $F$, and $\mathcal{A}^{(0,1)}(E)$ the sheaf of smooth $E$-valued $(0,1)$-forms. \begin{lemma}{\cite[Lemma 6.2]{CO}} The sequence \begin{equation} \label{ses4} 0\longrightarrow (\mathcal{E},\mathcal{F})\longrightarrow \mathcal{A}^0(E,F)\overset{\bar{\partial}}\longrightarrow\mathcal{A}^{(0,1)}(E)\longrightarrow 0 \end{equation} defines a fine resolution of $(\mathcal{E},\mathcal{F})$. \end{lemma} \begin{prop} \label{prop:regularity} The holomorphic discs representing classes in (\ref{effdisc}) are Fredholm regular, i.e. their linearized $\bar{\partial}$-operator is surjective. \end{prop} \begin{proof} Let $u:(D^2,\partial D^2)\to (\mathfrak{M}_{u,\lambda},L_b)$ be a holomorphic disc of class $\beta$ in (\ref{effdisc}). Denote by $u_{red}$ the composition $\bar{\mu}_{\mathbb{C}}\circ u:(D^2,\partial D^2)\to (\mathbb{C}^d,L_{red})$, where $L_{red}=\bar{\mu}_{\mathbb{C}}(L_b)$. Let $\mathcal{L}_{\mathbb{R}}$ and $\mathcal{L}_{\mathbb{C}}$ be the real and complex spans of the vector fields generating the $T^n/K$-action. Suppose $\beta\cdot[D^-_j]=1$. Then, as noted in the proof of Proposition \ref{prop:maslov2}, both $L_b$ and the image of $u$ are contained in an open set $U_{(I^+,I^-)}$ (see (\ref{conic_bundle})) for a splitting $(I^+,I^-)$ of $\bm{h}(j)$.
The $T^n/K$-action is free on $U_{(I^+,I^-)}$, and thus we have the following short exact sequences: \begin{equation} \label{SES1} 0\longrightarrow\mathcal{L}_{\mathbb{C}}\longrightarrow T\mathfrak{M}_{u,\lambda}\longrightarrow\bar{\mu}_{\mathbb{C}}^*T\mathbb{C}^d\longrightarrow 0, \end{equation} \begin{equation} \label{SES2} 0\longrightarrow\mathcal{L}_{\mathbb{R}}\longrightarrow TL_b\longrightarrow\bar{\mu}_{\mathbb{C}}^*TL_{red}\longrightarrow 0, \end{equation} in $U_{(I^+,I^-)}$. Pulling back the exact sequences above via $u$, we find that $E$ admits a trivial holomorphic subbundle $u^*\mathcal{L}_{\mathbb{C}}$, with a trivial real subbundle $(u|_{\partial D^2})^*\mathcal{L}_{\mathbb{R}}\subset F$ on the boundary. Since the $\bar{\partial}$-operator for complex-valued functions on the unit disc with trivial real boundary condition on the boundary circle is surjective, the surjectivity of $\bar{\partial}$ on sections of $E$ with boundary conditions $F$ is then equivalent to the surjectivity of $\bar{\partial}$ on the quotient bundle $E/u^*\mathcal{L}_{\mathbb{C}}=u_{red}^*T\mathbb{C}^d$ with boundary conditions $F/(u|_{\partial D^2})^*\mathcal{L}_{\mathbb{R}}=(u_{red}|_{\partial D^2})^*TL_{red}$. Since only the $j^{\mathrm{th}}$ component of $u_{red}$ is nonconstant, the surjectivity of $\bar{\partial}$ reduces to a one-dimensional Riemann--Hilbert problem, which then follows from Theorems II and III in \cite{Oh}. \end{proof} \begin{prop} \label{prop:gw} With the notations as in Proposition \ref{prop:maslov2}, we have \[ n_{\beta}=\left\{ \begin{array}{ll} 1 & \mbox{for } \beta=\beta^-_{j}+\delta_1\alpha_{j_1}+\ldots+\delta_N\alpha_{j_N},\\ 0 & \mbox{otherwise.} \end{array} \right. \] \end{prop} \begin{proof} For dimension reasons, we have $n_{\beta}=0$ for $\mu(\beta)\ne 2$. Suppose $\beta$ is an effective disc class with $\mu(\beta)=2$, intersecting the divisor $D^-_{j}$. Denote by $p\in\partial D^2$ the unique boundary marked point on the unit disc $D^2$. Let $L_{red}=\bar{\mu}_{\mathbb{C}}(L_b)\subset\mathbb{C}^d$, and let $\bar{\beta}=(\bar{\mu}_{\mathbb{C}})_*\beta\in\pi_2(\mathbb{C}^d,L_{red})$. Denote by $\bar{D}^-_{i}$ the divisor $\{(v_1,\ldots,v_d)\in\mathbb{C}^d|v_i=c_i\}$. We have $\bar{\beta}\cdot [\bar{D}^-_{j}]= 1$, and $\bar{\beta}\cdot[\bar{D}^-_{i}]=0$ for $i\ne j$. Let us first consider the moduli space $\mathcal{M}_1(L_{red},\bar{\beta})$. By the maximum principle, for any $[\bar{u}]\in\mathcal{M}_1(L_{red},\bar{\beta})$, all but the $j^{\mathrm{th}}$ component of $\bar{u}$ are constant, and the $j^{\mathrm{th}}$ component of $\bar{u}$ is unique up to automorphisms of $D^2$ fixing $p$. Thus, for each $q\in L_{red}$, there exists a unique $[\bar{u}]\in\mathcal{M}_1(L_{red},\bar{\beta})$ with $\bar{u}(p)=q$. Moreover, the map $\mathrm{ev}:\mathcal{M}_1(L_{red},\bar{\beta})\to L_{red}$ given by evaluation at the boundary marked point is a diffeomorphism. Now, consider the projection $\mathcal{M}_1(L_b,\beta)\to\mathcal{M}_1(L_{red},\bar{\beta})$ given by post-composing holomorphic discs $u:(D^2,\partial D^2)\to (\mathfrak{M}_{u,\lambda},L_b)$ with $\bar{\mu}_{\mathbb{C}}$. We will show momentarily that for any given $[\bar{u}]\in\mathcal{M}_1(L_{red},\bar{\beta})$ with $\bar{u}(p)=q$, and a lift $\tilde{q}\in L_b$ of $q$, there exists a unique $[u]\in \mathcal{M}_1(L_b,\beta)$ with $\bar{\mu}_{\mathbb{C}}\circ u=\bar{u}$ and $u(p)=\tilde{q}$. Any holomorphic disc in $\mathcal{M}_1(L_b,\beta)$ has its image contained in $U_{(I^+,I^-)}$ for a splitting $(I^+,I^-)$ of $\bm{h}(j)$.
Recall that $U_{(I^+,I^-)}$ is biholomorphic to the trivial $(\mathbb{C}^{\times})^d$-bundle over $\mathbb{C}^d$. Denote by $(v_1,\ldots,v_d,\nu_1,\ldots,\nu_d)$ the complex coordinates on this open set with $v_1,\ldots,v_d$ being the base coordinates and $\nu_1,\ldots,\nu_d$ being the fiber coordinates. Write $\tilde{q}=(\tilde{q}_1,\ldots,\tilde{q}_{2d})$. We define the lift of $\bar{u}$ to be the holomorphic disc $u:(D^2,\partial D^2)\to (U_{(I^+,I^-)},L_b)$ given by \[ u(\zeta)=(\bar{u}(\zeta),\tilde{q}_{d+1},\ldots,\tilde{q}_{2d}). \] We have a free $T^d$-action on $\mathcal{M}_1(L_b,\beta)$ given by composing holomorphic discs $[u]\in\mathcal{M}_1(L_b,\beta)$ with the $T^d$-action on $\mathfrak{M}_{u,\lambda}$. The orbits of this action are exactly the fibers of $\mathcal{M}_1(L_b,\beta)\to\mathcal{M}_1(L_{red},\bar{\beta})$. Therefore, $\mathcal{M}_1(L_b,\beta)\to\mathcal{M}_1(L_{red},\bar{\beta})$ is a $T^d$-bundle. Since the evaluation map $\mathrm{ev}:\mathcal{M}_1(L_b,\beta)\to L_b$ is $T^d$-equivariant, it is again a diffeomorphism, i.e. it is of degree $\pm 1$. As for the orientations of $\mathcal{M}_1(L_b,\beta)$, recall that a spin structure on $L_b$ determines an orientation on $\mathcal{M}_1(L_b,\beta)$ (see \cite[Chapter 8]{FOOO}). Since $L_{red}$ is isotopic to the standard product torus in $\mathbb{C}^d$, we can choose the standard spin structure on $L_{red}$ such that $\mathrm{ev}:\mathcal{M}_1(L_{red},\bar{\beta})\to L_{red}$ is orientation-preserving. We choose the spin structure on $L_b$ to be standard along the $T^d$-orbits and consistent under the splitting (\ref{SES2}) with the spin structure previously chosen on $L_{red}$. Then, with the induced orientation on $\mathcal{M}_1(L_b,\beta)$, the evaluation map $\mathrm{ev}:\mathcal{M}_1(L_b,\beta)\to L_b$ is orientation-preserving, i.e. it is of degree $1$. \end{proof} \begin{prop} \label{prop:unobstructed} $L_b$ is weakly unobstructed. \end{prop} \begin{proof} For degree reasons, only stable holomorphic discs of Maslov index less than or equal to $2$ can contribute to $\mathfrak{m}_0^b$. In our case there are no stable discs with negative Maslov index (Lemma \ref{lemma:maslov}). Thus the only discs with Maslov index less than $2$ are the constant ones, which are not stable since there is only one output marking. For an effective disc class $\beta$ of Maslov index $2$, the evaluation map at the boundary marked point gives a diffeomorphism $\mathcal{M}_1(L_b,\beta) \to L_b$ (Prop. \ref{prop:gw}). Hence $\mathfrak{m}_0^b$, which is the sum over $\beta$ of $\mathrm{ev}_* [\mathcal{M}_1(L_b,\beta)]$ weighted by $T^{-\int_\beta\omega}$ (where $T$ is the formal Novikov parameter), is proportional to the fundamental class of $L_b$. \end{proof} \subsection{Partial compactifications of hypertoric varieties.} \label{sec:cptfy} Our idea for constructing the mirror $\mathfrak{M}_{u,\lambda}^{\vee}$ is to construct coordinate functions of $\mathfrak{M}_{u,\lambda}^{\vee}$ by counting holomorphic discs emanating from boundary divisors of ${\mathfrak{M}_{u,\lambda}}$. The problem is that in our situation, $B$ has only $d$ codimension-one boundary faces, while we need $2d$ coordinate functions. To resolve this, one may consider counting holomorphic cylinders (with one boundary component on $L$ and the other asymptotic to infinity), which requires the extra work of rigorously defining the corresponding Gromov--Witten invariants.
Another way is to consider a partial compactification of ${\mathfrak{M}_{u,\lambda}}$ by adding \textit{divisors at infinity} and counting the additional holomorphic Maslov index $2$ discs emanating from these divisors. We will use the second approach in this paper. This method was used in \cite{CLL,AAK} to construct mirrors of Calabi--Yau toric varieties and of blow-ups of toric varieties along a hypersurface. Recall from Remark \ref{rmk:holfib} that the holomorphic moment map $\bar{\mu}_{\mathbb{C}}:\mathfrak{M}_{u,\lambda}\to\mathbb{C}^d$ is a holomorphic $(\mathbb{C}^{\times})^d$-fibration. We can partially compactify $\mathfrak{M}_{u,\lambda}$ by extending $\bar{\mu}_{\mathbb{C}}$ to a holomorphic $(\mathbb{C}^{\times})^d$-fibration over $(\mathbb{P}^1)^d$. Let $([\zeta_1:\tilde{\zeta}_1],\ldots,[\zeta_n:\tilde{\zeta}_n])$ be the homogeneous coordinates on $(\mathbb{P}^1)^n$. We embed $\mathbb{C}^d$ into $(\mathbb{P}^1)^n$ via the map $(v_1,\ldots,v_d)\mapsto ([v_1:1],\ldots,[v_d:1],[v_{d+1}:1],\ldots,[v_{n}:1])$, where $v_{\ell}=\sum_{k=1}^d a_{\ell k}v_k+b_{\ell}$ for $\ell=d+1,\ldots,n$. Its closure $\overline{\mathbb{C}^d}$ in $(\mathbb{P}^1)^n$ is defined by the following homogeneous polynomials \[ f_{\ell}=\tilde{\zeta}_1\ldots\tilde{\zeta}_d\zeta_{\ell}-\sum_{i=1}^d a_{\ell i} \tilde{\zeta}_1\ldots\zeta_i\ldots\tilde{\zeta}_d\tilde{\zeta}_{\ell}-b_{\ell}\tilde{\zeta}_1\ldots\tilde{\zeta}_d\tilde{\zeta}_{\ell}, \quad \ell=d+1,\ldots,n, \] and is biholomorphic to $(\mathbb{P}^1)^d$. The hyperplanes $\{H_{\mathbb{C},i}\}_{i=1}^n$ extend naturally to divisors $\{\bar{H}_{\mathbb{C},i}\}_{i=1}^n$ on $\overline{\mathbb{C}^d}$ defined by \[ \bar{H}_{\mathbb{C},i}=\{([\zeta_1:\tilde{\zeta}_1],\ldots,[\zeta_n:\tilde{\zeta}_n])\in\overline{\mathbb{C}^d}|\zeta_i=0\}. \] Let $E$ be the total space of the rank-$2n$ complex vector bundle on $(\mathbb{P}^1)^n$ defined by \[ E=\mathcal{O}(\bar{H}_{\mathbb{C},1})\oplus\mathcal{O}_1\oplus\ldots\oplus\mathcal{O}(\bar{H}_{\mathbb{C},n})\oplus\mathcal{O}_n\to (\mathbb{P}^1)^n, \] where $\mathcal{O}_i=\mathcal{O}$ are trivial complex line bundles. Denote by $w_i$ the fiber coordinate of $\mathcal{O}_i$, $z_i$ the local coordinate of $\mathcal{O}(\bar{H}_{\mathbb{C},i})$ over $U_i=\{\tilde{\zeta}_i\ne 0\}$, and $\tilde{z}_i$ the local coordinate of $\mathcal{O}(\bar{H}_{\mathbb{C},i})$ over $\tilde{U}_i=\{\zeta_i\ne 0\}$. The gluing between $\mathcal{O}(\bar{H}_{\mathbb{C},i})|_{U_i}$ and $\mathcal{O}(\bar{H}_{\mathbb{C},i})|_{\tilde{U}_i}$ is given by $z_i\tilde{\zeta}_i=\zeta_i\tilde{z}_i$. For $i=1,\ldots,n$, let $g_i=z_i\tilde{\zeta}_iw_i-\zeta_i$. Let $V\subset E$ be the subvariety defined by the ideal $(f_{d+1},\ldots,f_n,g_1,\ldots,g_n)$. We now define a $(\mathbb{C}^{\times})^n$-action on $E$. For $\vec{t}=(t_1,\ldots,t_n)\in(\mathbb{C}^{\times})^n$, let $\vec{t}$ act on $\mathcal{O}(\bar{H}_{\mathbb{C},i})$ via multiplication by $t_i$ and on $\mathcal{O}_i$ via multiplication by $t_i^{-1}$. Let $\vec{t}$ act trivially on the base $(\mathbb{P}^1)^n$. $V$ is then a $(\mathbb{C}^{\times})^n$-invariant subvariety of $E$. Let $K_{\mathbb{C}}\subset (\mathbb{C}^{\times})^n$ and $\lambda_{\mathbb{R}}:K_{\mathbb{C}}\to \mathbb{C}^{\times}$ be the same as in Definition \ref{def:hypertoric}. Then, the GIT quotient \[ \overline{\mathfrak{M}}_{u,\lambda}=V//_{\lambda_{\mathbb{R}}} K_{\mathbb{C}} \] is a partial compactification of $\mathfrak{M}_{u,\lambda}$.
The embedding $\mathfrak{M}_{u,\lambda}\hookrightarrow\overline{\mathfrak{M}}_{u,\lambda}$ is holomorphic and $(\mathbb{C}^{\times})^n/K_{\mathbb{C}}$-equivariant. Alternatively, we can construct $\overline{\mathfrak{M}}_{u,\lambda}$ via symplectic reduction. Notice that the subbundles $\mathcal{O}(\bar{H}_{\mathbb{C},i})\to (\mathbb{P}^1)^n$ of $E$ are the pullbacks of $\mathcal{O}(1)\to \mathbb{P}^1$ via the projections $(\mathbb{P}^1)^n\to \mathbb{P}^1$ to the $i^{\mathrm{th}}$ component. The sum of the pullbacks of the Fubini--Study form then defines a Kähler form on the total space of the subbundle $\bigoplus_{i=1}^n\mathcal{O}(\bar{H}_{\mathbb{C},i})$. Combined with the standard symplectic form on the fibers of $\mathcal{O}_i$, we have a $T^n$-invariant Kähler form $\omega_E$ on $E$. We can construct $\overline{\mathfrak{M}}_{u,\lambda}$ as the symplectic reduction of $V$ at level $\lambda_{\mathbb{R}}$ with respect to the action of the maximal torus $K\subset K_{\mathbb{C}}$ and the restriction of $\omega_{E}$ to $V$. This equips $\overline{\mathfrak{M}}_{u,\lambda}$ with a $T^n/K$-invariant Kähler form $\bar{\omega}$. We can then construct a Lagrangian torus fibration \[ \bar{\pi}:\overline{\mathfrak{M}}_{u,\lambda}\to\bar{B}=\mathbb{R}^d\times(\mathbb{R}\cup\{\pm\infty\})^d \] using symplectic reductions as in Section \ref{sec:fib}. The reduced spaces are biholomorphic to $(\mathbb{P}^1)^d$. Since the reduced spaces are now compact, the construction of $\bar{\pi}$ is a simple application of Moser's trick, and is hence omitted. The discriminant locus $\bar{\Sigma}$ of $\bar{\pi}$ is the union of $\Sigma$ and the new boundaries of $\bar{B}$ at infinity. Notice that we have $\bar{B}\setminus\bar{\Sigma}=B^0\subset B$. \begin{remark} If we were to strictly follow the SYZ construction outlined in Section \ref{sec:SYZ}, we would have compactified $\mathfrak{M}_{u,\lambda}$ by compactifying the fiber directions of $E$. However, since the cycles $\gamma_{\sigma,d+1},\ldots,\gamma_{\sigma,2d}\in H_1(L_b;\mathbb{Z})$ (see Section \ref{sec:disc2}) are monodromy-invariant, the count of holomorphic discs emanating from these additional divisors would receive no quantum correction. Therefore, it suffices to consider the partial compactification $\overline{\mathfrak{M}}_{u,\lambda}$. \end{remark} We now state the results analogous to Lemma \ref{lemma:maslov} and Propositions \ref{prop:walls}, \ref{prop:maslov2}, \ref{prop:regularity}, \ref{prop:unobstructed} and \ref{prop:gw} in order to define the additional generating functions. We will be brief since the proofs are nearly identical to the previous ones. Denote by $D^+_{i}$ the divisor given by \[ D^+_{i}=(\{\tilde{\zeta}_i=0\}\cap V)//_{\lambda_{\mathbb{R}}} K_{\mathbb{C}}.\] Let $D^+:=\sum_{i=1}^d D^+_{i}$, and set $D:=D^-+D^+$. We will assume that the isotopies obtained from Moser's trick leave $D$ invariant. \begin{prop} \label{prop:cptmaslov} Let $L_b$ be the fiber of $\bar{\pi}:\overline{\mathfrak{M}}_{u,\lambda}\to\bar{B}$ over $b\in B^0$. For any disc class $\beta\in \pi_2(\overline{\mathfrak{M}}_{u,\lambda}, L_b)$, the Maslov index $\mu(\beta)$ is equal to twice the algebraic intersection number $\beta\cdot [D]$. \end{prop} \begin{proof} We first extend the meromorphic volume form $\Omega$ (see Lemma \ref{lemma:maslov}) on $\mathfrak{M}_{u,\lambda}$ to a meromorphic volume form on $\overline{\mathfrak{M}}_{u,\lambda}$ with generically simple poles along $D$.
Consider the form \[ \bar{\Omega}=\cfrac{\bigwedge_{i=1}^n d\log\xi_i\wedge d\log w_i}{\prod_{i=1}^n \left(1-\frac{c_i}{\xi_i}\right)} \] defined on $U=\{\tilde{\zeta}_i\ne 0,\forall i\}\subset E$, where $\xi_i=\zeta_i/\tilde{\zeta}_i$ are the affine coordinates on $U$. Its restriction to $U\cap V$ descends to $\Omega$ on $\mathfrak{M}_{u,\lambda}$. Let $I\subset\{1,\ldots,n\}$, and set $U_I=\{\tilde{\zeta}_i\ne 0,\forall i\in I\}$, $\tilde{U}_I=\{\zeta_i\ne 0,\forall i\in I\}$. Let $I^-\coprod I^+$ be a splitting of $\{1,\ldots,n\}$. We extend $\bar{\Omega}$ by defining it to be \[ \cfrac{(-1)^{\mathrm{sgn}(I^-,I^+)}\left(\bigwedge_{i\in I^-}d\log\xi_i\wedge d\log w_i\right)\wedge\left(\bigwedge_{j\in I^+}-d\log \tilde{\xi}_j\wedge d\log w_j\right)}{\left(\prod_{i\in I^-} \left(1-\frac{c_i}{\xi_i}\right)\right)\left(\prod_{j\in I^+} \left(1-c_j\tilde{\xi}_j\right)\right)} \] on $U_{I^-}\cap \tilde{U}_{I^+}$, where $\tilde{\xi}_j=\tilde{\zeta}_j/\zeta_j$, and $\mathrm{sgn}(I^-,I^+)$ is the sign of the concatenation of $I^-$ and $I^+$ as a permutation. Note that the expression above is simply given by rewriting $\bar{\Omega}$ under the change of coordinates. We denote the extension of $\bar{\Omega}$ to $E$ again by $\bar{\Omega}$. $\bar{\Omega}$ is $(\mathbb{C}^{\times})^n$-invariant, hence its restriction to $V$ descends to a meromorphic volume form on $\overline{\mathfrak{M}}_{u,\lambda}$, which is the extension of $\Omega$. With $\bar{\Omega}$ constructed, the proof then follows as in Lemma \ref{lemma:maslov}. \end{proof} The restriction of the projection $E\to (\mathbb{P}^1)^n$ to $V$ descends to a holomorphic $(\mathbb{C}^{\times})^d$-fibration $\rho:\overline{\mathfrak{M}}_{u,\lambda}\to (\mathbb{P}^1)^d$, extending $\bar{\mu}_{\mathbb{C}}:\mathfrak{M}_{u,\lambda}\to\mathbb{C}^d$. We denote by $\rho_0=\bar{\mu}_{\mathbb{C}}:\overline{\mathfrak{M}}_{u,\lambda}\setminus D^+\to\mathbb{C}^d$ and $\rho_{\infty}:\overline{\mathfrak{M}}_{u,\lambda}\setminus D^-\to\mathbb{C}^d$ the restrictions of $\rho$ to the respective domains. \begin{prop} \label{walls2} The walls of the Lagrangian torus fibration $\bar{\pi}:\overline{\mathfrak{M}}_{u,\lambda}\to\bar{B}$ are the sets $\{W_i\}_{i=1}^n$ defined in Proposition \ref{prop:walls}. \end{prop} \begin{proof} Since $\bar{B}\setminus\bar{\Sigma}=B^0$, any fiber over $B^0$ is contained in $\mathfrak{M}_{u,\lambda}$. By Proposition \ref{prop:cptmaslov}, any Maslov index $0$ holomorphic disc $u:(D^2,\partial D^2)\to (\overline{\mathfrak{M}}_{u,\lambda},L_b)$ is contained in $\mathfrak{M}_{u,\lambda}$. Composing $u$ with $\rho_0$ reduces this to Proposition \ref{prop:walls}. \end{proof} We again denote by $\mathcal{C}_{\bm{h}}$ and $B^0_{\sigma}$ the chambers and simply connected affine charts on $B^0\subset\bar{B}$, respectively. Let us fix a reference point $b\in B^0_{\sigma}$ and assume $b\in \mathcal{C}_{\bm{h}}$. We now classify the effective disc classes $\beta\in\pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_{b})$ with $\mu(\beta)=2$. We express any vector $v$ in the basis $\{u_1,\ldots,u_d\}$, and denote the corresponding coefficients by $v^{(i)}$. \begin{prop} \label{prop:cptmaslov2} Denote by $\beta^+_{1},\ldots,\beta^+_{d}\in\pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_b)$ the disc classes given by the cycles $\gamma_{\sigma,1},\ldots,\gamma_{\sigma,d}\in H_1(L_b;\mathbb{Z})$ (see Section \ref{sec:disc2}) vanishing on $D^+_{1},\ldots,D^+_{d}$.
The effective disc classes $\beta\in\pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_b)$ with $\mu(\beta)=2$ are of the form \begin{equation} \label{effdisc2} \beta=\beta^{\pm}_{j}+\delta_1\alpha_{j_1}+\ldots+\delta_N\alpha_{j_N},\quad j=1,\ldots,d, \end{equation} where $\delta_k\in\{0,1\}$, and $\{j_1,\ldots,j_N\}\subset\{1,\ldots,n\}$ is the set of indices such that $h_{j_k}=j$ if $\beta$ has the $\beta^-_{j}$ component, and the set of indices such that $u_{j_k}^{(j)}\ne 0$ and $h_{j_k}\ne j$ if $\beta$ has the $\beta^+_{j}$ component. This means the projections of holomorphic discs of class $\beta$ in $\bar{B}$ cross the walls $W_{j_1},\ldots,W_{j_N}$. \end{prop} \begin{proof} By Proposition \ref{prop:cptmaslov}, an effective disc class $\beta\in\pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_b)$ with $\mu(\beta)=2$ must intersect either $D^-$ or $D^+$ with multiplicity $1$. In either case, we can classify the effective disc classes by using local charts as in Proposition \ref{prop:maslov2}. \end{proof} \begin{prop} The holomorphic discs representing classes in (\ref{effdisc2}) are Fredholm regular. \end{prop} \begin{proof} Let $u:(D^2,\partial D^2)\to (\overline{\mathfrak{M}}_{u,\lambda},L_b)$ be a holomorphic disc representing $\beta$ in (\ref{effdisc2}). By the same argument as in Proposition \ref{prop:regularity}, regularity of $u$ is equivalent to regularity of $\rho_0\circ u$ if $\beta\cdot [D^{-}]=1$ and of $\rho_{\infty}\circ u$ if $\beta\cdot [D^{+}]=1$, which is then a one-dimensional Riemann--Hilbert problem and follows from Theorems II and III of \cite{Oh}. \end{proof} \begin{prop} \label{prop:gw2} With the notations as in Proposition \ref{prop:cptmaslov2}, we have \[ n_{\beta}=\left\{ \begin{array}{ll} 1 & \mbox{for } \beta=\beta^{\pm}_{j}+\delta_1\alpha_{j_1}+\ldots+\delta_N\alpha_{j_N},\\ 0 & \mbox{otherwise}. \end{array} \right. \] \end{prop} \begin{proof} The proof is identical to that of Proposition \ref{prop:gw}, except that we have $T^d$-bundles $\mathcal{M}_1(L_b,\beta)\to\mathcal{M}_1(\rho_0(L_b),(\rho_0)_*(\beta))$ and $\mathcal{M}_1(L_b,\beta)\to\mathcal{M}_1(\rho_{\infty}(L_b),(\rho_{\infty})_*(\beta))$ depending on whether $\beta\cdot [D^{-}]=1$ or $\beta\cdot [D^{+}]=1$. \end{proof} Similar to Proposition \ref{prop:unobstructed}, we have \begin{prop} $L_b$ is weakly unobstructed. \end{prop} \subsection{Generating functions of open Gromov--Witten invariants and wall-crossing} \label{sec:GF} Denote by $\mathfrak{M}_0^{\vee}$ the semi-flat mirror of $\mathfrak{M}_{u,\lambda}$ (see Section \ref{sec:SYZ}). The semi-flat complex coordinates on $\mathfrak{M}_0^{\vee}$ are defined as follows. For each simply connected affine chart $B^0_{\sigma}$, we have an open subset $\mathfrak{M}_{0,\sigma}^{\vee}=(\pi^\vee)^{-1}(B^0_{\sigma})\subset\mathfrak{M}_0^{\vee}$. For each $\sigma$, we fix a reference point $b_{\sigma}\in B^0_{\sigma}$. Let $\{\gamma_{\sigma,1},\ldots,\gamma_{\sigma,2d}\}\subset H_1(L_{b_{\sigma}})$ be the cycles described in Section \ref{sec:disc2} and note that they form a primitive integer basis.
\begin{definition} \label{def:semiflatcoord} The semi-flat complex coordinates on $\mathfrak{M}_0^{\vee}$ are defined locally on the charts $\mathfrak{M}_{0,\sigma}^{\vee}$ by \[ Z_{\sigma,i}(L_{b},\nabla)=\exp\left(-\int_{\Gamma_{\sigma,i}(b)}\bar{\omega}\right)\mathrm{Hol}_{\nabla}(\gamma_{\sigma,i}(b)),\quad i=1,\ldots,2d, \] where $\gamma_{\sigma,i}(b)\in H_1(L_{b},\mathbb{Z})$ is the parallel transport of $\gamma_{\sigma,i}$, and $\Gamma_{\sigma,i}(b)$ is the cylinder given by parallel-transporting $\gamma_{\sigma,i}$ along a path in $B^0_{\sigma}$ from $b_{\sigma}$ to $b$ (this is well defined since $B^0_{\sigma}$ is simply connected). \end{definition} The transition map between the charts $\mathfrak{M}_{0,\sigma}^{\vee}$ and $\mathfrak{M}_{0,\sigma'}^{\vee}$ is given by (the exponential of) the integral affine transformation between $B^0_{\sigma}$ and $B^0_{\sigma'}$. \begin{definition} \label{def:gen} The generating functions $\bm{u}_j$ (resp. $\bm{v}_j$) for discs emanating from the boundary divisors $D^-_{j}$ (resp. $D^+_{j}$) for $j=1,\ldots,d$ are given by \[ \bm{u}_j(L_b,\nabla)= \sum_{\substack{\beta \in \pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_b) \\ \beta \cdot D^-_{j} = 1, \beta\pitchfork D}} n_\beta \exp\left(-\int_{\beta}\bar{\omega}\right)\mathrm{Hol}_{\nabla}(\partial \beta), \] \[ \bm{v}_j(L_b,\nabla)= \sum_{\substack{\beta \in \pi_2(\overline{\mathfrak{M}}_{u,\lambda},L_b) \\ \beta \cdot D^+_{j} = 1, \beta\pitchfork D}} n_\beta \exp\left(-\int_{\beta}\bar{\omega}\right)\mathrm{Hol}_{\nabla}(\partial \beta). \] \end{definition} Let $C_{\sigma,i}=\exp\left(-\int_{\beta^-_{i}}\bar{\omega}\right)$ and $C_{\sigma,d+i}=\exp\left(-\int_{\alpha_i}\bar{\omega}\right)$ for $i=1,\ldots,d$, where $\beta^-_{i},\alpha_i\in H_2(\overline{\mathfrak{M}}_{u,\lambda},L_{b_{\sigma}})$ are as described in Section \ref{sec:disc2}. Since the cycles $\gamma_{\sigma,d+1},\ldots,\gamma_{\sigma,2d}\in H_1(L_b;\mathbb{Z})$ are monodromy-invariant, we have $C_{\sigma,d+i}Z_{\sigma,d+i}=C_{\sigma',d+i}Z_{\sigma',d+i}$ on $\mathfrak{M}_{0,\sigma}^{\vee}\cap\mathfrak{M}_{0,\sigma'}^{\vee}$ for any pair $\sigma,\sigma'$ of sign vectors. Thus, $\bm{Z}_{i}:=C_{\sigma,d+i}Z_{\sigma,d+i}$ are global holomorphic functions on $\mathfrak{M}_0^{\vee}$. Let $S_{\ell}$ be the circuit corresponding to the relation $u_{\ell}=\sum_{i=1}^d a_{\ell i}u_i$ for $\ell=d+1,\ldots,n$. We have Kähler parameters $q^{\beta_{S_{\ell}}}$ associated to the primitive curve classes $\beta_{S_{\ell}}$ (see Section \ref{sec:circuits}). Let \[\bm{Z}_{\ell}:=q^{\beta_{S_{\ell}}}\prod_{i=1}^d \bm{Z}_{i}^{u_{\ell}^{(i)}}=q^{\beta_{S_{\ell}}}\prod_{i=1}^d \bm{Z}_{i}^{a_{\ell i}}.\] The generating functions can be expressed locally in terms of the semi-flat complex coordinates as follows. \begin{prop} For $j=1,\ldots,d$, denote by $\bm{j}$ the collection of all $k\in\{1,\ldots,n\}$ such that $u_k^{(j)}\ne 0$. On the open subset $(\pi^\vee)^{-1}(B^0_{\sigma}\cap \mathcal{C}_{\bm{h}})\subset\mathfrak{M}_0^{\vee}$, we have \[ \bm{u}_j=C_{\sigma,j}Z_{\sigma,j}(1+\bm{Z}_{j_1})\ldots(1+\bm{Z}_{j_N}), \] where $j_1,\ldots,j_N$ are the indices such that $h_{j_k}=j$, and \[ \bm{v}_j=\exp\left(-\int_{\mathbb{P}^1_{j}}\bar{\omega}\right)C_{\sigma,j}^{-1}Z_{\sigma,j}^{-1}\prod_{k\in \bm{j}\setminus \{j_1,\ldots,j_N\}}(1+\bm{Z}_{k}), \] where $\mathbb{P}^1_{j}$ is the holomorphic sphere obtained from gluing $\beta^-_{j}$ and $\beta^+_{j}$ in $H_2(\overline{\mathfrak{M}}_{u,\lambda},L_{b_{\sigma}})$. \end{prop} \begin{proof} This follows from Propositions \ref{prop:cptmaslov2}, \ref{prop:gw2} and Definitions \ref{def:semiflatcoord} and \ref{def:gen} (see also \cite[Proposition 4.39]{CLL}).
\end{proof} \subsection{SYZ mirror and its resolution} \label{sec:mirror} Set $q^{\beta_{S_{\ell}}}=\exp\left(-\int_{\beta_{S_{\ell}}}\bar{\omega}\right)$. Since the curve classes $\beta_{S_{\ell}}$ are contained in $\mathfrak{M}_{u,\lambda}$, we can rescale $\bar{\omega}$ such that $\exp\left(-\int_{\beta_{S_{\ell}}}\bar{\omega}\right)=\exp\left(-\int_{\beta_{S_{\ell}}}\omega\right)$. By Definition \ref{def:SYZ}, an SYZ mirror is given by $\mathrm{Spec}(R)$, where $R$ is the subring of the coordinate ring of $(\pi^{\vee})^{-1}(B^0\setminus\bigcup_{i=1}^n W_i)$ generated by the functions $\bm{u}_i$ and $\bm{v}_i$ for $i=1,\ldots,d$. By combining the above propositions, we obtain the following. \begin{theorem} \label{thm:SYZmir} An SYZ mirror of $\mathfrak{M}_{u,\lambda}\setminus D^{-}$ is \[ \mathfrak{M}_{u,\lambda}^{\vee}=\left\{((\bm{u}_1,\bm{v}_1,\ldots,\bm{u}_d,\bm{v}_d), (\bm{Z}_{1},\ldots,\bm{Z}_{d}))\in\mathbb{C}^{2d}\times (\mathbb{C}^{\times})^d|\bm{u}_j\bm{v}_j=\prod_{k\in \bm{j}}(1+\bm{Z}_k), j=1,\ldots,d\right\}. \] \end{theorem} For simplicity, we have rescaled the variables $\bm{u}_i$ so that the constant terms $\exp\left(-\int_{\mathbb{P}^1_{j}}\bar{\omega}\right)$ for $j=1,\ldots,d$ do not appear in the above expression. \begin{example} An SYZ mirror of $T^*\mathbb{P}^2$ is the subvariety of $\mathbb{C}^4\times (\mathbb{C}^{\times})^2$ given by \begin{align*} \bm{u}_1\bm{v}_1=(1+\bm{Z}_1)(1+q^{\beta_{S_3}}\bm{Z}_1^{-1}\bm{Z}_2^{-1});\\ \bm{u}_2\bm{v}_2=(1+\bm{Z}_2)(1+q^{\beta_{S_3}}\bm{Z}_1^{-1}\bm{Z}_2^{-1}). \end{align*} Note that this subvariety is singular at the one-dimensional loci $\{\bm{Z}_1=-1, \bm{Z}_2=q^{\beta_{S_3}}, \bm{u}_1=\bm{v}_1=0\}$ and $\{\bm{Z}_2=-1, \bm{Z}_1=q^{\beta_{S_3}}, \bm{u}_2=\bm{v}_2=0\}$. \end{example} In general, $\mathfrak{M}_{u,\lambda}^{\vee}$ is singular. The wall and chamber structure of the Lagrangian torus fibration explained in Section \ref{sec:chambers} gives a resolution of $\mathfrak{M}_{u,\lambda}^{\vee}$, provided that $\mathfrak{M}_{u,\lambda}$ is smooth. In the following we construct this resolution. The construction can be justified by the Lagrangian Floer theory of immersed Lagrangians explained in \cite{HL,HKL18} (see also \cite{seidel97} and \cite{PT17} for more Floer-theoretic aspects of gluing the chambers). We will study the Lagrangian Floer theory further in future work. In the following, we glue the resolution from local charts by hand. \textbf{Step 1.} First we glue the charts corresponding to smooth torus fibers by wall-crossing functions. Recall that we have a collection of tropical hyperplanes which divide the base into chambers (see Figure \ref{fig:chambers} and Section \ref{sec:chambers} for the labels). For each chamber $\mathcal{C}_{\bm{h}}$, we define a chart $U_{\bm{h}}\cong (\mathbb{C}^{\times})^{d}\times (\mathbb{C}^{\times})^{d}$ by \[ U_{\bm{h}}=\left\{\left((\bm{u}^{(\bm{h})}_1,\bm{v}^{(\bm{h})}_1,\ldots,\bm{u}^{(\bm{h})}_d,\bm{v}^{(\bm{h})}_d),(\bm{Z}_1,\ldots,\bm{Z}_d)\right)\in (\mathbb{C}^{\times})^{2d}\times (\mathbb{C}^{\times})^{d}\bigr|\bm{u}^{(\bm{h})}_i\bm{v}^{(\bm{h})}_i=1, i=1,\ldots,d\right\}. \] Consider a pair of chambers $\mathcal{C}_{\bm{h}}$ and $\mathcal{C}_{\bm{h}'}$, where $\bm{h}=(h_1,\ldots,h_n)$ and $\bm{h}'=(h'_1,\ldots,h'_n)$. For $j=1,\ldots,d$, let $J_{j,\bm{h},\bm{h}'}$ be the set of all indices $k\in\{1,\ldots,n\}$ such that $h_k\ne h'_k$ and either $h_k=j$ or $h'_k=j$. These indices label the hyperplanes whose walls between the two chambers involve the $j^{\mathrm{th}}$ direction.
Let $U_{\bm{h},\bm{h}'}\subset U_{\bm{h}}$ be the open subset defined by \[ U_{\bm{h},\bm{h}'}=\left\{\left((\bm{u}^{(\bm{h})}_1,\bm{v}^{(\bm{h})}_1,\ldots,\bm{u}^{(\bm{h})}_d,\bm{v}^{(\bm{h})}_d),(\bm{Z}_1,\ldots,\bm{Z}_d)\right)\in U_{\bm{h}}\bigr|1+\bm{Z}_k\ne 0 \textrm{ for all } k\in\bigcup_{j=1}^d J_{j,\bm{h},\bm{h}'}\right\}. \] Let $\delta^{(\bm{h},\bm{h}')}_j=\delta^{(\bm{h}',\bm{h})}_j=0$ if $J_{j,\bm{h},\bm{h}'}=\emptyset$. Let $\delta^{(\bm{h},\bm{h}')}_j=1$ and $\delta^{(\bm{h}',\bm{h})}_j=0$ if there exists $k\in J_{j,\bm{h},\bm{h}'}$ (and hence for all such $k$) with $h'_k=j$. Let $\delta^{(\bm{h},\bm{h}')}_j=0$ and $\delta^{(\bm{h}',\bm{h})}_j=1$ if there exists $k\in J_{j,\bm{h},\bm{h}'}$ with $h_k=j$. We glue $U_{\bm{h}}$ and $U_{\bm{h}'}$ via the biholomorphism $\psi_{\bm{h},\bm{h}'}:U_{\bm{h},\bm{h}'}\to U_{\bm{h}',\bm{h}}$ defined by \[ \bm{u}^{(\bm{h}')}_j=\bm{u}^{(\bm{h})}_j\left(\prod_{k\in J_{j,\bm{h},\bm{h}'}}(1+\bm{Z}_k)^{-1} \right)^{\delta^{(\bm{h},\bm{h}')}_j}\left(\prod_{k\in J_{j,\bm{h},\bm{h}'}}(1+\bm{Z}_k)\right)^{\delta^{(\bm{h}',\bm{h})}_j}; \] \[ \bm{v}^{(\bm{h}')}_j=\bm{v}^{(\bm{h})}_j\left(\prod_{k\in J_{j,\bm{h},\bm{h}'}}(1+\bm{Z}_k)\right)^{\delta^{(\bm{h},\bm{h}')}_j}\left(\prod_{k\in J_{j,\bm{h},\bm{h}'}}(1+\bm{Z}_k)^{-1} \right)^{\delta^{(\bm{h}',\bm{h})}_j}, \] and the variables $\bm{Z}_i$ are identified trivially. \textbf{Step 2.} We now glue in the charts corresponding to singular SYZ fibers. For each tropical hyperplane $H\subset\mathbb{T}^d$, we can associate to it its dual simplex $\Delta\subset(\mathbb{R}^d)^*$. (Note that $\Delta$ can have dimension less than $d$.) For $k\ge 1$, each $k$-dimensional face $\sigma$ of $\Delta$ corresponds to a $(d-k)$-dimensional tropical stratum $H_{\sigma}$ of $H$. The $0$-dimensional faces (vertices) of $\Delta$ correspond to the tropical chambers adjacent to $H$. We will denote by $|\sigma|$ the dimension of $\sigma$, and note that $\sigma$ is itself a simplex. Let $\Delta_1,\ldots,\Delta_n$ be the dual simplices of the tropical hyperplanes $H_1,\ldots,H_n$. We will abuse notation and denote by $\mathcal{H}$ both the tropical hyperplane arrangement $\mathcal{H}=\{H_i\}_{i=1}^n$ and the union $\mathcal{H}=\bigcup_{i=1}^n H_i$. We define a stratification of $\mathcal{H}$ as follows. Let $\bm{\sigma}=\{\sigma_{j_1},\ldots,\sigma_{j_{\nu}}\}$ be a collection such that $\sigma_{j_i}$ is a face of $\Delta_{j_i}$, and set $|\bm{\sigma}|:=\sum_{\sigma_j\in\bm{\sigma}}|\sigma_j|$. Let $\mathcal{H}_{\bm{\sigma}}\subset\mathcal{H}$ be the set \[ \mathcal{H}_{\bm{\sigma}}=\bigcap_{\sigma_j\in\bm{\sigma}} (H_j)_{\sigma_j}. \] We define the $0$-dimensional strata of $\mathcal{H}$ to be points of the form $\mathcal{H}_{\bm{\sigma}}$ where $|\bm{\sigma}|=d$. We then define the $\ell$-dimensional strata of $\mathcal{H}$ for $\ell\ge 1$ to be the connected components of $\mathcal{H}_{\bm{\sigma}}\setminus\mathcal{H}_{\ell-1}$, where $|\bm{\sigma}|=d-\ell$, and $\mathcal{H}_{\ell-1}$ denotes the union of the $(\ell-1)$-dimensional strata $\Theta$ of $\mathcal{H}$. Let $\{e_1,\ldots,e_d\}$ be the standard basis on $(\mathbb{R}^d)^*$, and denote by $\langle\cdot,\cdot\rangle$ the standard pairing between $\mathbb{R}^d$ and $(\mathbb{R}^d)^*$. For each $\bm{\sigma}$ with $1\le|\bm{\sigma}|\le d$, we associate to it a collection of primitive and linearly independent vectors $\{\vec{a}^{\bm{\sigma}}_1,\ldots,\vec{a}^{\bm{\sigma}}_{\ell}\}$ parallel to $\mathcal{H}_{\bm{\sigma}}$.
For each $\sigma_{j_k}\in\bm{\sigma}$, we associate to it a collection of primitive vectors $\{\vec{a}^{\sigma_{j_k}}_1,\ldots,\vec{a}^{\sigma_{j_k}}_{|\sigma_{j_k}|+1}\}$ such that $\vec{a}^{\sigma_{j_k}}_i$ is normal to the $i^{\mathrm{th}}$ facet of $\sigma_{j_k}$ (notice that the number of facets of $\sigma_{j_k}$ is $|\sigma_{j_k}|+1$), and parallel to $\bigcap_{\sigma_j\in\bm{\sigma},j\ne j_k} (H_j)_{\sigma_j}$. In particular, we can choose $\{\vec{a}^{\sigma_{j_k}}_1,\ldots,\vec{a}^{\sigma_{j_k}}_{|\sigma_{j_k}|+1}\}$ such that $\vec{a}^{\sigma_{j_k}}_{|\sigma_{j_k}|+1}=-\sum_{i=1}^{|\sigma_{j_k}|} \vec{a}^{\sigma_{j_k}}_{i}$. For each $\ell$-dimensional stratum $\Theta$, there exists a unique collection $\bm{\sigma}=\{\sigma_{j_1},\ldots,\sigma_{j_{\nu}}\}$ such that $|\bm{\sigma}|=d-\ell$ and $\Theta\subset\mathcal{H}_{\bm{\sigma}}$. We will call a stratum $\Theta$ \textit{admissible} if $\bigcap_{k=1}^\nu H_{\mathbb{R},j_k}\ne\emptyset$. Now, we associate to each admissible stratum $\Theta$ a chart $U_{\Theta}$ defined by \[ U_{\Theta}=\left\{\begin{array}{l}\left((x^{(\Theta,\sigma_{j_i})}_{1},\ldots,x^{(\Theta,\sigma_{j_i})}_{|\sigma_{j_i}|+1} )_{i=1,\ldots,\nu},(y^{(\Theta)}_1,\ldots,y^{(\Theta)}_{\ell}),(\bm{Z}_1,\ldots,\bm{Z}_d)\right)\in\left(\prod_{i=1}^{\nu}\mathbb{C}^{|\sigma_{j_i}|+1}\right)\times(\mathbb{C}^{\times})^{\ell}\times (\mathbb{C}^{\times})^d\\ \text{ s.t. } \prod_{k=1}^{|\sigma_{j_i}|+1}x^{(\Theta,\sigma_{j_i})}_{k}=1+\bm{Z}_{j_i}, i=1,\ldots,\nu \end{array}\right\}. \] We glue $U_{\Theta}$ to the resulting space from Step 1. For any tropical chamber $\mathcal{C}_{\bm{h}}$ adjacent to $\Theta$, we define an open embedding $\psi_{\bm{h},\Theta}:U_{\bm{h}}\to U_{\Theta}$ by \[ y^{(\Theta)}_k=\prod_{i=1}^{d}(\bm{u}^{(\bm{h})}_i)^{\langle e_i,\vec{a}^{\bm{\sigma}}_k\rangle}, \quad k=1,\ldots,\ell; \] \[ x^{(\Theta,\sigma_{j_i})}_{k}=\prod_{m=1}^d (\bm{u}^{(\bm{h})}_m)^{\langle e_m,\vec{a}^{\sigma_{j_i}}_k\rangle} \qquad i=1,\ldots,\nu, \] if the $k^{\mathrm{th}}$ facet of $\sigma_{j_i}$ is adjacent to the vertex of $\sigma_{j_i}$ corresponding to the chamber $\mathcal{C}_{\bm{h}}$, and \[ x^{(\Theta,\sigma_{j_i})}_{k}=(1+\bm{Z}_{j_i})\prod_{m=1}^d (\bm{u}^{(\bm{h})}_m)^{\langle e_m,\vec{a}^{\sigma_{j_i}}_k\rangle} \qquad i=1,\ldots,\nu, \] otherwise. The variables $\bm{Z}_i$ are identified trivially. We denote the smooth variety obtained from this gluing by $\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}}$. We have $H^0(\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}},\mathcal{O}_{\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}}})=R$. Thus, the resolution $\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}}\to \mathfrak{M}_{u,\lambda}^{\vee}$ is the affinization map. Figure \ref{fig:hypertoric-gluing} shows an example of the above gluing procedure. \begin{remark} $\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}}$ is equipped with a holomorphic symplectic form $\sum_{i=1}^d d\log\bm{u}_i\wedge d\log\bm{Z}_i$ and a holomorphic volume form \[ \Omega^{\vee}=\bigwedge_{i=1}^d d\log\bm{u}_i\wedge d\log\bm{Z}_i, \] which is preserved under the change of coordinates (up to signs). Let $\gamma\in H_{2d}(\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}})$ and consider the period integrals \begin{equation} \label{period} \int_{\gamma}\Omega^{\vee}. \end{equation} Let $\bm{\pi}=(\bm{Z}_1,\ldots,\bm{Z}_d):\widetilde{\mathfrak{M}_{u,\lambda}^{\vee}}\to (\mathbb{C}^{\times})^d$ be the projection map. $\bm{\pi}$ is a $(\mathbb{C}^{\times})^d$-fibration over the base $(\mathbb{C}^{\times})^d$.
Denote by $\bm{H}$ the union of multiplicative hyperplanes \[ \bm{H}=\bigcup_{i=1}^n \{\bm{Z}\in(\mathbb{C}^{\times})^d|q_i\bm{Z}^{\hat{\lambda}_{\mathbb{R},i}}_i=-1\}, \] where $q_i=1$ for $i=1,\ldots,d$, and $q_{\ell}=q^{\beta_{S_{\ell}}}$ for $\ell=d+1,\ldots,n$. The period integrals (\ref{period}) reduce to integrals \[ \int_{\gamma'}\prod_{i=1}^d d\log\bm{Z}_i \] over relative cycles $\gamma'\in H_d((\mathbb{C}^{\times})^d,\bm{H})$ by a dimension reduction similar to the case of toric varieties (see e.g. \cite{CLT11}). In \cite{MS}, McBreen and Shenfeld observed that certain period integrals on $(\mathbb{C}^{\times})^d$ with local coefficients satisfy the same GKZ system as the $T^d\times\mathbb{C}^{\times}$-equivariant quantum cohomology. \end{remark} \begin{figure}[htb!] \includegraphics[scale=0.65]{hypertoric-gluing.pdf} \caption{An example of gluing. The left is the tropical hyperplane arrangement, and the right is the real hyperplane arrangement. Each admissible intersection stratum gives a local chart, and they are glued to adjacent chambers according to the vectors associated to the variables.} \label{fig:hypertoric-gluing} \end{figure} \subsection{Multiplicative hypertoric varieties} \label{sec:multiplicative} In this subsection, we show that smooth multiplicative hypertoric varieties (see e.g. \cite{ganev14}) provide alternative resolutions of our SYZ mirrors. Let us first review the construction of multiplicative hypertoric varieties. Let \[ (T^*\mathbb{C}^{n})^{\circ}=\{(z,w)\in T^*\mathbb{C}^n|1+z_iw_i\ne 0, i=1,\ldots,n\}. \] We equip $(T^*\mathbb{C}^{n})^{\circ}$ with the holomorphic symplectic form \[ \omega^{\circ}=\sum_{i=1}^n \cfrac{dz_i\wedge dw_i}{1+z_iw_i}. \] Let $\vec{t}=(t_1,\ldots,t_n)\in (\mathbb{C}^{\times})^n$ act on $(T^*\mathbb{C}^{n})^{\circ}$ by \[ \vec{t}\cdot(z,w)=(t_1z_1,t_1^{-1}w_1,\ldots,t_nz_n,t_n^{-1}w_n). \] This action comes with a $(\mathbb{C}^{\times})^n$-valued moment map (for the general theory of Lie group-valued moment maps, see \cite{AMM}) $\tilde{\bm{\mu}}:(T^*\mathbb{C}^{n})^{\circ}\to (\mathbb{C}^{\times})^n$ given by \[ \tilde{\bm{\mu}}(z,w)=\left((1+z_1w_1),\ldots,(1+z_nw_n)\right). \] Let $K_{\mathbb{C}}\subset (\mathbb{C}^{\times})^n$ be the subtorus defined by the collection of vectors $u$ as in Section \ref{sec:hypertoric}. Let $(\iota^*_{ij})_{1\le i\le n-d, 1\le j\le n}$ be the matrix associated to $\iota^*:(\mathfrak{t}^n)^*\to \mathfrak{k}^*$. The multiplicative moment map $\bm{\mu}:(T^*\mathbb{C}^{n})^{\circ}\to K_{\mathbb{C}}$ of the $(\mathbb{C}^{\times})^n$-action on $(T^*\mathbb{C}^{n})^{\circ}$ restricted to $K_{\mathbb{C}}$ is given by \[ \bm{\mu}(z,w)=\left(\prod_{j=1}^n (1+z_jw_j)^{\iota^*_{1j}},\ldots,\prod_{j=1}^n (1+z_jw_j)^{\iota^*_{(n-d)j}}\right). \] Let $\eta=(\eta_1,\ldots,\eta_{n-d})\in K_{\mathbb{C}}$, and let $\chi:K_{\mathbb{C}}\to \mathbb{C}^{\times}$ be a character. We define a \textit{multiplicative hypertoric variety} to be the GIT quotient \[ X_{u,\chi,\eta}=\bm{\mu}^{-1}(\eta)//_{\chi} K_{\mathbb{C}}, \] or equivalently, \[ X_{u,\chi,\eta}=\mathrm{Proj}\left(\bigoplus_{k\ge 0} \mathcal{O}\left(\bm{\mu}^{-1}(\eta)\right)^{\chi^k}\right). \] Set $q=\left((-1)^{\sigma_{d+1}+1}q^{\beta_{S_{d+1}}},\ldots,(-1)^{\sigma_{n}+1}q^{\beta_{S_n}}\right)\in K_{\mathbb{C}}$, where $\sigma_{\ell}$ is the parity of $\sum_{i=1}^d a_{\ell i}$, and $a_{\ell i}$ are the coefficients in $u_{\ell}=\sum_{i=1}^d a_{\ell i}u_i$. Consider the multiplicative hypertoric variety $X_{u,0,q}$.
We have \[ X_{u,0,q}=\mathrm{Spec}\left(\mathbb{C}[\bm{\mu}^{-1}(q)]^{K_{\mathbb{C}}}\right), \] where $\mathbb{C}[\bm{\mu}^{-1}(q)]^{K_{\mathbb{C}}}$ denotes the $K_{\mathbb{C}}$-invariant subring of $\mathbb{C}[\bm{\mu}^{-1}(q)]$. Let $\Pi=(\pi^*_{ji})_{1\le j\le n, 1\le i\le d}$ be the matrix associated to the map $\pi^*:(\mathfrak{t}^d)^*\to (\mathfrak{t}^n)^*$ with respect to the ordered basis $u_1,\ldots,u_d$. $(\pi^*_{ji})_{1\le j,i\le d}$ is the $d\times d$ identity matrix. Since $\Pi$ is \textit{totally unimodular}, the remaining entries take values in $\{-1,0,1\}$. The columns of $\Pi$ correspond to $K_{\mathbb{C}}$-invariant polynomials $\bm{z}_i=\prod_{j=1}^n x_{ij}^{|\pi^*_{ji}|}$ and $\bm{w}_i=\prod_{j=1}^n y_{ij}^{|\pi^*_{ji}|}$, where $x_{ij}=z_j$, $y_{ij}=w_j$ if $\pi^*_{ji}\ge 0$, and $x_{ij}=w_j$, $y_{ij}=z_j$ if $\pi^*_{ji}<0$. Denote by $S$ the multiplicative system generated by $\bm{z}_i$, $\bm{w}_i$, and $z_iw_i$ for $i=1,\ldots,d$. \begin{lemma} \label{lemma:gen} $S^{-1}\mathbb{C}[\bm{\mu}^{-1}(q)]^{K_{\mathbb{C}}}$ is generated by $\bm{z}_i^{\pm 1},\bm{w}_i^{\pm 1}$, and $(z_iw_i)^{\pm 1}$ for $i=1,\ldots,d$. \end{lemma} \begin{proof} Let $f=\prod_{i=1}^n z_i^{a_i}\prod_{i=1}^n w_i^{b_i}$ be an arbitrary nonconstant Laurent monomial in $S^{-1}\mathbb{C}[\bm{\mu}^{-1}(q)]$. If, for every $i=1,\ldots,d$, $f$ is divisible by neither $\bm{z}_i^{\pm 1}$ nor $\bm{w}_i^{\pm 1}$, then the vector $(a_1-b_1,\ldots,a_n-b_n)$ is not in the kernel of $\iota^*:(\mathfrak{t}^n)^*\to \mathfrak{k}^*$ unless it is the zero vector. In the first case, $f$ is not $K_{\mathbb{C}}$-invariant, while in the second case, $f$ is a product of the $(z_iw_i)^{\pm 1}$. \end{proof} \begin{prop} \label{prop:birational} For a generic choice of $\chi$, $X_{u,\chi,q}$ is a resolution of $\mathfrak{M}_{u,\lambda}^{\vee}$. \end{prop} \begin{proof} We have a ring homomorphism $\varphi:R\to\mathbb{C}[\bm{\mu}^{-1}(q)]^{K_{\mathbb{C}}}$ given by \[ \varphi(\bm{u}_i)=(-1)^{\mathrm{sgn}\left(\sum_{j=1}^n |\pi^*_{ji}|\right)}\bm{z}_i,\quad \varphi(\bm{v}_i)=\bm{w}_i,\quad \varphi(\bm{Z}_{i})=-1-z_iw_i,\text{ for } i=1,\ldots,d. \] Denote by $R'$ the ring obtained by localizing $R$ at the multiplicative system generated by $\bm{u}_i$, $\bm{v}_i$, and $1+\bm{Z}_i$ for $i=1,\ldots,d$. The induced map $\varphi_*:X_{u,0,q}\to\mathfrak{M}_{u,\lambda}^{\vee}$ is birational since $\varphi$ descends to a ring isomorphism $R'\cong S^{-1}\mathbb{C}[\bm{\mu}^{-1}(q)]^{K_{\mathbb{C}}}$. When the Kähler parameters of $\mathfrak{M}_{u,\lambda}$ are generic (i.e. $\mathcal{H}_{\mathbb{R}}$ is simple), $X_{u,\chi,q}$ is smooth and independent of $\chi$, and therefore we have a resolution of $\mathfrak{M}_{u,\lambda}^{\vee}$ by $X_{u,\chi,q}$. On the other hand, if the Kähler parameters are not generic, $X_{u,0,q}$ is singular. However, the affinization map $X_{u,\chi,q}\to X_{u,0,q}=\operatorname{Spec} H^0(X_{u,\chi,q},\mathcal{O}_{X_{u,\chi,q}})$ is a resolution. In this case, the composition $X_{u,\chi,q}\to X_{u,0,q}\to\mathfrak{M}_{u,\lambda}^{\vee}$ is a resolution. \end{proof} \bibliographystyle{amsalpha}
{ "timestamp": "2019-09-04T02:30:44", "yymm": "1804", "arxiv_id": "1804.05506", "language": "en", "url": "https://arxiv.org/abs/1804.05506" }
\section{Introduction} With the advent of social media, irony and sarcasm detection has become an active area of research in Natural Language Processing (NLP) \cite{joshi2016automatic,riloff,joshi2015,ghosh2017role}. Most computational studies have focused on building state-of-the-art models to detect whether an utterance or comment is ironic/sarcastic\footnote{We treat irony and sarcasm similarly in this paper.} or not, sometimes without theoretical grounding. In linguistics and discourse studies, \citeauthor{attardo2000ironya} (2000) and later \citeauthor{burgers2010verbal} (2010) have studied two theoretical aspects of irony in text: \emph{irony factors} and \emph{irony markers}. Irony factors are characteristics of ironic utterances that cannot be removed without destroying the irony. In contrast, irony markers are meta-communicative clues that ``alert the reader to the fact that a sentence is ironical'' \cite{attardo2000ironya}. They can be removed, and the utterance is still ironic. In this paper, we examine the role of irony markers in social media for irony recognition. Although punctuation, capitalization, and hyperboles have previously been used as features in irony detection \cite{bamman2015contextualized,muresanjasist2016}, here we thoroughly analyze a set of theoretically-grounded types of irony markers, such as tropes (e.g., metaphors), morpho-syntactic indicators (e.g., tag questions), and typographic markers (e.g., emoji), and their use in ironic utterances. Consider the two examples of irony from $Twitter$ and $Reddit$ given in Table \ref{table:examples}. \begin{table} \centering \begin{tabular}{ p{1.5cm}|p{6cm} } \hline Platform & \multicolumn{1}{c}{Utterances} \\ \hline {$Reddit$} & Are you telling me iPhone 5 is only marginally better than iPhone 4S? I thought we were reaching a golden age with this game-changing device. /s\\ {$Twitter$} & With 1 follower I must be AWESOME. :P \#ironic \\ \hline \end{tabular} \caption{Use of irony markers on two social media platforms} \label{table:examples} \end{table} Both utterances are labeled as ironic by their authors (using hashtags on $Twitter$ and the /s marker on $Reddit$). In the $Reddit$ example, the author uses several irony markers, such as a \emph{rhetorical question} (``Are you telling me \dots'') and a \emph{metaphor} (``golden age''). In the $Twitter$ example, we notice the use of capitalization (``AWESOME'') and an emoticon (``:P'' (tongue out)) that the author uses to alert the readers that it is an ironic tweet. We present three contributions in this paper. First, we provide a detailed investigation of a set of theoretically-grounded irony markers (e.g., tropes, morpho-syntactic, and typographic markers) in social media. We conduct classification and frequency analyses based on their occurrence. Second, we analyze and compare the use of irony markers on two social media platforms ($Reddit$ and $Twitter$). Third, we provide an analysis of markers on topically different social media content (e.g., technology vs. political subreddits). \section{Data} \emph{\textbf{Twitter:}} We use a set of 350K tweets for our experiments. The ironic/sarcastic tweets are collected using hashtags such as \#irony, \#sarcasm, and \#sarcastic, whereas the non-sarcastic tweets do not contain these hashtags, but they might include sentiment hashtags such as \#happy, \#love, \#sad, and \#hate (similar to \cite{gonzalez,ghoshguomuresan2015EMNLP}).
As pre-processing, we removed retweets, spam, duplicates, and tweets written in languages other than English. Also, we deleted all tweets where the hashtags of interest were not located at the very end (e.g., we eliminated ``\#sarcasm is something that I love''). We lowercased the tweets, except for words in which all the characters are uppercased. \emph{\textbf{Reddit:}} \citeauthor{khodak2017large} (2018) introduced an extensive collection of sarcastic and non-sarcastic posts collected from different subreddits. On $Reddit$, authors mark the sarcastic intent of their posts by adding ``/s'' at the end of a post/comment. We collected 50K instances from the corpus for our experiments (denoted as $Reddit$), where the sarcastic and non-sarcastic replies are at least two sentences long (i.e., we discard posts that are too short). For brevity, we denote ironic utterances as $I$ and non-ironic utterances as $NI$. Both the $Twitter$ and $Reddit$ datasets are balanced between the $I$ and $NI$ classes. We use 80\% of the datasets for training, 10\% for development, and the remaining 10\% for testing. \section{Irony Markers} Three types of markers are used as features: tropes, morpho-syntactic markers, and typographic markers. \subsection{Tropes:} Tropes are figurative uses of expressions. \begin{itemize} \item Metaphors - Metaphors often facilitate ironic representation and are used as markers. We have drawn metaphors from different sources (e.g., 884 and 8,600 adjective/noun metaphors from \cite{tsvetkov2014metaphor} and \cite{gutierrez2016literal}, respectively) and used them as binary features. We also evaluate the metaphor detector of \cite{rei2017grasping} over the $Twitter$ and $Reddit$ datasets. We considered metaphor candidates that have precision $\ge$ 0.75 (see \citeauthor{rei2017grasping} (2017)). \item Hyperbole - Hyperboles or intensifiers are commonly used in irony because speakers frequently overstate the magnitude of a situation or event. We use terms that are denoted as ``strong subjective'' (positive/negative) in the MPQA corpus \cite{wilson2005recognizing} as hyperboles. Apart from using hyperboles directly as a binary feature, we also use their sentiment as features. \item Rhetorical Questions - Rhetorical questions (for brevity, $RQ$s) have the structure of a question but are not typical information-seeking questions. We follow the hypothesis introduced by \citeauthor{oraby2017you} (2017) that questions in the middle of a comment are more likely to be $RQ$s, since questions followed by text cannot be typical information-seeking questions. The presence of an $RQ$ is used as a binary feature. \end{itemize} \subsection{Morpho-syntactic (MS) irony markers:} This type of marker appears at the morphological and syntactic levels of an utterance. \begin{itemize} \item Exclamation - Exclamation marks emphasize a sense of surprise on the literal evaluation that is reversed in the ironic reading \cite{burgers2010verbal}. We use two binary features, for single and multiple uses of the marker. \item Tag questions - We built a list of tag questions (e.g., ``didn't you?'', ``aren't we?'') from a grammar site and use them as binary indicators.\footnote{http://www.perfect-english-grammar.com/tag-questions.html} \item Interjections - Interjections seem to undermine a literal evaluation and occur frequently in ironic utterances (e.g., ``yeah'', ``wow'', ``yay'', ``ouch''). Similar to tag questions, we assembled interjections (a total of 250) from different grammar sites.
\end{itemize} \subsection{Typographic irony markers:} \begin{itemize} \item Capitalization - Users often capitalize words to mark their ironic use (e.g., the use of ``GREAT'', ``SO'', and ``WONDERFUL'' in the ironic tweet ``GREAT i'm SO happy shattered phone on this WONDERFUL day!!!''). \item Quotation mark - Users regularly put quotation marks around words to stress the ironic meaning (e.g., ``great'' instead of GREAT in the above example). \item Other punctuation marks - Punctuation marks such as ``?'', ``.'', ``;'' and their various uses (e.g., single/multiple/mix of two different punctuation marks) are used as features. \item Hashtag - Particularly on $Twitter$, hashtags often represent the sentiment of the author. For example, in the ironic tweet ``nice to wake up to cute text. \#suck'', the hashtag ``\#suck'' depicts the negative sentiment. We use a binary sentiment feature (positive or negative) to identify the sentiment of the hashtag by comparing it against the MPQA sentiment lexicon. Often multiple words are combined in a hashtag without spacing (e.g., ``fun'' and ``night'' in \#funnight). We use an off-the-shelf tool to split words in such hashtags and then check the sentiment of the words.\footnote{https://github.com/matchado/HashTagSplitter} \item Emoticon - Emoticons are frequently used to emphasize the ironic intent of the user. In the example ``I love the weather ;) \#irony'', the emoticon ``;)'' (wink) alerts the reader to a possible ironic interpretation of weather (i.e., bad weather). We collected a comprehensive list of emoticons (over one hundred) from Wikipedia and also used standard regular expressions to identify emoticons in our datasets.\footnote{http://sentiment.christopherpotts.net/code-data/} Besides using the emoticons directly as binary features, we use their sentiment as features as well (e.g., ``wink'' is regarded as positive sentiment in MPQA). \item Emoji - Emojis are like emoticons, but they are actual pictures, and they have recently become very popular in social media. Figure \ref{figure:emoji} shows a tweet with two emojis (the ``unamused'' and ``confounded'' faces, respectively) used as markers. We use an emoji library of 1,400 emojis to identify the particular emoji used in ironic utterances and use them as binary indicators.\footnote{https://github.com/vdurmont/emoji-java} \begin{figure}[t] \center \includegraphics[width=3in]{tweet_emoji.png} \caption{Utterance with emoji (best viewed in color)} \label{figure:emoji} \end{figure} \end{itemize} \section{Classification Experiments and Results} We first conduct a binary classification task to decide whether an utterance (e.g., a tweet or a $Reddit$ post) is ironic or non-ironic, based exclusively on the irony marker features. We use a Support Vector Machine (SVM) classifier with a linear kernel \cite{fan2008liblinear}. Table \ref{table:tweetresults} and Table \ref{table:redditresults} present the results of the ablation tests for $Twitter$ and $Reddit$. We report Precision ($P$), Recall ($R$), and $F1$ scores for both the $I$ and $NI$ categories.
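To make the marker features concrete, the following is a minimal Python sketch of how several of the typographic and morpho-syntactic markers described above can be detected. It is an illustration rather than our exact implementation: the word lists are truncated placeholders (the full lists contain 250 interjections and over one hundred emoticons), and the emoticon regular expression is deliberately simplified.
\begin{verbatim}
import re

# Placeholder lists; the full lists are assembled from grammar
# sites and Wikipedia as described above.
INTERJECTIONS = {"yeah", "wow", "yay", "ouch"}
TAG_QUESTIONS = ("didn't you?", "aren't we?")
EMOTICON_RE = re.compile(r"[:;=8][-']?[()DdPp/\\]")  # simplified

def marker_features(text):
    """Binary irony-marker features for a single utterance."""
    tokens = text.split()
    feats = {}
    # Typographic markers
    feats["capitalization"] = any(t.isupper() and len(t) > 1
                                  for t in tokens)
    feats["quotation"] = '"' in text
    feats["single_exclamation"] = text.count("!") == 1
    feats["multiple_exclamation"] = text.count("!") > 1
    feats["emoticon"] = bool(EMOTICON_RE.search(text))
    # Morpho-syntactic markers
    feats["tag_question"] = any(tq in text.lower()
                                for tq in TAG_QUESTIONS)
    feats["interjection"] = any(t.lower().strip(".,!?") in INTERJECTIONS
                                for t in tokens)
    # Rhetorical question heuristic: a '?' that is not utterance-final
    q = text.find("?")
    feats["rhetorical_question"] = 0 <= q < len(text.rstrip()) - 1
    return feats
\end{verbatim}
Sentiment-based features (e.g., hashtag and emoticon sentiment checked against the MPQA lexicon) and the trope features would be added analogously.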
\begin{table} \centering \begin{small} \begin{tabular}{lcccc} \hline Features & Category & $P$ & $R$ & $F1$ \\ \hline \multirow {2} {*} {all} & $I$ & 66.93 & \textbf{77.32} & \textbf{71.75} \\ & $NI$ & \textbf{73.13} & 61.78 & \textbf{66.97} \\ \hline \multirow {2} {*} {- tropes} & $I$ & \textbf{67.70} & {48.00} & 56.18 \\ & $NI$ & {59.70} & \textbf{77.09} & \textbf{67.29} \\ \hline \multirow {2} {*} {- MS} & $I$ & 63.59 & \textbf{78.09} & 70.10\\ & $NI$ & \textbf{71.59} & {55.27} & 62.38 \\ \hline \multirow {2} {*} {- typography} & $I$ & 57.30 & 77.95 & 66.05 \\ & $NI$ & 65.49 & 41.86 & 51.07 \\ \hline \end{tabular} \end{small} \caption{Ablation tests of irony markers for $Twitter$. \textbf{Bold} marks the best scores (in \%).} \label{table:tweetresults} \end{table} \begin{table} \centering \begin{small} \begin{tabular}{lcccc} \hline Features & Category & $P$ & $R$ & $F1$ \\ \hline \multirow {2} {*} {all} & $I$ & \textbf{73.16} & {48.52} & \textbf{58.35} \\ & $NI$ & \textbf{61.49} & \textbf{82.20} & \textbf{70.35} \\ \hline \multirow {2} {*} {- tropes} & $I$ & {71.45} & \textbf{50.36} & \textbf{59.08} \\ & $NI$ & \textbf{61.67} & {79.88} & \textbf{69.61} \\ \hline \multirow {2} {*} {- MS} & $I$ & 58.37 & \textbf{49.36} & 53.49\\ & $NI$ & {56.13} & {64.8} & 60.16 \\ \hline \multirow {2} {*} {- typography} & $I$ & 73.29 & 48.52 & 58.39 \\ & $NI$ & \textbf{61.52} & \textbf{82.32} & \textbf{70.42} \\ \hline \end{tabular} \end{small} \caption{Ablation tests of irony markers for $Reddit$ posts. \textbf{Bold} marks the best scores (in \%).} \label{table:redditresults} \end{table} Table \ref{table:tweetresults} shows that for ironic utterances in $Twitter$, removing tropes has the maximum negative effect on Recall, with a reduction in $F1$ score of 15\%. This is primarily due to the removal of hyperboles, which frequently appear in ironic utterances on $Twitter$. Removing typographic markers (e.g., emojis, emoticons, etc.) has the maximum negative effect on Precision for the irony $I$ category, since particular emojis and emoticons appear regularly in ironic utterances (Table \ref{table:tweettopf}). For $Reddit$, Table \ref{table:redditresults} shows that the removal of typographic markers such as emoticons does not affect the $F1$ scores, whereas the removal of morpho-syntactic markers (e.g., tag questions, interjections) has a negative effect on the $F1$. Table \ref{table:tweettopf} and Table \ref{table:reddittopf} present the most discriminative features for both categories, based on the feature weights learned during the SVM training for $Twitter$ and $Reddit$, respectively. Table \ref{table:tweettopf} shows that for $Twitter$, typographic features such as emojis and emoticons have the highest feature weights for both categories. Interestingly, we observe that in ironic tweets users often express negative sentiment directly via emojis (e.g., angry face, rage), whereas for non-ironic utterances, emojis with positive sentiments (e.g., hearts, wedding) are more common. For $Reddit$ (Table \ref{table:reddittopf}), we observe that instead of emojis, other markers such as exclamation marks, negative tag questions, and metaphors are discriminatory markers for the irony category. In contrast, for the non-irony category, positive tag questions and negative sentiment hyperboles are influential features.
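The feature ranking behind Tables \ref{table:tweettopf} and \ref{table:reddittopf} can be sketched as follows, continuing the sketch above (\texttt{feature\_names} is a hypothetical list aligned with the columns of \texttt{X}):
\begin{verbatim}
# Features with large positive SVM weights push an utterance towards the
# ironic class I; large negative weights push it towards NI.
w = clf.coef_.ravel()
feature_names = ["marker_%d" % i for i in range(len(w))]  # placeholders
order = np.argsort(w)
top_I = [feature_names[i] for i in order[::-1][:10]]  # strongest I markers
top_NI = [feature_names[i] for i in order[:10]]       # strongest NI markers
print(top_I, top_NI)
\end{verbatim}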
\begin{table} \centering \begin{small} \begin{tabular}{p{1cm}p{7cm}} \hline Category & Top features \\ \hline $I$ & emoticons: annoyed (``-\_-''), perplexed (``:-/''); emojis: angry face/monster, unamused, expressionless, confounded, rage, neutral\_face, thumbsdown; negative\_tag questions (``isn't it?'', ``don't they?'') \\ \hline $NI$ & emojis: birthday, tophat, hearts, wedding, rose, ballot\_box\_with\_check; quotations, hashtags (positive sentiment), emoticons: happy (``:)''), overjoyed (``$\wedge\_\wedge$'') \\ \hline \end{tabular} \end{small} \caption{Irony markers based on feature weights for $Twitter$} \label{table:tweettopf} \end{table} \begin{table} \centering \begin{small} \begin{tabular}{p{1cm}p{7cm}} \hline Category & Top features \\ \hline $I$ & exclamation (single, multiple), negative\_tag questions (``isn't it?'', ``don't they?''), interjections, presence of metaphors, positive sentiment hyperbolic words (e.g., ``notably'', ``goodwill'', ``recommendation'') \\ \hline $NI$ & negative sentiment hyperbolic words (e.g., ``vile'', ``lowly'', ``fanatic''), emoticon: laugh (``:))''), positive\_tag questions (``is it?'', ``are they?''), punctuation such as periods/multiple periods \\ \hline \end{tabular} \end{small} \caption{Irony markers based on feature weights for $Reddit$} \label{table:reddittopf} \end{table} \begin{table*} \centering \begin{small} \begin{tabular}{p{1.5cm}p{2cm}p{2cm}p{2cm}p{2.6cm}p{2.6cm}} \hline \multicolumn{2}{c}{Irony Markers} & \multicolumn{4}{c}{Genres} \\ Type & Marker & Technology (a) & Sports (b) & Politics (c) & Religion (d) \\ \hline & Metaphor & 0.01 (0.06) & 0.002 (0.05) & 0.02 (0.12) & 0.01 (0.10)\\ Trope & Hyperbole & 0.19 (0.39) & 0.34 (0.48)$^{a{^{**}}}$ & 0.74 (0.44)$^{(a,b){^{**}}}$ & 0.76 (0.43)$^{{(a,b)^{**}}, c{^*}}$\\ & $RQ$ & 0.06 (0.23) & 0.11 (0.32)$^{a{^{**}}}$ & 0.22 (0.41)$^{(a,b){^{**}}}$ & 0.2 (0.4)$^{(a,b){^{**}}}$\\ \hline & Exclamation & 0.09 (0.29) & 0.14 (0.34)$^{a{^{**}}}$ & 0.42 (0.49)$^{(a,b){^{**}}}$ & 0.37 (0.48)$^{(a,b,c){^{**}}}$\\ MS & Tag Question & 0.03 (0.16) & 0.05 (0.23)$^{a{^{**}}}$ & 0.11 (0.32)$^{(a,b){^{**}}}$ & 0.1 (0.30)$^{(a,b){^{**}}}$ \\ & Interjection & 0.13 (0.34) & 0.23 (0.42)$^{a{^{**}}}$ & 0.45 (0.50)$^{(a,b){^{**}}}$ & 0.52 (0.5)$^{(a,b,c){^{**}}}$\\ \hline & Capitalization & 0.04 (0.19) & 0.08 (0.27)$^{a{^{**}}}$ & 0.20 (0.40)$^{(a,b){^{**}}}$ & 0.1 (0.31)$^{(a,b,c){^{**}}}$\\ Typographic & Punctuations & 0.23 (0.42) & 0.45 (0.50)$^{a{^{**}}}$ & 0.84 (0.36)$^{(a,b){^{**}}}$ & 0.89 (0.31)$^{(a,b,c){^{**}}}$\\ \hline \end{tabular} \end{small} \caption{Frequency of irony markers in different genres (subreddits). The mean and the SD (in brackets) are reported. $^{x{^{**}}}$ and $^{x{^{*}}}$ depict significance at $p\le 0.005$ and $p\le 0.05$, respectively.} \label{table:meangenre} \end{table*} \subsection{Frequency analysis of markers} We also investigate the occurrence of markers on the two platforms via frequency analysis (Table \ref{table:freq}). We report the mean occurrence per utterance and the standard deviation (SD) of each marker. Table \ref{table:freq} shows that markers such as hyperboles, punctuation, and interjections are popular on both platforms. Emojis and emoticons, although the two most popular markers on $Twitter$, are almost unused on $Reddit$. Exclamations and $RQ$s are more common in the $Reddit$ corpus.
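This kind of frequency analysis, together with the type-level significance tests described next, can be sketched as follows (\texttt{counts} is a hypothetical per-utterance marker-occurrence matrix, and the column groups are purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

# counts: (n_utterances, n_markers) occurrence counts per utterance.
rng = np.random.default_rng(1)
counts = rng.poisson(0.2, size=(1000, 12))           # placeholder data
mean, sd = counts.mean(axis=0), counts.std(axis=0)   # table-style stats

# Compare two marker types (e.g., tropes vs. typographic markers) by
# their per-utterance totals, via an independent two-sample t-test.
tropes = counts[:, 0:3].sum(axis=1)
typographic = counts[:, 6:12].sum(axis=1)
t, p = ttest_ind(tropes, typographic, equal_var=False)
print(mean, sd, t, p)
\end{verbatim}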
Next, we group the markers by the type they belong to (i.e., trope, morpho-syntactic, or typographic) and compare the means of each pair of types via independent t-tests. We found that the difference of means is significant ($p \leq 0.005$) for all pairs of types across the two platforms. \begin{table} \centering \begin{small} \begin{tabular}{p{1.5cm}p{1.8cm}p{1.8cm}p{1.8cm}} \hline \multicolumn{2}{c}{Irony Markers} & \multicolumn{2}{c}{Corpus} \\ Type & Marker & $Twitter$ & $Reddit$ \\ \hline & Metaphor & 0.02 (0.16) & 0.01 (0.08)\\ Trope & Hyperbole & 0.45 (0.50) & 0.43 (0.50)\\ & $RQ$ & 0.01 (0.08) & 0.15 (0.36)\\ \hline & Exclamation & 0.02 (0.16) & 0.19 (0.39)\\ MS & Tag Question & 0.02 (0.10) & 0.08 (0.26) \\ & Interjection & 0.22 (0.42) & 0.32 (0.46) \\ \hline & Capitalization & 0.03 (0.16) & 0.10 (0.30)\\ & Quotation & 0.01 (0.01) & - \\ Typographic & Punctuations & 0.10 (0.29) & 0.47 (0.50)\\ & Hashtag & 0.02 (0.14) & - \\ & Emoticon & 0.03 (0.14) & 0.001 (0.03) \\ & Emoji & 0.05 (0.22) & - \\ \hline \end{tabular} \end{small} \caption{Frequency of irony markers on the two platforms. The mean and the SD (in brackets) are reported.} \label{table:freq} \end{table} \subsection{Irony markers across topical subreddits} Finally, we collected another set of ironic posts from \cite{khodak2017large}, this time from specific topical subreddits. We collected ironic posts about politics (e.g., subreddits: politics, hillary, the\_donald), sports (e.g., subreddits: nba, football, soccer), religion (e.g., subreddit: religion) and technology (e.g., subreddit: technology). Table \ref{table:meangenre} presents the mean and SD for each genre. We observe that users use tropes such as hyperbole and $RQ$, morpho-syntactic markers such as exclamations and interjections, and multiple punctuations more in politics and religion than in technology and sports. This is expected, since subreddits on politics and religion are often more controversial than those on technology and sports, and users might want to stress that they are being ironic or sarcastic by using the markers. \section{Conclusion} We provided a thorough investigation of irony markers across two social media platforms: Twitter and Reddit. Classification experiments and frequency analysis suggest that typographic markers such as emojis and emoticons are the most frequent on $Twitter$, whereas tag questions, exclamations, and metaphors are frequent on $Reddit$. We also provided an analysis across different topical subreddits. In the future, we plan to experiment with other markers (e.g., ironic echo, repetition, understatements).
{ "timestamp": "2018-04-17T02:07:47", "yymm": "1804", "arxiv_id": "1804.05253", "language": "en", "url": "https://arxiv.org/abs/1804.05253" }
\section{Introduction} Over the past two decades, there has been an explosion of interest in the properties of one-dimensional (1D) quantum gases with short-range interactions modelled by a Dirac $\delta$-potential. With the emergence of new technologies and experimental techniques, the properties of these systems, such as the number of particles, their interactions, and the shape of the trapping potentials, can be controlled with high accuracy \cite{kauf,serwane}. As a result, it has become possible to simulate various physical phenomena in controllable ways, which even provides the opportunity to experimentally realize different toy models. Various physical realizations of systems of cold atoms have nowadays been achieved \cite{kauf,serwane,Greiner,Selim2012,catani,Selim2013,Selim2015}. In view of this tremendous technological progress, research activity has exploded in the area of investigating the many-body properties of various composite quantum cold-atom systems \cite{Sowinski2016a,Zinner2017,Zinner2016,Har,Diaz2017,frank,bosebose,Koscik2018,sow,sow1,sow2}. Only a few 1D systems with contact interactions can be solved analytically. The best known of these is the system composed of two identical particles held together in a harmonic trap, which has closed-form solutions in the whole interaction regime \cite{Busch1999}. The Bethe ansatz method is known to provide a solution for the 1D Bose gas in the absence of an external potential, the so-called Lieb-Liniger model \cite{LL}. The most important result regarding the 1D gas is the famous Bose-Fermi mapping theorem \cite{Girardeau}, which maps the Tonks-Girardeau (TG) gas of bosons with infinitely strong repulsion to a free-fermion gas, irrespective of the external potential. As a result, the theoretical study of TG gases is a relatively easy task even in the limit of large particle numbers. The first observation of TG gases in experiments \cite{Kinoshita} provided the theoretical community with the impetus to study the properties of TG gases under different external potentials \cite{Xi,Murphy1,Murphy,periodic}. The Bose-Fermi mapping theorem also provides a tool for studying the properties of multicomponent mixtures of strongly interacting gases \cite{mix0,mix1,mix2,mix3}. It is worth pointing out that a powerful pair-correlated variational approach to studying the ground states of bosonic systems in a harmonic trap has been developed in \cite{Brouzos} and subsequently extended to fermionic mixtures \cite{Brouzos1} and to bosonic systems with different interactions between the pairs of atoms \cite{barf}. This approach can also be extended to other mixtures of cold atoms, such as boson-fermion mixtures \cite{mix0} and bosonic mixtures \cite{Pyzh}. However, in most cases, full numerical calculations are required to describe the transition between the weakly and strongly correlated regimes, and this is generally a cumbersome task. Numerical simulations of ultra-cold gases are often performed with the exact-diagonalization (ED) method and in the framework of the multiconfiguration time-dependent Hartree method \cite{MCTDH}, which has been extended for bosons \cite{Mbosons} and fermions \cite{Mfermions}, as well as for bosonic (fermionic) mixtures \cite{MLMCTDHX1,MLMCTDHX2}.
The standard ED method is based on the Rayleigh-Ritz procedure and uses as a variational wave function a finite linear combination of many-particle states of the proper symmetry under the exchange of particles, usually made up of solutions of the corresponding one-particle system. In contrast to variational methods that use single trial wave functions, which are usually specialized to treat only the ground states of specific systems, the ED method enables precise determination of the ground and higher bound states in a systematic way. In particular, most of the studies available in the literature about 1D systems with harmonic trapping potentials $x^2/2$ use the ED method with harmonic oscillator (HO) eigenfunctions: \begin{equation}\label{basis} u_{n}(x)=\left( {\sqrt{\Omega}\over \sqrt{\pi} 2^n n!} \right)^{{1\over 2}}\mathrm{e}^{-{\Omega x^2\over 2}}\mathbf{H}_n\left(\sqrt{\Omega} x\right), \end{equation} with $\Omega=1$, where $\mathbf{H}_n$ is the $n$th-order Hermite polynomial. However, this results in very poor convergence of the many-body eigenstates as a function of the number of basis states \cite{pol}. In fact, even for systems with small particle numbers, huge numbers of many-particle functions are needed to describe the strongly interacting regime \cite{frank}. In this paper we show that the ED method with basis functions given by (\ref{basis}) can be a very effective tool for studying various trapped systems with delta interactions, provided the parameter $\Omega$ is variationally optimized. The structure of this paper is as follows. Section \ref{tb} outlines the formalism of the optimized ED (OED) approach. Section \ref{results} tests the convergence of the OED method for the examples of harmonic and double-well potentials. Specifically, a significant improvement in the convergence is demonstrated for the harmonic trapping potential compared to the case without optimization of $\Omega$ ($\Omega=1$). Finally, section \ref{con} presents some concluding remarks. \section{Optimized ED Approach }\label{tb} Without loss of generality, we deal only with systems of identical bosons. We begin with the dimensionless Hamiltonian for an arbitrary confining potential: \begin{equation}\label{Hamiltonian_total} \hat{{\cal H}} = \sum_{i=1}^N h_{0}(x_{i}) + g\sum_{i<j}\delta(x_i-x_j), \end{equation} where the strength of the interaction is governed by the coefficient $g$ and $h_{0}$ is the one-body Hamiltonian given by \begin{equation}\label{one particle Hamiltonian} h_{0}(x)=-\frac{1}{2}\frac{\partial^2}{\partial x^2} + { V}(x). \end{equation} The true $N$-particle bosonic wave function can be represented as a linear combination: \begin{equation}\label{exp} |\Psi\rangle=\sum_{\beta}a_{\beta} |\textbf{u}_{\beta}\rangle .\end{equation} Here, $|\textbf{u}_{\beta}\rangle$ denotes the permanents that are constructed from the one-particle basis (\ref{basis}), which in the occupation-number representation take the form \begin{equation}\label{manyparticle} |\mathbf{u}_{\beta}\rangle=|n_{0},n_{1} ,...\rangle_{\Omega}. \end{equation} This expresses the fact that the one-particle state $|i\rangle$ is occupied $n_{i}$ times, $\sum_{i}n_{i}=N$, and that $\beta$ labels the different distributions of the particles.
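For illustration, the basis functions (\ref{basis}) can be evaluated numerically as in the following minimal sketch (for moderate $n$; the normalization follows (\ref{basis})):
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.special import factorial

def u(n, x, omega=1.0):
    # n-th HO eigenfunction with frequency parameter omega:
    # normalization * Gaussian * H_n(sqrt(omega) * x).
    norm = (np.sqrt(omega) / (np.sqrt(np.pi) * 2.0**n
                              * factorial(n)))**0.5
    coeff = np.zeros(n + 1)
    coeff[n] = 1.0            # select the n-th Hermite polynomial
    return (norm * np.exp(-omega * x**2 / 2.0)
            * hermval(np.sqrt(omega) * x, coeff))

# Quick normalization check on a grid (should print ~1.0):
x = np.linspace(-10.0, 10.0, 4001)
print(np.trapz(u(3, x, omega=2.5)**2, x))
\end{verbatim}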
A feature worth stressing here is that if the trapping potential $V$ is symmetric in $x$, then the corresponding Hamiltonian (\ref{Hamiltonian_total}) commutes with the symmetry operator $\hat{{\cal P}}$ defined as $\hat{{\cal P}}\Psi(\mathbf{r})= \Psi(-\mathbf{r})$, where $\mathbf{r}=(x_{1},x_{2},...,x_{N})$, the eigenvalues of which are $p=\pm1$. Consequently, the states with parities $p=1$ and $p=-1$ are superpositions of even- and odd-parity permanents ($\sum_{i} i n_{i}$ even or odd), respectively. In practical calculations, we must truncate the many-particle basis. One reliable way of doing this is to use the basis made up of Fock states of the form $|\mathbf{u}_{\beta}^{K}\rangle=|n_{0},n_{1} ,..., n_{K},0,0...\rangle_{\Omega}$, where $\sum_{i=0}^{K} i n_{i}<K$ \cite{frank,shel,shel1,pl}. From now on, $D$ denotes the number of many-body basis functions that compose the truncated basis set. Diagonalization of the corresponding truncated Hamiltonian matrix $[{H}_{\alpha\beta}]$, with ${H}_{\alpha\beta}=\langle{\mathbf{u}}_{\alpha}^K| \hat{{\cal H}}|{\mathbf{u}}_{\beta}^K\rangle$, thus yields a set of approximations to the energies, $E_{i}^{(K)}$, and the corresponding eigenvectors $a^{(K)}_{i}$: \begin{equation}\label{exp1} |\Psi\rangle_{i}\approx\sum_{\beta}(a_{i}^{(K)})_{\beta} |\mathbf{u}_{\beta}^{K}\rangle .\end{equation} By increasing $K$, approximations to a larger and larger number of states are obtained with systematically improved accuracy. However, the truncation of the basis set makes the approximate eigenstates dependent on $\Omega$. Only in the limit as $K$ tends to infinity does the dependence on $\Omega$ vanish altogether. This freedom in choosing the value of $\Omega$ can be used to improve the convergence \cite{okop,saad}. Following the principle of minimal sensitivity \cite{pms}, the parameter $\Omega$ should be chosen so that the approximations to a given physical quantity are as insensitive to its variations as possible. Clearly, the best approximation of the $K$th order to the energy of the desired state is obtained for the value of $\Omega$ at which the corresponding eigenenergy $E_{i}^{(K)}$ attains its minimum, i.e., $dE_{i}^{(K)}/ d\Omega|_{\Omega=\Omega_{opt}}=0$. For large truncation orders, finding $\Omega_{opt}$ requires diagonalizing the truncated Hamiltonian matrix many times for different values of $\Omega$ until the minimum of $E_{i}^{(K)}$ is found. It is worth mentioning that various strategies for fixing the value of $\Omega$ before diagonalization of the Hamiltonian matrix have been tested on single-particle systems (see \cite{Koscik2009} and references therein for an overview). However, none of these strategies guarantees that the desired state will be estimated with optimal accuracy. Here, we concentrate on testing the effectiveness of the strategy based on the minimization of $ E_{i}^{(K)}$. Although, strictly speaking, this approach yields the best approximation of the $K$th order only to the considered energy level, the resulting wave function is usually determined with an accuracy that is close to optimal. \section{Results}\label{results} For testing the performance of the OED scheme, we first choose the ground states of particles subjected to a harmonic confining potential.
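The outer loop of the OED scheme, i.e., the minimization of $E_{0}^{(K)}$ with respect to $\Omega$, can be sketched as follows (a minimal sketch assuming a hypothetical routine \texttt{ground\_energy(omega)} that builds and diagonalizes the truncated Hamiltonian matrix for the chosen cut-off $K$; the same finite-difference Newton iteration is detailed in the caption of Table \ref{tab:table1} below):
\begin{verbatim}
def optimize_omega(ground_energy, omega0=1.0, d_omega=0.005,
                   tol=1e-6, max_iter=20):
    # Newton iteration on dE/dOmega = 0 with central finite differences:
    # Omega_{n+1} = Omega_n - e1/e2, with e1, e2 the approximate first
    # and second derivatives of E_0^(K) at Omega_n.
    w = omega0
    for _ in range(max_iter):
        ep = ground_energy(w + d_omega)
        e0 = ground_energy(w)
        em = ground_energy(w - d_omega)
        e1 = (ep - em) / (2.0 * d_omega)
        e2 = (ep - 2.0 * e0 + em) / d_omega**2
        step = e1 / e2
        w -= step
        if abs(step) < tol:
            break
    return w, ground_energy(w)

# Toy test with a known minimum at Omega = 2 (prints approx. (2.0, 4.0)):
print(optimize_omega(lambda w: (w - 2.0)**2 + 4.0))
\end{verbatim}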
Since the ground state is an even-parity state ($p=1$), the dimensions of the Hamiltonian matrix can be reduced by including in the calculations only the permanents that satisfy $\sum_{i=0}^{K} i n_{i}$=(even), which considerably diminishes the computational cost. In our calculations, the numerical minimization of $ E_{0}^{(K)}$ is done in the framework of Newton's iterative method for finding roots. To illustrate this clearly, we present in Table \ref{tab:table1} an example of the results of the first few iterations obtained in the $N=3$ case. Here, we take as reference points the results obtained for three- and four-particle systems in \cite{Brouzos}, where these have been determined in different ways with satisfactory accuracies. In Figure \ref{Fig1} we present the one-body densities \begin{equation}\rho(x)=\int_{\Re^{N-1}}|\Psi(x,x_2,...,x_{N})|^2dx_{2}...dx_{N}, \end{equation} obtained from the ED calculations before and after optimization of $\Omega$, for different cut-off values of $K$ at the strong interaction strength $g = 10$, in the bottom and top panels, respectively.\begin{figure} \begin{center} \includegraphics[width=0.236\textwidth]{Fig1.eps} \includegraphics[width=0.240\textwidth]{Fig2.eps} \includegraphics[width=0.236\textwidth]{Fig3.eps} \includegraphics[width=0.240\textwidth]{Fig4.eps} \end{center} \caption{\label{Fig1} Profiles of one-body densities obtained for $N=3,4$ at $g=10$ for different cut-off values of $K$. Black continuous lines mark the reference results taken from \cite{Brouzos} (Figure 2 (b-c) in this reference). The top-panel results were obtained through the optimization strategy based on the minimization of $E_{0}^{(K)}$. The corresponding numbers $D$ and the optimal values of $\Omega_{opt}$ are as follows: $\mathbf{N=3}\mathbf{:}K = 15~ (D= 81,\Omega_{opt} =2.59)\mathbf{;} K = 20 ~ (D= 148, \Omega_{opt} =3.13)\mathbf{;} K = 35 ~(D=762 ,\Omega_{opt} =5.11)\mathbf{;}\mathbf{N=4}\mathbf{:}K = 15~ (D=136, \Omega_{opt} =1.88)\mathbf{;} K = 20 ~(D= 284, \Omega_{opt} =2.33)\mathbf{;} K = 30 ~ (D= 1152, \Omega_{opt} =3.38$). The bottom-panel results were obtained at $\Omega=1$. } \end{figure} Black continuous lines mark the reference densities taken from \cite{Brouzos}. For the sake of completeness, we give the optimal values of $\Omega$ in the caption to this figure. We also calculated the deviations of the approximate energies $E_{0}^{(K)}$ from the \textit{exact} energies, as functions of $K$, both with and without optimization of $\Omega$. Our results for three- and four-particle calculations at two representative values of $g$, including $g = 10$, are displayed in Figure \ref{Fig2}. \begin{table}[h] \begin{center} \begin{tabular}{lllll} \hline $\Omega_{0}=1$&$...$&$\Omega_{5}\approx5.15$&$\Omega_{6}\approx5.11$&$\Omega_{7}\approx5.11$ \\ \hline $ 4.26943$&$...$ & $4.12803 $&$4.12802 $ &$4.12802$\\ \hline \end{tabular} \caption{\label{tab:table1} Applying Newton's method to the present problem yields the following recurrence equation for the term $\Omega_{n+1}$: $\Omega_{n+1}=\Omega_{n}- e^{(1)}_{n}/e^{(2)}_{n}$, where $ e^{(1)}_{n}$ and $ e^{(2)}_{n}$ are finite-difference approximations to the first and second derivatives of $E_{0}^{(K)}$ at $\Omega=\Omega_{n}$, calculated here with a step length of $d\Omega=0.005$. The table presents the results of the first few Newton iterations $\{\Omega_{n}, E_{0}^{(K)}(\Omega_{n})\} $ obtained for the ground state of the three-particle system with $g=10$ at $K=35$.
Despite the fact that the starting point differs considerably from the optimal value, fast convergence is observed.} \end{center} \vspace{-0.6cm} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=0.239\textwidth]{Fig5.eps} \includegraphics[width=0.239\textwidth]{Fig6.eps} \end{center} \caption{\label{Fig2} Relative deviations $\delta E=(E_{0}^{(K)}(\Omega)-E_{exact})/E_{exact}$ for the ground states of three- and four-particle systems, calculated at $\Omega=1$ and at $\Omega=\Omega_{opt}$ for two different interaction strengths, $g=2$ and $g=10$, as functions of the cut-off $K$. We have taken the reference energies $E_{exact}$ from Figure 2a in \cite{Brouzos}.} \end{figure} It is apparent from our results that there are clear benefits to optimizing the ED method. We can conclude from the caption of Figure \ref{Fig1} that the rate of convergence strongly depends on $\Omega$. In fact, the optimal value $\Omega_{opt}$ increases with $K$. As might be expected, it is clear that the approximations to the energies are better after applying the optimization. This effect is more pronounced when the parameter $g$ enters the strong-correlation regime; that is, where the true wave functions have more complex structures. The improvement in convergence is even more pronounced in the one-body densities. For example, the \textit{exact} three-particle density is well reproduced in the OED calculation when $K$ is about three times smaller than in the case where optimization is absent; that is, $K=35~(D=765)$ vs. $K=100~(D=14739)$. It should be emphasized that the calculation time in the former case (iterative minimization) was considerably shorter than in the latter one. The optimization of $ \Omega $ is therefore crucial for reducing the number of basis functions needed for accurate numerical calculations, as well as for shortening the computation time. In general, the size of the computations that can be done depends not only on the efficiency of the numerical calculation software that is used but also on the computer system itself, especially on the size of its memory. Our analysis clearly shows that the OED method consumes less memory than the standard ED approach ($\Omega=1$), and this makes it possible for systems with larger numbers of particles to be treated. Here, we leave open the question of how many particles can be treated effectively by modern supercomputers using the OED approach. However, this number can be expected to be much larger than in the cases where there is no optimization of $\Omega$. Finally, we shed some light on the applicability of the OED approach with HO eigenfunctions to systems with a trapping potential different from the harmonic one. Here, we consider a model double-well potential of the form \begin{equation}\label{exp2} { V}(x)={1\over 2}x^2+h \mathrm{e}^{-{x^2\over 2\delta^2}}, \end{equation} which is often used in the literature to simulate different physical situations. To illustrate this, we demonstrate the convergence of the OED method for the example of a four-particle system with control-parameter values $h=4$ and $\delta=0.2$. Figure \ref{Fig3} shows the ground-state one-body densities for $g=4$ and $g=20$ that are obtained with the OED method for different cut-off values of $K$.
In the case of $g=20$, the system enters the TG regime, and in order to confirm the validity of our calculations, the results are shown along with the \textit{exact} solution in the TG limit $g\rightarrow \infty$, $\rho_{TG}(x)=1/4\sum_{i=0}^{3} \phi_{i}^2(x)$ \cite{Girardeau}, where $\phi_{i}$ are the solutions of the one-particle Hamiltonian with (\ref{exp2}). Indeed, as can be seen, the limit of infinite interaction is almost reached at $g=20$. Clearly, in both cases considered, well-converged density profiles are obtained with moderate numbers of many-body basis functions. These results are very promising and allow us to expect that the ED method with HO eigenfunctions optimized via $\Omega$ may also be an effective tool for handling systems with more complex forms of trapping potentials, such as multi-well potentials. \begin{figure} \begin{center} \includegraphics[width=0.239\textwidth]{Fig7.eps} \includegraphics[width=0.233\textwidth]{Fig8.eps} \end{center} \caption{\label{Fig3} Profiles of the one-body densities for the four-particle system in the presence of the trapping potential (\ref{exp2}), with $h=4$ and $\delta=0.2$, obtained for different values of the interaction strength $g$ and the cut-off $K$.} \end{figure} \section{Conclusions}\label{con}In conclusion, the ED method with an optimized basis of harmonic oscillator eigenfunctions has been tested on systems with delta interactions. Our results show that minimization of the eigenenergy of the desired level with respect to the frequency parameter greatly improves the accuracy of the resulting approximate energy and also that of its wave function. Careful testing of the optimized ED method for harmonic and double-well potentials has proved its efficiency even in the regime of strong correlations. The use of the optimization scheme for other systems with delta interactions that are currently under intensive study, such as quantum mixtures, is a straightforward task which could also lead to many benefits. \bibliographystyle{abbrv}
{ "timestamp": "2018-06-15T02:06:55", "yymm": "1804", "arxiv_id": "1804.05585", "language": "en", "url": "https://arxiv.org/abs/1804.05585" }
\section{Introduction and main results} \subsection{Introduction} The Beals criterion \cite{B} naturally characterizes pseudo-differential operators by their commutation properties with fundamental objects like multiplication and differentiation operators; the basics of Weyl pseudo-differential calculus can be found in e.g. \cite{H-3}. To the best of our knowledge, all existing proofs of Beals' criterion use in an essential way some special properties of the Fourier transform and the translation invariance of the seminorms in $\mathscr{S}({\mathbb R}^{2d})$, see for example Lemma 2.2 in \cite{Bo} or Proposition 8.2 in \cite{Di-Sj}. \\ In recent years it has proved useful to introduce a magnetic pseudo-differential calculus (see \cite{IMP1,IMP2,IP1} and references therein) which is adapted to the presence of long-range magnetic fields. The main motivation behind this particular class of operators was the need to highlight magnetic flux effects and to build up a gauge-covariant calculus. Therefore, it was natural to search for a magnetic Beals-like criterion in which the commutation with the plain momentum operators is replaced by the commutation with their magnetic counterparts. Such a criterion was indeed proved by Iftimie-M{\u a}ntoiu-Purice \cite{IMP2}, and one of the technically heavy points in that work was the extension of Bony's lemma to the magnetic case. \\ The main motivation of our paper is to propose an alternative proof of Beals' classical criterion based on the use of a normalized tight Gabor frame, and to show how this approach can be quite naturally extended to the magnetic case to recover the criterion established in \cite{IMP2}. Note that no a priori knowledge of the magnetic calculus is needed in order to understand the current manuscript. \subsection{The non-magnetic case} Let $X_j$ be the operator of multiplication by $x_j$, $1\leq j\leq d$, while $D_j:=-i\partial_{x_j}$. We introduce $W_k:=X_k$ when $1\leq k\leq d$ and $W_k:=D_{k-d}$ when $d+1\leq k\leq 2d$. The operators $W_k$ leave the Schwartz space $\mathscr{S}({\mathbb R}^d)$ invariant. Let us consider a bounded map $T:\mathscr{S}({\mathbb R}^d)\to \mathscr{S}'({\mathbb R}^d)$. Seen as maps from $\mathscr{S}({\mathbb R}^d)$ to $\mathscr{S}'({\mathbb R}^d)$, the following multiple commutators \begin{align}\label{1} [W_{j_1},[W_{j_2},...,[W_{j_m},T]]...]\,,\quad m\geq 1, \quad j_\ell \in\{1,2,..., 2d\}, \end{align} are also bounded. Then the classical Beals criterion \cite{B} reads as follows: \begin{theorem}\label{T-Beals} Let us assume that both $T$ and all possible commutators as in \eqref{1} can be extended to bounded operators on $L^2({\mathbb R}^d)$. Then there exists a symbol $a_0(x,\xi)\in S^{0}_{0,0}({\mathbb R}^{2d})$ such that for every $\Psi,\Phi\in \mathscr{S}({\mathbb R}^d)$ we have: $$\langle \Psi,T\Phi\rangle_{L^2(\mathbb{R}^d)}=(2\pi)^{-d}\int_{{\mathbb R}^{d}}\left (\int_{{\mathbb R}^{2d}} e^{i\xi \cdot (x-x')}\overline{\Psi(x)}a_0((x+x')/2,\xi)\Phi(x')dxdx' \right )d\xi, $$ i.e. $T={\rm Op}^w(a_0)$ is the Weyl quantization of $a_0$. \end{theorem} In this paper, the scalar product of $L^2({\mathbb R}^d)$ is linear in the second variable and we use the standard notation for H\"{o}rmander-type symbols (see Section 7.8 in \cite{H-1}). \subsection{The magnetic case} Let $d\geq 2$. Consider a $2$-form $B(x)=\sum_{1\leq j,k\leq d} B_{jk}(x)dx_j\wedge dx_k$ where $B_{jk}=-B_{kj}$ are in $BC^\infty({\mathbb R}^d)$ (i.e.
the space of $C^\infty$ functions which are bounded together with all their derivatives). We assume that the form is ``magnetic'', i.e., that $\partial_j B_{k\ell } +\partial_k B_{\ell j} +\partial_\ell B_{jk}\equiv 0$ holds. This simply expresses that the $2$-form is closed. Given any fixed $y\in{\mathbb R}^d$ one can construct a $1$-form $A(\cdot,y)$ such that $B=dA(\cdot,y)$ and \begin{align}\label{9} A_j(x,y)=-\sum_{k=1}^d\int_0^1 s\; (x_k-y_k)\; B_{jk}(y+s(x-y))ds. \end{align} We observe that $A_j(x,y)$ grows at most linearly in $|x-y|$, and this fact remains true for all its derivatives in $x$. Let $\Gamma_{y,x}$ denote the straight oriented segment joining $y$ with $x$. Since $A(\cdot,0)-A(\cdot,y)$ is closed, hence exact, we have the identity \begin{align}\label{10} A_j(x,0)-A_j(x,y)=\partial_{x_j}\varphi(x,y),\quad 1\leq j\leq d\,, \end{align} \begin{equation}\label{defvarphi} \varphi(x,y)=\int_{\Gamma_{y,x}}(A(\cdot,0)-A(\cdot,y))=\int_{\Gamma_{y,x}}A(\cdot,0)\,. \end{equation} Here $A(\cdot,y)$ does not contribute to the integral because it is ``orthogonal'' to the integration path. The same orthogonality property allows us to identify $\varphi(x,y)$ with the circulation of $A(\cdot,0)$ on the oriented triangle generated by the origin, $y$ and $x$. Stokes' theorem implies that $\varphi(x,y)$ is equal to the magnetic flux through this triangle. We denote the magnetic flux through the oriented triangle $\Delta(u,v,w)$ having vertices at $u,v,w\in {\mathbb R}^d$ by: $$ {\mathfrak f} (u,v,w):=\int_{\Delta(u,v,w)}B\,,\qquad {\mathfrak f}(x,y,0)=\varphi(x,y). $$ We note the identities \begin{align}\label{11} \varphi(x,y)=-\varphi(y,x)\quad {\rm and}\quad \varphi(u,v)+\varphi(v,w)-\varphi(u,w)={\mathfrak f}(u,v,w). \end{align} Now we can formulate the magnetic version of Beals' criterion as stated in Theorem 1.1 of \cite{IMP2}. Let $\Pi_j:=D_j-A_j(\cdot,0)$ be the ``magnetic'' momenta, which also leave $\mathscr{S}({\mathbb R}^d)$ invariant. We denote by $W_k$ either $X_k$ if $1\leq k\leq d$, or $\Pi_{k-d}$ if $d+1\leq k\leq 2d$. Let us consider a bounded map $T:\mathscr{S}({\mathbb R}^d)\to \mathscr{S}'({\mathbb R}^d)$ and all possible commutators as in \eqref{1} but with the new $W_k$'s. Here is the magnetic Beals criterion: \begin{theorem}\label{mainth}Let us assume that both $T$ and all the ``magnetic'' commutators as in \eqref{1} can be extended to bounded operators on $L^2({\mathbb R}^d)$. Then there exists a symbol $a_0(x,\xi)\in S^{0}_{0,0}({\mathbb R}^{2d})$ such that for every $\Psi,\Phi\in \mathscr{S}({\mathbb R}^d)$ we have: $$\langle \Psi,T\Phi\rangle_{L^2(\mathbb{R}^d)}=(2\pi)^{-d}\int_{{\mathbb R}^{d}}\left (\int_{{\mathbb R}^{2d}} e^{i\varphi(x,x')}e^{i\xi \cdot (x-x')}\overline{\Psi(x)}a_0((x+x')/2,\xi)\Phi(x')dxdx' \right )d\xi\,. $$ \end{theorem} \vspace{0.5cm} The rest of this manuscript is organized as follows: in Section \ref{sec2} we construct a family of tight frames which generalizes the classical Gabor case, in Section \ref{sec3} we give the proof of Theorem \ref{T-Beals}, and in Section \ref{sec4} we prove Theorem \ref{mainth}. \section{A magnetic normalized Gabor tight frame}\label{sec2} Let $g\in C_0^\infty({\mathbb R}^d;\mathbb{R})$ be such that ${\rm supp}\,g\subset (-1,1)^d$ and \begin{align}\label{2} g_\gamma(x):=g(x-\gamma),\quad \sum_{\gamma\in {\mathbb Z}^d} g_\gamma^2(x) =1\,,\quad \forall x\in {\mathbb R}^d\,. \end{align} Let $\psi_m(x):= e^{i m\cdot x}$, for any $m\in {\mathbb Z}^d$. Denote by $\big(\tau_zf\big)(x):=f(x-z)$ the translation by $z\in\mathbb{R}^d$.
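Note that a function $g$ as in \eqref{2} always exists. For instance, taking any non-negative $\chi\in C_0^\infty({\mathbb R}^d;\mathbb{R})$ with ${\rm supp}\,\chi\subset (-1,1)^d$ and $\chi\geq 1$ on $[-1/2,1/2]^d$, the function $$ g(x):=\chi(x)\Big(\sum_{\gamma\in{\mathbb Z}^d}\chi^2(x-\gamma)\Big)^{-1/2} $$ satisfies \eqref{2}: the sum under the square root is locally finite, smooth and strictly positive (the cubes $\gamma+[-1/2,1/2]^d$ cover ${\mathbb R}^d$), hence $g\in C_0^\infty({\mathbb R}^d;\mathbb{R})$ with ${\rm supp}\,g\subset (-1,1)^d$, and $\sum_{\gamma\in{\mathbb Z}^d} g^2(x-\gamma)=1$ by construction.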
\begin{lemma}\label{lema-gabor} The functions \begin{equation}\label{eq:gabfra} \big \{G^\varphi_{\gamma,m}(x):=(2\pi)^{-d/2}e^{i \varphi(x,\gamma)}(\tau_\gamma g\psi_m)(x):\; \gamma,m\in{\mathbb Z}^d\big \}\,, \end{equation} with $\varphi$ defined in \eqref{defvarphi}, satisfy the identity \begin{equation}\label{F-desc-ftest} f(x)=\sum_{\gamma,m} G^\varphi_{\gamma,m}(x)\langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)}\,,\quad\forall f\in\mathscr{S}(\mathbb{R}^d)\,, \end{equation} where the series is absolutely convergent. In particular, these functions generate a normalized tight frame in $L^2({\mathbb R}^d)$ (see \cite{Gr, Ch}). \end{lemma} \begin{proof} We have \begin{align}\label{horia1} \langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)}& = (2\pi)^{-d/2}\int_{\{\max_{j=1}^d |x_j-\gamma_j|\leq 1\}}\,g(x-\gamma)e^{- i m\cdot(x-\gamma)}e^{-i \varphi(x,\gamma)}f(x)\,dx \nonumber \\ & =(2\pi)^{-d/2}\int_{\{\max_{j=1}^d |x_j|\leq 1\}}\,e^{- i m\cdot x }g(x)\big(\tau_{-\gamma}e^{-i \varphi(\cdot,\gamma)}f\big)(x)\, dx \nonumber\\ & =:\mathcal{F}\big (g\tau_{-\gamma}e^{-i \varphi(\cdot,\gamma)}f\big )(m) \end{align} where $g\tau_{-\gamma}e^{-i \varphi(\cdot,\gamma)}f\in C^\infty_0\big((-1,1)^d\big)$ may be naturally considered, via its $(2\pi{\mathbb Z})^d$-periodic extension to ${\mathbb R}^d$, as a function in $C^\infty (\mathbb{T}^d)$, and where the right-hand side of \eqref{horia1} is nothing but the $m$-th Fourier coefficient of $g\tau_{-\gamma}e^{-i \varphi(\cdot,\gamma)}f$. Integrating by parts in \eqref{horia1}, and using \eqref{10} together with the fact that $f$ is a Schwartz function, for any given $N\geq 1$ we may find a constant $C_{f,N}$ such that for every $m$ and $\gamma$ we have: \begin{align}\label{hc1} |\langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)}|\leq C_{f,N}\; <\gamma>^{-N} <m>^{-N} . \end{align} By the Fourier inversion formula and \eqref{horia1} we obtain: $$ g\tau_{-\gamma}e^{-i \varphi(\cdot,\gamma)}f=(2\pi)^{-d/2}\underset{m\in\mathbb{Z}^d}{\sum}\psi_m \langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)} $$ where the series converges absolutely due to \eqref{hc1}, in which we fix, for example, $N\geq 2d$. Translating by $\gamma\in\mathbb{Z}^d$ we obtain: $$ g_\gamma(x) f(x)=(2\pi)^{-d/2}\underset{m\in\mathbb{Z}^d}{\sum}e^{i \varphi(x,\gamma)}(\tau_\gamma\psi_m)(x)\langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)}, $$ which, coupled with \eqref{2}, leads to: $$ f= \underset{\gamma\in\mathbb{Z}^d}{\sum}g_\gamma(g_\gamma f)=\underset{\gamma\in\mathbb{Z}^d}{\sum}\underset{m\in\mathbb{Z}^d}{\sum}G^\varphi_{\gamma,m} \langle G^\varphi_{\gamma,m},f\rangle_{L^2(\mathbb{R}^d)}. $$ This proves \eqref{F-desc-ftest}. \end{proof} \section{Proof of Theorem \ref{T-Beals}}\label{sec3} In order to simplify notation, in the non-magnetic case ($\varphi\equiv 0$) we denote the Gabor frame by $G_{\gamma,m}$. By assumption, $T$ can be extended to a bounded operator on $L^2({\mathbb R}^d)$ with norm $\|T\|$, thus: \begin{equation}\label{est-Tmatrix-2} \mathcal{T}_{\gamma,\gamma';m,m'}:= \big \langle G_{\gamma,m}, T G_{\gamma',m'}\big \rangle,\quad \big|\mathcal{T}_{\gamma,\gamma';m,m'}\big|\leq (2\pi)^{-d}\|g\|_{L^2(\mathbb{R}^d)}^2\,\|T\|\,.
\end{equation} For every $N\in \mathbb N $, a map of the form $$ \mathscr{S}(\mathbb{R}^d)\times\mathscr{S}(\mathbb{R}^d)\ni (\Phi,\Psi) \mapsto \underset{\max(|\gamma|,|m|,|\gamma'|,|m'|)\leq N}{\sum}\mathcal{T}_{\gamma,\gamma';m,m'} \big\langle\Psi, G_{\gamma,m} \big\rangle_{L^2(\mathbb{R}^d)}\big\langle G_{\gamma',m'}, \Phi\big\rangle_{L^2(\mathbb{R}^d)} $$ defines a tempered distribution on $\mathbb{R}^d\times\mathbb{R}^d$. Then the distribution kernel of the bounded operator $T:L^2(\mathbb{R}^d)\rightarrow L^2(\mathbb{R}^d)$ is given by the series \begin{align}\label{5} \mathring{T}=\underset{N\nearrow\infty}{\lim}\underset{\max(|\gamma|,|m|)\leq N}{\sum}\ \underset{\max(|\gamma'|,|m'|)\leq N}{\sum}\mathcal{T}_{\gamma,\gamma';m,m'} \big(G_{\gamma,m}\otimes \overline{G_{\gamma',m'}}\big), \end{align} where each finite sum belongs to $BC^\infty(\mathbb{R}^d\times\mathbb{R}^d)$ and the series converges weakly in the space of tempered distributions on $\mathbb{R}^d\times\mathbb{R}^d$. If we restrict the distribution $\mathring{T}$ to a compact set in $\mathbb R^d\times \mathbb R^d$, only a finite number of terms of the series in $\gamma$ and $\gamma'$ give non-zero contributions, but in general the series in $m$ and $m'$ are not absolutely convergent. In order to remedy this difficulty, we regularize and define, for $\varepsilon>0$: \begin{equation}\label{F-Tepsilon} \mathring{T}_\varepsilon(x,x'):=(2\pi)^{-d}\sum_{\gamma,\gamma'} \sum_{m,m'}\mathcal{T}_{\gamma,\gamma';m,m'} {g}_{\gamma}(x){g}_{\gamma'}(x') e^{-\varepsilon(|m|^2+|m'|^2)}e^{im\cdot (x-\gamma)}e^{-im'\cdot (x'-\gamma')}\,. \end{equation} Due to \eqref{est-Tmatrix-2}, it is not difficult to see that for fixed $\varepsilon>0$ the function $\mathring{T}_\varepsilon$ is jointly continuous. We will later see that it is much more regular. We start by proving an estimate which is stronger than \eqref{est-Tmatrix-2}. \begin{lemma}\label{lema-horia-1} Given any pair $N,M\geq 1$, there exists a constant $C_{N,M}$ such that \begin{align}\label{6} |\mathcal{T}_{\gamma,\gamma';m,m'}|\leq C_{N,M} <\gamma-\gamma'>^{-N} <m-m'>^{-M}. \end{align} \end{lemma} \begin{proof} The decay in $\gamma-\gamma'$ is a consequence of the fact that all commutators of $T$ with the position operators are bounded, see \eqref{1}. For example: $$ \begin{array}{ll} (\gamma_1-\gamma'_1) \mathcal{T}_{\gamma,\gamma';m,m'}&= (2\pi)^{-d}\big \langle (\gamma_1-X_1) g_\gamma \psi_m(\cdot -\gamma), T g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \quad + (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), [X_1,T] g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \quad + (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), T (X_1-\gamma'_1) g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\,. \end{array} $$ The decay in $m-m'$ is due to the boundedness of the commutators of $T$ with the momentum operators (one has to integrate by parts).
For example: $$ \begin{array}{ll} (m_1-m'_1) \mathcal{T}_{\gamma,\gamma';m,m'}&= (2\pi)^{-d}\big \langle m_1 g_\gamma \psi_m(\cdot -\gamma), T g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \qquad \qquad - (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), T m'_1 g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ &= (2\pi)^{-d}\big \langle g_\gamma (D_{x_1}\psi_m)(\cdot -\gamma), T g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \qquad \qquad - (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), T g_{\gamma'} (D_{x_1}\psi_{m'}) (\cdot -\gamma')\big \rangle\\ & = (2\pi)^{-d}\big \langle (D_{x_1} g)_\gamma \psi_m(\cdot -\gamma), T g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad - (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), T (D_{x_1}g)_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad + (2\pi)^{-d}\big \langle g_\gamma \psi_m(\cdot -\gamma), [D_{x_1},T] g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle.\\ \end{array} $$ \end{proof} \vspace{0.2cm} The next lemma shows that the approximating kernel $\mathring{T}_\varepsilon$ has fast off-diagonal decay. \begin{lemma}\label{lema-horia-2} Let $\varepsilon>0$. Then for every fixed $t\in {\mathbb R}^d$, the function $${\mathbb R}^d\ni s\mapsto \mathring{T}_\varepsilon(t+s/2,t-s/2)\in {\mathbb C}$$ belongs to $\mathscr{S}({\mathbb R}^d)$. \end{lemma} \begin{proof} Given $x$ and $x'$, the only $\gamma$'s and $\gamma'$'s contributing to \eqref{F-Tepsilon} must obey the conditions $|x-\gamma|\leq \sqrt{d}$ and $|x'-\gamma'|\leq \sqrt{d}$. Given $t=(x+x')/2$, the only $\gamma$'s and $\gamma'$'s contributing to \eqref{F-Tepsilon} must also obey $|(\gamma+\gamma')/2-t|\leq \sqrt{d}$. Thus: \begin{align}\label{horia3} \mathring{T}_\varepsilon(t+s/2,t-s/2) =&\sum_{ |(\gamma+\gamma')/2-t|\leq \sqrt{d}}(2\pi)^{-d} \sum_{m,m'}\mathcal{T}_{\gamma,\gamma';m,m'} {g}(t+s/2-\gamma){g}(t-s/2-\gamma')\nonumber \\ &\qquad \qquad\qquad \times e^{-\varepsilon(|m|^2+|m'|^2)}e^{im\cdot (t+s/2-\gamma)}e^{-im'\cdot (t-s/2-\gamma')}\,. \end{align} The series in $m$ and $m'$ are absolutely convergent due to the regularizing Gaussians, while the sum in the direction of $\gamma-\gamma'$ is convergent due to \eqref{6}. We can also differentiate as many times as we want with respect to $s$ in \eqref{horia3}, and the series remain absolutely convergent. Given $s=x-x'$, the only $\gamma$'s and $\gamma'$'s contributing to \eqref{horia3} must obey $|(\gamma-\gamma')-s|\leq 2\sqrt{d}$, thus when we estimate $s^\alpha D_s^\beta \mathring{T}_\varepsilon(t+s/2,t-s/2)$ we may replace $|s|$ with $|\gamma-\gamma'|+2\sqrt{d}$ and obtain something bounded (actually independent of $t$). More precisely, given multi-indices $\alpha$ and $\beta$, there exists a constant $C(\alpha,\beta,\varepsilon)$ such that, for any $t\in {\mathbb R}^d$, $$\sup_{s\in{\mathbb R}^d}|s^\alpha D_s^\beta \mathring{T}_\varepsilon(t+s/2,t-s/2)|\leq C(\alpha,\beta,\varepsilon).$$ \end{proof} Let us consider the symbol associated by the Weyl quantization with the distribution kernel $\mathring{T}_\varepsilon$: \begin{equation}\label{F-distr-symbol} a_\varepsilon(t,\xi):= \int_{{\mathbb R}^d} e^{-i\xi \cdot s} \mathring{T}_\varepsilon(t+s/2,t-s/2)\, ds. \end{equation} Due to Lemma \ref{lema-horia-2}, for fixed $t\in\mathbb{R}^d$ and $\varepsilon >0$, the function $\xi \mapsto a_\varepsilon(t,\xi)$ is a Schwartz function.
\begin{lemma}\label{lema-horia-3} The function $a_\varepsilon(t,\xi)$ converges uniformly on compact sets of ${\mathbb R}^{2d}$ to a smooth function $a_0(t,\xi)$. More precisely: $$\sup_{t\in {\mathbb R}^d}\sup_{\xi\in {\mathbb R}^d}|D_t^\alpha D_\xi^\beta a_\varepsilon(t,\xi)|\leq C(\alpha,\beta),\quad \forall \alpha,\beta\in \mathbb{N}^d,\quad \varepsilon\geq 0\,,$$ and given any compact $K\subset {\mathbb R}^{2d}$ we have $$\lim_{\varepsilon\searrow 0}\sup_{(t,\xi)\in K}|D_t^\alpha D_\xi^\beta\{a_\varepsilon(t,\xi)-a_0(t,\xi)\}|=0\,,\quad \forall \alpha,\beta\in \mathbb{N}^d.$$ In particular, $a_0\in S_{0,0}^0({\mathbb R}^{2d})$. \end{lemma} \noindent{\bf Remark}. Before proving the lemma, let us show how we can conclude the proof of Theorem \ref{T-Beals}. If $\Psi,\Phi\in C_0^\infty({\mathbb R}^d)$ we have: \begin{align*} \langle \Psi, T\Phi\rangle &=\lim_{\varepsilon\searrow 0}\int_{{\mathbb R}^{2d}} \overline{\Psi(x)}\mathring{T}_\varepsilon(x,x')\Phi(x')dxdx'\\ &=(2\pi)^{-d}\lim_{\varepsilon\searrow 0} \int_{{\mathbb R}^{d}}\left (\int_{{\mathbb R}^{2d}} e^{i\xi \cdot (x-x')}\overline{\Psi(x)}a_\varepsilon((x+x')/2,\xi)\Phi(x')dxdx' \right )d\xi\\ &= (2\pi)^{-d}\int_{{\mathbb R}^{d}}\left (\int_{{\mathbb R}^{2d}} e^{i\xi \cdot (x-x')}\overline{\Psi(x)}a_0((x+x')/2,\xi)\Phi(x')dxdx' \right )d\xi\,, \end{align*} where the last equality follows from the Lebesgue dominated convergence theorem applied to the $\xi$ integral, for which we use Lemma \ref{lema-horia-3}. Then the identity can be extended to $\mathscr{S}({\mathbb R}^d)$ because $a_0\in S_{0,0}^0({\mathbb R}^{2d})$. \vspace{0.2cm} \noindent{ \it Proof of Lemma \ref{lema-horia-3}.} Let us introduce the notation $$ \kappa:=(\gamma+\gamma')/2\in\big(2^{-1}\mathbb{Z}\big) ^d,\ \kappa':=\gamma-\gamma'\in\mathbb{Z}^d,\ n:= (m+m')/2\in\big(2^{-1}\mathbb{Z}\big)^d,\ n':=m-m'\in\mathbb{Z}^d. $$ Using \eqref{F-Tepsilon} and \eqref{horia3} we obtain \begin{align*} a_\varepsilon(t,\xi)=&(2\pi)^{-d}\sum_{|\kappa-t|\leq \sqrt{d}}\sum_{\kappa'} \sum_{n,n'} \int_{{\mathbb R}^d} e^{-i(\xi-n)\cdot s}{g}(t-\kappa+(s-\kappa')/2){g}(t-\kappa-(s-\kappa')/2) ds \\ &\times e^{i n'\cdot t} e^{-i(n\cdot \kappa'+n'\cdot \kappa)} e^{-\varepsilon(2|n|^2+|n'|^2/2)}\mathcal{T}_{\kappa,\kappa';n,n'}\,, \end{align*} where in order to simplify notation we write $\mathcal{T}_{\kappa,\kappa';n,n'}$ instead of $\mathcal{T}_{\gamma,\gamma';m,m'}$. The estimate \eqref{6} ensures a strong localization in both the $\kappa'$ and $n'$ series. The only series which apparently still needs $\varepsilon>0$ in order to converge is the series in $n$. Define \begin{align}\label{horia10} F(t-\kappa,\xi-n,\kappa'):=(2\pi)^{-d}\int_{{\mathbb R}^d} e^{-i(\xi-n)\cdot s}{g}(t-\kappa+(s-\kappa')/2){g}(t-\kappa-(s-\kappa')/2) ds \end{align} so that \begin{align*} a_\varepsilon(t,\xi)=&\sum_{|\kappa-t|\leq \sqrt{d}}\sum_{\kappa'} \sum_{n,n'} e^{i n'\cdot t} F(t-\kappa,\xi-n,\kappa') e^{-i(n\cdot \kappa'+n'\cdot \kappa)} e^{-\varepsilon(2|n|^2+|n'|^2/2)}\mathcal{T}_{\kappa,\kappa';n,n'}\,. \end{align*} It is important to remember that in the integral of \eqref{horia10}, the integrand is different from zero only if $s$ is of the order of $\kappa'$, i.e. $|s-\kappa'|\leq 2\sqrt{d}$. By differentiating $F$ with respect to $\xi$ we produce a polynomial growth in $s$ which can be traded off with a growth in $|\kappa'|$. Also, by standard partial integration with respect to $s$ we can generate a strong localization in $|\xi-n|$.
In conclusion, one can prove the following statement: given any two multi-indices $\alpha,\beta\in \mathbb{N}^d$, there exists a constant $C(\alpha,\beta)<\infty$ such that \begin{align}\label{horia11} |D_t^\alpha D_\xi^\beta F(t-\kappa,\xi-n,\kappa')|\leq C(\alpha,\beta)\; <\xi-n>^{-2d} <\kappa'>^{|\beta|}. \end{align} The growth in $\kappa'$ is controlled by the decay of the matrix element $\mathcal{T}_{\kappa,\kappa';n,n'}$, while due to \eqref{horia11} the series in $n$ converges absolutely without any help from the $\varepsilon$-dependent Gaussian. Now we can take $\varepsilon$ to zero and define \begin{align*} a_0(t,\xi):=&\sum_{|\kappa-t|\leq \sqrt{d}}\sum_{\kappa'} \sum_{n,n'} e^{i n'\cdot t} F(t-\kappa,\xi-n,\kappa') e^{-i(n\cdot \kappa'+n'\cdot \kappa)} \mathcal{T}_{\kappa,\kappa';n,n'}\,. \end{align*} The limit can be taken uniformly on compact sets in ${\mathbb R}^{2d}$, and remains valid for all possible derivatives with respect to both $\xi$ and $t$. This concludes the proof of the lemma. \qed \section{Proof of Theorem \ref{mainth}}\label{sec4} This time we let $\varphi\neq 0$ in \eqref{eq:gabfra} and have $$G_{\gamma,m}^\varphi(x)=g(x-\gamma)e^{i\varphi(x,\gamma)} (2\pi)^{-d/2}\psi_m(x-\gamma),\quad\gamma,m\in{\mathbb Z}^d.$$ Then \eqref{5} reads as: \begin{align}\label{13} \mathring{T}(x,x')=\sum_{\gamma,\gamma'} \sum_{m,m'} {G}_{\gamma,m}^\varphi(x)\overline{{G}_{\gamma',m'}^\varphi(x')} \mathcal{T}_{\gamma,\gamma';m,m'}^\varphi,\quad \mathcal{T}_{\gamma,\gamma';m,m'}^\varphi:= \big \langle {G}_{\gamma,m}^\varphi, T {G}_{\gamma',m'}^\varphi\big \rangle. \end{align} Let us prove that $\mathcal{T}_{\gamma,\gamma';m,m'}^\varphi$ obeys exactly the same type of localization as in \eqref{6}. The localization in $\gamma-\gamma'$ follows just like before from the boundedness of commutators with the position operators, while the localization in $m-m'$ is obtained by integration by parts and the use of the gauge covariance \eqref{10}.
For example, we have $$ \begin{array}{ll} &(m_1-m'_1) \mathcal{T}_{\gamma,\gamma';m,m'}^\varphi= (2\pi)^{-d}\big \langle m_1 e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T g_{\gamma'} e^{i\varphi(\cdot ,\gamma')} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \qquad \qquad - (2\pi)^{-d}\big \langle g_\gamma e^{i\varphi(\cdot ,\gamma)} \psi_m(\cdot -\gamma), T m'_1 g_{\gamma'} e^{i\varphi(\cdot ,\gamma')} \psi_{m'}(\cdot -\gamma')\big \rangle\\ &= (2\pi)^{-d}\big \langle g_\gamma e^{i\varphi(\cdot ,\gamma)} (D_{x_1}\psi_m)(\cdot -\gamma), T g_{\gamma'} e^{i\varphi(\cdot ,\gamma')} \psi_{m'}(\cdot -\gamma')\big \rangle\\& \qquad \qquad - (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} (D_{x_1}\psi_{m'}) (\cdot -\gamma')\big \rangle\\ & = (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} (D_{x_1} g)_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad - (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')}(D_{x_1}g)_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad +(2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), (-i\partial_{x_1}-\partial _{x_1}\varphi (\cdot ,\gamma))\, T \, e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad -(2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T (-i\partial_{x_1}-\partial_{x_1}\varphi (\cdot,\gamma')) e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & = (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} (D_{x_1} g)_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad - (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')}(D_{x_1}g)_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad +(2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), [D_{x_1}-A_1(\cdot,0),\, T] \, e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad - (2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} A_1(\cdot ,\gamma) g_\gamma \psi_m(\cdot -\gamma), \, T \, e^{i\varphi(\cdot ,\gamma')} g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\\ & \qquad +(2\pi)^{-d}\big \langle e^{i\varphi(\cdot ,\gamma)} g_\gamma \psi_m(\cdot -\gamma), T e^{i\varphi(\cdot ,\gamma')} A_1(\cdot ,\gamma') g_{\gamma'} \psi_{m'}(\cdot -\gamma')\big \rangle\,. \end{array} $$ Here the last formula was obtained by integration by parts. We also used \eqref{10} for writing $\partial_{x_1} \varphi (x,\gamma) = A_1(x,0) - A_1(x,\gamma)$ and the fact that on the support of ${g}_\gamma$ the function $A(\cdot,\gamma)$ is bounded uniformly in $\gamma$. We now regularize the distributional kernel in \eqref{13} and introduce: \begin{align}\label{14} T_\varepsilon^\varphi \left (t+\frac{s}{2},t-\frac{s}{2}\right )&:= (2\pi)^{-d}\nonumber \sum_{\gamma,\gamma'} \sum_{m,m'} {g}_\gamma(t+s/2)e^{i\varphi(t+s/2,\gamma)}e^{-i\varphi(t-s/2,\gamma')}{g}_{\gamma'}(t-s/2) \nonumber \\ &\qquad \times e^{i(m-m')\cdot t} e^{i(m+m')\cdot s/2} e^{-im\cdot\gamma}e^{im'\cdot \gamma'}e^{-\varepsilon(|m|^2+|m'|^2)}\mathcal{T}_{\gamma,\gamma';m,m'}^\varphi. \end{align} This function has a rapid decay in $s$ and is smooth in both $t$ and $s$ when $\varepsilon>0$.
Using twice the second identity of \eqref{11} we obtain: \begin{align}\label{15} &\varphi(t+s/2,t-s/2)=\varphi(t+s/2,\gamma)+\varphi(\gamma,t-s/2)-{{\mathfrak f}}(t+s/2,\gamma,t-s/2)\nonumber \\ &= \varphi(t+s/2,\gamma)+\varphi(\gamma,\gamma')+\varphi(\gamma',t-s/2)-{{\mathfrak f}}(\gamma,\gamma',t-s/2)-{{\mathfrak f}}(t+s/2,\gamma,t-s/2). \end{align} Let us introduce the quantity (we use \eqref{15} in the second equality): \begin{align}\label{16} a_\varepsilon(t,\xi):=& \int_{{\mathbb R}^d} e^{-i\varphi(t+s/2,t-s/2)}e^{-i\xi \cdot s} T_\varepsilon^\varphi(t+s/2,t-s/2)ds \\ =&(2\pi)^{-d}\sum_{|(\gamma +\gamma')/2-t|\leq \sqrt{d}} e^{-i\varphi(\gamma,\gamma')} \nonumber \\ &\quad \times \sum_{m,m'} \int_{{\mathbb R}^d} e^{i {{\mathfrak f}}(\gamma,\gamma',t-s/2)} e^{i{{\mathfrak f}}(t+s/2,\gamma,t-s/2)} e^{-i[\xi-(m+m')/2]\cdot s}{g}_\gamma(t+s/2){g}_{\gamma'}(t-s/2) ds \nonumber \\ & \qquad \qquad \qquad \times e^{i(m-m')\cdot t} e^{-im\cdot\gamma}e^{im'\cdot \gamma'}e^{-\varepsilon(|m|^2+|m'|^2)}\mathcal{T}_{\gamma,\gamma';m,m'}^\varphi.\nonumber \end{align} As in the non-magnetic case, the only series which apparently poses convergence problems is the one with respect to the ``direction'' $(m+m')/2$. It turns out (as in the non-magnetic case) that the integral: $$\int_{{\mathbb R}^d} e^{i {{\mathfrak f}}(\gamma,\gamma',t-s/2)} e^{i{{\mathfrak f}}(t+s/2,\gamma,t-s/2)} e^{-i[\xi-(m+m')/2]\cdot s}{g}_\gamma(t+s/2){g}_{\gamma'}(t-s/2) ds $$ is the one which ensures decay in that direction. In order to prove this, let us notice that the fluxes ${{\mathfrak f}}(t+s/2,\gamma,t-s/2)$ and ${{\mathfrak f}}(\gamma,\gamma',t-s/2)$ grow like the area of the corresponding triangle, hence only like $|\gamma-\gamma'|$, because both $t+s/2-\gamma$ and $t-s/2-\gamma'$ have a length of order one on the joint support of $g_\gamma$ and $g_{\gamma'}$; the same is true for their derivatives with respect to both $t$ and $s$. Integrating by parts with respect to $s$ we can generate a decay of the type $<\xi-(m+m')/2>^{-2d}$ at the price of a polynomial growth in $|\gamma-\gamma'|$, a growth which is taken care of by the decay of the matrix element $\mathcal{T}_{\gamma,\gamma';m,m'}^\varphi$. Thus the same strategy which was used in the previous section concerning the limit $\varepsilon\searrow 0$ can be repeated. We conclude that $a_\varepsilon(t,\xi)\in S_{0,0}^0({\mathbb R}^{2d})$ uniformly in $\varepsilon\geq 0$, and thus the symbol we are looking for is: \begin{align}\label{17} a_0(t,\xi) =&\sum_{|(\gamma +\gamma')/2-t|\leq \sqrt{d}} e^{-i\varphi(\gamma,\gamma')}\nonumber \\ & \quad \times \sum_{m,m'} \int_{{\mathbb R}^d} e^{i {\mathfrak f} (\gamma,\gamma',t-s/2)} e^{i{{\mathfrak f}}(t+s/2,\gamma,t-s/2)} e^{-i[\xi-(m+m')/2]\cdot s}{g}_\gamma(t+s/2){g}_{\gamma'}(t-s/2) ds \nonumber \\ & \qquad \qquad \times e^{i(m-m')\cdot t} e^{-im\cdot\gamma}e^{im'\cdot \gamma'}\mathcal{T}_{\gamma,\gamma';m,m'}^\varphi. \end{align} \qed
\section{Introduction} Magnetism, ferroelectricity and superconductivity are some of the quantum phenomena that are manifested at the macroscopic scale and that we experience in everyday life. In strongly correlated electron systems, these properties arise because of the intricate coupling among the charge, spin, orbital and lattice degrees of freedom. Materials in which two or more ferroic orders coexist are known as multiferroics \cite{Schmid}. The interesting aspect of these kinds of materials is that one can exploit not only the parent properties but also the properties that arise from the cross coupling between the different ferroic orders, which leads to multi-functional behaviour \cite{Wang}. Such behaviour is in high demand with the increase in device miniaturization \cite{Yang}. Recently, the coexistence of ferroelectricity and antiferromagnetism (AFM) has drawn great attention, both from the fundamental physics point of view and for applications in high-density memory devices and magnetic field sensors \cite{Schmid}. These kinds of systems exhibit close coupling between the spin and lattice degrees of freedom. LiFeSi$_2$O$_6$ is one such compound; it displays ferrotoroidicity, a new class of primary ferroic order in which the toroidal moments align spontaneously, together with antiferromagnetism with a N\'eel temperature around 18 K \cite{Jodlauk, Baum, Toledano}. The toroidal moment involves a cross product of the spatial and magnetic degrees of freedom. In this material, the ferrotoroidicity is driven by the magnetic field \cite{Toledano}.\\ \begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_1.pdf} \vspace{-5ex} \caption{(a) Crystal structure of LiFeSi$_2$O$_6$; (b) the intra-chain Fe-Fe bonds (Fe-Fe(intra)) and the inter-chain Fe-Fe bonds, namely Fe-Fe(1) and Fe-Fe(2), depicted pictorially.} \vspace{-2ex} \end{figure} LiFeSi$_2$O$_6$, the compound under study, belongs to the pyroxene family of AMSi$_2$O$_6$ type (A = mono- or divalent metal and M = di- or trivalent metal). At room temperature, this material stabilizes in the monoclinic structure with the space group C2/c and transforms to the monoclinic P2$_1$/c space group around 230 K \cite{Redhammer}. In this compound, the FeO$_6$ octahedra are edge-shared along the c-axis, forming zig-zag chains, and these chains are connected to each other through SiO$_4$ tetrahedra. Magnetically, the interaction within the FeO$_6$ chains is ferromagnetic, while that between the chains is antiferromagnetic. Because of the formation of FeO$_6$ chains, the compound is expected to show quasi-one-dimensional behaviour. The spontaneous toroidal moments arise due to the formation of spin rings. However, the formation of a toroidal moment demands that the exchange interactions between the Fe ions that form a spin ring should have the same strength \cite{Changhoon}. Apart from these, there are reports that mention magnetic pre-ordering, but information about a clear onset temperature of the magnetic interactions is lacking. To understand the connection of the structure with the magnetism and its dimensionality, and to determine the magnetic pre-ordering temperature, we have carried out temperature-dependent x-ray diffraction experiments on LiFeSi$_2$O$_6$. Our results show a clear connection between the structural parameters and the pre-ordering of the magnetic interactions. The magnetism existing in this compound appears to be of three-dimensional character.
\begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_2.pdf} \vspace{-20ex} \caption{Rietveld refinement of the x-ray diffraction patterns of LiFeSi$_2$O$_6$ collected at (a) 300 K and (b) 18 K.} \vspace{-2ex} \end{figure} \section{Results and Discussions} In Fig. 1(a), we show the crystal structure of the LiFeSi$_2$O$_6$ compound. The Rietveld refinements of the x-ray diffraction patterns of this compound collected at 300 K and 18 K are displayed in Fig. 2. Our results show that down to low temperatures the compound remains in the monoclinic crystal system, but a phase transition from the C2/c to the P2$_1$/c space group occurs around 230 K (T$_S$) \cite{Roth}.\\ The temperature-dependent dc magnetic susceptibility at an external magnetic field of 0.5 T is shown in Fig. 3(a). The susceptibility curve reaches a maximum around 18 K, which marks the N\'eel temperature (T$_N$). The field-dependent magnetization plot shows an S-shaped curve suggesting a spin-flop transition (inset of Fig. 3(a)). \begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_3.pdf} \vspace{-10ex} \caption{(a) The temperature evolution of the dc magnetic susceptibility at an applied magnetic field of 0.5 T; the inset shows the magnetisation as a function of applied magnetic field collected at 4 K. (b) The temperature dependence of the heat capacity C$_p$ plotted in the form of C$_p$/T vs T. The black curve is the simulated lattice contribution. The inset highlights the region close to the transition temperature. (c) Isothermal entropy change ($\Delta S_{th}$) as a function of temperature; the inset shows the adiabatic temperature change as a function of temperature. (d) Magnetic contribution to the heat capacity in the form C$_{mag}$/T vs T. The inset shows the temperature dependence of the zero-field total entropy (S$_{total}$) calculated by integrating the measured C$_p$/T vs temperature curve.} \vspace{-2ex} \end{figure} The temperature-dependent heat capacity C$_p$ plotted in the form of C$_p$/T vs T is shown in Fig. 3(b). The inset of Fig. 3(b) shows the heat capacity collected at different applied magnetic fields (2, 6 and 12 T). A peak is observed around 18 K, which shifts to lower temperature with the application of magnetic fields of 6 and 12 T, as highlighted in the inset of Fig. 3(b). This behaviour suggests that the nature of the ordering is antiferromagnetic, which becomes more evident in the magnetocaloric curves shown in Fig. 3(c). The magnetocaloric quantities, i.e. (a) the isothermal entropy change ($\Delta S_{th}$) and (b) the adiabatic temperature change ($\Delta T_{ad}$), are calculated from the measured heat capacity data; $\Delta S_{th}$ is shown in Fig. 3(c). For a paramagnetic system $\Delta S_{th}$ is expected to be negative, as the application of a magnetic field results in a reduction of spin disorder. In the case of an antiferromagnetic system, the application of a magnetic field creates more disorder, as it acts against the ordering effect of the exchange interaction. Therefore, $\Delta S_{th}$ is expected to be positive in the antiferromagnetic state, and across the antiferromagnetic-to-paramagnetic transition a sign reversal will be observed \cite{Rawat}. Fig. 3(c) shows a sign reversal around the magnetic ordering temperature, giving a clear signature of antiferromagnetic-like order. As expected, the magnitudes of both $\Delta S_{th}$ and $\Delta T_{ad}$ are small.\\ To separate the magnetic contribution from the measured heat capacity, the lattice contribution is simulated with two Einstein temperatures (220 K and 700 K) and one Debye temperature (600 K), as has been done in earlier studies \cite{Baker}.
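For orientation, such a lattice estimate can be assembled from the standard Einstein and Debye expressions, as in the following minimal Python sketch (ours, not the authors' fitting code). The characteristic temperatures are those quoted above, while the branch weights \texttt{w} and the normalization to the 10 atoms per formula unit are hypothetical placeholders, since the fitted weights are not quoted in the text:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

R = 8.314  # molar gas constant, J/(mol K)

def c_einstein(T, theta):
    # molar heat capacity of 3 Einstein modes (numerically stable form)
    x = theta / T
    ex = np.exp(-x)
    return 3 * R * x**2 * ex / (1 - ex)**2

def c_debye(T, theta):
    # molar Debye heat capacity
    f = lambda u: u**4 * np.exp(-u) / (1 - np.exp(-u))**2
    return 9 * R * (T / theta)**3 * quad(f, 0, theta / T)[0]

def c_lattice(T, w=(0.4, 0.3, 0.3)):
    # w: hypothetical relative weights of the Debye (600 K) and the two
    # Einstein (220 K, 700 K) branches, normalized to the 10 atoms per
    # LiFeSi2O6 formula unit
    return 10 * (w[0] * c_debye(T, 600.0)
                 + w[1] * c_einstein(T, 220.0)
                 + w[2] * c_einstein(T, 700.0))
\end{verbatim}
In practice the weights would be fitted to C$_p$ well above T$_N$, and the magnetic part then obtained as C$_{mag}$ = C$_p$ $-$ C$_{lattice}$.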
The simulated curve is shown as the black curve in Fig. 3(b). The magnetic contribution, taken as the difference of the simulated C/T and the measured C/T (0 T) curves, is shown in Fig. 3(d). It shows a non-zero contribution well above the transition temperature. \begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_4.pdf} \vspace{-17ex} \caption{The first column shows the temperature behaviour of the lattice parameters, where $\beta$ is the monoclinic angle; the second column shows the temperature-dependent lattice parameters in the temperature range 5 to 100 K.} \vspace{-2ex} \end{figure} The entropy associated with the transition is found to be about 14 J/mol K around 20 K (near T$_N$), and it reaches 20 J/mol K at $\sim$60 K (where C$_{mag}$ reaches zero). For an S=5/2 system the expected magnetic entropy is $R\ln(2S+1)=R\ln 6\approx 14.9$ J/mol K, which is close to what has been observed near T$_N$. However, the magnetic contribution is present up to $\sim$60 K, giving rise to a much larger entropy change. This could be due to an incorrect estimate of the lattice contribution, as modelling with multiple Einstein and Debye functions is not unique. To circumvent this ambiguity, we considered another approach: we calculated the total entropy from the measured zero-field heat capacity data, which is shown as an inset to Fig. 3(d). The entropy curve above the transition temperature is fitted with a third-order polynomial (in the temperature range 30-90 K) and extrapolated down to zero K. The intercept is found to be 12 J/mol K, which can be considered as the entropy of the transition. This is about 80\% of that expected for the spin-5/2 system and is consistent with earlier reports \cite{Baker}.\\ The above results show the onset of a magnetic contribution well above T$_N$. To understand this aspect, temperature-dependent x-ray diffraction experiments were carried out. The lattice parameters obtained during the heating and cooling cycles from the Rietveld refinement are shown in Fig. 4. From the figure, we observe that with decreasing temperature down to 240 K all the lattice parameters decrease linearly. As the compound enters T<T$_S$, the a and c parameters decrease while the b parameter shows an increase. Below T$_S$, the lattice parameters exhibit non-linear behaviour, and the changes in the lattice parameters in the temperature region from below 240 K to 50 K are significant. In the temperature range from below 50 K to the lowest recorded temperature, the a parameter remains almost the same and increases below T$_N$; the b parameter remains almost the same and decreases below T$_N$; and the c parameter is found to decrease, Figs. 4(a$'$-c$'$). Based on the behaviour of the cell parameters, we have divided the structural evolution into three regions. Regions I, II and III are the temperature ranges 300 K to 240 K, below 240 K to 50 K, and below 50 K, respectively. In region I, the lattice parameters a, b and c decrease by 0.046\%, 0.02\% and 0.02\%, respectively. In region II, the changes in the lattice parameters are significant: the a and c parameters decrease by 0.44\% and 0.5\%, respectively, and the b parameter increases by 0.11\%. In region III, the a and b parameters remain almost the same until around T$_N$ and thereafter show opposite behaviours. The c parameter decreases down to the lowest collected temperature, Figs. 4(a$'$-c$'$). These results clearly show that significant changes in the lattice parameters occur around T$_S$ and T$_N$.
To understand these results, we look into the other structural parameters, which also play a significant role in deciding the magnetism within the Fe chains and between them. In this paper, the thermal behaviour of the structural parameters is described as the temperature is decreased. In Fig. 5, we show the temperature variation of the Fe-O bond lengths. For T>T$_S$, there are three types of oxygen ions: O1 (apical), O1 (basal) and O2 (basal). The labels apical and basal refer to those oxygen ions that lie along the c-axis and in the ab-plane, respectively. For T<T$_S$, there are six types of oxygen ions: O1 (apical) is split into O1A (apical) and O1B (apical); O1 (basal) is split into O1A (basal) and O1B (basal); and O2 (basal) is split into O2A and O2B. In region I, the Fe-O bond distances remain almost the same. As the sample enters region II, the Fe-O1A apical bond becomes longer than the Fe-O1B apical bond, and thereafter no significant variation in the Fe-O1A and Fe-O1B bonds occurs. In the case of Fe-O1A (basal) and Fe-O1B (basal), there occur a slight increase and decrease, respectively. In the case of the Fe-O2A and Fe-O2B (basal) bonds, significant and opposite changes occur. In region III, Figs. 5(a$'$-c$'$), the Fe-O1A (apical), Fe-O1A (basal) and Fe-O2B bonds are found to decrease, and the Fe-O1B (apical), Fe-O1B (basal) and Fe-O2A bonds show the opposite behaviour. The Fe-O1A,B (apical) bond lengths are found to cross around T$_N$, Fig. 5(a$'$). Similar behaviour is also observed in the case of the Fe-O2A,B bond lengths, Fig. 5(c$'$). In the case of Fe-O1A,B (basal), the two become almost identical around 10 K, Fig. 5(b$'$). \begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_5.pdf} \vspace{-7ex} \caption{The first column shows the temperature behaviour of the (a) Fe-O1 (apical), (b) Fe-O1 (basal) and (c) Fe-O2 bonds; the second column shows the temperature-dependent (a$'$) Fe-O1 (apical), (b$'$) Fe-O1 (basal) and (c$'$) Fe-O2 bonds in region III; (d) the pictorial representation of the FeO$_{6}$ octahedra in the P2$_{1}$/c phase.} \vspace{-2ex} \end{figure} In Fig. 6, we show the temperature variation of the Si-O bonds. For T>T$_S$, there is only one kind of Si ion; for T<T$_S$, there are two kinds of Si ions, labelled SiA and SiB. In region I, there is no significant variation in the Si-O1 and Si-O2 bonds, while the Si-O3(u) bond (u means `up' along the c-axis) shows a marginal increase and then a decrease, and the Si-O3(d) bond (d means `down' along the c-axis) shows the opposite behaviour. As the sample enters region II, the Si-O1 bond is split into SiA-O1A and SiB-O1B; the SiB-O1B bonds are longer than the SiA-O1A bonds, and the variation in these bond distances remains small in this region. The Si-O2 bonds are split into SiA-O2A and SiB-O2B, the SiB-O2B bond being longer than SiA-O2A. In this region, the SiB-O2B bonds decrease by 1.2\% and the SiA-O2A bonds increase by 0.7\%. The Si-O3(u) bond is split into SiA-O3A(u) and SiB-O3B(u); in this region, the two bond distances are almost the same. The Si-O3(d) bond is split into SiA-O3A(d) and SiB-O3B(d); these bond distances are also almost the same in this region. In region III, the SiA-O1A bond increases by $\sim$11\% and the SiB-O1B bond decreases by $\sim$10.8\%. The SiB-O2B bond decreases by $\sim$3\% and the SiA-O2A bond increases by $\sim$0.9\%. The SiA-O3A(u) bond decreases by $\sim$2.7\% and the SiB-O3B(u) bond increases by $\sim$2\%. The SiB-O3B(d) bond distance increases by $\sim$0.8\%, while the SiA-O3A(d) bond distance remains unaltered.
\begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_6.pdf} \vspace{-7ex} \caption{The first column shows the temperature behaviour of the (a) Si-O1 (apical), (b) Si-O2, (c) Si-O3(u) and (d) Si-O3(d) bonds; the second column shows the temperature-dependent (a$'$) Si-O1, (b$'$) Si-O2, (c$'$) Si-O3(u) and (d$'$) Si-O3(d) bonds in region III; (e) the pictorial representation of the SiO$_{4}$ tetrahedra in the P2$_{1}$/c phase.} \vspace{-2ex} \end{figure} In Figs. 7(a-b), we show the temperature behaviour of the Fe-Fe bonds. Fe-Fe(intra) denotes the bond distance between the edge-shared FeO$_6$ octahedra (within an Fe chain), and Fe-Fe(1$'$) denotes the bond distance between the Fe chains, which are separated by SiO$_4$ tetrahedra. In region I, the Fe-Fe(intra) bond decreases by $\sim$0.12\% and the Fe-Fe(1$'$) bond decreases by $\sim$0.05\%. When the sample enters region II, the Fe-Fe(1$'$) bond splits into Fe-Fe(1) and Fe-Fe(2); these bonds are separated by SiB and SiA tetrahedra, respectively. The Fe-Fe(1) bond is longer than Fe-Fe(2). In this region, all the Fe-Fe bonds within and between the chains remain almost the same. In region III, the Fe-Fe(1) and Fe-Fe(2) bonds increase and decrease by $\sim$0.82\% and $\sim$0.77\%, respectively. The Fe-Fe(intra) bonds show a slight decrease below 25 K, by $\sim$0.16\%. \begin{figure} \vspace{-1ex} \includegraphics [scale=0.4, angle=0]{Fig_7.pdf} \vspace{-27ex} \caption{The first column shows the temperature behaviour of the (a) Fe-Fe(1,2), (b) Fe-Fe(intra) and (c) O1-O2(A,B) bonds; the second column shows the pictorial representation of these bonds.} \vspace{-2ex} \end{figure} The above results show that the structural parameters exhibit significant changes in region III, i.e. below 50 K. This behaviour is in line with the heat capacity results, where the entropy change associated with the magnetic interactions was found to be appreciable only in the temperature range 60 K to 5 K. The changes are observed in the Fe-O and (inter-chain) Fe-Fe bond distances. Toledano et al. \cite{Toledano} have shown the spin-exchange pathways depicting the intra-chain ferromagnetic (Fe-Fe(intra)) and inter-chain antiferromagnetic (Fe-Fe(1)) interactions, Fig. 1(b), that form a spin ring. With the application of a magnetic field, the spin ring is expected to give a finite toroidal moment. In region II, we observe discernible changes in the basal Fe-O bond distances. These changes are such that the volume of the FeO$_6$ octahedra is preserved. The unaltered intra-chain Fe-Fe bond distances suggest an insignificant role for the direct d-d orbital exchange of the Fe ions. This result is also in line with the prediction of Streltsov and Khomskii \cite{Streltsov}. They have shown that, as one goes from light to heavy transition-metal ions, i.e. from Ti to Fe, the hopping integral ($t_{dd}$) between the direct d-d orbitals decreases, provided the distance (D) between the transition-metal ions is kept the same. The hopping integral scales as $t_{dd}\propto r_{d}^{3}/D^{5}$, where $r_{d}$ is the radius of the d states \cite{Harrison}. The inter-chain Fe-Fe(1) bonds are connected through the SiBO$_4$ tetrahedra, as shown in Fig. 7. Each Fe is connected to the SiB through O1B (basal) and O2B (basal). The O1B-O2B bond distances remain almost the same in region II due to a rearrangement of the Fe-O1B and Fe-O2B bonds. In region III, we observe a decrease in the O1B-O2B bond distances, suggesting that the super-super-exchange interaction between the chains dominates over the thermal effect.
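As a rough quantitative aside (our estimate, not part of the original analysis), the Harrison scaling quoted above implies, for small changes of the inter-ion distance, $$\frac{\delta t_{dd}}{t_{dd}}\approx -5\,\frac{\delta D}{D},$$ so the $\sim$0.16\% contraction of the intra-chain Fe-Fe distance observed in region III would change the direct d-d hopping by only $\sim$0.8\%, in line with the insignificant role attributed above to the direct exchange.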
In this region, we also observe a slight decrease in the Fe-Fe(intra) distance, while the average Fe-O bond remains almost the same. In the case of edge-shared FeO$_6$ octahedra, the antiferromagnetic interaction is predicted to set in if the Fe-O-Fe bond angle is more than 97$^\circ$. As has been observed in the literature \cite{Redhammer,Streltsov}, the Fe-O1A-Fe (apical) and Fe-O1B-Fe (apical) bond angles are around 97.7$^\circ$ and 99.1$^\circ$ at 1.4 K. Hence the exchange pathway for setting up the ferromagnetic interaction within the chain is along the Fe-O1A-Fe bond. We also observe a significant decrease in the Fe-O1A (apical) bond distances in this region, where the ferromagnetic interaction occurs through the O 2p orbitals. Apart from these behaviours, significant changes in the SiB-O1B and SiB-O2B bond distances are also observed. This suggests a possible involvement of the Si 3s/3p orbitals in setting up the super-super-exchange interaction between the chains. Our results suggest that the onset of both the antiferromagnetic and the ferromagnetic interactions occurs around the same temperature, and thus the magnetism is of three-dimensional nature. The results obtained in the present study differ from those for compounds that show low-dimensional magnetism. In the case of the quasi-one-dimensional compounds Ca$_3$Co$_2$O$_6$ \cite{Bindu} and Sr$_3$NiRhO$_6$ \cite{Navneet}, our EXAFS studies of the local structural parameters have shown a clear signature of the onset of the ferromagnetic interactions (within the chain) at higher temperatures compared with the onset of the antiferromagnetic interactions (between the chains). In the case of the Ca$_3$Co$_2$O$_6$ compound, M\"ossbauer studies have also shown similar behaviour \cite{Paulose}. Similarly, in the case of the quasi-two-dimensional MnTiO$_3$ compound, our structural studies have shown a clear signature of the onset of the magnetism within the ab-plane stabilizing at a higher temperature than the onset of the magnetism between the planes \cite{Maurya}. \section{Summary} Temperature-dependent x-ray diffraction experiments were carried out on the LiFeSi$_2$O$_6$ compound. We observe an interesting evolution of the structural parameters across the structural and magnetic phase transitions. The behaviour of the structural parameters reveals signatures of magnetic pre-ordering at temperatures well above T$_N$. Our results show that both the antiferromagnetic interaction between the chains and the ferromagnetic interaction within the chains set in below 50 K. Hence the magnetism existing in this compound is of three-dimensional character. We believe that our results will be helpful in understanding the evolution of the spin rings that give rise to a net toroidal moment, if x-ray diffraction experiments are carried out in the presence of a magnetic field.
\section{Introduction} Fuzzy set theory, first initiated by Zadeh \cite{zadeh} in 1965, has become a very important tool for solving many complicated problems involving uncertainties that arise in the fields of economics, the social sciences, engineering, the medical sciences and so forth, and it provides an appropriate framework for representing vague concepts by allowing partial membership. Many researchers have worked on theoretical aspects and applications of fuzzy set theory over the years, such as fuzzy control systems, fuzzy logic, fuzzy automata, fuzzy topology, fuzzy topological groups, fuzzy topological vector spaces, fuzzy differentiation, etc. \cite{chang,ferraro,katsaras,lee,lowen,ming,wong}. A new type of fuzzy set, the multi-fuzzy set, was introduced by Sebastian and Ramkrishnan \cite{sebastian} via ordered sequences of membership functions. The notion of multi-fuzzy sets provides a new method to represent some problems which are difficult to express in other extensions of fuzzy set theory \cite{sebastian2}. The topological structure in this setting has been defined by Sebastian and Ramkrishnan \cite{sebastian1}. Also, Dey and Pal \cite{Dey} introduced the notions of multi-fuzzy complex numbers, multi-fuzzy complex sets, multi-fuzzy vector spaces, etc. In this paper, a Lowen type multi-fuzzy topology is introduced and a characterization of the neighbourhood system is given. Continuity of functions is studied. A notion of product of multi-fuzzy topologies is introduced, and the productive properties of second countability and compactness are established. \section{Preliminaries} \begin{Definition} \cite{sebastian} \label{mftvs1.1} Let $X$ be a non-empty set and $\{L_{i}:i\in P\}$ be a family of complete lattices. A multi-fuzzy set $A$ in $X$ is a set of ordered sequences $A=\{\prec x,\mu_{A_{1}}(x),\mu_{A_{2}}(x),...,\mu_{A_{i}}(x),...\succ:x\in X\},$ where $\mu_{A_{i}}\in L_{i}^{X},$ for $i\in P.$ For the sake of simplicity we denote the multi-fuzzy set $A=\{\prec x,\mu_{A_{1}}(x),\mu_{A_{2}}(x),...,\mu_{A_{i}}(x),...\succ:x\in X\}$ as $A=\prec\mu_{A_{1}},\mu_{A_{2}},...,\mu_{A_{i}},...\succ.$ The set of all multi-fuzzy sets in $X$ with $\{L_{i}:i\in P\}$ is denoted by $\underset{i\in P}{\prod}L_{i}^{X}.$ \end{Definition} \begin{Definition} \cite{sebastian} \label{mftvs1.2} Let $A,B$ be multi-fuzzy sets in $\underset{i\in P}{\prod}L_{i}^{X}.$ Then we have the following relations and operations.\begin{itemize} \item[(i)] $A$ is said to be a multi-fuzzy subset of $B,$ denoted by $A\sqsubseteq B$, if $\mu_{A_{i}}\leq\mu_{B_{i}},$ for all $i\in P;$ \item[(ii)] $A$ is said to be equal to $B,$ denoted by $A=B$, if $\mu_{A_{i}}=\mu_{B_{i}},$ for all $i\in P;$ \item[(iii)] the intersection of $A$ and $B$, denoted by $A\sqcap B,$ is defined by $\mu_{A\sqcap B}=\prec(\mu_{A_{i}}\wedge\mu_{B_{i}})_{i\in P}\succ;$ \item[(iv)] the union of $A$ and $B$, denoted by $A\sqcup B,$ is defined by $\mu_{A\sqcup B}$ $=\prec(\mu_{A_{i}}\vee\mu_{B_{i}})_{i\in P}\succ;$ \item[(v)] the complement of $A$, denoted by $A^{C}$, is defined by $\mu_{A^C}=\prec(\mu_{A^{C}_{i}})_{i\in P}\succ$, where $\mu_{A^{C}_{i}}$ is the complement of the fuzzy set $\mu_{A_{i}}$; \item[(vi)] a multi-fuzzy set is said to be the null multi-fuzzy set, denoted by $\bar{\Phi},$ if $\mu_{\bar{\Phi}_{i}}=\bar{0},$ for all $i\in P,$ where $\bar{0}$ is the null fuzzy set; \item[(vii)] a multi-fuzzy set is said to be the absolute multi-fuzzy set, denoted by $\bar{X},$ if $\mu_{\bar{X}_{i}}=\bar{1},$ for all $i\in P,$ where $\bar{1}$ is the absolute fuzzy set.\end{itemize}
\end{Definition} \begin{Definition} \cite{sebastian1} \label{mftvs1.3} A subset $\delta$ of $\underset{i\in P}{\prod}L_{i}^{X}$ is called a multi-fuzzy topology on $X$ if it satisfies the following conditions:\begin{itemize} \item[(i)] $\bar{\Phi},\bar{X}\in\delta;$ \item[(ii)] the intersection of any two multi-fuzzy sets in $\delta$ belongs to $\delta$; \item[(iii)] the union of any number of multi-fuzzy sets in $\delta$ belongs to $\delta$. \end{itemize} \noindent The triplet $(X,\underset{i\in P}{\prod}L_{i}^{X},\delta)$ is called a multi-fuzzy topological space. The members of $\delta$ are said to be $\delta$-open multi-fuzzy sets, or simply open multi-fuzzy sets, in $X.$ A multi-fuzzy set $A\in\underset{i\in P}{\prod}L_{i}^{X}$ is called $\delta$-closed if and only if its complement $A^{C}$ is $\delta$-open. \end{Definition} \begin{Remark} \cite{sebastian}\label{mftvs1.4} If the sequences of membership functions have only $n$ terms (a finite number of terms), $n$ is called the dimension of $A.$ If $L_{i}=[0,1]$ (for $i=1,2,...,n$), then the set of all multi-fuzzy sets in $X$ of dimension $n$ is denoted by $M^{n}FS(X).$ The multi-membership function $\mu_{A}$ is a function from $X$ to $I^{n}$ such that for all $x$ in $X,$ $\mu_{A}(x)=\prec\mu_{A_{1}}(x),\mu_{A_{2}}(x),...,\mu_{A_{n}}(x)\succ.$ For the sake of simplicity we denote the multi-fuzzy set $A=\{\prec x,\mu_{A_{1}}(x),\mu_{A_{2}}(x),...,\mu_{A_{n}}(x)\succ:x\in X\}$ as $A=\prec\mu_{A_{1}},\mu_{A_{2}},...,\mu_{A_{n}}\succ.$ In this paper $I_{i}=[0,1]$ (for $i=1,2,...,n$), and $M^{n}FS(X)$, i.e. the set of all multi-fuzzy sets in $X$ of dimension $n$, is denoted by $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$. $\newline$\\ Following Sebastian et al. \cite{sebastian1}, some definitions and preliminary results are presented in the rest of this section in our form.\end{Remark} \begin{Definition} \label{mftvs1.6}Let $X$ and $Y$ be two non-empty sets and $f:X\rightarrow Y$ be a mapping.
Then\begin{itemize} \item[(i)] the image of a multi-fuzzy set $A\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ under the mapping $f$ is a multi-fuzzy set in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y}$, defined by $\mu_{f(A)}(y)=\underset{y=f(x)}{\vee}\mu_{A}(x),$ $A\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},y\in Y;$ \item[(ii)] the inverse image of a multi-fuzzy set $B\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y}$ under the mapping $f$ is a multi-fuzzy set in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$, defined by $\mu_{f^{-1}(B)}(x)=\mu_{B}(f(x)),$ $B\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},x\in X.$ \end{itemize} \end{Definition} \begin{Proposition} \label{mftvs1.7} Let $f:X\rightarrow Y$ be a mapping and $F^{1},F^{2},F^{k}\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},$ $G,G^{1},G^{2},G^{k}\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y}$, for $k\in\triangle.$ Then \begin{itemize} \item[(i)] $f(\bar{\Phi}_{X})=\bar{\Phi}_{Y};$ \item[(ii)] $F^{1}\sqsubseteq F^{2}$ implies $f(F^{1})\sqsubseteq f(F^{2});$ \item[(iii)] $f(\underset{k\in\triangle}{\sqcap}F^{k})\sqsubseteq\underset{k\in\triangle}{\sqcap}f(F^{k});$ \item[(iv)] $f(\underset{k\in\triangle}{\sqcup}F^{k})=\underset{k\in\triangle}{\sqcup}f(F^{k});$ \item[(v)] $f^{-1}(\bar{\Phi}_{Y})=\bar{\Phi}_{X}$ and $f^{-1}(\bar{Y})=\bar{X};$ \item[(vi)] $G^{1}\sqsubseteq G^{2}$ implies $f^{-1}(G^{1})\sqsubseteq f^{-1}(G^{2});$ \item[(vii)] $f^{-1}(\underset{k\in\triangle}{\sqcup}G^{k})=\underset{k\in\triangle}{\sqcup}f^{-1}(G^{k});$ \item[(viii)] $f^{-1}(\underset{k\in\triangle}{\sqcap}G^{k})=\underset{k\in\triangle}{\sqcap}f^{-1}(G^{k});$ \item[(ix)] $f^{-1}(G^{C})=[f^{-1}(G)]^{C};$ \item[(x)] $F^{k}\sqsubseteq f^{-1}(f(F^{k})),$ with equality if $f$ is injective; \item[(xi)] $f(f^{-1}(G^{k}))\sqsubseteq G^{k},$ with equality if $f$ is surjective.\end{itemize}\end{Proposition} \section{Lowen type multi-fuzzy topology} Unless otherwise mentioned, by $\hat{0}$ we denote the $n$-tuple $(0,0,...,0)$. \begin{Definition} \label{mftvs1.5} A multi-fuzzy set $A$ in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ is said to be a non-null constant multi-fuzzy set if $\mu_{A_{i}}=\bar{c_{i}},$ for all $i=1,2,...,n,$ where $\bar{c_{i}}(x)=c_{i},$ $\forall x\in X,$ is a constant fuzzy set with $c_{i}\in(0,1].$ This is denoted by $C_X^n$.\end{Definition} \begin{Note} The null multi-fuzzy set in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ as defined in Definition \ref{mftvs1.2} $(vi)$ will be denoted by $\Phi_{X}^{n}$. \end{Note} \begin{Definition} \label{mftvs2.7} For a multi-fuzzy set $F\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},$ \begin{itemize} \item[(1)] $\mu_{F}(x)\succ\hat{0}\Leftrightarrow\mu_{F_{i}}(x)>0,$ for all $i=1,2,...,n;$ \item[(2)] $\mu_{F}(x)= \hat{0}\Leftrightarrow\mu_{F_{i}}(x)=0,$ for all $i=1,2,...,n.$ \end{itemize}\end{Definition} \begin{Definition} Let $\mathcal{M}_X$ denote the collection of all multi-fuzzy sets $F$ in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ such that for any $x\in X,$ either $\mu_{F}(x)\succ\hat{0}$ or $\mu_{F}(x)= \hat{0}.$ \end{Definition} \begin{Definition} \label{mftvs2.1} Let $\tau$ be a sub-collection of $\mathcal{M}_{X}.$ Then $\tau$ is said to be a Lowen type multi-fuzzy topology on $X$ if\begin{itemize} \item[(i)] $\Phi_{X}^{n},~C_{X}^{n}\in\tau$, where $\mu_{(C_{X}^{n})_{i}}=\bar{c_{i}}$, for all $i=1,2,...,n,$ and $\bar{c_{i}}(x)=c_{i},$ $\forall x\in X,$ is a constant fuzzy set with $c_{i}\in(0,1]$.
\item[(ii)] the intersection of any two multi-fuzzy sets in $\tau$ belongs to $\tau$; \item[(iii)] the union of any number of multi-fuzzy sets in $\tau$ belongs to $\tau$. \end{itemize} \noindent The triplet $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is called a Lowen type multi-fuzzy topological space over $X$. \\ The members of $\tau$ are called multi-fuzzy open sets, and a multi-fuzzy set is said to be multi-fuzzy closed if its complement is in $\tau$. \end{Definition} \begin{Definition} \label{mftvs2.2} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be a Lowen type multi-fuzzy topological space. A sub-collection $\mathcal{B}$ of $\tau$ is said to be an open base of $\tau$ if every member of $\tau$ can be expressed as the union of some members of $\mathcal{B}$.\end{Definition} \begin{Definition}\label{mftvs2.3} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two Lowen type multi-fuzzy topological spaces. A mapping $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ is said to be \begin{itemize} \item[(i)] multi-fuzzy continuous if $f^{-1}(A)\in\tau,\forall A\in\nu$; \item[(ii)] a multi-fuzzy homeomorphism if $f$ is bijective and $f,f^{-1}$ are multi-fuzzy continuous; \item[(iii)] multi-fuzzy open if $A\in\tau$ $\Rightarrow$ $f(A)\in\nu$; \item[(iv)] multi-fuzzy closed if $A$ is multi-fuzzy closed in $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ $\Rightarrow$ $f(A)$ is multi-fuzzy closed in $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$.\end{itemize} \end{Definition} \begin{Proposition} \label{mftvs2.4}Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$, $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ and $(Z,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Z},\omega)$ be Lowen type multi-fuzzy topological spaces. If $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ and $g:(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)\rightarrow(Z,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Z},\omega)$ are multi-fuzzy continuous (open), then the composition $g\circ f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Z,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Z},\omega)$ is multi-fuzzy continuous (open).\end{Proposition} \begin{proof} Let $H\in\omega.$ Since $g:(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)\rightarrow(Z,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Z},\omega)$ is a multi-fuzzy continuous mapping, it follows that $g^{-1}(H)\in\nu.$ Again, since $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ is a multi-fuzzy continuous mapping, it follows that $f^{-1}(g^{-1}(H))\in\tau,$ i.e. $(g\circ f)^{-1}(H)\in\tau.$ So the composition $g\circ f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Z,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Z},\omega)$ is multi-fuzzy continuous.\\ The proof in the case of multi-fuzzy open mappings is similar.\end{proof} \begin{Proposition} \label{mftvs2.5} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two Lowen type multi-fuzzy topological spaces.
For a bijective mapping $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$, the following statements are equivalent: \begin{itemize} \item[(i)] $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ is a multi-fuzzy homeomorphism; \item[(ii)] $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ and $f^{-1}:(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)\rightarrow(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ are multi-fuzzy continuous; \item[(iii)] $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ is both multi-fuzzy continuous and multi-fuzzy open. \end{itemize} \end{Proposition} \begin{Proposition}\label{mftvs2.6} If $\tau_{j},j\in J$ are Lowen type multi-fuzzy topologies in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},$ then $\underset{j\in J}{\cap}\tau_{j}$ is a Lowen type multi-fuzzy topology in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}.$\end{Proposition} \begin{Definition} \label{mftvs2.8} A multi-fuzzy set $F$ in a Lowen type multi-fuzzy topological space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is said to be a multi-fuzzy neighbourhood of a point $x\in X$ if there is a $G\in\tau$ such that $G\sqsubseteq F$ and $\mu_{F}(x)= \mu_{G}(x)\succ \hat{0}$.\\ By $\mathcal{F}_{x}$ we denote the family of all multi-fuzzy neighbourhoods of $x$ which are determined by the multi-fuzzy topology $\tau$ on $X.$ \end{Definition} \begin{Proposition}\label{mftvs2.9} If $F_{x}$ and $G_{x}$ are multi-fuzzy neighbourhoods of $x,$ then $F_{x}\sqcap G_{x}$ is also a multi-fuzzy neighbourhood of $x.$ \end{Proposition} \begin{Proposition}\label{mftvs2.12} A multi-fuzzy set $A$ in $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ is open in the Lowen type multi-fuzzy topological space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ iff for every $x\in X$ satisfying $\mu_{A}(x)\succ\hat{0},$ there is $F_{x}\in\tau$ with $F_{x}\sqsubseteq A$ and $\mu_{F_{x}}(x)=\mu_{A}(x).$ \end{Proposition} \begin{Proposition}\label{mftvs2.14} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be a Lowen type multi-fuzzy topological space. Then for each $x\in X,$ the family $\mathcal{F}_{x}$ of all multi-fuzzy neighbourhoods of $x$ satisfies:\\ $(i)$ every non-null constant multi-fuzzy set belongs to $\mathcal{F}_{x}.$\\ $(ii)$ $\mu_{N}(x)\succ\hat{0},$ for each $N\in\mathcal{F}_{x}.$ \\ $(iii)$ If $N,M\in\mathcal{F}_{x},$ then $N\sqcap M\in\mathcal{F}_{x}.$ \\ $(iv)$ Let $F\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ and $x\in X$ with $\mu_{F}(x)\succ\hat{0}$. If for each $i\in \{1,2,...,n\}$ and for each $0<r_{i}<\mu_{F_{i}}(x)$ there exists $F^{r_{i}}\in\mathcal{F}_{x}$ with $F^{r_{i}}\sqsubseteq F$ and $\mu_{F_{i}^{r_{i}}}(x)>r_{i},$ then $F\in\mathcal{F}_{x}.$\\ $(v)$ If $N\in\mathcal{F}_{x},$ then there exists $G\in\mathcal{F}_{x}$ such that $G\sqsubseteq N$ and $\mu_{N}(x)=\mu_{G}(x)$, and if $\mu_{G}(y)\succ\hat{0},$ then $G\in\mathcal{F}_{y}.$\end{Proposition} \begin{proof} We only give the proof of $(iv).$ First observe that if $N\in\mathcal{F}_{x},$ $N\sqsubseteq W$ and $\mu_{N}(x)=\mu_{W}(x),$ then $W\in\mathcal{F}_{x}$ $(*)$; and that if $N_{j}\in\mathcal{F}_{x},$ $j\in\triangle,$ then $\underset{j\in\triangle}{\sqcup}N_{j}\in\mathcal{F}_{x}$ $(**)$.
Let $F\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ and $x\in X$ with $\mu_{F}(x)\succ\hat{0}$, and assume that for each $i\in \{1,2,...,n\}$ and each $0<r_{i}<\mu_{F_{i}}(x)$ there exists $F^{r_{i}}\in\mathcal{F}_{x}$ with $F^{r_{i}}\sqsubseteq F$ and $\mu_{F_{i}^{r_{i}}}(x)>r_{i}$. Choose $j\in \{1,2,...,n\}.$ Then for each $0<r_{j}<\mu_{F_{j}}(x)$ there exists $F^{r_{j}}\in\mathcal{F}_{x}$ with $F^{r_{j}}\sqsubseteq F$ and $\mu_{F_{j}^{r_{j}}}(x)>r_{j}$. Let $F^{j}=\underset{0<r_{j}<\mu_{F_{j}}(x)}{\sqcup}F^{r_{j}};$ by $(**),$ $F^{j}\in\mathcal{F}_{x}.$ Then $F^{j}\sqsubseteq F$ and $\mu_{F_{j}^{j}}(x)=\mu_{F_{j}}(x).$ Thus for each $i\in \{1,2,...,n\}$ there exists $F^{i}\in\mathcal{F}_{x}$ such that $F^{i}\sqsubseteq F$ and $\mu_{F_{i}^{i}}(x)=\mu_{F_{i}}(x).$ Let $F_{0}=\sqcup\{F^{i}:i=1,2,...,n\};$ by $(**),$ $F_{0}\in\mathcal{F}_{x}.$ Then $\mu_{F_{0}}(x)=\mu_{F}(x)$ and $F_{0}\sqsubseteq F.$ Hence by $(*),$ $F\in\mathcal{F}_{x}.$\end{proof} \begin{Definition}\label{mftvs2.15} Let $X$ be a set and $\mathcal{L}$ be a function from $X$ into the power set of $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$. Then $\mathcal{L}$ is called a multi-fuzzy neighbourhood system on $X$ if $\mathcal{L}$ satisfies:\\ $(N1)$ for each $x\in X,$ every non-null constant multi-fuzzy set belongs to $\mathcal{L}(x).$\\ $(N2)$ If $N\in\mathcal{L}(x),$ then $\mu_{N}(x)\succ\hat{0}.$\\ $(N3)$ If $N,M\in\mathcal{L}(x),$ then $N\sqcap M\in\mathcal{L}(x).$ \\ $(N4)$ Let $F\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ and $x\in X$ with $\mu_{F}(x)\succ\hat{0}$. If for each $i\in\{1,2,...,n\}$ and for each $0<r_{i}<\mu_{F_{i}}(x)$ there exists $F^{r_{i}}\in\mathcal{L}(x)$ with $F^{r_{i}}\sqsubseteq F$ and $\mu_{F_{i}^{r_{i}}}(x)>r_{i},$ then $F\in\mathcal{L}(x).$\\ $(N5)$ If $N\in\mathcal{L}(x),$ then there exists $G\in\mathcal{L}(x)$ such that $G\sqsubseteq N$ and $\mu_{N}(x)=\mu_{G}(x)$, and if $\mu_{G}(y)\succ\hat{0},$ then $G\in\mathcal{L}(y).$ \end{Definition} \begin{Proposition}\label{mftvs2.16} If $\mathcal{L}$ is a multi-fuzzy neighbourhood system on $X,$ we define $\tau_{\mathcal{L}}$ as the family consisting of $\Phi_{X}^{n}$ together with all multi-fuzzy sets $G$ in $X$ with the property that if $\mu_{G}(x)\succ\hat{0},$ then $G\in\mathcal{L}(x).$ Then $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau_{\mathcal{L}})$ is a Lowen type multi-fuzzy topological space. Also, for every $x\in X,$ the family $\mathcal{F}_{x}$ of all multi-fuzzy neighbourhoods of $x$ with respect to the multi-fuzzy topology $\tau_{\mathcal{L}}$ is exactly $\mathcal{L}(x).$\end{Proposition} \begin{proof} Clearly, by $(N1),$ every non-null constant multi-fuzzy set belongs to $\tau_{\mathcal{L}},$ and $\Phi_{X}^{n}\in\tau_{\mathcal{L}}$ by definition. Next, from $(N4)$ we see that if $N\in\mathcal{L}(x)$, $N\sqsubseteq W$ and $\mu_{N}(x)=\mu_{W}(x),$ then $W\in\mathcal{L}(x)$ $(*)$; and that if $N_{j}\in\mathcal{L}(x),$ $j\in\triangle,$ then $\underset{j\in\triangle}{\sqcup}N_{j}\in\mathcal{L}(x)$ $(**)$. \\ For each $j\in\triangle,$ let $G_{j}\in\tau_{\mathcal{L}}.$ Set $H=\underset{j\in\triangle}{\sqcup}G_{j}$.
If $\mu_{H}(x)\succ\hat{0},$ then there exists a nonempty $J_{x}\subset\triangle$ such that $\mu_{G_{j}}(x)\succ\hat{0}$ for all $j\in J_{x}$ and $\underset{j\in\triangle}{\vee}\mu_{G_{j}}(x)=\underset{j\in J_{x}}{\vee}\mu_{G_{j}}(x).$ By the definition of $\tau_{\mathcal{L}},$ if $j\in J_{x},$ then $G_{j}\in\mathcal{L}(x).$ From $(**)$ we conclude that $\underset{j\in J_{x}}{\sqcup}G_{j}\in\mathcal{L}(x),$ and by $(*)$, $H\in\mathcal{L}(x).$ Therefore $H\in\tau_{\mathcal{L}}.$ \\ Let $G,H\in\tau_{\mathcal{L}}$ and suppose that $\mu_{G\sqcap H}(x)\succ \hat{0}.$ Then $\mu_{G}(x)\succ\hat{0}$ and $\mu_{H}(x)\succ\hat{0}$. Hence, by the definition of $\tau_{\mathcal{L}}$, $G$ and $H$ are in $\mathcal{L}(x).$ It follows from $(N3)$ that $G\sqcap H\in\mathcal{L}(x).$ Thus $G\sqcap H\in\tau_{\mathcal{L}}.$\\ If $N\in\mathcal{F}_{x},$ then there exists $G\in\tau_{\mathcal{L}}$ such that $G\sqsubseteq N$ and $\mu_{N}(x)=\mu_{G}(x)\succ\hat{0}.$ By the definition of $\tau_{\mathcal{L}}$, $G\in\mathcal{L}(x),$ and hence, by $(*),$ $N\in\mathcal{L}(x).$ \\ Conversely, let $N\in\mathcal{L}(x).$ Then by $(N2),$ $\mu_{N}(x)\succ\hat{0},$ and by $(N5)$ there exists $G\in\mathcal{L}(x)$ such that $G\sqsubseteq N$ and $\mu_{G}(x)=\mu_{N}(x)$, and if $\mu_{G}(y)\succ\hat{0},$ then $G\in\mathcal{L}(y).$ It follows from the definition of $\tau_{\mathcal{L}}$ that $G\in\tau_{\mathcal{L}}.$ Consequently $N\in\mathcal{F}_{x}.$ \end{proof} \begin{Proposition}\label{mftvs2.18} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be a Lowen type multi-fuzzy topological space and $\mathcal{L}_{\tau}$ be the function from $X$ to the power set of $\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ defined by $\mathcal{L}_{\tau}(x)=\mathcal{F}_{x},$ where $x\in X$ and $\mathcal{F}_{x}$ is the family of all multi-fuzzy neighbourhoods of $x$ with respect to $\tau.$ Then $\mathcal{L}_{\tau}$ is a multi-fuzzy neighbourhood system on $X$ and $\tau_{\mathcal{L}_{\tau}}=\tau.$ \end{Proposition} \begin{proof} By Proposition \ref{mftvs2.14}, $\mathcal{L}_{\tau}$ satisfies the conditions $(N1)$ to $(N5)$ and is therefore a multi-fuzzy neighbourhood system on $X.$ By Proposition \ref{mftvs2.16}, $\tau_{\mathcal{L}_{\tau}}$ is a multi-fuzzy topology on $X.$ Also, from Proposition \ref{mftvs2.16}, we can say that for each $x\in X,$ the multi-fuzzy neighbourhoods of $x$ with respect to $\tau_{\mathcal{L}_{\tau}}$ are exactly the same as the members of $\mathcal{F}_{x}.$ Since a multi-fuzzy set $U$ is open iff it is a multi-fuzzy neighbourhood of each point $x$ satisfying $\mu_{U}(x)\succ \hat{0}$, it follows that $\tau_{\mathcal{L}_{\tau}}=\tau.$\end{proof} \begin{Proposition}\label{mftvs2.19} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two multi-fuzzy topological spaces and $f:X\rightarrow Y$ be any map. Then the following conditions are equivalent:\\ $(a)$ The function $f$ is multi-fuzzy continuous. \\ $(b)$ The inverse image of every multi-fuzzy closed set is multi-fuzzy closed.\\ $(c)$ For every $x\in X$ and every multi-fuzzy neighbourhood $N$ of $f(x),$ $f^{-1}(N)$ is a multi-fuzzy neighbourhood of $x.$ \\ $(d)$ For every $x\in X$ and every multi-fuzzy neighbourhood $N$ of $f(x)$, there is a multi-fuzzy neighbourhood $M$ of $x$ such that $f(M)\sqsubseteq N$ and $\mu_{M}(x)=\mu_{f^{-1}(N)}(x).$ \end{Proposition} \section{Product multi-fuzzy topology} Unless otherwise mentioned, in the rest of this paper a multi-fuzzy topological space means a Lowen type multi-fuzzy topological space.
\begin{Definition}\label{mftvs3.1} Let $F,G$ be two fuzzy subsets of $X$ and $Y$ respectively. Then their product, denoted by $F\times G,$ is defined by $\mu_{\left(F\times G\right)}(x,y)=\min\{\mu_{F}(x),$ $\mu_{G}(y)\},\forall(x,y)\in X\times Y.$ \end{Definition} \begin{Definition}\label{mftvs3.2} Let $F\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X}$ and $G\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y}$. Then their product is defined by $\mu_{(F\times G)_{i}}=\mu_{F_{i}\times G_{i}},$ for $i=1,2,...,n$, i.e. $F\times G=\prec\left(\mu_{F_{i}}(x)\wedge\mu_{G_{i}}(y)\right)_{i=1}^{n}\succ.$ It is clear that $F\times G$ is a multi-fuzzy set over $X\times Y$, i.e. $F\times G\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X\times Y}.$ \end{Definition} \begin{Definition}\label{mftvs3.3} Let $F\in I^{X}$ and $G\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y}$. Then their product is defined by $\mu_{(F\times G)_{i}}=\mu_{F\times G_{i}},$ for $i=1,2,...,n$, i.e. $F\times G=\prec\left(\mu_{F}(x)\wedge\mu_{G_{i}}(y)\right)_{i=1}^{n}\succ.$ Again, $F\times G$ is a multi-fuzzy set over $X\times Y$, i.e. $F\times G\in\overset{n}{\underset{i=1}{\prod}}I_{i}^{X\times Y}.$ \end{Definition} \begin{Proposition}\label{mftvs3.4} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two multi-fuzzy topological spaces. Then $\mathcal{F}=\{F\times G:F\in\tau,G\in\nu\}$ forms an open base for a multi-fuzzy topology on $X\times Y$. \end{Proposition} \begin{Definition} \label{mftvs3.5}Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two multi-fuzzy topological spaces. The multi-fuzzy topology in $X\times Y$ induced by the open base $\mathcal{F}=\{F\times G:F\in\tau,G\in\nu\}$ is said to be the product multi-fuzzy topology of the multi-fuzzy topologies $\tau$ and $\nu$. It is denoted by $\tau\times\nu$. The multi-fuzzy topological space $[X\times Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X\times Y},\tau\times\nu]$ is said to be the multi-fuzzy topological product of the multi-fuzzy topological spaces $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$. \end{Definition} \begin{Proposition}\label{mftvs3.6} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be the product space of two multi-fuzzy topological spaces $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$. Then the projection mappings $\pi_{j}:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(X_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{j}},\tau_{j}),j=1,2$ are multi-fuzzy continuous and multi-fuzzy open. Also $\tau_{1}\times\tau_{2}$ is the smallest multi-fuzzy topology in $X_{1}\times X_{2}$ for which the projection mappings are multi-fuzzy continuous. \\ If, further, $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ is any multi-fuzzy topological space, then a mapping $f:(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)\rightarrow(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is multi-fuzzy continuous iff the mappings $\pi_{j}\circ f:(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)\rightarrow(X_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{j}},\tau_{j}),j=1,2$ are multi-fuzzy continuous.
\end{Proposition} \begin{Proposition} \label{mftvs3.7}Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be a multi-fuzzy topological space. Then the identity mapping $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ defined by $f(x)=x,$ $\forall x\in X,$ is multi-fuzzy continuous.\end{Proposition} \begin{proof} The proof is straightforward. \end{proof} \begin{Proposition}\label{mftvs3.8} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be two multi-fuzzy topological spaces. Then the constant mapping $f:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ defined by $f(x)=y_{0},$ $\forall x\in X,$ where $y_{0}$ is a fixed element of $Y,$ is multi-fuzzy continuous.\end{Proposition} \begin{proof} Let $F\in\nu.$ Then for each $i=1,2,...,n,$ $\mu_{\left(f^{-1}(F)\right)_{i}}(x)=\mu_{f^{-1}(F_{i})}(x)=\mu_{F_{i}}(f(x))=\mu_{F_{i}}(y_{0})=c_{i}$ (say), $\forall x\in X.$ So $\mu_{\left(f^{-1}(F)\right)_{i}}=\overline{c_{i}}$. Since $\nu\subseteq\mathcal{M}_{Y},$ either $c_{i}>0$ for all $i$ or $c_{i}=0$ for all $i.$ If $c_{i}>0$ for all $i,$ let $C_{X}^{n}$ be the non-null constant multi-fuzzy set with $\mu_{({C_{X}^{n}})_{i}}=\overline{c_{i}},$ for $i=1,2,...,n;$ then $f^{-1}(F)=C_{X}^{n}\in\tau.$ If $c_{i}=0$ for all $i,$ then $f^{-1}(F)=\Phi_{X}^{n}\in \tau$. Therefore $f$ is multi-fuzzy continuous. \end{proof} \begin{Proposition}\label{mftvs3.9} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be the product space of two multi-fuzzy topological spaces $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$, and let $a\in X_{1}$ (or $X_{2}$). Then the mapping $f:(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$\\ $\rightarrow(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ (or $f:(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})\rightarrow(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$) defined by $f(x_{2})=(a,x_{2}),$ $\forall x_{2}\in X_{2}$ (or $f(x_{1})=(x_{1},a),$ $\forall x_{1}\in X_{1}$), is multi-fuzzy continuous. \end{Proposition} \begin{proof} Let $\pi_{j}:(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)\rightarrow(X_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{j}},\tau_{j}),j=1,2$ be the projection mappings. Now $\pi_{1}\circ f:(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})\rightarrow(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ is such that $\pi_{1}(f(x_{2}))=a,$ $\forall x_{2}\in X_{2},$ and $\pi_{2}\circ f:(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})\rightarrow(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$ is such that $\pi_{2}(f(x_{2}))=x_{2},$ $\forall x_{2}\in X_{2}.$ So, by Proposition \ref{mftvs3.7} and Proposition \ref{mftvs3.8}, the mappings $\pi_{2}\circ f$ and $\pi_{1}\circ f$ are multi-fuzzy continuous. Therefore, by Proposition \ref{mftvs3.6}, $f$ is multi-fuzzy continuous.
\end{proof} \begin{Proposition} \label{mftvs3.10}Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be the product space of two multi-fuzzy topological spaces $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be the product space of two multi-fuzzy topological spaces $(Y_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{1}},\nu_{1})$ and $(Y_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{2}},\nu_{2})$. If the mappings $f_{j}$ of $(X_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{j}},\tau_{j})$ into $(Y_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{j}},\nu_{j}),$ $j=1,2$ are multi-fuzzy open, then the product mapping $f=f_{1}\times f_{2}$ from $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ into $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ defined by $f(x_{1},x_{2})=(f_{1}(x_{1}),f_{2}(x_{2}))$ is multi-fuzzy open.\end{Proposition} \begin{proof} Let $U\in\tau.$ Then there exist multi-fuzzy open sets $U_{jm}\in\tau_{j},j=1,2,m\in\triangle$ such that $U=\underset{m\in\triangle}{\sqcup}[U_{1m}\times U_{2m}].$ \\ Now $f(U)=\underset{m\in\triangle}{\sqcup}[f_{1}(U_{1m})\times f_{2}(U_{2m})]$. \\ Since $f_{j},j=1,2$ are multi-fuzzy open, $f(U)$ is multi-fuzzy open in $\nu$ and hence the product mapping $f=f_{1}\times f_{2}$ of $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ into $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu),$ defined by $f(x_{1},x_{2})=(f_{1}(x_{1}),f_{2}(x_{2}))$ is multi-fuzzy open. \end{proof} \begin{Proposition}\label{mftvs3.11} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be the product space of two multi-fuzzy topological spaces $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$ and $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ be the product space of two multi-fuzzy topological spaces $(Y_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{1}},\nu_{1})$ and $(Y_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{2}},\nu_{2})$. If the mappings $f_{j}$ of $(X_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{j}},\tau_{j})$ into $(Y_{j},\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y_{j}},\nu_{j}),$ $j=1,2$ are multi-fuzzy continuous, then the product mapping $f=f_{1}\times f_{2}$ from $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ into $(Y,\overset{n}{\underset{i=1}{\prod}}I_{i}^{Y},\nu)$ defined by $f(x_{1},x_{2})=(f_{1}(x_{1}),f_{2}(x_{2}))$ is multi-fuzzy continuous.\end{Proposition} \begin{proof} Since $\left(\pi_{Y_{1}}\circ f\right)(x_{1},x_{2})=\pi_{Y_{1}}(f_{1}(x_{1}),f_{2}(x_{2}))=f_{1}(x_{1})=f_{1}[\pi_{X_{1}}(x_{1},x_{2})]=\left(f_{1}\circ\pi_{X_{1}}\right)(x_{1},x_{2}),$ $\forall(x_{1},x_{2})\in X_{1}\times X_{2},$ $\pi_{Y_{1}}\circ f=f_{1}\circ\pi_{X_{1}}.$ Also, $f_{1}$ and $\pi_{X_{1}}$ are multi-fuzzy continuous and hence from Proposition \ref{mftvs2.4}, $\pi_{Y_{1}}\circ f$ is multi-fuzzy continuous.\\ Similarly, $\pi_{Y_{2}}\circ f$ is multi-fuzzy continuous. Therefore from Proposition \ref{mftvs3.6}, $f$ is multi-fuzzy continuous. 
\end{proof} \begin{Definition}\label{mftvs3.12} A multi-fuzzy topological space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is said to be second countable if there exists a countable open base $\mathcal{B}$ for $\tau.$\end{Definition} \begin{Proposition}\label{mftvs3.13} Let $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$ be two second countable multi-fuzzy topological spaces. Then their product space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is also second countable. \end{Proposition} \begin{proof} Let $\mathcal{B}_{1}=\{B_{j}:j\in J\}$ and $\mathcal{B}_{2}=\{B_{k}^{\prime}:k\in K\}$ be countable open bases for $\tau_{1}$ and $\tau_{2}$ respectively, and set $\mathcal{B}=\{B_{j}\times B_{k}^{\prime}:j\in J,k\in K\}.$ Since $\{F\times G:F\in\tau_{1},G\in\tau_{2}\}$ is an open base for $\tau,$ it suffices to express each $F\times G$ as a union of members of $\mathcal{B}.$ Any $F\in\tau_{1},G\in\tau_{2}$ can be written as $F=\underset{j\in J_{F}}{\sqcup}B_{j}$ and $G=\underset{k\in K_{G}}{\sqcup}B_{k}^{\prime}$, for some $J_{F}\subseteq J$ and $K_{G}\subseteq K$. Then, for each $i=1,2,...,n$ and each $(x,y)\in X_{1}\times X_{2},$\\ $\mu_{(F\times G)_{i}}(x,y)=\big(\underset{j\in J_{F}}{\vee}\mu_{(B_{j})_{i}}(x)\big)\wedge\big(\underset{k\in K_{G}}{\vee}\mu_{(B_{k}^{\prime})_{i}}(y)\big)=\underset{(j,k)\in J_{F}\times K_{G}}{\vee}\big(\mu_{(B_{j})_{i}}(x)\wedge\mu_{(B_{k}^{\prime})_{i}}(y)\big),$\\ so that $F\times G=\underset{(j,k)\in J_{F}\times K_{G}}{\sqcup}(B_{j}\times B_{k}^{\prime}).$ Since $J$ and $K$ are countable, $\mathcal{B}$ is a countable open base for $\tau.$\end{proof} \begin{Definition} \label{mftvs3.14} Let $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ be a multi-fuzzy topological space. A family $\mathcal{A}$ of multi-fuzzy sets is a cover of a multi-fuzzy set $F$ if $F\sqsubseteq\sqcup\{A:A\in\mathcal{A}\}.$ It is an open cover if each member of $\mathcal{A}$ is a multi-fuzzy open set. A subcover of $\mathcal{A}$ is a subfamily which is also a cover.\end{Definition} \begin{Definition}\label{mftvs3.15} A multi-fuzzy topological space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is said to be compact if each open cover of the space has a finite subcover.\end{Definition} \begin{Definition}\label{mftvs3.16} Let $X$ be a non-empty set and $Q$ be a subset of $X.$ A family $\mathcal{A}$ of multi-fuzzy sets is a cover of $Q$ if $\underset{A\in\mathcal{A}}{\sup}\,\mu_{A_{i}}(x)=1,\ \forall x\in Q,\ \forall i=1,2,...,n.$\end{Definition} \begin{Proposition} \label{mftvs3.17}Let $(X_{1},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{1}},\tau_{1})$ and $(X_{2},\overset{n}{\underset{i=1}{\prod}}I_{i}^{X_{2}},\tau_{2})$ be two compact multi-fuzzy topological spaces. Then their product space $(X,\overset{n}{\underset{i=1}{\prod}}I_{i}^{X},\tau)$ is also compact. \end{Proposition}
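\begin{Note} The following small worked example (ours, for illustration) shows the product construction of Definition \ref{mftvs3.2} in dimension $n=2$: let $X=\{x\},$ $Y=\{y\},$ and let $F,G$ be given by $\mu_{F}(x)=\prec 0.6,0.3\succ$ and $\mu_{G}(y)=\prec 0.4,0.8\succ.$ Then $\mu_{F\times G}(x,y)=\prec 0.6\wedge 0.4,\,0.3\wedge 0.8\succ=\prec 0.4,0.3\succ,$ which is again a multi-fuzzy set, now over $X\times Y.$ \end{Note}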
\section*{Introduction} The network of neurons in cerebral cortex displays rich and complex dynamics even when not engaged by any particular sensory or motor interaction with the external world \cite{fox2007, arieli1996}. From one point of view, such ongoing internal dynamics are thought to mediate memory consolidation and other internal cognitive processes \cite{Luczak2009,Han2008,berkes2011, romano2015, miller2014}. On the other hand, ongoing fluctuations in cortical network dynamics have often been considered a nuisance, imposing noisy fluctuations on the neural response to sensory input \cite{ecker2014, lee1998, averbeck2006}. In both of these contexts, it is important to understand the mechanisms which govern the fluctuations of ongoing cortical network dynamics. Here we investigate the Shannon entropy of macroscopic network dynamics. In the context of internal cognitive processes, high entropy might be beneficial, corresponding to a larger repertoire of internal states to mediate internal information transfer \cite{fagerholm2016}. When considered as noise, high entropy can be a hindrance to effective sensory coding. Indeed, in principle, encoding of sensory input would be most reliable if the cortex were totally silent (low entropy) until the stimulus excited it. However, real cortex does not operate this way; it has many jobs to do beyond encoding sensory input and is never silent. Previous studies have shown that ongoing cortical dynamics with high entropy occur together with high mutual information between stimulus and response \cite{shew2011, fagerholm2016}, suggesting that a large repertoire of ongoing dynamical states may be necessary for a large repertoire of stimulus-evoked states \cite{Luczak2009, berkes2011}. A crucial factor for determining the entropy of network dynamics in the cortex is the competition between two types of neurons: excitatory (E) and inhibitory (I). The importance of balanced excitation and inhibition is most apparent in previous experiments that directly manipulated the E/I balance pharmacologically. These studies have shown that ongoing network dynamics can vary dramatically when GABA synapses are either enhanced or suppressed \cite{mao2001,shew2011,fagerholm2016, gautam}. Enhanced inhibition (GABA agonists) often results in a dynamical regime characterized by low firing rates and weak population-level correlations, while decreased inhibition (GABA antagonists) tends to result in a regime with higher firing rates and strong correlations. Two studies in particular have shown that entropy can be increased by tuning the E/I balance to the tipping point between these two distinct dynamical regimes \cite{shew2011,fagerholm2016}. However, a more systematic understanding of how E/I balance impacts entropy is difficult to obtain experimentally, because pharmacological manipulations are rather difficult to control precisely. Moreover, with a few interesting exceptions \cite{chen2010,hunt2013}, experiments do not vary the numbers of excitatory or inhibitory neurons. Computational models offer an alternative approach in which the number of excitatory and inhibitory neurons, as well as the strengths of excitatory and inhibitory synapses, can easily be controlled. A few previous computational studies have addressed similar topics, but typically have neglected inhibition \cite{shew2011, ferraz} or have not considered the effects of changing the E/I ratio \cite{scarpetta2013,zhou}.
Thus, theoretical and experimental understanding of the relationship between the entropy of ongoing dynamics and the balance of excitation and inhibition---mediated by both the relative strengths of excitatory and inhibitory synapses and the relative numbers of excitatory and inhibitory cells---remains unresolved. Here we attempt to improve the theoretical understanding of the entropy of ongoing dynamics by studying a network model of binary neurons in detail. We consider how network entropy depends on the fraction of inhibitory neurons $\alpha$ and the strengths of E and I interactions, $W_E$ and $W_I$. We find maximal entropy near the tipping point between the low and high firing rate dynamical regimes, as seen in experiments. For a given choice of $W_E$ and $W_I$, we find that the tipping point can be achieved by adjusting the value of $\alpha$; this raises the question of why any particular configuration of parameters should be favored over another. We find that there is a trade-off between high and robust network entropy: networks with weak synapses can achieve a high entropy when excitation and inhibition are balanced, but the entropy degrades significantly upon small deviations from the balanced state. On the other hand, networks with stronger synapses have a lower optimal entropy, but they are more robust to parameter changes. We also find that if E and I synaptic strengths are proportional to each other, as found in many experiments \cite{deneve, wehr, haider}, then robust, high entropy requires a small fraction of I neurons ($\alpha$ of order 0.1). In mammalian cortex, $\alpha$ has been found to be near 0.2 with remarkable consistency over the lifetime of an organism \cite{sahara} and over different regions of cortex \cite{hendry, meinecke}. Our results suggest that mammalian cortex strikes a compromise with intermediate, but robust entropy. In what follows, we introduce and analyze the binary neuron model; the analysis both predicts and provides insight into the results of numerical simulations of the model. \section*{Model and theory}\label{binarymodel} \subsection{Binary neuron model}\label{binary} We explore the effects of the balance of excitation and inhibition on entropy using a simple, analytically tractable model. The model, studied previously in Ref.~\cite{larremore2014}, consists of a network of $N$ stochastic binary neurons, indexed $i = 1,2,\dots, N$. The state of neuron $i$ at time $t$ is denoted by $x_i^t$, which can take the values $x_i^t = 0$ if the neuron is resting and $x_i^t = 1$ if the neuron is spiking. Time is assumed to evolve in discrete steps $t = 0,1,2,\dots$. The evolution of each neuron's state is stochastic and depends on the states of other neurons at the previous time step. It is given by \begin{align}\label{xs} x_i^{t+1} = \left\{ \begin{array}{ll} 1 & \text{with probability } \eta + (1-\eta)\sigma\left(\sum_{j=1}^N \epsilon_j w_{ij} x_j^t \right),\\ 0 & \text{otherwise,} \end{array}\right. \end{align} where $\epsilon_j = 1$ ($\epsilon_j = -1$) if neuron $j$ is excitatory (inhibitory), $w_{ij} > 0$ is the strength of the synapse from neuron $j$ to neuron $i$ (which is taken to be zero if neuron $j$ does not connect to neuron $i$), and $\sigma(x) = \min(1, \max(0,x))$ is a transfer function that converts the input to neuron $i$ into a probability. The constant $\eta = 1/(100 N)$ represents independent spontaneous activation due to external sources, resulting in one spike per 100 time steps among all neurons, on average.
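To make the update rule (\ref{xs}) concrete, the following minimal Python sketch (our illustration, not the authors' code; the function and variable names are ours, and a dense weight matrix is assumed purely for simplicity) performs one synchronous update of the network state:

\begin{verbatim}
import numpy as np

def step(x, W, eps, eta, rng):
    # One synchronous update of the binary network, following Eq. (1).
    # x   : length-N vector of 0/1 neuron states at time t
    # W   : N x N matrix of synaptic strengths w_ij (0 where no synapse)
    # eps : length-N vector, +1 for excitatory, -1 for inhibitory neurons
    # eta : spontaneous activation probability
    drive = W @ (eps * x)                # input sum_j eps_j w_ij x_j
    sigma = np.clip(drive, 0.0, 1.0)     # transfer function min(1, max(0, .))
    p_spike = eta + (1.0 - eta) * sigma  # spiking probability of Eq. (1)
    return (rng.random(x.size) < p_spike).astype(np.int8)
\end{verbatim}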
We consider Erd\H{o}s-R\'enyi networks where a directed link is made independently from neuron $j$ to neuron $i$ with probability $k/(N-1)$ for all $i \neq j$. The parameter $k$ is the expected number of outgoing connections from a given neuron. To control the relative number of excitatory and inhibitory neurons, we assign each neuron to be inhibitory with probability $\alpha$ and excitatory otherwise. Finally, we assume for simplicity that $w_{ij} = w_E$ for excitatory synapses (i.e., if $\epsilon_j = 1$) and $w_{ij} = w_I$ for inhibitory synapses (i.e., if $\epsilon_j = -1$), and define the effective excitatory weight as $W_E = k w_E$ and the effective inhibitory weight as $W_I = k w_I$. The model is characterized by the parameters $N$, $k$, $W_E$, $W_I$, and $\alpha$. For definiteness, in the rest of the paper we will consider, unless otherwise indicated, only the parameters $N = 10000$ and $k = 100$, and study the macroscopic dynamics of the model as a function of $(W_E, W_I, \alpha)$. As a measure of collective network dynamics we study the fraction of spiking neurons, or {\it network activity}, given by \begin{align} S^t = \frac{1}{N} \sum_{i=1}^N x^t_i. \label{eq:activity} \end{align} \begin{figure}[t] \includegraphics{Fig1.eps} \renewcommand{\figurename}{Fig} \caption{{\bf Network activity and dynamics of binary model.} Time series of network activity (a) show diverse fluctuations when excitation and inhibition are balanced ($\lambda=1$). Similarly, probability distributions (b) of network activity are broadest when $\lambda=1$. All probability distributions have been normalized by their peak probability to facilitate comparison of their shapes. Dynamical parameters: $\alpha = 0.11$ (blue), $0.1$ (red), $0.09$ (yellow); $W_E = W_I = 1.25$.} \label{fig1} \end{figure} In Ref.~\cite{larremore2014} it was found that the collective dynamics of the network is determined by the largest eigenvalue $\lambda$ of the connection strength matrix $A$ with entries $\{\epsilon_j w_{ij}\}_{i,j = 1}^N$. Network activity saturates at a high value for $\lambda > 1$ and dies out or reaches a steady low value for $\lambda < 1$. At the tipping point between these two regimes, defined by $\lambda = 1$, excitation and inhibition are balanced such that network activity is characterized by large fluctuations that are effectively ceaseless (their lifetime scales exponentially with $N$) \cite{larremore2014}. Figure~\ref{fig1}a shows an example of the time series of network activity for these three regimes. For the Erd\H{o}s-R\'enyi networks considered here, $\lambda$ can be approximated by the expected row sum of $A$, \begin{align}\label{estimate} \lambda \approx k w_E (1 - \alpha) - k w_I \alpha = W_E (1-\alpha) -W_I \alpha. \end{align} With this approximation, then, the parameters that give $\lambda = 1$ form a 2-dimensional surface in the $(W_E,W_I,\alpha)$ parameter space. \subsection{Entropy} We consider the Shannon entropy of the time series of network activity, which quantifies the size of the repertoire of accessible macroscopic network states. The network activity is discrete (i.e., it takes the values $0, 1/N, 2/N, \dots, 1$). For a given set of network parameters $(W_E,W_I,\alpha)$, we consider the steady-state probability distribution of network activity $P(S)$ and the associated entropy, \begin{align}\label{entropy} H = -\sum_S P(S) \log_2(P(S)), \end{align} where the sum runs over the allowed values $S = 0, 1/N,2/N,\dots, 1$.
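Building on the sketch above, the following illustrative routine (again ours rather than the authors' code; the reduced network size $N=2000$ is an assumption made only to keep the dense-matrix simulation cheap) constructs one Erd\H{o}s-R\'enyi network with the stated conventions, simulates the activity $S^t$ of Eq.~(\ref{eq:activity}), and estimates the entropy of Eq.~(\ref{entropy}), reusing the function \texttt{step} defined earlier:

\begin{verbatim}
def simulate_entropy(N=2000, k=100, W_E=1.25, W_I=1.25, alpha=0.1,
                     T=10000, seed=0):
    rng = np.random.default_rng(seed)
    eps = np.where(rng.random(N) < alpha, -1, 1)  # inhibitory w.p. alpha
    adj = rng.random((N, N)) < k / (N - 1)        # directed Erdos-Renyi links
    np.fill_diagonal(adj, False)
    w = np.where(eps == 1, W_E / k, W_I / k)      # w_E = W_E/k, w_I = W_I/k
    W = adj * w[np.newaxis, :]                    # column j carries weight w_j
    eta = 1.0 / (100 * N)

    x = (rng.random(N) < 0.01).astype(np.int8)    # weak initial activity
    S = np.empty(T)
    for t in range(T):
        x = step(x, W, eps, eta, rng)
        S[t] = x.mean()                           # network activity S^t

    # empirical P(S) on the allowed grid 0, 1/N, ..., 1, and its entropy H
    counts = np.bincount(np.rint(S * N).astype(int), minlength=N + 1)
    P = counts / counts.sum()
    P = P[P > 0]
    H = -(P * np.log2(P)).sum()
    lam = W_E * (1 - alpha) - W_I * alpha         # eigenvalue estimate
    return H, lam
\end{verbatim}

Scanning $\alpha$ at fixed $(W_E,W_I)$ with such a routine and recording the maximizer mirrors the numerical procedure described in the Results section below; even in this reduced setting one expects $H$ to peak where the returned eigenvalue estimate is close to $1$.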
In practice, we estimate $P(S)$ numerically from a time series of $S^t$ obtained from model simulations (Fig.~\ref{fig1}b) or from our semi-analytical theory, presented below, that treats the evolution of $S^t$ as a biased random walk. \subsection{Simulation-free theory} Here we present a semi-analytical approach to compute the entropy for a given set of parameter values in the binary model. While numerical simulation of Eq.~(\ref{xs}) for different values of $(W_E, W_I, \alpha)$ allows us to search for regimes yielding high and robust entropy, the complementary semi-analytical approach presented below is useful because it does not suffer from fluctuations caused by specific network realizations and, especially since its computational complexity does not depend on $N$, it can be faster than direct simulation of Eq.~(\ref{xs}). Most importantly, our semi-analytical approach provides insights into our main findings. The main idea of our approach is to treat the evolution of the macroscopic variable $S^t$ as a biased random walk. Although in principle the dynamics of the system depends on the microscopic states $\{x_n\}_{n=1}^N$, for large homogeneous networks one can describe the evolution of the system in terms of the macroscopic variable $S^t$. To analyze this random variable, one should determine whether at any given time it is expected to decrease or increase. This information is encapsulated in the {\it branching function} introduced in Ref.~\cite{larremore2014} as the ratio $\Lambda(S) = E[S^{t+1}| S^t = S]/S$, where the expected value is taken over realizations of the stochastic dynamics and microscopic configurations with activity $S$. In our case, the branching function can be approximated by \cite{larremore2014}, \begin{align}\label{Lambdaeq} \Lambda(S) = \frac{1}{S}E_P[\sigma( w_E n_E - w_I n_I)]\ , \end{align} where the random variables $n_E$ and $n_I$ represent the number of active E and I inputs to a single neuron, respectively. Because we consider random networks, $n_E$ and $n_I$ are given by Poisson random variables with means $k S (1-\alpha)$ and $k S \alpha$, respectively. The expected value $E_P[\cdot]$ is an expected value over the random variables $n_E$ and $n_I$. By assuming that the statistics of the macroscopic dynamics depend only on $S$, one can then write a random walk model for $S^t$ as \begin{align} S^{t+1} = S^t\Lambda(S^t) + r(S^t)\ , \end{align} where $r$ represents statistical noise which, by the definition of $\Lambda$, has mean zero. To obtain a tractable model we assume that $r(S^t)$ is normally distributed and has variance $V(S^t) = S^t(1-S^t)/N$, as estimated in Ref.~\cite{larremore2014}. This approximation is what one would obtain if each of the $N$ neurons is independently assumed to be active with probability $S$ and inactive with probability $1-S$. In this approximation, the probability that the system makes a transition from a state with activity $S'$ to a state with activity $S$ is given by \begin{align} T(S|S') = \frac{1}{\sqrt{2 \pi V(S')}} \exp\left(-\frac{[S'\Lambda(S') - S]^2}{2 V(S')}\right)\ . \end{align} The distribution $P^t(S)$ of $S$ at time $t$ evolves following the master equation \begin{align}\label{mapint} P^{t+1}(S) = \int_0^1 T(S|S') P^t(S') d S'\ , \end{align} and as $t \to \infty$ it converges to a steady state, which may be calculated numerically as the Perron-Frobenius eigenvector (with eigenvalue 1) of the linear operator \begin{align}\label{integral} \mathcal{L}\{P\}(S) = \int_0^1 T(S|S') P(S') d S'\ .
\end{align} The eigenvector can be calculated numerically by discretization of the integral in Eq.~(\ref{integral}) or as the limit of repeated iterations of Eq.~(\ref{mapint}). The entropy is then calculated directly from Eq.~(\ref{entropy}). \section*{Results} Our primary goal is to determine how the entropy of a network varies with the relative numbers of E and I neurons and the relative strength of E and I synapses. We first describe our results from numerical simulations of the binary model and then describe results from the theory. First, we show in Fig.~\ref{fig1} that the network activity visits the widest variety of states when excitation and inhibition are balanced at the tipping point between high and low firing rate regimes. This is visible in the time series (Fig.~\ref{fig1}a) as well as in the empirical distributions $P(S)$ of network activity (based on $10^4$ time steps of simulation). Correspondingly, entropy $H$ is greatest along the boundary between the low and high firing regimes (Fig.~\ref{fig2}). In the three-dimensional $(W_E,W_I, \alpha)$ parameter space this boundary forms a curved surface, which we henceforth refer to as the {\it maximum entropy surface}. As discussed in Sec.~\ref{binary}, we expect that the transition from the low to the high firing regime occurs at the {\it critical surface} of parameters where $\lambda = 1$. While we find this is usually an excellent approximation to our numerical results, the maximum entropy and critical surfaces differ slightly for high values of $\alpha$, and therefore we will only use the critical surface as a qualitative guide to the location of the maximum entropy surface. To numerically identify the maximum entropy surface, for each fixed value of $(W_E, W_I)$ we compute entropy across a wide range of values of $\alpha$, finding the value $\alpha^*$ that maximizes $H(W_E,W_I,\alpha)$. In Fig.~\ref{fig3}a we show $\alpha^*$ as a function of $W_E$ and $W_I$. As one might expect, higher values of $W_E$ require a larger number of I neurons (higher $\alpha^*$) in order to maintain a balanced network, and vice versa. This agrees qualitatively with the estimate using the critical surface, $\alpha^* \approx (W_E - 1)/(W_E + W_I)$, obtained from \eqref{estimate} with $\lambda = 1$. \begin{figure*}[t] \includegraphics{Fig2.eps} \renewcommand{\figurename}{Fig} \caption{{\bf High entropy at boundary between high and low firing regimes.} Each panel shows how entropy (color) varies across a two-dimensional section of the three-dimensional $W_E$-$W_I$-$\alpha$ parameter space. The relative orientations of the six different sections are illustrated and labeled (i-vi) in the cartoon (left). For i and ii, $\alpha$ is fixed at $0.1$ and $0.2$. For iii and iv, $W_I$ is fixed at $1.5$ and $2.5$. For v and vi, $W_E$ is fixed at $1.5$ and $2.5$. A curved critical surface in $W_E$-$W_I$-$\alpha$ space separates the high firing regime (H) from a low firing regime (L). Entropy is high along this regime boundary. Note that as I or E synapse strength increases the width of the peak in entropy also increases, indicating increased robustness (decreased fragility). } \label{fig2} \end{figure*} Having identified the parameters that characterize the maximum entropy surface, we next ask two questions. First, where on the surface is entropy highest? Second, where on the surface is entropy most robust? We consider the entropy to be robust if it does not drop dramatically when we make a small perturbation in $W_E$, $W_I$, and $\alpha$ away from the maximum entropy surface.
This approach is similar to other ways to quantify sensitivity to model parameters, such as Fisher information \cite{Lehmann1998}. To quantify how much the entropy decreases if parameters are perturbed away from the maximum entropy surface, we define {\it fragility} $F(W_E,W_I)$ as follows. For a given pair of $(W_E,W_I)$ values, we first calculate the entropy at the corresponding point on the maximum entropy surface, $H^* = H(W_E,W_I,\alpha^*)$. Then, we calculate the entropy at two points at a small distance $\delta$ above and below the surface, $H_{\text{up}}=H(W_E+\Delta W_E,W_I+\Delta W_I,\alpha^*+\Delta\alpha)$ and $H_{\text{down}}=H(W_E-\Delta W_E,W_I-\Delta W_I,\alpha^*-\Delta\alpha)$. The perturbations $\pm(\Delta W_E,\Delta W_I,\Delta\alpha)$ are defined to be normal to the maximum entropy surface, which gives the largest drop in entropy for a given perturbation size. The size of the perturbation was chosen to be small (Euclidean norm $\delta = 0.01$, about 1\% variation in parameters) because naturally occurring changes in $W_E$, $W_I$, and $\alpha$ are not likely to be large. Finally, we define the fragility $F(W_E,W_I)$ as the mean of the two entropy differences: \begin{align} F(W_E,W_I) = \frac{(H^*-H_{\text{up}})+(H^*-H_{\text{down}})}{2} . \end{align} \begin{figure*}[t] \includegraphics{Fig3.eps} \renewcommand{\figurename}{Fig} \caption{{\bf Trade-off between high entropy and robust entropy.} a) For each combination of $W_E$ and $W_I$ effective synaptic weights, we identify the critical fraction of inhibitory neurons ($\alpha^*$) with the highest entropy. b) Comparing the critical entropy $H^*$ across the entire critical surface, entropy was highest for low $W_E$ and $W_I$. c) Highest fragility was also found for low $W_E$ and $W_I$.} \label{fig3} \end{figure*} Our main results are in Figs.~\ref{fig3}b and \ref{fig3}c. Figure~\ref{fig3}b shows the entropy $H^*$ on the maximum entropy surface as a function of the effective E and I weights $W_E$ and $W_I$. Networks with weak effective synapse strengths (low values of $W_E$ and $W_I$) can achieve a higher entropy $H^*$ than networks with strong effective synapse strengths. However, as shown in Fig.~\ref{fig3}c, high entropy comes at the cost of high fragility: networks with weak effective synapse strengths have the highest fragility, while networks with strong effective synapse strengths are the most robust. We note that while the variation in entropy $H^*$ is relatively moderate across the range studied (approximately $10\%$), the fragility ranges from $3$ to $6$, indicating that our $1\%$ perturbation of parameters results in a dramatic drop in entropy of approximately $30\%$ to $60\%$. One could argue that what matters are the final values of entropy after perturbation (i.e., $H_{\text{up}}$ and $H_{\text{down}}$) rather than how much the entropy drops due to the perturbation (i.e., $F$). From this perspective, strong synapses are also better: $H_{\text{up}}$ and $H_{\text{down}}$ are lower for weak synapses than for strong synapses. This can be seen by subtracting Fig.~\ref{fig3}c from Fig.~\ref{fig3}b. We conclude that there is a trade-off between high and robust entropy, with stronger effective synapse strengths promoting lower but more robust entropy, and weaker effective synapse strengths promoting a high but fragile entropy. Finally, we address the role of the fraction $\alpha$ of I neurons in promoting entropy robustness.
We note that if the choices of E and I synapse strengths are constrained to be proportional to each other, as experiments suggest \cite{deneve, wehr, haider}, then $W = W_E = b W_I$ and the estimate $\alpha^* \approx (W_E - 1)/(W_E + W_I)$ becomes $\alpha^* = (1+1/b)^{-1} (1 - 1/W)$. Thus, $\alpha^*$ is a monotonically increasing function of synapse strength. Therefore, for such constrained networks, entropy and fragility decrease with the fraction of I neurons $\alpha$. Thus, a small non-zero $\alpha$, similar to mammalian cortex, is needed to obtain high and robust entropy. \begin{figure*}[t] \includegraphics{Fig4.eps} \renewcommand{\figurename}{Fig} \caption{{\bf Interpretation of results based on the branching function formalism.} Branching functions $\Lambda(S)$ for a) low effective excitatory and inhibitory weights ($W_E = W_I= 1.25$) with $S_0 \sim 0.015$ and $S_1 \sim 0.883$, and b) high effective excitatory and inhibitory weights ($W_E = W_I= 3.25$) with $S_0 \sim 0.105$ and $S_1 \sim 0.724$. The corresponding probability distributions for c) low effective weights and d) high effective weights. All probability distributions have been normalized by their peak probability to facilitate comparison of their shapes.} \label{fig4} \end{figure*} In the following, we present an interpretation of our results based on the branching function formalism presented above and studied in previous work \cite{larremore2014}. If one treats the time series of network activity $S^t$ as a random walk, its bias, or expected velocity, is given by $S^t (\Lambda(S^t)-1)$. Therefore, when $\Lambda(S^t) > 1$ ($\Lambda(S^t) < 1$), $S^t$ tends to increase (decrease). Since $\Lambda(0) \geq 1$ and $\Lambda(1) \leq 1$ \cite{larremore2014}, the long-time distribution of $S^t$ will be concentrated around the region where $\Lambda(S^t) \approx 1$. The wider this region is, the wider the distribution of $S$ will be, and the larger its associated entropy. To understand how the size of this region depends on the weights $W_E$ and $W_I$, we note that, at the tipping point between the low and high firing regimes, the branching function deviates from $1$ in an interval $[0,S_0)$ on which it is appreciably larger than $1$, and in an interval $(S_1, 1]$ on which it is less than $1$. The branching function deviates from $1$ in these intervals because the distribution of the random variable $w_E n_E - w_I n_I$ in Eq.~(\ref{Lambdaeq}) extends below $0$ or above $1$ when $S$ is too close to $0$ or $1$, respectively. In these cases, the nonlinearity in the transfer function $\sigma$ causes the expected value in Eq.~(\ref{Lambdaeq}) to be different from $1$. The larger the values of $w_E$ and $w_I$, the wider the distribution of $w_E n_E - w_I n_I$, and therefore the larger these intervals are. More precisely, we can estimate the scaling of $S_0$ and $S_1$ as follows. The variance of the variable $w_E n_E - w_I n_I$ is $V(S) =w_E^2(1-\alpha) k S + w_I^2 \alpha k S$. Estimating $S_0$ and $S_1$ as the values where $S_0^2 \sim V(S_0)$ and $(1-S_1)^2 \sim V(S_1)$, we obtain $S_0 \sim w_E^2(1-\alpha) k + w_I^2 \alpha k$ and $S_1 \sim 1-\sqrt{\left(\frac{1}{2} S_0+1\right)^2-1} + \frac{1}{2} S_0$. Using the approximation that in the balanced state $\alpha = (k w_E -1)/(k w_E + k w_I)$, this gives closed expressions for the estimates of $S_0$ and $S_1$ as functions of $w_E$ and $w_I$. For low values of $w_E$ and $w_I$, $S_0 \ll 1$ and $1-S_1 \ll 1$, and therefore the branching function will be close to $1$ over a large region in $[0,1]$.
This is illustrated in the left column of Fig.~\ref{fig4} (panels a and c), which shows the branching function $\Lambda(S)$ and the associated probability distribution $P(S)$ for the balanced state (red lines), high-firing (yellow lines) and low-firing (blue lines) cases. While the wide region over which the branching function is approximately one results in a relatively large entropy, a perturbation away from the balanced state displaces the branching function so that it is below or above $1$ over this large region, and is close to $1$ over a much smaller region. Thus, the entropy decreases significantly. On the other hand, if the weights $w_E$ and $w_I$ are larger, both $S_0$ and $1-S_1$ will be of order $1$. This is illustrated in the right column of Fig.~\ref{fig4} (panels b and d). While the region over which $\Lambda(S)$ is close to $1$ is smaller, resulting in a smaller entropy, it does not change significantly in the low-firing or high-firing cases, resulting in lower fragility. \section*{Discussion} Here we have shown that the Shannon entropy of neural network dynamics is sensitive to the structure of excitatory and inhibitory interactions. Generally, high entropy is obtained by balancing E and I synaptic efficacy such that the system operates near the tipping point between two phases of network dynamics. Entropy is high all along this boundary, i.e., for a wide range of properly balanced E/I combinations. However, the regions within this boundary with the highest entropy are not robust; small variations in the synaptic strengths $W_E$, $W_I$, and in the fraction of inhibitory neurons $\alpha$ could cause entropy to plummet, drastically reducing the number of accessible states and disrupting the functioning of the network. We found that entropy is more robust when the effective synaptic strengths are larger. Given that $W_E$, $W_I$, and $\alpha$ are inevitably somewhat variable during development, across brain regions, and across individuals \cite{sahara, meinecke, hendry}, robustness to $W_E$, $W_I$, and $\alpha$ variability may be important. For networks constrained such that $W_E \sim W_I$ \cite{deneve, wehr, haider}, our findings imply that a small, nonzero fraction $\alpha >0$ of inhibitory neurons would result in a more robust network entropy. Our results suggest that a population of organisms with reliable and high entropy brains requires that a small, nonzero fraction of neurons be inhibitory, which is consistent with what exists in mammalian cortex \cite{sahara, meinecke, hendry}. Although high entropy is likely to be beneficial for certain functions of cerebral cortex, other functions might be better served by a low entropy condition. For example, as discussed in the introduction, lower entropy might improve sensory signal processing by increasing the signal-to-noise ratio. In this context, a small shift towards the lower firing side of the phase transition might be beneficial. Such temporary shifts can occur due to neuromodulation; for example, attention is known to shift cortical dynamics towards a regime with smaller collective fluctuations \cite{harris2011}. However, a shift towards the high firing regime or too large a shift towards the extremely inhibition-dominant regime would likely be bad for function. Indeed, extreme deviation from well-balanced excitation and inhibition is implicated in a variety of brain disorders. For instance, when inhibition is sufficiently weak relative to excitation, seizures occur, as in epilepsy \cite{dichter}. Too much inhibition is associated with Down's syndrome \cite{fernandez}.
Autism is also associated with imbalanced excitation and inhibition \cite{merzenich, nelson}, both in terms of abnormal numbers of inhibitory neurons and strengths of synapses \cite{gogolla}. Our work suggests that the dysfunction associated with these disorders may be, in part, due to abnormal entropy of cortical network dynamics. If high entropy is a beneficial property for brain circuits, then the robust maximization of entropy could be a phenotypic target of evolution in the nervous system. Our results suggest that hitting this target requires neural circuits that include some inhibitory neurons and operate near the tipping point of a phase transition. \section*{Acknowledgments} Calculations were performed on Trestles at the Arkansas High Performance Computing Center, which is funded through multiple National Science Foundation grants and the Arkansas Economic Development Commission.
{ "timestamp": "2018-04-17T02:08:12", "yymm": "1804", "arxiv_id": "1804.05266", "language": "en", "url": "https://arxiv.org/abs/1804.05266" }
\section*{Introduction} \subsection{The GKZ hypergeometric system} Let $$A=\left(\begin{array}{ccc} w_{11}&\cdots&w_{1N}\\ \vdots&&\vdots\\ w_{n1}&\cdots&w_{nN}\end{array} \right)$$ be an $(n\times N)$-matrix of rank $n$ with integer entries. Denote the column vectors of $A$ by ${\mathbf w}_1,\ldots, {\mathbf w}_N\in \mathbb Z^n$. It defines an action of the $n$-dimensional torus $\mathbb T_{\mathbb Z}^n=\mathrm{Spec}\, \mathbb Z[t_1^{\pm 1},\ldots, t_n^{\pm 1}]$ on the $N$-dimensional affine space $\mathbb A_{\mathbb Z}^N=\mathrm{Spec}\, \mathbb Z[x_1,\ldots, x_N]$: $$ \mathbb T_{\mathbb Z}^n\times \mathbb A_{\mathbb Z}^N \to \mathbb A_{\mathbb Z}^N, \quad \Big((t_1,\ldots, t_n),(x_1,\ldots, x_N)\Big)\mapsto (t_1^{w_{11}}\cdots t_n^{w_{n1}}x_1,\ldots, t_1^{w_{1N}}\cdots t_n^{w_{nN}}x_N). $$ Let $\gamma_1,\ldots, \gamma_n\in \mathbb C$. In \cite{GKZ1}, Gelfand, Kapranov and Zelevinsky define the \emph{$A$-hypergeometric system} to be the system of differential equations \begin{eqnarray}\label{GKZeqn} \begin{array}{l} \sum_{j=1}^N w_{ij} x_j\frac{\partial f}{\partial x_j}+\gamma_i f=0 \quad (i=1,\ldots, n),\\ \prod_{\lambda_j>0} \left(\frac{\partial}{\partial x_j}\right)^{\lambda_j}f= \prod_{\lambda_j<0} \left(\frac{\partial}{\partial x_j}\right)^{-\lambda_j}f, \end{array} \end{eqnarray} where for the second system of equations, $(\lambda_1,\ldots, \lambda_N)\in\mathbb Z^N$ ranges over the family of integral linear relations $$\sum_{j=1}^N \lambda_j{\mathbf w}_j=0$$ among ${\mathbf w}_1,\ldots, {\mathbf w}_N$. We also refer to the $A$-hypergeometric system as the \emph{GKZ hypergeometric system}. An integral representation of a solution of the GKZ hypergeometric system is given by \begin{eqnarray}\label{intrepgkz} f(x_1,\ldots, x_N)=\int_{\Sigma} t_1^{\gamma_1}\cdots t_n^{\gamma_n} e^{\sum_{j=1}^N x_jt_1^{w_{1j}}\cdots t_n^{w_{nj}}}\frac{dt_1}{t_1}\cdots \frac{dt_n}{t_n} \end{eqnarray} where $\Sigma$ is a real $n$-dimensional cycle in $\mathbb T^n$. Cf. \cite[equation (2.6)]{A1}, \cite[section 3]{Fu1} and \cite[Corollary 2 in \S 4.2]{GGR1}. \subsection{The GKZ hypergeometric function over finite fields} Let $p$ be a prime number, $q$ a power of $p$, $\mathbb F_q$ the finite field with $q$ elements, $\psi: \mathbb F_q\to\overline{\mathbb Q}^\ast$ a nontrivial additive character, and $\chi_1,\ldots, \chi_n:\mathbb F_q^\ast\to \overline {\mathbb Q}^\ast$ multiplicative characters. In \cite{GG} and \cite{GGR}, Gelfand and Graev define the \emph{hypergeometric function over the finite field} to be the function defined by the family of twisted exponential sums \begin{eqnarray}\label{finitegkz} \mathrm{Hyp}(x_1,\ldots, x_N) =\sum_{t_1,\ldots, t_n\in \mathbb F_q^\ast}\chi_1(t_1)\cdots \chi_n(t_n)\psi\Big( \sum_{j=1}^N x_j t_1^{w_{1j}}\cdots t_n^{w_{nj}}\Big), \end{eqnarray} where $(x_1, \ldots, x_N)$ varies in $\mathbb A^N(\mathbb F_q)$. It is an arithmetic analogue of the expression (\ref{intrepgkz}). In \cite{Fu2}, we introduce the $\ell$-adic GKZ hypergeometric sheaf $\mathrm{Hyp}$, which is a perverse sheaf on $\mathbb A_{\mathbb F_q}^N$ such that for any rational point $x=(x_1,\ldots, x_N)\in \mathbb A^N(\mathbb F_q)$, we have $$\mathrm{Hyp}(x_1,\ldots, x_N)= (-1)^{n+N}\mathrm{Tr}(\mathrm {Frob}_x,\mathrm{Hyp}_{\bar x}),$$ where $\mathrm{Frob}_x$ is the geometric Frobenius at $x$. In this paper, we study the $p$-adic counterpart of the GKZ hypergeometric system.
It is a complex of ${\mathcal O}^\dagger$-modules with integrable connections and with Frobenius structures, defined on the dagger space (\cite{GK}) corresponding to the closed unit polydisc, such that the traces of Frobenius on the fibers at Teichm\"uller points are given by $\mathrm{Hyp}(x_1,\ldots, x_N)$. \subsection{The $p$-adic GKZ hypergeometric complex} For any $\mathbf v=(v_1, \ldots, v_N)\in\mathbb Z_{\geq 0}^N$ and $\mathbf w=(w_1, \ldots, w_n)\in \mathbb Z^n$, write $$\mathbf x^{\mathbf v}=x_1^{v_1}\cdots x_N^{v_N},\quad \mathbf t^{\mathbf w}=t_1^{w_1}\cdots t_n^{w_n}, \quad \vert\mathbf v\vert =v_1+\cdots+v_N.$$ Let $K$ be a finite extension of $\mathbb Q_p$ containing an element $\pi$ satisfying $$\pi^{p-1}+p=0.$$ Denote by $\vert\cdot\vert$ the $p$-adic norm on $K$ defined by $\vert a\vert=p^{-\mathrm{ord}_p(a)}$. For each real number $r>0$, consider the algebras \begin{eqnarray*} K\{r^{-1}\mathbf x\} &=&\{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0}} a_{\mathbf v} \mathbf x^{\mathbf v}:\; a_{\mathbf v}\in K,\; \vert a_{\mathbf v}\vert r^{\vert \mathbf v\vert} \hbox { are bounded}\},\\ K\langle r^{-1}\mathbf x\rangle &=&\{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0}} a_{\mathbf v} \mathbf x^{\mathbf v}:\; a_{\mathbf v}\in K,\; \lim_{\vert \mathbf v\vert\to \infty}\vert a_{\mathbf v}\vert r^{\vert \mathbf v\vert}=0\}. \end{eqnarray*} They are Banach $K$-algebras with respect to the norm $$\Vert\sum_{\mathbf v\in\mathbb Z^N_{\geq 0}} a_{\mathbf v} \mathbf x^{\mathbf v}\Vert_r=\sup \vert a_{\mathbf v} \vert r^{\vert \mathbf v\vert}.$$ We have $K\langle r^{-1}\mathbf x\rangle\subset K\{r^{-1}\mathbf x\}.$ Elements in $K\langle r^{-1}\mathbf x\rangle$ are exactly those power series converging in the closed polydisc $\{(x_1, \ldots, x_N):\; x_i\in \overline{\mathbb Q}_p,\; \vert x_i\vert \leq r\}.$ Moreover, for any $r<r'$, we have $$K\{r'^{-1}\mathbf x\}\subset K\langle r^{-1}\mathbf x\rangle\subset K\{ r^{-1}\mathbf x\}.$$ Let $$ K\langle\mathbf x\rangle^\dagger=\bigcup_{r>1} K\{r^{-1}\mathbf x\}=\bigcup_{r>1}K\langle r^{-1}\mathbf x\rangle.$$ $K\langle\mathbf x\rangle^\dagger$ is the ring of \emph{over-convergent} power series, that is, series converging in closed polydiscs of radii $>1$. Let $\Delta$ be the convex hull of $\{0, \mathbf w_1, \ldots, \mathbf w_N\}$ in $\mathbb R^n$, and let $\delta$ be the convex polyhedral cone generated by $\{\mathbf w_1, \ldots, \mathbf w_N\}$. For any $\mathbf w\in \delta$, define $$d(\mathbf w)=\inf\{a>0:\; \mathbf w\in a\Delta\}.$$ We have $$d(a\mathbf w)=ad(\mathbf w), \quad d(\mathbf w+\mathbf w')\leq d(\mathbf w)+d(\mathbf w')$$ whenever $a\geq 0$ and $\mathbf w,\mathbf w'\in\delta$. There exists an integer $d>0$ such that $d(\mathbf w)\in\frac{1}{d}\mathbb Z$ for all $\mathbf w\in\mathbb Z^n\cap\delta$. For any real numbers $r>0$ and $s\geq 1$, define \begin{eqnarray*} L(r,s)&=&\{\sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w}(\mathbf x) \mathbf t^{\mathbf w}:\; a_{\mathbf w}(\mathbf x) \in K \{r^{-1}\mathbf x\}, \; \Vert a_{\mathbf w}(\mathbf x)\Vert_r s^{d(\mathbf w)} \hbox { are bounded}\}\\ &=&\{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0},\; \mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf v\mathbf w} \mathbf x^{\mathbf v}\mathbf t^{\mathbf w}:\; a_{\mathbf v\mathbf w}\in K, \; \vert a_{\mathbf v\mathbf w}\vert r^{\vert \mathbf v\vert} s^{d(\mathbf w)} \hbox { are bounded}\},\\ L^\dagger&=& \bigcup_{r>1,\;s>1} L(r,s). \end{eqnarray*} Note that $L(r,s)$ and $L^\dagger$ are rings.
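To illustrate these definitions with a simple example (ours, added for concreteness), take $n=2$, $N=3$ and $\mathbf w_1=(1,0)$, $\mathbf w_2=(0,1)$, $\mathbf w_3=(1,1)$. Then $\Delta$ is the unit square $[0,1]^2$, $\delta$ is the first quadrant, and for $\mathbf w=(u,v)\in\delta$ we have $$d(\mathbf w)=\inf\{a>0:\; (u,v)\in [0,a]^2\}=\max(u,v),$$ so $d(\mathbf w)\in\mathbb Z$ for every $\mathbf w\in\mathbb Z^2\cap\delta$ and one may take $d=1$ in this case.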
Let $$F(\mathbf x, \mathbf t)=\sum_{j=1}^N x_j t_1^{w_{1j}}\cdots t_n^{w_{nj}}$$ and consider the \emph{twisted de Rham complex} $C^\cdot(L^\dagger)$ defined as follows: We set $$C^k(L^\dagger)= \{\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}:\;f_{i_1\ldots i_k}\in L^\dagger\} \cong L^{\dagger {n\choose k}}$$ with differential $d: C^k(L^\dagger)\to C^{k+1}(L^\dagger)$ given by \begin{eqnarray*} d(\omega)&=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf x,\mathbf t))\Big)^{-1} \circ \mathrm d_{\mathbf t} \circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf x,\mathbf t))\Big)(\omega) \\&=&\mathrm d_{\mathbf t}\omega + \sum_{i=1}^n \Big(\gamma_i+\pi\sum_{j=1}^N w_{ij}x_j\mathbf t^{\mathbf w_j}\Big) \frac{\mathrm dt_i}{t_i}\wedge \omega \end{eqnarray*} for any $\omega\in C^k(L^\dagger)$, where $\mathrm d_{\mathbf t}$ is the exterior derivative with respect to the $\mathbf t$ variable. For each $j\in\{1, \ldots, N\}$, define $\nabla_{\frac{\partial}{\partial x_j}}: C^\cdot(L^\dagger)\to C^\cdot(L^\dagger)$ by \begin{eqnarray*} \nabla_{\frac{\partial}{\partial x_j}}(\omega) &=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x, \mathbf t))\Big)^{-1} \circ {\frac{\partial }{\partial x_j}}\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x, \mathbf t))\Big)(\omega)\\ &=&\frac{\partial \omega}{\partial x_j}+\pi\mathbf t^{\mathbf w_j}\omega. \end{eqnarray*} Since $\frac{\partial}{\partial x_j}$ commutes with $\mathrm d_{\mathbf t}$, $\nabla_{\frac{\partial}{\partial x_j}}$ commutes with $d: C^k(L^\dagger)\to C^{k+1}(L^\dagger)$. We have integrable connections $$\nabla: C^\cdot (L^\dagger)\to C^\cdot(L^\dagger) \otimes_{K\langle \mathbf x\rangle^\dagger}\Omega^1_{K\langle\mathbf x\rangle^\dagger} $$ defined by $$\nabla(\omega)=\sum_{j=1}^N \nabla_{\frac{\partial}{\partial x_j}}(\omega)\otimes\mathrm dx_j,$$ where $\Omega^{1}_{K\langle\mathbf x\rangle^\dagger}$ is the free $K\langle\mathbf x\rangle^\dagger$-module with basis $\mathrm dx_1, \ldots, \mathrm dx_N$. Consider the lifting of the Frobenius correspondence in the variable $\mathbf t$ defined by $$\Phi(f({\mathbf x},{\mathbf t}))=f(\mathbf x, \mathbf t^q).$$ One verifies directly that $\Phi(L(r, s))\subset L(r, \sqrt[q]{s})$ and hence $\Phi(L^\dagger)\subset L^\dagger$. It induces maps $\Phi: C^k(L^\dagger)\to C^k(L^\dagger)$ on differential forms commuting with $\mathrm d_{\mathbf t}$: $$\Phi\Big(\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}(\mathbf x, \mathbf t)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}} {t_{i_k}}\Big) =\sum_{1\leq i_1<\cdots < i_k\leq n}q^kf_{i_1\ldots i_k}(\mathbf x, \mathbf t^q)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}.$$ Suppose furthermore that $\gamma_1,\ldots, \gamma_n\in\frac{1}{1-q}\mathbb Z$ and $(\gamma_1, \ldots, \gamma_n)\in\delta$. Consider the maps $F: C^k(L^\dagger)\to C^k(L^\dagger)$ defined by \begin{eqnarray} F&=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x, \mathbf t))\Big)^{-1} \circ\Phi\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x^{q}, \mathbf t))\Big)\\ \label{F} &=&\Big(t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp \big(\pi F(\mathbf x^q,\mathbf t^q)- \pi F(\mathbf x, \mathbf t)\big)\Big) \circ\Phi.
\end{eqnarray} Even though $t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x, \mathbf t))$ does not lie in $L^\dagger$ and multiplication by it does not define an endomorphism on $C^\cdot(L^\dagger)$, the next Lemma \ref{estimation} (i) shows that $t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp \Big(\pi F(\mathbf x^q,\mathbf t^q)- \pi F(\mathbf x, \mathbf t)\Big)$ lies in $L^\dagger$, and hence the expression (\ref{F}) shows that $F$ defines an endomorphism on each $C^k(L^\dagger)$. \begin{lemma}\label{estimation} ${}$ (i) $t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp \big(\pi F(\mathbf x^q,\mathbf t^q)- \pi F(\mathbf x, \mathbf t)\big)$ and $t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf x,\mathbf t)- \pi F(\mathbf x^q,\mathbf t^{q})\big)$ lie in $L(r,r^{-1} p^{\frac{p-1}{pq}})$ for any $0<r\leq p^{\frac{p-1}{pq}}$. (ii) Let $C^{(1)\cdot}(L^\dagger)$ be the twisted de Rham complex so that $C^{(1),k}(L^\dagger)=C^k(L^\dagger)$ for each $k$, and $d^{(1)}:C^{(1),k}\to C^{(1),k+1}$ is given by \begin{eqnarray*} d^{(1)}&=&\Big (t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf x^q,\mathbf t))\Big)^{-1} \circ \mathrm d_{\mathbf t} \circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf x^q,\mathbf t))\Big)\\ &=&\mathrm d_{\mathbf t}+ \sum_{i=1}^n \Big(\gamma_i+\pi\sum_{j=1}^N w_{ij}x_j^q\mathbf t^{\mathbf w_j}\Big) \frac{\mathrm dt_i}{t_i}. \end{eqnarray*} Let $\nabla^{(1)}$ be the connection on $C^{(1)\cdot}(L^\dagger)$ defined by \begin{eqnarray*} \nabla^{(1)}_{\frac{\partial}{\partial x_j}} &=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x^{q}, \mathbf t))\Big)^{-1} \circ {\frac{\partial }{\partial x_j}}\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf x^{q}, \mathbf t))\Big)\\ &=&\frac{\partial}{\partial x_j}+q\pi x_j^{q-1}\mathbf t^{\mathbf w_j}. \end{eqnarray*} Then $F$ defines a horizontal morphism of complexes of $K\langle \mathbf x\rangle^\dagger$-modules with connections $$F: (C^{(1)\cdot}(L^\dagger), \nabla^{(1)})\to (C^\cdot(L^\dagger), \nabla).$$ (iii) Let $E(0,1)^N$ be the closed unit polydisc with the dagger structure sheaf (\cite{GK}) associated to the algebra $K\langle\mathbf x\rangle^\dagger$, and let $\mathrm{Fr}$ be the lifting $$\mathrm{Fr}:E(0,1)^N\to E(0,1)^N,\quad (x_1, \ldots, x_N)\to (x_1^q,\ldots, x_N^q)$$ of the geometric Frobenius correspondence. We have an isomorphism $$\mathrm{Fr}^*(C^\cdot(L^\dagger), \nabla)\cong (C^{(1)\cdot}(L^\dagger), \nabla^{(1)}).$$ \end{lemma} \begin{proof} (i) Write $\exp(\pi z-\pi z^q)=1+ \sum_{i=1}^\infty c_i z^i$. We have $\vert c_i \vert\leq p^{-\frac{p-1}{pq}i}$ by \cite[Theorem 4.1]{M1}. Write \begin{eqnarray*} \exp(\pi z^q-\pi z)&=&1-(\sum_{i=1}^\infty c_i z^i)+(\sum_{i=1}^\infty c_i z^i)^2-\cdots\\ &=& 1+ \sum_{i=1}^\infty c'_i z^i. \end{eqnarray*} Then we also have the estimate $\vert c'_i \vert\leq p^{-\frac{p-1}{pq}i}$. For the monomial $x_j\mathbf t^{\mathbf w_j}$, we have \begin{eqnarray*} &&\exp\big(\pi (x_j\mathbf t^{\mathbf w_j})^q-\pi x_j\mathbf t^{\mathbf w_j}\big)=\sum_{i=0}^\infty c'_i x_j^i\mathbf t^{i\mathbf w_j},\\ &&\Vert c'_i x_j^i\Vert_r \leq p^{-\frac{p-1}{pq}i} r^i = \Big(r^{-1} p^{\frac{p-1}{pq}}\Big)^{-i} \leq \Big(r^{-1} p^{\frac{p-1}{pq}}\Big)^{-d(i\mathbf w_j)}. \end{eqnarray*} Here for the last inequality, we use the fact that $d(i\mathbf w_j)\leq i$ and the assumption that $r\leq p^{\frac{p-1}{pq}}$. So we have $\exp\big(\pi (x_j\mathbf t^{\mathbf w_j})^q-\pi x_j\mathbf t^{\mathbf w_j}\big)\in L(r,r^{-1} p^{\frac{p-1}{pq}})$.
Since $r^{-1} p^{\frac{p-1}{pq}} \geq 1,$ the space $L(r,r^{-1} p^{\frac{p-1}{pq}})$ is a ring. So $t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp\big (\pi F(\mathbf x^q,\mathbf t^q)- \pi F(\mathbf x, \mathbf t)\big)$ lies in $L(r,r^{-1} p^{\frac{p-1}{pq}})$. Similarly $t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf x,\mathbf t)- \pi F(\mathbf x^q,\mathbf t^{q})\big)$ lies in $L(r,r^{-1} p^{\frac{p-1}{pq}})$. (ii) Using the fact that $\Phi \circ \mathrm d_{\mathbf t}=\mathrm d_{\mathbf t} \circ \Phi$ and $\Phi \circ \frac{\partial}{\partial x_j}=\frac{\partial}{\partial x_j}\circ \Phi$, one checks that $F \circ d^{(1)}=d\circ F$ and $F\circ\nabla^{(1)}=\nabla\circ F.$ (iii) Consider the $K$-algebra homomorphism $$K\langle y_1, \ldots, y_N\rangle^\dagger \to K\langle x_1, \ldots, x_N\rangle^\dagger,\quad y_j\mapsto x_j^{q}.$$ This makes $K\langle\mathbf x\rangle^\dagger$ a finite $K\langle\mathbf y\rangle^\dagger$-algebra. We have a canonical isomorphism \begin{eqnarray*} \tilde L^\dagger \otimes_{K\langle\mathbf y\rangle^\dagger}K\langle\mathbf x\rangle^\dagger \stackrel \cong\to L^{\dagger}, \end{eqnarray*} where $\tilde L^{\dagger}$ is defined in the same way as $L^\dagger$ except that we change the variables from $x_j$ to $y_j$. The connection $\nabla$ on $\tilde L^\dagger$ defines a connection on $\tilde L^\dagger\otimes_{K\langle\mathbf y\rangle^\dagger}K\langle\mathbf x\rangle^\dagger$ via the Leibniz rule. Via the above isomorphism, it defines the connection $\mathrm{Fr}^*\nabla$ on $L^{\dagger}$. Let's verify that it coincides with the connection $\nabla^{(1)}$ on $L^\dagger$. Any element in $L^{\dagger}$ can be written as a finite sum of elements of the form $f(\mathbf x)g(\mathbf y, \mathbf t)$ with $f(\mathbf x)\in K [\mathbf x]$ and $g(\mathbf y,\mathbf t)\in \tilde L^\dagger$. By the Leibniz rule, we have \begin{eqnarray*} (\mathrm{Fr}^*\nabla)_{\frac{\partial}{\partial x_j}}(f(\mathbf x)g(\mathbf y,\mathbf t)) &=&\frac{\partial}{\partial x_j}(f(\mathbf x)) g(\mathbf y,\mathbf t)+ f(\mathbf x) \Big(\nabla(g(\mathbf y,\mathbf t)),\frac{\partial}{\partial x_j}\Big)\\ &=& \frac{\partial}{\partial x_j}(f(\mathbf x)) g(\mathbf y,\mathbf t)+ f(\mathbf x) \Big(\sum_m \nabla_{\frac{\partial}{\partial y_m}}(g(\mathbf y,\mathbf t)) \mathrm d y_m, \frac{\partial}{\partial x_j}\Big)\\ &=& \frac{\partial}{\partial x_j}(f(\mathbf x)) g(\mathbf y,\mathbf t)+ f(\mathbf x) \nabla_{\frac{\partial}{\partial y_j}}(g(\mathbf y,\mathbf t)) q x_j^{q-1} \\ &=& \frac{\partial}{\partial x_j}(f(\mathbf x)) g(\mathbf y,\mathbf t)+ f(\mathbf x)\Big({\frac{\partial}{\partial y_j}}(g(\mathbf y,\mathbf t))+ \pi \mathbf t^{\mathbf w_j}g(\mathbf y,\mathbf t) \Big)q x_j^{q-1}\\ &=& \frac{\partial}{\partial x_j}(f(\mathbf x)) g(\mathbf y,\mathbf t)+ f(\mathbf x)\frac{\partial}{\partial x_j}(g(\mathbf y,\mathbf t))+ q\pi x_j^{q-1} \mathbf t^{\mathbf w_j} f(\mathbf x)g(\mathbf y,\mathbf t)\\ &=& \frac{\partial}{\partial x_j}\Big(f(\mathbf x)g(\mathbf y,\mathbf t)\Big)+ q\pi x_j^{q-1} \mathbf t^{\mathbf w_j} f(\mathbf x)g(\mathbf y,\mathbf t) \\ &=& \nabla^{(1)}_{\frac{\partial}{\partial x_j}}(f(\mathbf x)g(\mathbf y,\mathbf t)). \end{eqnarray*} This proves our assertion. Similarly, one verifies that the connection $\mathrm{Fr}^*\nabla$ on $\mathrm{Fr}^*C^\cdot(\tilde L^{\dagger})$ can be identified with the connection $\nabla^{(1)}$ on $C^\cdot(L^{\dagger})$. \end{proof} \begin{definition} Suppose $\gamma_1,\ldots, \gamma_n\in\frac{1}{1-q}\mathbb Z$ and $(\gamma_1, \ldots, \gamma_n)\in\delta$.
The \emph{$p$-adic GKZ hypergeometric complex} is defined to be the tuple $(C^\cdot(L^\dagger), \nabla, F)$ consisting of the complex $C^\cdot(L^\dagger)$ of $K\langle\mathbf x\rangle^\dagger$-modules with the connection $\nabla$ and the horizontal morphism $F: \mathrm{Fr}^*(C^\cdot(L^\dagger), \nabla)\to (C^\cdot(L^\dagger), \nabla).$ \end{definition} \subsection{The GKZ hypergeometric ${\mathcal D}^\dagger$-module} Let $${\mathcal D}^\dagger=\bigcup_{r>1,\;s>1} \{\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}}:\; f_{\mathbf v}(\mathbf x)\in K\{ r^{-1}\mathbf x\},\; \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert} \hbox { are bounded}\},$$ where for any $\mathbf v=(v_1,\ldots, v_N)\in\mathbb Z_{\geq 0}^N$, we set $\partial^{\mathbf v}=\frac{\partial^{v_1+\cdots+v_N}} {\partial x_1^{v_1}\cdots\partial x_N^{v_N}}.$ $\mathcal D^\dagger$ is a ring of differential operators, possibly of infinite order. This $\mathcal D^\dagger$ is also used in \cite{K}. Let $\mathcal D^\dagger_{\mathbb P^N,\mathbb Q}(\infty)$ be the sheaf of differential operators of finite level and of infinite order on the formal projective space $\mathbb P^N$ over the integer ring of $K$ with over-convergent poles along the $\infty$ divisor. For the definition of this sheaf, see \cite{B}. By \cite{Hu}, we have $$\Gamma(\mathbb P^N, \mathcal D^\dagger_{\mathbb P^N, \mathbb Q}(\infty))=\bigcup_{r>1,\;s>1} \{\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!}:\; f_{\mathbf v}(\mathbf x)\in K\{ r^{-1}\mathbf x\},\; \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert} \hbox { are bounded}\},$$ where $\mathbf v!=v_1!\cdots v_N!$. In section 1, we prove the following proposition. \begin{proposition}\label{berthelot} We have $\mathcal D^\dagger=\Gamma(\mathbb P^N, \mathcal D^\dagger_{\mathbb P^N, \mathbb Q}(\infty)).$ \end{proposition} In particular, by the result in \cite{Hu}, ${\mathcal D}^\dagger$ is a coherent ring. Let $\frac{\partial}{\partial x_j}\in \mathcal D^\dagger$ act via $\nabla_{\frac{\partial}{\partial x_j}}$. Then $L^\dagger$ is a left ${\mathcal D}^\dagger$-module, and the twisted de Rham complex $C^\cdot(L^\dagger)$ is a complex of ${\mathcal D}^\dagger$-modules. The cohomology groups $H^k(C^\cdot(L^\dagger))$ are also left ${\mathcal D}^\dagger$-modules. Let \begin{eqnarray*} C(A)&=&\{k_1\mathbf w_1+\cdots +k_N \mathbf w_N:\; k_i\in\mathbb Z_{\geq 0}\},\\ L^{\dagger\prime}&=&\bigcup_{r>1,\;s>1} \{\sum_{\mathbf w\in C(A)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}:\; a_{\mathbf w}(\mathbf x)\in K\{r^{-1}\mathbf x\}, \; \Vert a_{\mathbf w}(\mathbf x)\Vert_r s^{d(\mathbf w)} \hbox{ are bounded}\}. \end{eqnarray*} $C(A)$ is a submonoid of $\mathbb Z^n\cap \delta$, and $L^{\dagger\prime}$ is both a subring and a $\mathcal D^\dagger$-submodule of $L^\dagger$. Let $$C^k(L^{\dagger\prime})= \{\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}:\;f_{i_1\ldots i_k} \in L^{\dagger\prime}\}\cong L^{\dagger\prime{n\choose k}}.$$ Note that $d: C^k(L^\dagger)\to C^{k+1}(L^\dagger)$ (resp. $\nabla_{\frac{\partial}{\partial x_j}}$) maps $C^k(L^{\dagger\prime})$ to $C^{k+1}(L^{\dagger\prime})$ (resp. $C^k(L^{\dagger\prime})$). So $C^\cdot (L^{\dagger\prime})$ is a subcomplex of ${\mathcal D}^\dagger$-modules of $C^\cdot(L^\dagger)$.
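To see that $L^{\dagger\prime}$ may be strictly smaller than $L^\dagger$, consider the following simple example (ours, added for concreteness): take $n=1$, $N=2$, $\mathbf w_1=2$ and $\mathbf w_2=3$. Then $\delta=[0,\infty)$, so $\mathbb Z\cap\delta=\mathbb Z_{\geq 0}$, while $C(A)=\{2k_1+3k_2:\; k_1, k_2\in\mathbb Z_{\geq 0}\}=\{0,2,3,4,5,\ldots\}$ does not contain $1$. Series in $L^{\dagger\prime}$ therefore have no term in $\mathbf t^{1}$, and $L^{\dagger\prime}$ is a proper subring of $L^\dagger$ in this case.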
Let $$F_{i,\gamma}=t_i\frac{\partial}{\partial t_i}+\gamma_i+\pi\sum_{j=1}^N w_{ij}x_j \mathbf t^{\mathbf w_j}.$$ It follows from the definition of the twisted de Rham complex that the homomorphism $$L^{\dagger\prime}\to C^n(L^{\dagger\prime}),\quad f\mapsto f\frac{\mathrm dt_1}{t_1}\wedge \cdots\wedge \frac{\mathrm dt_n}{t_n}$$ induces an isomorphism $$L^{\dagger\prime} / \sum_{i=1}^n F_{i,\gamma} L^{\dagger\prime}\cong H^n(C^\cdot(L^{\dagger\prime})). $$ Let's give an explicit presentation of the $\mathcal D^\dagger$-module $H^n(C^\cdot(L^{\dagger\prime}))$. Let \begin{eqnarray*} \Lambda&=&\{\lambda=(\lambda_1, \ldots, \lambda_N)\in\mathbb Z^N:\;\sum_{j=1}^N \lambda_j{\mathbf w}_j=0\},\\ \Box_{\lambda}&=&\prod_{\lambda_j>0} \left(\frac{1}{\pi}\frac{\partial}{\partial x_j}\right)^{\lambda_j}- \prod_{\lambda_j<0} \left(\frac{1}{\pi}\frac{\partial}{\partial x_j}\right)^{-\lambda_j}\quad (\lambda\in \Lambda),\\ E_{i,\gamma}&=&\sum_{j=1}^N w_{ij} x_j\frac{\partial}{\partial x_j}+\gamma_i\quad (i=1,\ldots, n). \end{eqnarray*} Consider the map $$\varphi: {\mathcal D}^\dagger\to L^{\dagger\prime}, \quad \sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}} \mapsto(\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}} )\cdot 1=\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf t^{v_1\mathbf w_1+\cdots +v_N\mathbf w_N}.$$ It is a homomorphism of ${\mathcal D}^\dagger$-modules. In \S 1, we prove the following theorems. \begin{theorem} \label{pde} $\varphi$ induces isomorphisms \begin{eqnarray*} {\mathcal D}^\dagger/\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda&\stackrel \cong\to& L^{\dagger\prime},\\ {\mathcal D}^\dagger/(\sum_{i=1}^n {\mathcal D}^\dagger E_{i,\gamma}+\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda) &\stackrel \cong\to& L^{\dagger\prime} / \sum_{i=1}^n F_{i,\gamma} L^{\dagger\prime}\cong H^n(C^\cdot(L^{\dagger\prime})). \end{eqnarray*} Moreover, there exist finitely many $\mu^{(1)}, \ldots, \mu^{(m)} \in\Lambda$ such that $$\sum_{i=1}^m{\mathcal D}^\dagger \Box_{\mu^{(i)}}=\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda.$$ \end{theorem} \begin{theorem}\label{coherent} $C^\cdot(L^{\dagger})$ and $C^\cdot(L^{\dagger\prime})$ are complexes of coherent $\mathcal D^\dagger$-modules. \end{theorem} \begin{definition} The \emph{GKZ hypergeometric ${\mathcal D}^\dagger$-module} is defined to be the left ${\mathcal D}^\dagger$-module $${\mathcal D}^\dagger/(\sum_{i=1}^n {\mathcal D}^\dagger E_{i,\gamma}+\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda) \cong H^n(C^\cdot(L^{\dagger\prime})).$$ \end{definition} The GKZ hypergeometric $\mathcal D^\dagger$-module is the $p$-adic analogue of the (complex) hypergeometric $D$-module (\cite{A1}) associated to the GKZ hypergeometric system of differential equations (\ref{GKZeqn}). \subsection{Fibers of the GKZ hypergeometric complex} Let $\mathbf a=(a_1, \ldots, a_N)$ be a point in the closed unit polydisc $E(0,1)^N$, where $a_i\in K'$ for some finite extension $K'$ of $K$.
Let's specialize at $\mathbf x=\mathbf a$, that is, apply the functor $\hbox{-} \otimes_{K\langle \mathbf x\rangle^\dagger} K',$ where $K'$ is regarded as a $K\langle \mathbf x\rangle^\dagger$-algebra via the homomorphism $$K\langle \mathbf x\rangle^\dagger \to K',\quad x_i\mapsto a_i.$$ Let \begin{eqnarray*} L^\dagger_0&=&\bigcup_{s>1} \{\sum_{\mathbf w\in\mathbb Z^n\cap\delta } a_{\mathbf w} t^{\mathbf w}:\; a_{\mathbf w} \in K', \; \vert a_{\mathbf w}\vert s^{d(\mathbf w)} \hbox { are bounded} \}. \end{eqnarray*} In section 1, we prove the following. \begin{lemma}\label{flat} $L^\dagger$ is flat over $K\langle \mathbf x\rangle^\dagger$ and $$L^\dagger\otimes_{K\langle \mathbf x\rangle^\dagger} K'\cong L_0^\dagger.$$ \end{lemma} Consider the twisted de Rham complex $C^\cdot(L_0^\dagger)$ defined as follows: We set $$C^k(L^\dagger_0)= \{\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}:\;f_{i_1\ldots i_k}\in L^\dagger_0\} \cong L_0^{\dagger {n\choose k}}$$ with differential $d: C^k(L_0^\dagger)\to C^{k+1}(L_0^\dagger)$ given by \begin{eqnarray*} d(\omega)&=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)^{-1} \circ \mathrm d_{\mathbf t} \circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)(\omega) \\&=&\mathrm d_{\mathbf t}\omega + \sum_{i=1}^n \Big(\gamma_i+\pi\sum_{j=1}^N w_{ij}a_j\mathbf t^{\mathbf w_j}\Big) \frac{\mathrm dt_i}{t_i}\wedge \omega \end{eqnarray*} for any $\omega\in C^k(L_0^\dagger)$. By Lemma \ref{flat}, we have the following corollary. \begin{corollary} \label{specialize} In the derived category of complexes of $K\langle \mathbf x\rangle^\dagger$-modules, we have $$C^\cdot(L^\dagger)\otimes^L_{K\langle \mathbf x\rangle^\dagger} K'\cong C^\cdot (L^\dagger_0).$$ \end{corollary} The specialization of $\Phi$ at $\mathbf a$ is the lifting of the Frobenius correspondence defined by $$\Phi_{\mathbf a}: L^\dagger_0\to L^\dagger_0, \quad f(\mathbf t)\mapsto f(\mathbf t^q).$$ It induces the maps $\Phi_{\mathbf a}: C^k(L^\dagger_0)\to C^k(L^\dagger_0)$ on differential forms commuting with $\mathrm d_{\mathbf t}$: $$\Phi_{\mathbf a}\Big(\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}(\mathbf t)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}} {t_{i_k}}\Big) =\sum_{1\leq i_1<\cdots < i_k\leq n}q^kf_{i_1\ldots i_k}(\mathbf t^q)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}.$$ The specialization of $F: C^\cdot(L^\dagger)\to C^\cdot(L^\dagger)$ at $\mathbf a$ is given by \begin{eqnarray*} F_{\mathbf a}&=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf a, \mathbf t))\Big)^{-1} \circ\Phi_{\mathbf a}\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n}\exp (\pi F(\mathbf a^{q}, \mathbf t))\Big)\\ &=&\Big(t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp\big (\pi F(\mathbf a^q,\mathbf t^q)- \pi F(\mathbf a, \mathbf t)\big)\Big) \circ\Phi_{\mathbf a}. \end{eqnarray*} By Lemma \ref{estimation} (i), $t_1^{\gamma_1(q-1)}\cdots t_n^{\gamma_n(q-1)}\exp \big(\pi F(\mathbf a^q,\mathbf t^q)- \pi F(\mathbf a, \mathbf t)\big)$ lies in $L_0^\dagger$, and hence $F_{\mathbf a}$ defines an endomorphism on each $C^k(L_0^\dagger)$. From now on, we assume that $\mathbf a$ is a Teichm\"uller point, that is, $a_j^q=a_j$ $(j=1, \ldots, N)$. Then $\mathbf a$ is a fixed point of $\mathrm{Fr}$.
In this case $F_{\mathbf a}:C^\cdot(L_0^\dagger)\to C^\cdot(L_0^\dagger)$ commutes with $d: C^k(L^\dagger_0) \to C^{k+1}(L^\dagger_0)$ and hence is a chain map. Consider the operator $\Psi_{\mathbf a}:L^\dagger_0\to L^\dagger_0$ defined by $$\Psi_{\mathbf a} (\sum_{\mathbf w}c_{\mathbf w}\mathbf t^{\mathbf w})=\sum_{\mathbf w}c_{q\mathbf w}\mathbf t^{\mathbf w}.$$ We extend it to differential forms by $$\Psi_{\mathbf a}\Big(\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}(\mathbf t)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}} {t_{i_k}}\Big) =\sum_{1\leq i_1<\cdots < i_k\leq n}q^{-k} \Psi_{\mathbf a}(f_{i_1\ldots i_k}(\mathbf t))\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}.$$ It commutes with $\mathrm d_{\mathbf t}$. Let $G_{\mathbf a}: C^\cdot(L^\dagger_0) \to C^\cdot(L^\dagger_0)$ be the map defined by \begin{eqnarray*} G_{\mathbf a}&=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)^{-1} \circ \Psi_{\mathbf a} \circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)\\ &= &\Psi_{\mathbf a} \circ \Big(t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)} \exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a, \mathbf t^{q})\big)\Big). \end{eqnarray*} Here by Lemma \ref{estimation} (i), $t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)} \exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a, \mathbf t^{q})\big)$ lies in $L_0^\dagger$ and hence $G_{\mathbf a}$ defines an operator on $C^\cdot(L^\dagger_0)$. Then $G_{\mathbf a}$ commutes with $d: C^k(L_0^\dagger)\to C^{k+1}(L_0^\dagger)$. We thus get a chain map $G_{\mathbf a}: C^\cdot(L_0^\dagger)\to C^\cdot(L_0^\dagger)$. \begin{lemma}\label{FG} We have $G_{\mathbf a}\circ F_{\mathbf a}=\mathrm{id}$ and $F_{\mathbf a}\circ G_{\mathbf a}$ is homotopic to $\mathrm{id}$. In particular, $F_{\mathbf a}$ and $G_{\mathbf a}$ induce isomorphisms on $H^\cdot (C^\cdot(L^\dagger_0))$. \end{lemma} In section 3, we show that each $G_{\mathbf a}:C^k(L^\dagger_0)\to C^k(L^\dagger_0)$ is a nuclear operator and hence the homomorphism on each $H^k(C^\cdot(L^\dagger_0))$ induced by $G_{\mathbf a}$ is also nuclear. We can talk about their traces and characteristic power series. But $F_{\mathbf a}$ does not have this property. Let \begin{eqnarray*} \mathrm{Tr}\big(G_{\mathbf a}, C^\cdot(L^\dagger_0)\big)&=&\sum_{k=0}^n (-1)^k \mathrm{Tr}\big( G_{\mathbf a}, C^k(L^\dagger_0)\big)\\ &=&\sum_{k=0}^n (-1)^k \mathrm{Tr}\big( G_{\mathbf a} ,H^k(C^\cdot(L^\dagger_0))\big)\\ &=&\sum_{k=0}^n (-1)^k \mathrm{Tr}\big( F^{-1}_{\mathbf a} ,H^k(C^\cdot(L^\dagger_0))\big),\\ \mathrm{det}\big(I-T G_{\mathbf a}, C^\cdot(L^\dagger_0)\big)&=&\prod_{k=0}^n \mathrm{det}\big(I-TG_{\mathbf a}, C^k(L^\dagger_0)\big)^{(-1)^k}\\ &=& \prod_{k=0}^n \mathrm{det}\big(I-T G_{\mathbf a}, H^k(C^\cdot(L^\dagger_0))\big)^{(-1)^k}\\ &=& \prod_{k=0}^n \mathrm{det}\big(I-TF^{-1}_{\mathbf a}, H^k(C^\cdot(L^\dagger_0))\big)^{(-1)^k}. \end{eqnarray*} Let $\chi:\mathbb F_q^*\to \overline{\mathbb Q}_p^*$ be the Teichm\"uller character, which maps each $u$ in $\mathbb F_q^*$ to its Teichm\"uller lifting. By \cite[Theorems 4.1 and 4.3]{M1}, the formal power series $\theta(z)=\exp(\pi z-\pi z^p)$ converges in a disc of radius $>1$, and its value $\theta(1)$ at $z=1$ is a primitive $p$-th root of unity in $K$. Let $\psi:\mathbb F_q\to K^*$ be the additive character defined by $$\psi(\bar a)=\theta(1)^{\mathrm{Tr}_{\mathbb F_q/\mathbb F_p}(\bar a)}$$ for any $\bar a\in \mathbb F_q$.
Let $\bar a_j\in\mathbb F_q$ be the residue class $a_j\mod p$, let \begin{eqnarray*} &&S_m(F(\bar{\mathbf a},\mathbf t))\\ &=&\sum_{\bar u_1,\ldots, \bar u_n\in \mathbb F_{q^m}^*}\chi_1 (\mathrm{Norm}_{\mathbb F_{q^m}/\mathbb F_q}(\bar u_1)) \cdots\chi_n(\mathrm{Norm}_{\mathbb F_{q^m}/\mathbb F_q}(\bar u_n)) \psi\Big (\mathrm{Tr}_{\mathbb F_{q^m}/\mathbb F_q}\Big(\sum_{j=1}^N \bar a_j \bar u_1^{w_{1j}}\cdots \bar u_n^{w_{nj}}\Big)\Big) \end{eqnarray*} be the twisted exponential sums for the multiplicative characters $\chi_i=\chi^{(1-q)\gamma_i}$, the nontrivial additive character $\psi:\mathbb F_q\to \mathbb C_p^*$, and the polynomial $F(\bar{\mathbf a},\mathbf t)$, and let $$L(F(\mathbf {\bar a}, \mathbf t),T)=\exp\Big(\sum_{m=1}^\infty S_m(F(\bar{\mathbf a},\mathbf t))\frac{T^m}{m}\Big)$$ be the $L$-function for the twisted exponential sums. The following theorem is well-known in Dwork's theory. Its proof is given in section 2 for completeness. \begin{theorem} \label{arithmetic} Suppose $\gamma_1, \ldots, \gamma_n\in \frac{1}{1-q}\mathbb Z$, $\gamma=(\gamma_1, \ldots, \gamma_n)\in \delta$, and suppose $K'$ contains all $(q-1)$-th roots of unity. Let $\mathbf a=(a_1,\ldots, a_N)$ be a Teichm\"uller point, that is, $a_j^q=a_j$. Then each $G_{\mathbf a}: C^k(L^\dagger_0)\to C^k(L^\dagger_0)$ is nuclear. Moreover, we have \begin{eqnarray*} S_m(F(\bar{\mathbf a},\mathbf t))&=&\mathrm{Tr}\big( (q^nG_{\mathbf a})^m, C^\cdot(L^\dagger_0)\big)\\ &=& \sum_{k=0}^n(-1)^k\mathrm{Tr}\big( (q^n F^{-1}_{\mathbf a})^m, H^k(C^\cdot(L^\dagger_0))\big),\\ L(F(\bar{\mathbf a},\mathbf t),T)&=&\mathrm{det}\big(I-q^n T G_{\mathbf a}, C^\cdot(L^\dagger_0)\big)^{-1}\\ &=&\prod_{k=0}^n \mathrm{det}\big(I-q^nT F_{\mathbf a}^{-1}, H^k(C^\cdot(L^\dagger_0))\big)^{(-1)^{k+1}}. \end{eqnarray*} \end{theorem} In \cite{A2}, Adolphson shows that $L(F(\mathbf {\bar a}, \mathbf t),T)$ depends analytically on the parameters $\mathbf a$ and $\gamma$. \subsection{The GKZ hypergeometric $F$-crystal} It follows from the definition of the twisted de Rham complex that the homomorphism $$L^{\dagger}\to C^n(L^{\dagger}),\quad f\mapsto f\frac{\mathrm dt_1}{t_1}\wedge \cdots\wedge \frac{\mathrm dt_n}{t_n}$$ induces an isomorphism $$L^{\dagger} / \sum_{i=1}^n F_{i,\gamma} L^{\dagger}\cong H^n(C^\cdot(L^{\dagger})). $$ $\nabla$ defines a connection on $H^n(C^\cdot(L^\dagger))$, and $F$ defines a horizontal morphism $$F: \mathrm{Fr}^* (H^n(C^\cdot(L^\dagger)),\nabla)\to (H^n(C^\cdot(L^\dagger)),\nabla).$$ Let $U$ be the affinoid subdomain of the closed unit polydisc $E(0,1)^N$ parametrizing those points $\mathbf a=(a_1, \ldots, a_N)$ so that $F(\bar {\mathbf a}, \mathbf t)=\sum_{j=1}^N \bar a_j \mathbf t^{\mathbf w_j}$ is \emph{non-degenerate} in the sense that for any face $\tau$ of $\Delta$ not containing the origin, the system of equations $$\frac{\partial}{\partial t_1}F_\tau(\bar{\mathbf a},\mathbf t)=\cdots =\frac{\partial}{\partial t_n}F_\tau(\bar{\mathbf a},\mathbf t)=0$$ has no solution in $(\overline{\mathbb F}_p^\ast)^n$, where $F_\tau(\bar{\mathbf a},\mathbf t)=\sum_{\mathbf w_j\in \tau}\bar a_j \mathbf t^{\mathbf w_j}$. When restricted to $U$, we have $$H^k(C^\cdot(L^\dagger))=0$$ for $k\not=n$, and $H^n(C^\cdot(L^\dagger))$ defines a vector bundle on $U$ of rank $n!\mathrm{vol}(\Delta)$. Denote this vector bundle by $\mathrm{Hyp}$. \begin{definition} We define the \emph{GKZ hypergeometric crystal} to be $({\mathrm{Hyp}}, \nabla, F)$.
\end{definition} Let $\mathbf a=(a_1, \ldots, a_N)$ be a point in $U$ with coordinates in $K'$, and let ${\mathrm{Hyp}}(\mathbf a)$ be the fiber of $\mathrm{Hyp}$ at $\mathbf a$. By Corollary \ref{specialize}, the fact that $C^k(L^\dagger)=0$ for $k>n$, and the fact that $-\otimes_{K\langle \mathbf x\rangle^\dagger}K'$ is right exact, we have $$H^n(C^\cdot(L^\dagger))\otimes_{K\langle \mathbf x\rangle^\dagger}K'\cong H^n(C^\cdot(L^\dagger_0)).$$ So we have $${\mathrm{Hyp}}(\mathbf a)\cong L_0^\dagger/ \sum_{i=1}^n F_{i,\gamma,\mathbf a} L_0^\dagger, $$ where $F_{i,\gamma,\mathbf a}=t_i\frac{\partial}{\partial t_i}+\gamma_i+\pi\sum_{j=1}^N w_{ij}a_j \mathbf t^{\mathbf w_j}.$ If $\mathbf a$ is a Teichm\"uller point, then we have \begin{eqnarray*} S_m(F(\bar{\mathbf a},\mathbf t))&=&(-1)^n\mathrm{Tr}\big((q^nF_{\mathbf a}^{-1})^m, {\mathrm{Hyp}}(\mathbf a)\big),\\ L(F(\mathbf {\bar a}, \mathbf t),T)&=&\mathrm{det}\big(I-q^n T F_{\mathbf a}^{-1}, \mathrm{Hyp}(\mathbf a)\big)^{(-1)^{n-1}}. \end{eqnarray*} Let $\mathbf a=(a_1, \ldots, a_N)$ and $\mathbf b=(b_1, \ldots, b_N)$ be points in $U$ with coordinates in $K'$, and let $$T_{\mathbf a,\mathbf b}:\mathrm{Hyp}({\mathbf a})\stackrel\cong\to \mathrm{Hyp}({\mathbf b})$$ be the parallel transport for $\mathrm{Hyp}$. It is well-defined if $\vert b_i-a_i\vert<1$ for all $i$. It can be described as follows. For any formal power series $f(\mathbf t)\in \overline{\mathbb Q}_p[[\mathbb Z^n\cap \delta]]$, we have \begin{eqnarray*} \nabla_{\frac{\partial}{\partial x_j}}\Big(\exp(-\pi F(\mathbf x,\mathbf t))f(\mathbf t)\Big)&=&\exp(-\pi F(\mathbf x,\mathbf t))\circ \frac{\partial}{\partial x_j} \circ \exp(\pi F(\mathbf x,\mathbf t)) \Big(\exp(-\pi F(\mathbf x,\mathbf t))f(\mathbf t)\Big)\\ &=&0. \end{eqnarray*} So $\exp(-\pi F(\mathbf x,\mathbf t))f(\mathbf t)$ is horizontal with respect to $\nabla$. But it is only a formal horizontal section since it may not lie in $L^\dagger$. Formally, $T_{\mathbf a, \mathbf b}$ maps $\exp(-\pi F(\mathbf a,\mathbf t))f(\mathbf t)$ to $\exp(-\pi F(\mathbf b,\mathbf t))f(\mathbf t)$. So $T_{\mathbf a,\mathbf b}:\mathrm{Hyp}({\mathbf a})\stackrel\cong\to \mathrm{Hyp}({\mathbf b})$ can be identified with the isomorphism $$T_{\mathbf a,\mathbf b}:L_0^\dagger/ \sum_{i=1}^n F_{i,\gamma,\mathbf a} L_0^\dagger\to L_0^\dagger/ \sum_{i=1}^n F_{i,\gamma,\mathbf b} L_0^\dagger, \quad g(\mathbf t)\mapsto \exp\big(\pi F(\mathbf a, \mathbf t)-\pi F(\mathbf b,\mathbf t)\big)g(\mathbf t).$$ This is well-defined if $\vert b_i-a_i\vert<1$ for all $i$ since we then have $\exp\big(\pi F(\mathbf a, \mathbf t)-\pi F(\mathbf b,\mathbf t)\big)\in L_0^\dagger$. Since $F: \mathrm{Fr}^*(\mathrm{Hyp}, \nabla) \to (\mathrm{Hyp}, \nabla)$ is a horizontal morphism, we have a commutative diagram $$\begin{array}{rcl} \mathrm{Hyp}({\mathbf a}^q)&\stackrel{T_{\mathbf a^q, \mathbf x^q}}\to &\mathrm{Hyp}({\mathbf x}^q)\\ {\scriptstyle F_{\mathbf a}}\downarrow \quad&&\quad\downarrow{\scriptstyle F_{\mathbf x}} \\ \mathrm{Hyp}(\mathbf a)&\stackrel{T_{\mathbf a, \mathbf x}}\to &\mathrm{Hyp}({\mathbf x}). \end{array}$$ Let $\{e_1(\mathbf x),\ldots, e_M(\mathbf x)\}$ be a local basis for $\mathrm{Hyp}$ over $U$. Write \begin{eqnarray*} (q^n F_{\mathbf x}^{-1})\Big(e_1(\mathbf x), \ldots, e_M(\mathbf x)\Big)&=&(e_1(\mathbf x^q),\ldots, e_M(\mathbf x^q))Q(\mathbf x), \\ T_{\mathbf a,\mathbf x}(e_1(\mathbf a), \ldots, e_M(\mathbf a))&=& (e_1(\mathbf x), \ldots, e_M(\mathbf x))P(\mathbf x) \end{eqnarray*} where $P(\mathbf x)$ and $Q(\mathbf x)$ are matrices of power series.
Then we have $$ Q(\mathbf x)=P(\mathbf x^q)Q(\mathbf a)P(\mathbf x)^{-1}$$ and hence \begin{eqnarray}\label{L} (-1)^nS_m(F(\bar{\mathbf x},\mathbf t))&=& \mathrm{Tr}\big((P(\mathbf x^q)Q(\mathbf a)P(\mathbf x)^{-1})^m\big),\\ L(F(\mathbf {\bar x}, \mathbf t),T)^{(-1)^{n+1}}&=&\mathrm{det}\big(I-TP(\mathbf x^q)Q(\mathbf a)P(\mathbf x)^{-1}\big) \end{eqnarray} whenever $x_j^{q-1}=1$ and $a_j^{q-1}=1$. Write $$\nabla_{\frac{\partial}{\partial x_j}}\Big(e_1(\mathbf x), \ldots, e_M(\mathbf x)\Big)=(e_1(\mathbf x), \ldots, e_M(\mathbf x))A_j(\mathbf x).$$ As $\nabla_{\frac{\partial}{\partial x_j}}(T_{\mathbf a,\mathbf x}(e_k(\mathbf a)))=0$ for all $k$, $P(\mathbf x)$ satisfies the system of differential equations \begin{eqnarray}\label{DE} \frac{\partial}{\partial x_j}(P(\mathbf x))+A_j(\mathbf x)P(\mathbf x)=0. \end{eqnarray} Equations (\ref{L})-(\ref{DE}) give formulas for calculating the exponential sums and the $L$-function using a solution of a system of differential equations. \section{${\mathcal D}^\dagger$-modules} \begin{lemma}\label{prepare} Let $m$ be a positive integer and let $$m=a_0+a_1p+a_2p^2+\cdots$$ be its $p$-adic expansion, where $0\leq a_i\leq p-1$ for all $i$. Define $$\sigma(m)=a_0+a_1+a_2+\cdots.$$ (i) We have $$\mathrm{ord}_p\Big(\frac{\pi^m}{m!}\Big)=\frac{\sigma(m)}{p-1}.$$ (ii) For any real number $\epsilon>0$, there exists $\delta>0$ such that $$\sigma(m)\leq \epsilon m+\delta$$ for all $m$. \end{lemma} \begin{proof} (i) We have \begin{eqnarray*} \mathrm{ord}_p(m!)&=& \Big[\frac{m}{p}\Big]+\Big[\frac{m}{p^2}\Big]+\cdots\\ &=& (a_1+a_2p+\cdots)+(a_2+a_3p+\cdots)+\cdots\\ &=& a_1+a_2(1+p)+a_3(1+p+p^2)+\cdots\\ &=& \frac{a_1(p-1)}{p-1}+\frac{a_2(p^2-1)}{p-1}+\frac{a_3(p^3-1)}{p-1}+\cdots\\ &=& \frac{m-\sigma(m)}{p-1}. \end{eqnarray*} So we have $$\mathrm{ord}_p\Big(\frac{\pi^m}{m!}\Big)=\frac{m}{p-1}- \frac{m-\sigma(m)}{p-1}=\frac{\sigma(m)}{p-1}.$$ (ii) Choose $M$ sufficiently large so that for any $x\geq M$, we have $$(p-1)(x+1)\leq \epsilon p^x.$$ Let $$m=a_0+a_1p+\cdots +a_lp^l$$ be the $p$-adic expansion of $m$, where $0\leq a_i\leq p-1$ and $a_l\not=0$. If $m\geq p^M$, then we have $l\geq M$ and hence $(p-1)(l+1)\leq \epsilon p^l.$ So we have $$\sigma(m)=a_0+a_1+\cdots +a_l\leq (p-1)(l+1)\leq \epsilon p^l\leq \epsilon m$$ for any $m\geq p^M$. Take $\delta=\max(\sigma(1), \ldots, \sigma(p^M))$. Then we have $\sigma(m)\leq \epsilon m+\delta$ for all $m$. \end{proof} \subsection{Proof of Proposition \ref{berthelot}} Set $${\mathcal B}^\dagger=\bigcup_{r>1,\;s>1} \{\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!}:\; f_{\mathbf v}(\mathbf x)\in K\{ r^{-1}\mathbf x\},\; \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert} \hbox { are bounded}\}.$$ Let's prove $\mathcal B^\dagger=\mathcal D^\dagger$.
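Both inclusions below rest on the estimates of Lemma \ref{prepare}. As a quick numerical sanity check of Lemma \ref{prepare} (i), equivalently of Legendre's formula $\mathrm{ord}_p(m!)=(m-\sigma(m))/(p-1)$, one may run the following Python sketch (purely illustrative; the prime $p=5$ and the range of $m$ are arbitrary choices):
\begin{verbatim}
# Check of Lemma prepare (i): ord_p(m!) = (m - sigma(m))/(p - 1), i.e.
# ord_p(pi^m / m!) = sigma(m)/(p - 1), since ord_p(pi) = 1/(p - 1).
def ord_p_factorial(m: int, p: int) -> int:
    """Legendre's formula: ord_p(m!) = sum_{i >= 1} floor(m / p^i)."""
    total, q = 0, p
    while q <= m:
        total += m // q
        q *= p
    return total

def digit_sum(m: int, p: int) -> int:
    """sigma(m): the sum of the base-p digits of m."""
    s = 0
    while m:
        s += m % p
        m //= p
    return s

p = 5
for m in range(1, 2000):
    assert (m - digit_sum(m, p)) % (p - 1) == 0
    assert ord_p_factorial(m, p) == (m - digit_sum(m, p)) // (p - 1)
print("Lemma prepare (i) verified for p = 5 and m < 2000")
\end{verbatim}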
Given $\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!}$ in $\mathcal B^\dagger$, choose real numbers $r>1, s>1$ and $C>0$ such that $$\Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert}\leq C.$$ We have $$\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!} =\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} \Big(f_{\mathbf v}(\mathbf x)\frac{\pi^{\vert\mathbf v\vert}}{\mathbf v!}\Big)\frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}}.$$ By Lemma \ref{prepare} (i), we have $\mathrm{ord}_p \Big(\frac{\pi^{\vert \mathbf v\vert}}{\mathbf v!}\Big)\geq 0.$ Hence $$\Big\Vert f_{\mathbf v}(\mathbf x)\frac{\pi^{\vert\mathbf v\vert}}{\mathbf v!}\Big\Vert_r s^{\vert \mathbf v\vert}\leq \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert}\leq C.$$ So $\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!}$ lies in $\mathcal D^\dagger$. Conversely, given $\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}}$ in $\mathcal D^\dagger$, choose real numbers $r>1, s>1$ and $C>0$ such that $$\Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert}\leq C.$$ We have $$\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\pi^{\vert \mathbf v\vert}} =\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} \Big(f_{\mathbf v}(\mathbf x)\frac{\mathbf v!}{\pi^{\vert \mathbf v\vert}}\Big)\frac{\partial^{\mathbf v}}{\mathbf v!}.$$ Choose $\epsilon>0$ so that $$s> p^{\frac{\epsilon}{p-1}},$$ and choose $\delta$ as in Lemma \ref{prepare} (ii). Applying Lemma \ref{prepare} componentwise, we have $$\mathrm{ord}_p \Big(\frac{\pi^{\vert \mathbf v\vert}}{\mathbf v!}\Big)\leq \frac{\epsilon \vert \mathbf v\vert +\delta N}{p-1}.$$ Let $s'=sp^{-\frac{\epsilon}{p-1}}>1$ and let $C'=C p^{\frac{\delta N}{p-1}}$. We have \begin{eqnarray*} \Big\Vert f_{\mathbf v}(\mathbf x)\frac{\mathbf v!}{\pi^{\vert\mathbf v\vert}}\Big\Vert_r s'^{\vert \mathbf v\vert}&\leq& \Vert f_{\mathbf v}(\mathbf x)\Vert_r p^{\frac{\epsilon \vert \mathbf v\vert +\delta N}{p-1}} s'^{\vert \mathbf v\vert}\\ &=& \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert} p^{\frac{\delta N}{p-1}} \\ &\leq& C'. \end{eqnarray*} So $\sum_{\mathbf v\in\mathbb Z_{\geq 0}^N} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\mathbf v!}$ lies in $\mathcal B^\dagger$. \begin{lemma}\label{comb} Let $S$ be any subset of $\mathbb Z^n_{\geq 0}$. There exists a finite subset $S_0$ of $S$ such that $S\subset \bigcup_{\mathbf v\in S_0}(\mathbf v+\mathbb Z_{\geq 0}^n)$. \end{lemma} \begin{proof} We use induction on $n$. When $n=1$, the assertion is trivial if $S$ is empty; otherwise we have $S\subset v+\mathbb Z_{\geq 0}$, where $v$ is the minimal element of $S\subset \mathbb Z_{\geq 0}$. Suppose the assertion holds for any subset of $\mathbb Z^n_{\geq 0}$, and let $S$ be a subset of $\mathbb Z^{n+1}_{\geq 0}$. If $S$ is empty, our assertion holds trivially. Otherwise, we fix an element $\mathbf a=(a_1, \ldots, a_{n+1})$ in $S$.
For any $i\in\{1,\ldots, n+1\}$ and any $0\leq b_i\leq a_i$, let $$S_{i, b_i}=\{(c_1, \ldots, c_{n+1})\in S:\; c_i=b_i\}.$$ By the induction hypothesis (identifying $S_{i,b_i}$ with a subset of $\mathbb Z^n_{\geq 0}$ by omitting the $i$-th coordinate), there exists a finite subset $T_{i, b_i}\subset S_{i, b_i}$ such that $$S_{i,b_i}\subset \bigcup_{\mathbf v\in T_{i,b_i}} (\mathbf v+\mathbb Z_{\geq 0}^{n+1}).$$ Any element $(c_1,\ldots, c_{n+1})$ of $S$ not lying in $\mathbf a+\mathbb Z^{n+1}_{\geq 0}$ satisfies $c_i<a_i$ for some $i$, and hence lies in some $S_{i,b_i}$ with $0\leq b_i\leq a_i$. We therefore have \begin{eqnarray*} S&\subset& \Big(\bigcup_{1\leq i\leq n+1} \bigcup_{0\leq b_i\leq a_i} S_{i,b_i}\Big)\bigcup (\mathbf a+\mathbb Z^{n+1}_{\geq 0})\\ &\subset& \Big(\bigcup_{1\leq i\leq n+1} \bigcup_{0\leq b_i\leq a_i} \bigcup_{\mathbf v\in T_{i,b_i}} (\mathbf v+\mathbb Z_{\geq 0}^{n+1})\Big) \bigcup (\mathbf a+\mathbb Z^{n+1}_{\geq 0}). \end{eqnarray*} We can take $S_0=\bigcup_{1\leq i\leq n+1} \bigcup_{0\leq b_i\leq a_i} T_{i, b_i}\bigcup \{\mathbf a\}.$ \end{proof} \begin{lemma}\label{LL} ${}$ (i) The ring homomorphism $$\phi: K\langle \mathbf x, \mathbf y\rangle^\dagger\to L^{\dagger\prime}, \quad \sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf y^{\mathbf v} \mapsto\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf t^{v_1\mathbf w_1+\cdots +v_N\mathbf w_N}$$ is surjective, where $\mathbf y=(y_1, \ldots, y_N)$ and $$K\langle \mathbf x, \mathbf y\rangle^\dagger=\bigcup_{r>1,\;s>1}\{ \sum_{\mathbf v\in \mathbb Z^N_{\geq 0}} f_{\mathbf v}(\mathbf x){\mathbf y}^{\mathbf v}:\; f_{\mathbf v}(\mathbf x)\in K\{r^{-1}\mathbf x\} \hbox { and } \Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert \mathbf v\vert} \hbox { are bounded}\}.$$ (ii) The homomorphism of $\mathcal D^\dagger$-modules $$\varphi: {\mathcal D}^\dagger\to L^{\dagger\prime}, \quad \sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}} \mapsto(\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}} )\cdot 1=\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf t^{v_1\mathbf w_1+\cdots +v_N\mathbf w_N}$$ is surjective. \end{lemma} \begin{proof} Decompose $\Delta$ into a finite union $\bigcup_\tau \tau$ so that each $\tau$ is a simplex of dimension $n$ with vertices $\{0,\mathbf w_{i_1},\ldots, \mathbf w_{i_n}\}$ for some subset $\{i_1,\ldots, i_n\}\subset \{1,\ldots, N\}$. For each $\tau$, let $\delta(\tau)$ be the cone generated by $\tau$, and let \begin{eqnarray*} B(\tau)&=&\mathbb Z^n\cap \{c_1\mathbf w_{i_1}+\cdots +c_n\mathbf w_{i_n}:\; 0\leq c_i\leq 1\},\\ C(\tau)&=& \{k_1\mathbf w_{i_1}+\cdots +k_n\mathbf w_{i_n}:\; k_i\in\mathbb Z_{\geq 0}\}. \end{eqnarray*} Being a discrete bounded set, $B(\tau)$ is finite. Every element $\mathbf w\in \mathbb Z^n\cap \delta(\tau)$ can be written uniquely as $$\mathbf w=b(\mathbf w)+c(\mathbf w)$$ with $b(\mathbf w)\in B(\tau)$ and $c(\mathbf w)\in C(\tau)$. So we have $\mathbb Z^n\cap\delta(\tau)=\bigcup_{\mathbf w\in B(\tau)}(\mathbf w+C(\tau))$, and hence $$C(A)=\bigcup_\tau(C(A)\cap \delta(\tau))=\bigcup_\tau \bigcup_{\mathbf w\in B(\tau)}\big (C(A)\cap (\mathbf w+C(\tau))\big ).$$ For each $C(A)\cap (\mathbf w+C(\tau))$, the map $$\mathbb Z^n_{\geq 0}\to \mathbf w+C(\tau),\quad (k_1, \ldots, k_n)\mapsto \mathbf w+k_1\mathbf w_{i_1}+\cdots +k_n\mathbf w_{i_n}$$ is a bijection. Applying Lemma \ref{comb} to the inverse image of $C(A)\cap (\mathbf w+C(\tau))$, we can find finitely many $\mathbf u_1, \ldots, \mathbf u_m\in C(A)\cap (\mathbf w+C(\tau))$ such that $$C(A)\cap (\mathbf w+C(\tau))= \bigcup_{i=1}^m (\mathbf u_i+C(\tau)).
$$ We thus decompose $C(A)$ into a finite union of subsets of the form $\mathbf u+C(\tau)$ such that $\tau$ is a simplex of dimension $n$ with vertices $\{0,\mathbf w_{i_1},\ldots, \mathbf w_{i_n}\}$ for some subset $\{i_1,\ldots, i_n\}\subset \{1,\ldots, N\}$, and $\mathbf u\in C(A)\cap (\mathbf w+C(\tau))$ for some $\mathbf w\in B(\tau)$. Every element of $L^{\dagger\prime}$ is a sum of elements of the form $\sum_{\mathbf w\in \mathbf u+C(\tau)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}$, where $a_{\mathbf w}(\mathbf x)\in K\{r^{-1}\mathbf x\}$ and $\Vert a_{\mathbf w}(\mathbf x)\Vert_r s^{d(\mathbf w)}$ are bounded for some $r,s>1$. To prove $\phi: K\langle \mathbf x, \mathbf y\rangle ^\dagger\to L^{\dagger\prime}$ is surjective, it suffices to show $\sum_{\mathbf w\in \mathbf u+C(\tau)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}$ lies in the image of $\phi$. Write $\mathbf u=c_1\mathbf w_1+\cdots+c_N\mathbf w_N$, where $c_i\in\mathbb Z_{\geq 0}$. A preimage for $\sum_{\mathbf w\in \mathbf u+C(\tau)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}$ is \begin{eqnarray*} \sum_{v_1, \ldots, v_n\geq 0} a_{\mathbf u+v_1 \mathbf w_{i_1}+\cdots +v_n\mathbf w_{i_n}} (\mathbf x) y_{i_1}^{c_{i_1}+v_{1}}\cdots y_{i_n}^{c_{i_n}+v_{n}}\prod_{j\in\{1, \ldots,N\}-\{i_1,\ldots,i_n\}} y_j^{c_j}. \end{eqnarray*} Here to verify this element lies in $K\langle \mathbf x, \mathbf y\rangle^\dagger$, we use the fact that $$d(\mathbf u+v_1 \mathbf w_{i_1}+\cdots +v_n\mathbf w_{i_n})=d(\mathbf u)+v_1+\cdots +v_n$$ since $\mathbf u, \mathbf w_{i_1},\ldots, \mathbf w_{i_n}$ all lie in the simplicial cone $\delta(\tau)$. This proves that $\phi: K\langle \mathbf x, \mathbf y\rangle ^\dagger\to L^{\dagger\prime}$ is surjective. It implies that $\varphi: {\mathcal D}^\dagger\to L^{\dagger\prime}$ is also surjective. \end{proof} \subsection{Proof of Theorem \ref{pde}} We have shown that $\varphi$ is surjective in the proof of Lemma \ref{LL}. The ring $D=K\Big[\frac{\partial}{\partial x_1},\ldots, \frac{\partial}{\partial x_N}\Big]$ of algebraic differential operators with constant coefficients is isomorphic to a polynomial ring and is noetherian. So we can find finitely many $\mu^{(1)},\ldots, \mu^{(m)}\in\Lambda$ such that $\Box_{\mu^{(1)}},\ldots, \Box_{\mu^{(m)}}$ generate the ideal $\sum_{\lambda\in\Lambda} D \Box_\lambda$ of $D$. Then they also generate the left ideal $\sum_{\lambda\in\Lambda}\mathcal D^\dagger \Box_\lambda$ of $\mathcal D^\dagger$. Suppose $\sum_{\mathbf v} f_{\mathbf v}(\mathbf x) \frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}}$ lies in the kernel of $\varphi$, that is, $$\sum_{\mathbf v} f_{\mathbf v}(\mathbf x)\mathbf t^{v_1\mathbf w_1+\cdots+v_N\mathbf w_N}=0,$$ where $f_{\mathbf v}(\mathbf x)\in K\{r^{-1}\mathbf x\}$ and $\Vert f_{\mathbf v}(\mathbf x)\Vert_r s^{\vert\mathbf v\vert}$ are bounded for some $r,s>1$. For each $\mathbf w\in C(A)$, let $$S_{\mathbf w}=\{\mathbf v\in \mathbb Z_{\geq 0}^N:\; \mathbf w=v_1\mathbf w_1+\cdots+v_N\mathbf w_N\}.$$ Then we have $$\sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x)=0.$$ For each nonempty $S_{\mathbf w}$, fix an element $\mathbf v^{(0)}=(v^{(0)}_1, \ldots, v^{(0)}_N)\in S_{\mathbf w}$. For any $\mathbf v\in S_{\mathbf w}$, let $\lambda_{\mathbf v}=\mathbf v-\mathbf v^{(0)}$. We have $\lambda_{\mathbf v}\in\Lambda$. Write $$\Box_{\lambda_{\mathbf v}}=P_{\mathbf v, 1}\Box_{\mu^{(1)}}+\cdots+P_{\mathbf v, m}\Box_{\mu^{(m)}}$$ for some differential operators $P_{\mathbf v, 1},\ldots, P_{\mathbf v, m}\in D$.
We have \begin{eqnarray*} \frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}}-\frac{\partial^{\mathbf v^{(0)}}}{\pi^{\vert \mathbf v^{(0)}\vert}} &=&\frac{\partial^{\min(\mathbf v,\mathbf v^{(0)})}}{\pi^{\sum_j \min(v_j, v_j^{(0)})}}\Big( \prod_{v_j>v^{(0)}_j}\Big(\frac{1}{\pi}\frac{\partial}{\partial x_j}\Big)^{v_j-v^{(0)}_j} - \prod_{v_j<v^{(0)}_j}\Big(\frac{1}{\pi}\frac{\partial}{\partial x_j}\Big)^{v^{(0)}_j-v_j}\Big)\\ &=&\frac{\partial^{\min(\mathbf v,\mathbf v^{(0)})}}{\pi^{\sum_j \min(v_j, v_j^{(0)})}}\Box_{\lambda_{\mathbf v}}\\ &=& \frac{\partial^{\min(\mathbf v,\mathbf v^{(0)})}}{\pi^{\sum_j \min(v_j, v_j^{(0)})}} (P_{\mathbf v, 1}\Box_{\mu^{(1)}}+\cdots+P_{\mathbf v, m}\Box_{\mu^{(m)}}),\\ \sum_{\mathbf v} f_{\mathbf v}(\mathbf x) \frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}} &=&\sum_{\mathbf w\in C(A)} \sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x)\frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}}\\ &=& \sum_{\mathbf w\in C(A)}\sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x) \Big(\frac{\partial^{\mathbf v}}{\pi^{\vert\mathbf v\vert}} -\frac{\partial^{\mathbf v^{(0)}}}{\pi^{\vert\mathbf v^{(0)}\vert}}\Big)\\ &=& \sum_{\mathbf w\in C(A)}\sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x) \frac{\partial^{\min(\mathbf v,\mathbf v^{(0)})}}{\pi^{\sum_j \min(v_j, v_j^{(0)})}} (P_{\mathbf v, 1}\Box_{\mu^{(1)}}+\cdots+P_{\mathbf v, m}\Box_{\mu^{(m)}})\\ &=& \sum_{k=1}^m \Big(\sum_{\mathbf w\in C(A)}\sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x) \frac{\partial^{\min(\mathbf v,\mathbf v^{(0)})}}{\pi^{\sum_j \min(v_j, v_j^{(0)})}} P_{\mathbf v, k}\Big)\Box_{\mu^{(k)}}. \end{eqnarray*} Here the second equality in the last chain uses $\sum_{\mathbf v\in S_{\mathbf w}} f_{\mathbf v}(\mathbf x)=0$. One can verify that $\varphi(\Box_\lambda)=0$ for all $\lambda\in\Lambda$. So we have $$\mathrm{ker}\,\varphi=\sum_{k=1}^m {\mathcal D}^\dagger\Box_{\mu^{(k)}}=\sum_{\lambda\in\Lambda} {\mathcal D}^\dagger\Box_{\lambda}.$$ For any $g_i \in L^{\dagger\prime}$ $(i=1, \ldots, n)$, choose $P_i\in {\mathcal D}^\dagger$ such that $\varphi(P_i)=g_i$. One can check directly that $E_{i, \gamma}(1)=F_{i,\gamma}(1)$. Moreover, $F_{i,\gamma}$ commutes with each $\nabla_{\frac{\partial }{\partial x_j}}$ and hence with $P_i$. So we have \begin{eqnarray*} \varphi(\sum_i P_iE_{i,\gamma})= \sum_i P_i E_{i,\gamma}(1) =\sum_i P_i F_{i,\gamma}(1) = \sum_i F_{i,\gamma}P_i(1) = \sum_i F_{i,\gamma} \varphi(P_i) = \sum_i F_{i,\gamma}g_i. \end{eqnarray*} So we have $$\varphi(\sum_i {\mathcal D}^\dagger E_{i,\gamma})=\sum_i F_{i,\gamma} L^{\dagger\prime}.$$ Together with the fact that $\varphi$ is surjective and $\mathrm{ker}\,\varphi=\sum_{\lambda\in\Lambda} {\mathcal D}^\dagger\Box_\lambda$, we get $${\mathcal D}^\dagger/\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda\cong L^{\dagger\prime},\quad {\mathcal D}^\dagger/(\sum_{i=1}^n {\mathcal D}^\dagger E_{i,\gamma}+\sum_{\lambda\in \Lambda}{\mathcal D}^\dagger \Box_\lambda)\cong L^{\dagger\prime}/ \sum_{i=1}^n F_{i,\gamma} L^{\dagger\prime}.$$ \subsection{Proof of Theorem \ref{coherent}}\label{noname} It is known that $\mathcal D^\dagger$ is coherent (\cite{Hu}). So by Theorem \ref{pde}, $L^{\dagger\prime}$ is a coherent $\mathcal D^\dagger$-module. Keep the notation of the proof of Lemma \ref{LL}. Decompose $\Delta$ into a finite union $\bigcup_\tau \tau$ so that each $\tau$ is a simplex of dimension $n$ with vertices $\{0,\mathbf w_{i_1},\ldots, \mathbf w_{i_n}\}$ for some subset $\{i_1,\ldots, i_n\}\subset \{1,\ldots, N\}$. Let $B=\bigcup_{\tau}B(\tau)$, which is a finite set.
Consider the map $$\psi: \bigoplus_{\beta\in B} L^{\dagger\prime}\to L^\dagger,\quad (f_\beta)\mapsto \sum_{\beta\in B}f_\beta {\mathbf t}^\beta. $$ Note that this is a homomorphism of $\mathcal D^\dagger$-modules. We will prove that $\psi$ is surjective and that $\mathrm{ker}\,\psi$ is a finitely generated $\mathcal D^\dagger$-module. Combined with the fact that $L^{\dagger\prime}$ is a coherent $\mathcal D^\dagger$-module, this implies that $L^\dagger$ is a coherent $\mathcal D^\dagger$-module. We have $\mathbb Z^n\cap \delta=\bigcup_\tau (\mathbb Z^n\cap \delta(\tau)).$ To prove $\psi$ is surjective, it suffices to show every element in $L^\dagger$ of the form $\sum_{\mathbf w\in \mathbb Z^n\cap \delta(\tau)} a_{\mathbf w}(\mathbf x) \mathbf t^{\mathbf w}$ lies in the image of $\psi$, where $a_{\mathbf w}(\mathbf x)\in K\{r^{-1}\mathbf x\}$ and $\Vert a_{\mathbf w}(\mathbf x)\Vert_r s^{d(\mathbf w)}$ are bounded for some $r,s>1$. Every element $\mathbf w\in \mathbb Z^n\cap \delta(\tau)$ can be written uniquely as $$\mathbf w=b(\mathbf w)+c(\mathbf w)$$ with $b(\mathbf w)\in B(\tau)$ and $c(\mathbf w)\in C(\tau)$. We have $$\sum_{\mathbf w\in \mathbb Z^n\cap \delta(\tau)} a_{\mathbf w}(\mathbf x) \mathbf t^{\mathbf w}=\sum_{\beta\in B(\tau)}\Big(\sum_{\mathbf w\in \mathbb Z^n\cap \delta(\tau),\; b(\mathbf w)=\beta} a_{\mathbf w}(\mathbf x) \mathbf t^{c(\mathbf w)}\Big) {\mathbf t}^\beta.$$ Note that $\sum\limits_{\mathbf w\in \mathbb Z^n\cap \delta(\tau),\; b(\mathbf w)=\beta} a_{\mathbf w}(\mathbf x) \mathbf t^{c(\mathbf w)}$ lies in $L^{\dagger\prime}$. To see this, we use the fact that $$d(\mathbf w)=d(b(\mathbf w))+d(c(\mathbf w))$$ since $b(\mathbf w)$ and $c(\mathbf w)$ both lie in the simplicial cone $\delta(\tau)$. Thus $\psi$ is surjective. Given $\beta',\beta''\in B$, set \begin{eqnarray*} L_{\beta',\beta''}&=&\{f\in L^{\dagger\prime}:\; f\mathbf t^{\beta'-\beta''}\in L^{\dagger \prime}\}, \\ S_{\beta',\beta''}&=&\{\mathbf w\in C(A): \mathbf w+\beta'-\beta''\in C(A)\}. \end{eqnarray*} Note that elements in $L_{\beta',\beta''}$ are of the form $\sum_{\mathbf w\in S_{\beta',\beta''}} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}$ with $a_{\mathbf w}(\mathbf x)\in K\{r^{-1}\mathbf x\}$ and $\Vert a_{\mathbf w}(\mathbf x)\Vert_r s^{d(\mathbf w)}$ bounded for some $r, s>1$. We have $S_{\beta',\beta''}+\mathbf w_j\subset S_{\beta',\beta''}$ for all $j$, and $L_{\beta',\beta''}$ is a $\mathcal D^\dagger$-submodule of $L^{\dagger\prime}$. For any $f\in L_{\beta',\beta''}$ and $\beta\in B$, let $$\iota_{\beta',\beta''}(f)_\beta=\left\{ \begin{array}{cl} f &\hbox{if } \beta=\beta', \\ -f\mathbf t^{\beta'-\beta''}&\hbox{if }\beta=\beta'', \\ 0&\hbox{if } \beta\in B\backslash\{\beta',\beta''\}. \end{array} \right.$$ Then the map $$\iota_{\beta', \beta''}: L_{\beta', \beta''}\to \bigoplus_{\beta\in B} L^{\dagger\prime},\quad f\mapsto (\iota_{\beta',\beta''}(f)_\beta)_{\beta\in B}$$ is a homomorphism of $\mathcal D^\dagger$-modules and its image is contained in $\mathrm{ker}\, \psi$. We will prove that each $L_{\beta',\beta''}$ is a finitely generated $\mathcal D^\dagger$-module, and that $$\mathrm{ker}\, \psi=\sum_{\beta',\beta''} \iota_{\beta', \beta''}(L_{\beta', \beta''}).$$ It follows that $\mathrm{ker}\,\psi$ is a finitely generated $\mathcal D^\dagger$-module.
We have $$S_{\beta',\beta''}=\bigcup_\tau(S_{\beta',\beta''}\cap \delta(\tau))= \bigcup_\tau \bigcup_{\mathbf w\in B(\tau)}(S_{\beta',\beta''}\cap (\mathbf w+C(\tau))).$$ Again by Lemma \ref{comb}, for each $S_{\beta',\beta''}\cap (\mathbf w+C(\tau))$, we can find finitely many $\mathbf u_1, \ldots, \mathbf u_m\in S_{\beta',\beta''}\cap (\mathbf w+C(\tau))$ such that $$S_{\beta',\beta''}\cap (\mathbf w+C(\tau))= \bigcup_{i=1}^m (\mathbf u_i+C(\tau)). $$ We thus decompose $S_{\beta',\beta''}$ into a finite union of subsets of the form $\mathbf u+C(\tau)$ such that $\tau$ is a simplex of dimension $n$ with vertices $\{0,\mathbf w_{i_1},\ldots, \mathbf w_{i_n}\}$ for some subset $\{i_1,\ldots, i_n\}\subset \{1,\ldots, N\}$, and $\mathbf u\in S_{\beta',\beta''}\cap (\mathbf w+C(\tau))$ for some $\mathbf w\in B(\tau)$. We claim that $L_{\beta',\beta''}$ is generated by these $\mathbf t^{\mathbf u}$ as a $\mathcal D^\dagger$-module. Indeed, every element of $L_{\beta',\beta''}$ is a sum of elements of the form $\sum_{\mathbf w\in \mathbf u+C(\tau)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}$. We have \begin{eqnarray*} \sum_{\mathbf w\in \mathbf u+C(\tau)} a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w} =\sum_{v_1, \ldots, v_n\geq 0} a_{\mathbf u+v_1 \mathbf w_{i_1}+\cdots +v_n\mathbf w_{i_n}} (\mathbf x) \Big(\frac{1}{\pi} \frac{\partial}{\partial x_{i_1}}\Big)^{v_{1}}\cdots \Big(\frac{1}{\pi}\frac {\partial}{\partial x_{i_n}}\Big)^{v_{n}} \cdot \mathbf t^{\mathbf u}. \end{eqnarray*} Suppose $(f^{(0)}_\beta)\in\bigoplus_{\beta\in B}L^{\dagger\prime}$ is an element in $\mathrm{ker}\,\psi$. We then have $$\sum_{\beta\in B} f^{(0)}_\beta \mathbf t^\beta=0.$$ Write $B=\{\beta_1, \ldots, \beta_k\}$, and write $$f^{(0)}_\beta=\sum_{\mathbf w\in C(A)} a_{\beta\mathbf w}(\mathbf x) \mathbf t^{\mathbf w}.$$ Define \begin{eqnarray*} f_\beta^{(1)}&=&\sum_{\mathbf w\in C(A), \; \mathbf w+(\beta-\beta_1)\not\in C(A)} a_{\beta\mathbf w}(\mathbf x)\mathbf t^{\mathbf w},\\ g_\beta^{(1)}&=&\sum_{\mathbf w\in C(A), \; \mathbf w+(\beta-\beta_1)\in C(A)} a_{\beta\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}. \end{eqnarray*} In particular, $f_{\beta_1}^{(1)}$ is $0$ since it is a sum over the empty set. We have $g_\beta^{(1)}\in L_{\beta,\beta_1}$ and \begin{eqnarray}\label{fg} (f^{(0)}_\beta)-\sum_{\beta\in B\backslash\{\beta_1\}}\iota_{\beta,\beta_1}(g^{(1)}_\beta)=(f^{(1)}_\beta). \end{eqnarray} To verify this equation, we show it holds componentwise. The equation clearly holds for the components with $\beta\not=\beta_1$. Note that $L^{\dagger\prime}$ is a direct summand of $L^\dagger$ in a canonical way as an abelian group. Applying the projection $L^{\dagger}\to L^{\dagger\prime}$ to the equation $$\sum_{\beta\in B} f^{(0)}_\beta \mathbf t^{\beta-\beta_1}=0,$$ we get $$f_{\beta_1}^{(0)}+\sum_{\beta\in B\backslash\{\beta_1\}} g_\beta^{(1)} \mathbf t^{\beta-\beta_1}=0.$$ This is exactly the $\beta_1$ component of equation (\ref{fg}). In general, for $i=1, \ldots, k$, we define \begin{eqnarray*} f_\beta^{(i)}&=&\sum_{\mathbf w\in C(A), \; \mathbf w+(\beta-\beta_1)\not\in C(A),\ldots,\; \mathbf w+(\beta-\beta_i)\not\in C(A)} a_{\beta\mathbf w}(\mathbf x)\mathbf t^{\mathbf w},\\ g_\beta^{(i)}&=&\sum_{\mathbf w\in C(A), \; \mathbf w+(\beta-\beta_1)\not \in C(A),\ldots, \; \mathbf w+(\beta-\beta_{i-1})\not\in C(A), \mathbf w+(\beta-\beta_i)\in C(A)} a_{\beta\mathbf w}(\mathbf x)\mathbf t^{\mathbf w}.
\end{eqnarray*} We have $g_\beta^{(i)}\in L_{\beta,\beta_i}$ and $$(f^{(i-1)}_\beta)-\sum_{\beta\in B}\iota_{\beta,\beta_i}(g^{(i)}_\beta)=(f^{(i)}_\beta).$$ We have $f^{(k)}_\beta=0$ for all $\beta\in B=\{\beta_1, \ldots, \beta_k\}$: for $\beta=\beta_j$ the defining condition $\mathbf w+(\beta-\beta_j)\not\in C(A)$ fails, so the sum is over the empty set. So we have $$(f_\beta^{(0)})=\sum_{i=1}^k\sum_{\beta\in B}\iota_{\beta,\beta_i}(g^{(i)}_\beta).$$ Hence $\mathrm{ker}\, \psi=\sum_{\beta',\beta''} \iota_{\beta', \beta''}(L_{\beta', \beta''}).$ \subsection{Proof of Lemma \ref{flat}} Let $R$ be the integer ring of $K$, and let \begin{eqnarray*} R\langle \mathbf x\rangle^\dagger&=&\bigcup_{r>1}\{ \sum_{\mathbf v\in\mathbb Z^N_{\geq 0}} a_{\mathbf v}\mathbf x^{\mathbf v}:\; a_{\mathbf v}\in R, \; \vert a_{\mathbf v}\vert r^{\vert \mathbf v\vert} \hbox{ are bounded }\}, \\ R\langle \mathbf x,\mathbf y\rangle^\dagger&=&\bigcup_{r>1, \; s>1}\{ \sum_{\mathbf u,\mathbf v\in\mathbb Z^N_{\geq 0}} a_{\mathbf u\mathbf v}\mathbf x^{\mathbf u}\mathbf y^{\mathbf v}:\; a_{\mathbf u\mathbf v}\in R, \; \vert a_{\mathbf u\mathbf v}\vert r^{\vert \mathbf u\vert}s^{\vert\mathbf v\vert} \hbox{ are bounded }\}, \\ L_R^\dagger&=&\bigcup_{r>1, s>1} \{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0},\; \mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf v\mathbf w} \mathbf x^{\mathbf v}\mathbf t^{\mathbf w}:\; a_{\mathbf v\mathbf w}\in R, \; \vert a_{\mathbf v\mathbf w}\vert r^{\vert \mathbf v\vert} s^{d(\mathbf w)} \hbox { are bounded}\},\\ L_R^{\dagger\prime}&=&\bigcup_{r>1, s>1} \{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0},\; \mathbf w\in C(A)} a_{\mathbf v\mathbf w} \mathbf x^{\mathbf v}\mathbf t^{\mathbf w}:\; a_{\mathbf v\mathbf w}\in R, \; \vert a_{\mathbf v\mathbf w}\vert r^{\vert \mathbf v\vert} s^{d(\mathbf w)} \hbox { are bounded}\}. \end{eqnarray*} We have $$K\langle \mathbf x\rangle^\dagger\cong R\langle \mathbf x\rangle^\dagger\otimes_R K, \quad L^\dagger\cong L_R^\dagger\otimes_R K.$$ To prove $L^\dagger$ is flat over $K\langle \mathbf x\rangle^\dagger$, it suffices to show $L_R^\dagger$ is flat over $R\langle \mathbf x\rangle^\dagger$. Keep the notation of the proof of Lemma \ref{LL} and of subsection \ref{noname}. The same proof shows that the following homomorphisms \begin{eqnarray*} \bigoplus_{\beta\in B} L_R^{\dagger\prime}\to L_R^\dagger, && (f_\beta)\mapsto \sum_{\beta\in B}f_\beta {\mathbf t}^\beta,\\ R\langle \mathbf x, \mathbf y\rangle^\dagger\to L_R^{\dagger\prime}, && \sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf y^{\mathbf v} \mapsto\sum_{\mathbf v\in \mathbb Z^N_{\geq 0}}f_{\mathbf v}(\mathbf x) \mathbf t^{v_1\mathbf w_1+\cdots +v_N\mathbf w_N} \end{eqnarray*} are surjective. It is known that $R\langle\mathbf x, \mathbf y\rangle^\dagger$ is a noetherian ring by \cite{Fulton}. It follows that $L_R^\dagger$ is also noetherian. We have \begin{eqnarray*} L_R^\dagger /\pi^k L_R^\dagger \cong (R/\pi^k)[\mathbf x][\mathbb Z^n\cap \delta],\quad R\langle \mathbf x\rangle^\dagger/\pi^k R\langle \mathbf x\rangle^\dagger\cong (R/\pi^k)[\mathbf x]. \end{eqnarray*} So $L_R^\dagger /\pi^k L_R^\dagger $ is flat over $R\langle \mathbf x\rangle^\dagger /\pi^kR \langle \mathbf x\rangle^\dagger $ for all $k$, since $(R/\pi^k)[\mathbf x][\mathbb Z^n\cap \delta]$ is a free module over $(R/\pi^k)[\mathbf x]$. By \cite[IV Th\'eor\`eme 5.6]{SGA1}, $L_R^\dagger$ is flat over $R\langle \mathbf x\rangle^\dagger$.
Finally let's prove $L^\dagger\otimes_{K\langle \mathbf x\rangle^\dagger}K'\cong L^\dagger_0.$ One can verify directly that in the case where $K'=K$, the homomorphism $$L^\dagger \to L_0^\dagger,\quad \sum_{\mathbf w\in \mathbb Z^n\cap\delta}a_{\mathbf w}(\mathbf x)\mathbf t^{\mathbf w} \mapsto \sum_{\mathbf w\in \mathbb Z^n\cap\delta}a_{\mathbf w}(0)\mathbf t^{\mathbf w}$$ is surjective with kernel $(x_1, \ldots, x_N)L^\dagger$. This proves our assertion in the case where $K=K'$ and $\mathbf a=(0,\ldots, 0)$. In general, we have an isomorphism $L^\dagger\otimes_K K'\cong L_{K'}^\dagger$, where $$L_{K'}^\dagger=\bigcup_{r>1, s>1} \{\sum_{\mathbf v\in\mathbb Z^N_{\geq 0},\; \mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf v\mathbf w} \mathbf x^{\mathbf v}\mathbf t^{\mathbf w}:\; a_{\mathbf v\mathbf w}\in K', \; \vert a_{\mathbf v\mathbf w}\vert r^{\vert \mathbf v\vert} s^{d(\mathbf w)} \hbox { are bounded}\}.$$ By base change from $K$ to $K'$ and using this isomorphism, we can reduce to the case where $K'=K$. Then using the automorphism $$K'\langle \mathbf x\rangle^\dagger \to K'\langle \mathbf x\rangle^\dagger,\quad x_i\mapsto x_i-a_i, $$ we can reduce to the case where $\mathbf a=(0,\ldots, 0)$. \subsection{Proof of Lemma \ref{FG}} We first work with de Rham complexes and later with twisted de Rham complexes. We have $$\Psi_{\mathbf a}\circ\Phi_{\mathbf a}=\mathrm{id}$$ on $C^\cdot(L_0^\dagger)$. Recall that $K'$ contains all $q$-th roots of unity. Let $\mu_q$ be the group of $q$-th roots of unity in $K'$. For any $\zeta=(\zeta_1, \ldots, \zeta_n)\in \mu_q^n$, write $$\zeta\mathbf t=(\zeta_1t_1, \ldots, \zeta_nt_n).$$ We have $$\sum_{\zeta\in \mu_q^n} \zeta^{\mathbf w}=\left\{\begin{array}{cl} q^n &\hbox{if } q|\mathbf w,\\ 0&\hbox{otherwise}. \end{array} \right.$$ So we have \begin{eqnarray*} \Phi_{\mathbf a}\circ \Psi_{\mathbf a}(\sum_{\mathbf w}c_{\mathbf w}\mathbf t^{\mathbf w})=\sum_{\mathbf w} c_{q\mathbf w}\mathbf t^{q\mathbf w}=\frac{1}{q^n} \sum_{\zeta\in \mu_q^n} \sum_{\mathbf w} c_{\mathbf w}(\zeta \mathbf t)^{\mathbf w}. \end{eqnarray*} Let $\Theta_\zeta$ be the endomorphism on differential forms defined by $$\Theta_\zeta\Big(\sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}(\mathbf t)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}} {t_{i_k}}\Big)= \sum_{1\leq i_1<\cdots < i_k\leq n}f_{i_1\ldots i_k}(\zeta\mathbf t)\frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}} {t_{i_k}}.$$ It commutes with $\mathrm d_{\mathbf t}$. We have $$\Phi_{\mathbf a}\circ \Psi_{\mathbf a}=\frac{1}{q^n}\sum_{\zeta\in \mu_q^n} \Theta_\zeta.$$ Let's show $\Phi_{\mathbf a}\circ \Psi_{\mathbf a}$ is homotopic to $\mathrm{id}$. It suffices to show that $\Theta_\zeta$ is homotopic to $\mathrm{id}$ for each $\zeta\in \mu_q^n$. Let $$L_T^\dagger =\bigcup_{r>1,\;s>1}\{\sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w}(T) \mathbf t^{\mathbf w}:\; a_{\mathbf w}(T) \in K' \{r^{-1} T\}, \; \Vert a_{\mathbf w}(T)\Vert_r s^{d(\mathbf w)} \hbox { are bounded}\}.$$ Consider the de Rham complex $(C^\cdot(L_T^\dagger), \mathrm d)$ so that $C^k(L_T^\dagger)$ is the space of $k$-forms which can be written as a sum of products of $\mathrm dT, \frac{\mathrm dt_1}{t_1}, \ldots, \frac{\mathrm dt_n}{t_n}$ and functions in $L_T^\dagger$, and $\mathrm d:C^k(L_T^\dagger)\to C^{k+1}(L_T^\dagger)$ is the usual exterior derivative of differential forms.
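The homotopy itself is constructed in the remainder of the proof. As an aside, the root-of-unity averaging identity used above to write $\Phi_{\mathbf a}\circ\Psi_{\mathbf a}=\frac{1}{q^n}\sum_\zeta\Theta_\zeta$ can be checked numerically over $\mathbb C$, one variable at a time (an illustrative Python sketch; the choice $q=4$ is arbitrary):
\begin{verbatim}
# Check over C: (1/q) * sum over the q-th roots of unity zeta of zeta^w
# equals 1 if q | w and 0 otherwise; the n-variable statement follows by
# taking products.  Illustration only.
import cmath

q = 4
roots = [cmath.exp(2j * cmath.pi * k / q) for k in range(q)]

def avg(w: int) -> complex:
    return sum(z ** w for z in roots) / q

for w in range(-8, 9):
    expected = 1.0 if w % q == 0 else 0.0
    assert abs(avg(w) - expected) < 1e-12
print("averaging identity verified for q =", q)
\end{verbatim}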
The substitution $$t_i\to (1+(\zeta_i-1)T) t_i\quad (i=1, \ldots, n)$$ induces a chain map $$\iota: (C^\cdot (L_0^\dagger),\mathrm d_t) \to (C^\cdot(L_T^\dagger),\mathrm d).$$ Here we use the fact that $\vert \zeta_i-1\vert<1$, so that each $1+(\zeta_i-1)T$ is a unit in $L_T^\dagger$. In particular, $\frac{\mathrm d\big((1+(\zeta_i-1)T)t_i\big)}{(1+(\zeta_i-1)T)t_i}$ lies in $C^\cdot(L_T^\dagger)$. The evaluation at $T=0$ (resp. $T=1$) induces a chain map $$\mathrm{ev}_0: (C^\cdot(L_T^\dagger), \mathrm d) \to (C^\cdot(L_0^\dagger),\mathrm d_t) \quad (\hbox{resp. }\mathrm{ev}_1: (C^\cdot(L_T^\dagger), \mathrm d) \to (C^\cdot(L_0^\dagger),\mathrm d_t)). $$ We have $$\mathrm{ev}_1\circ \iota=\Theta_\zeta,\quad \mathrm{ev}_0\circ \iota=\mathrm{id}.$$ To prove $\Theta_\zeta$ is homotopic to the identity, it suffices to show $\mathrm{ev}_1$ is homotopic to $\mathrm{ev}_0$. Note that $\int_0^1 g(T,\mathbf t)\,\mathrm dT$ lies in $L_0^\dagger$ for any $g(T,\mathbf t)\in L^\dagger_T$. Define $\Xi: C^k(L^\dagger_T)\to C^{k-1}(L^\dagger_0)$ by \begin{eqnarray*} && \Xi \Big(f(T,\mathbf t) \frac{\mathrm dt_{i_1}}{t_{i_1}}\wedge \cdots\wedge \frac{\mathrm dt_{i_k}}{t_{i_k}}\Big)=0,\\ && \Xi \Big(g(T,\mathbf t) \mathrm dT\wedge \frac{\mathrm dt_{j_1}}{t_{j_1}}\wedge \cdots\wedge \frac{\mathrm dt_{j_{k-1}}}{t_{j_{k-1}}}\Big) =\Big(\int_0^1 g(T,\mathbf t) \mathrm dT\Big) \frac{\mathrm dt_{j_1}}{t_{j_1}}\wedge \cdots\wedge \frac{\mathrm dt_{j_{k-1}}}{t_{j_{k-1}}}. \end{eqnarray*} Then we have $$\mathrm d_{\mathbf t}\Xi +\Xi \mathrm d=\mathrm{ev}_1-\mathrm{ev}_0.$$ We now consider the twisted de Rham complexes. Let $$F_{\mathbf a}, G_{\mathbf a}, T_{\zeta}, L, E_0, E_1, H$$ be the conjugates of $$\Phi_{\mathbf a}, \Psi_{\mathbf a}, \Theta_{\zeta},\iota, \mathrm{ev}_0, \mathrm{ev}_1, \Xi$$ by $t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))$ respectively. One verifies that they are defined on $C^\cdot(L^\dagger_0)$ or $C^\cdot(L^\dagger_T)$. By the discussion above for the untwisted de Rham complexes, we have \begin{eqnarray*} &&G_{\mathbf a}F_{\mathbf a}=\mathrm{id},\quad F_{\mathbf a}G_{\mathbf a}=\frac{1}{q^n}\sum_{\zeta\in \mu_q^n} T_\zeta,\\ && E_1\circ L=T_\zeta,\quad E_0\circ L=\mathrm{id},\quad dH+Hd=E_1-E_0. \end{eqnarray*} It follows that each $T_\zeta$ is homotopic to the identity and hence $F_{\mathbf a}G_{\mathbf a}$ is also homotopic to the identity. \section{Dwork's theory} \subsection{}\label{trace} Let \begin{eqnarray*} \theta(z)=\exp(\pi z-\pi z^p),\quad \theta_m(z)=\exp(\pi z-\pi z^{p^m})=\prod_{i=0}^{m-1}\theta(z^{p^i}). \end{eqnarray*} Then $\theta_m (z)$ converges in a disc of radius $>1$, and the value $\theta(1)=\theta(z)|_{z=1}$ of the power series $\theta(z)$ at $z=1$ is a primitive $p$-th root of unity in $K$ (\cite[Theorems 4.1 and 4.3]{M1}). Let $\bar u\in\mathbb F_{p^m}$ and let $u\in \overline{\mathbb Q}_p$ be its Teichm\"uller lifting, that is, $u^{p^m}=u$ and $u\equiv \bar u\mod p$. Then we have (\cite[Theorem 4.4]{M1}) $$\theta_m(u)=\theta(1)^{\mathrm{Tr}_{\mathbb F_{p^m}/\mathbb F_p}(\bar u)}.$$ From now on, we denote elements in finite fields by letters with bars such as $\bar u, \bar a_j, \bar u_i$ etc., and denote their Teichm\"uller liftings by the same letters without bars such as $u, a_j, u_i$ etc. Let $\psi_m:\mathbb F_{q^m}\to K^\ast$ be the additive character defined by $$\psi_m(\bar u)=\theta(1)^{\mathrm{Tr}_{\mathbb F_{q^m}/\mathbb F_p}(\bar u)}.$$ Then we have $$\psi_m(\bar u)=\exp(\pi z-\pi z^{q^m})|_{z=u}.$$ Denote $\psi_1$ by $\psi$.
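Before computing with these characters, it may help to see a complex-embedded toy instance of the twisted sums $S_m$ considered below: take $n=N=1$, $F(\bar{\mathbf a},\mathbf t)=\bar a_1t_1$, $q=p$ and $m=1$; with a nontrivial multiplicative character, $S_1$ becomes a classical Gauss sum of absolute value $\sqrt p$. The following Python sketch checks this numerically (the choices $p=11$, primitive root $2$, and $a=3$ are illustrative; the paper's $\chi$ and $\psi$ are of course $p$-adically valued):
\begin{verbatim}
# Complex-embedded toy case of the twisted exponential sums: a classical
# Gauss sum S_1 = sum_{u in F_p^*} chi(u) psi(a*u), with |S_1| = sqrt(p).
import cmath

p = 11
g = 2                                   # a primitive root modulo p

# discrete logarithm table with respect to g
ind = {pow(g, k, p): k for k in range(p - 1)}

def chi(u: int) -> complex:
    # a multiplicative character of exact order p - 1
    return cmath.exp(2j * cmath.pi * ind[u % p] / (p - 1))

def psi(a: int) -> complex:
    # additive character a -> exp(2*pi*i*a/p)
    return cmath.exp(2j * cmath.pi * (a % p) / p)

a = 3
S1 = sum(chi(u) * psi(a * u) for u in range(1, p))
assert abs(abs(S1) - p ** 0.5) < 1e-9
print(f"|S_1| = {abs(S1):.6f}, sqrt(p) = {p ** 0.5:.6f}")
\end{verbatim}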
We have $\psi_m=\psi\circ \mathrm{Tr}_{\mathbb F_{q^m}/\mathbb F_q}.$ Let $\bar a_1,\ldots, \bar a_N\in \mathbb F_q$. For any $\bar u_1,\ldots, \bar u_n \in \mathbb F_{q^m}^*$, we have \begin{eqnarray}\label{add} \begin{array}{ccl} \psi \Big(\mathrm{Tr}_{\mathbb F_{q^m}/\mathbb F_q}(\sum_{j=1}^N \bar a_j \bar u_1^{w_{1j}}\cdots \bar u_n^{w_{nj}})\Big) &=&\prod_{j=1}^N \psi_m (\bar a_j \bar u_1^{w_{1j}}\cdots \bar u_n^{w_{nj}}) \\ &= &\prod_{j=1}^N\exp(\pi z-\pi z^{q^m})|_{z=a_ju_1^{w_{1j}}\cdots u_n^{w_{nj}}}. \end{array} \end{eqnarray} Let $\chi: \mathbb F_q^*\to \overline{\mathbb Q}_p^*$ be the Teichm\"uller character, that is, $\chi(\bar u)=u$ is the Teichm\"uller lifting of $\bar u\in\mathbb F_q^*$. It is a generator for the group of multiplicative characters on $\mathbb F_q$. Any multiplicative character $\mathbb F_q^*\to \overline{\mathbb Q}_p^*$ is of the form $\chi_{\gamma}=\chi^{\gamma (1-q)}$ for some rational number $\gamma\in \frac{1}{1-q}\mathbb Z$. Moreover, for any $\bar u\in \mathbb F_{q^m}^*$, we have \begin{eqnarray}\label{multi} \chi_\gamma (\mathrm{Norm}_{\mathbb F_{q^m}/\mathbb F_q}(\bar u))= (u^{1+q+\cdots+q^{m-1}})^{\gamma(1-q)}=u^{\gamma(1-q^m)}. \end{eqnarray} Let $\gamma_1, \ldots, \gamma_n\in \frac{1}{1-q}\mathbb Z$. Set $\chi_i=\chi^{\gamma_i(1-q)}$ $(i=1,\ldots, n)$. Consider the twisted exponential sum $$S_m (F(\bar{\mathbf a},\mathbf t))=\sum_{\bar u_1,\ldots, \bar u_n\in \mathbb F_{q^m}^*} \chi_1 (\mathrm{Norm}_{\mathbb F_{q^m}/\mathbb F_q}(\bar u_1)) \cdots\chi_n(\mathrm{Norm}_{\mathbb F_{q^m}/\mathbb F_q}(\bar u_n))\psi \Big(\mathrm{Tr}_{\mathbb F_{q^m}/\mathbb F_q}\Big(\sum_{j=1}^N \bar a_j \bar u_1^{w_{1j}}\cdots \bar u_n^{w_{nj}}\Big)\Big).$$ Write $\exp(\pi z-\pi z^{q^m})=\sum_{i=0}^\infty c_i z^i$. By the equations (\ref{add}) and (\ref{multi}), we have \begin{eqnarray*} &&S_m (F(\bar{\mathbf a},\mathbf t))\\ &=&\sum_{u_i^{q^m-1}=1} u_1^{\gamma_1(1-q^m)}\cdots u_n^{\gamma_n(1-q^m)} \prod_{j=1}^N \exp(\pi z-\pi z^{q^m})|_{z=a_ju_1^{w_{1j}}\cdots u_n^{w_{nj}}}\\ &=& \sum_{u_i^{q^m-1}=1} u_1^{\gamma_1(1-q^m)}\cdots u_n^{\gamma_n(1-q^m)} \prod_{j=1}^N \Big(\sum_{i=0}^\infty c_i (a_ju_1^{w_{1j}}\cdots u_n^{w_{nj}})^i\Big)\\ &=& \sum_{u_i^{q^m-1}=1} \left(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)} \prod_{j=1}^N \Big(\sum_{i=0}^\infty c_i (a_jt_1^{w_{1j}}\cdots t_n^{w_{nj}})^i\Big)\right)|_{t_i=u_i}\\ &=& \sum_{u_i^{q^m-1}=1} \left(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)}\prod_{j=1}^N \exp\big(\pi a_jt_1^{w_{1j}}\cdots t_n^{w_{nj}}- \pi a_jt_1^{q^mw_{1j}}\cdots t_n^{q^mw_{nj}}\big) \right)|_{t_i=u_i} \\ &=& \sum_{u_i^{q^m-1}=1} \left(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q^m})\big) \right)|_{t_i=u_i}. \end{eqnarray*} We thus have \begin{eqnarray}\label{addagain} \qquad \qquad S_m (F(\bar{\mathbf a},\mathbf t)) =\sum_{u_i^{q^m-1}=1} \left(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q^m})\big) \right)|_{t_i=u_i}. \end{eqnarray} \subsection{} Let $K'$ be a finite extension of $K$ containing all $q$-th roots of unity. Set \begin{eqnarray*} L(s)_0&=&\{\sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w} t^{\mathbf w}:\; a_{\mathbf w}\in K', \; \vert a_{\mathbf w}\vert s^{d(\mathbf w)} \hbox { are bounded}\}. \end{eqnarray*} We have $L^\dagger_0=\bigcup_{s>1}L(s)_0.$ Note that $L(s)_0$ $(s\geq 1)$ and $L^\dagger_0$ are rings.
Each $L(s)_0$ is a Banach space with respect to the norm $$\Vert \sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w} t^{\mathbf w}\Vert=\sup_{\mathbf w\in\mathbb Z^n\cap \delta} \vert a_{\mathbf w}\vert s^{d(\mathbf w)}.$$ \begin{theorem}[Dwork trace formula] \label{dwork1} The operator $G_{\mathbf a}:\; L^\dagger_0\to L^\dagger_0$ is nuclear, and we have \begin{eqnarray*} (q^m-1)^n \mathrm{Tr}(G_{\mathbf a}^m, {L^\dagger_0})= \sum_{u_i^{q^m-1}=1} \left(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q^m})\big) \right)|_{t_i=u_i}. \end{eqnarray*} \end{theorem} \begin{proof} For any real number $s\geq 1$, define $$\tilde L(s)_0=\{\sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w} t^{\mathbf w}:\; a_{\mathbf w}\in K', \; \lim_{d(\mathbf w)\to \infty} \vert a_{\mathbf w}\vert s^{d(\mathbf w)} =0\}.$$ For any $s<s'$, we have $$L(s')_0\subset \tilde L(s)_0\subset L(s)_0,$$ and $L^\dagger_0=\bigcup_{s>1}\tilde L(s)_0$. Endow $\tilde L(s)_0$ with the norm $$\Vert \sum_{\mathbf w\in\mathbb Z^n\cap\delta} a_{\mathbf w}t^{\mathbf w}\Vert=\sup_{\mathbf w\in\mathbb Z^n\cap \delta} \vert a_{\mathbf w}\vert s^{d(\mathbf w)}.$$ Then $\tilde L(s)_0$ is a Banach space with the orthogonal basis $\{t^{\mathbf w}\}_{\mathbf w\in\mathbb Z^n\cap \delta}$. The inclusion $L(s')_0\hookrightarrow \tilde L(s)_0$ is completely continuous. Indeed, choose $s<s''<s'$. We can factorize this inclusion as the composite $$L(s')_0\hookrightarrow \tilde L(s'')_0\hookrightarrow \tilde L(s)_0.$$ It suffices to verify that the inclusion $i: \tilde L(s'')_0\hookrightarrow\tilde L(s)_0$ is completely continuous. Indeed, let $L_S$ be the finite dimensional $K'$-vector space spanned by a finite subset $S$ of $\{t^{\mathbf w}\}_{\mathbf w\in\mathbb Z^n\cap \delta}$, and let $$i_S: \tilde L(s'')_0\to\tilde L(s)_0$$ be the composite of the projection $\tilde L(s'')_0\to L_S$ and the inclusion $L_S\hookrightarrow\tilde L(s)_0$. One can verify that $$\Vert i_S -i\Vert\leq \sup_{\mathbf w\not\in S}\Big(\frac{s}{s''}\Big)^{d(\mathbf w)}.$$ So $i_S$ converges to $i$ as $S$ ranges over all finite subsets of $\{t^{\mathbf w}\}_{\mathbf w\in\mathbb Z^n\cap \delta}$. Moreover, each $i_S$ has finite rank. So $i$ is completely continuous. Let $H(\mathbf t) = t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q})\big).$ By Lemma \ref{estimation}, we have $H(\mathbf t)\in L(p^{\frac{p-1}{pq}})_0$. For any $s\geq 1$, we have $\Psi_{\mathbf a}(L(s)_0)\subset L(s^q)_0$. Consider the operator \begin{eqnarray*} G_{\mathbf a}&=& \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)^{-1} \circ \Psi_{\mathbf a}\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)\\ &=& \Psi_{\mathbf a}\circ \Big(t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)} \exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a, \mathbf t^{q})\big)\Big). \end{eqnarray*} If $1< s< p^{\frac{p-1}{p}}$, then $G_{\mathbf a}$ induces a map $G_{\mathbf a}:\; \tilde L(s)_0\to \tilde L(s)_0$. It is the composite $$\tilde L(s)_0\hookrightarrow L(s)_0\stackrel{H(\mathbf t)}\to L\Big(\min\Big(s,p^{\frac{p-1}{pq}}\Big)\Big)_0\stackrel {\Psi_{\mathbf a}}\to L\Big(\min\Big(s^q,p^{\frac{p-1}{p}}\Big)\Big)_0\hookrightarrow \tilde L(s)_0.$$ $G_{\mathbf a}:\; \tilde L(s)_0\to \tilde L(s)_0$ is completely continuous since the last inclusion in the above composite is completely continuous.
In particular, it is nuclear (\cite[Theorem 6.9]{M1}). Write $$ t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q})\big)=\sum_{\mathbf w} c_{\mathbf w} t^{\mathbf w}.$$ We have \begin{eqnarray*} G_{\mathbf a}(t^{\mathbf u})&=&\Psi_{\mathbf a}(\sum_{\mathbf w} c_{\mathbf w}t^{\mathbf w+\mathbf u})\\ &=& \Psi_{\mathbf a}(\sum_{\mathbf w} c_{\mathbf w-\mathbf u}t^{\mathbf w})\\ &=& \sum_{\mathbf w} c_{q\mathbf w-\mathbf u} t^{\mathbf w}, \end{eqnarray*} where $c_{q\mathbf w-\mathbf u}$ is nonzero only if $\mathbf u,\mathbf w, q\mathbf w-\mathbf u\in \delta$. The matrix of $G_{\mathbf a}$ on $\tilde L(s)_0$ with respect to the orthogonal basis $\{t^{\mathbf w}\}$ is $(c_{q\mathbf w-\mathbf u})$. By \cite[Theorem 6.10]{M1} we have $$\mathrm{Tr}(G_{\mathbf a}, {\tilde L(s)_0})=\sum_{\mathbf u}c_{q\mathbf u-\mathbf u}.$$ In particular, $\mathrm{Tr}(G_{\mathbf a},{\tilde L(s)_0})$ is independent of $s$. Similarly, $\mathrm{Tr}(G^m_{\mathbf a}, {\tilde L(s)_0})$ and $$\mathrm{det}(I-TG_{\mathbf a}, {\tilde L(s)_0})=\exp\Big(-\sum_{m=1}^\infty \frac {\mathrm{Tr}(G^m_{\mathbf a}, {\tilde L(s)_0})}{m}T^m\Big)$$ are independent of $s$. For any monic irreducible polynomial $f(T)\in K'[T]$ with nonzero constant term, write (\cite[Theorem 6.9]{M1}) $$\tilde L(s)_0=N(s)_f\bigoplus W(s)_f,$$ where $N(s)_f$ and $W(s)_f$ are $G_{\mathbf a}$-invariant spaces, $N(s)_f$ is finite dimensional over $K'$, $f(G_{\mathbf a})$ is nilpotent on $N(s)_f$ and bijective on $W(s)_f$. We have $$N(s)_f=\bigcup_{m=1}^\infty \mathrm{ker}\, (f(G_{\mathbf a}))^m,\quad W(s)_f=\bigcap_{m=1}^\infty \mathrm{im}\, (f(G_{\mathbf a}))^m.$$ For any pair $s<s'$, we have $$\tilde L(s')_0\subset \tilde L(s)_0,\quad N(s')_f\subset N(s)_f,\quad W(s')_f\subset W(s)_f.$$ Let $N_f=\bigcup_{1< s< p^{\frac{p-1}{p}}} N(s)_f$ and $W_f=\bigcup_{1< s< p^{\frac{p-1}{p}}}W(s)_f$. Then $$L_0^\dagger=N_f\bigoplus W_f,$$ $N_f$ and $W_f$ are $G_{\mathbf a}$-invariant, $f(G_{\mathbf a})$ is nilpotent on $N_f$ and bijective on $W_f$. Since $\mathrm{det}(I-TG_{\mathbf a},\tilde L(s)_0)$ is independent of $s$, all $N(s)_f$ have the same dimension, and hence we have $N_f=N(s)_f$ for all $1< s< p^{\frac{p-1}{p}}$. This shows that $G_{\mathbf a}:L^\dagger_0\to L^\dagger_0$ is nuclear and $$\mathrm{Tr}(G_{\mathbf a}, {L^\dagger_0})=\sum_{\mathbf u}c_{q\mathbf u-\mathbf u}.$$ On the other hand, we have $$\sum_{u^{q-1}=1} u^{w}=\left\{\begin{array}{cl} q-1 &\hbox{if } {q-1}|w,\\ 0&\hbox{otherwise}. \end{array} \right.$$ So we have \begin{eqnarray*} && \sum_{u_i^{q-1}=1} \left(t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q})\big) \right)|_{t_i=u_i}\\ &=&\sum_{\mathbf w} \sum_{u_i^{q-1}=1} c_{\mathbf w} u_1^{w_1}\cdots u_n^{w_n}\\ &=&(q-1)^n \sum_{\mathbf u}c_{(q-1)\mathbf u}. \end{eqnarray*} We thus get $$(q-1)^n \mathrm{Tr}(G_{\mathbf a}, L^\dagger_0)= \sum_{u_i^{q-1}=1} \left(t_1^{\gamma_1(1-q)}\cdots t_n^{\gamma_n(1-q)}\exp\big(\pi F(\mathbf a,\mathbf t)- \pi F(\mathbf a,\mathbf t^{q})\big) \right)|_{t_i=u_i}.$$ This proves the theorem for $m=1$.
We have \begin{eqnarray*} G_{\mathbf a}^m &=&\Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)^{-1} \circ \Psi_{\mathbf a}^m\circ \Big(t_1^{\gamma_1}\cdots t_n^{\gamma_n} \exp(\pi F(\mathbf a,\mathbf t))\Big)\\ &=&\Psi_{\mathbf a}^m\circ\Big(t_1^{\gamma_1(1-q^m)}\cdots t_n^{\gamma_n(1-q^m)} \exp\big(\pi F(\mathbf a, \mathbf t)- \pi F(\mathbf a,\mathbf t^{q^m})\big)\Big). \end{eqnarray*} So the assertion for general $m$ follows from the case $m=1$. \end{proof} \subsection{Proof of Theorem \ref{arithmetic}} By the equation (\ref{addagain}) and the Dwork trace formula (Theorem \ref{dwork1}), we have \begin{eqnarray*} S_m(F(\bar{\mathbf a},\mathbf t))&=&(q^m-1)^n \mathrm{Tr}(G_{\mathbf a}^m, {L^\dagger_0})\\ &=& \sum_{k=0}^n {n\choose k} (-1)^k (q^m)^{n-k} \mathrm{Tr}(G_{\mathbf a}^m, L^\dagger_0)\\ &=& \sum_{k=0}^n (-1)^k \mathrm{Tr}\Big((q^{n-k}G_{\mathbf a})^m, (L_0^{\dagger})^{\oplus{n\choose k}}\Big). \end{eqnarray*} Here $C^k(L_0^\dagger)\cong (L_0^{\dagger})^{\oplus{n\choose k}}$, and since $\Psi_{\mathbf a}$ carries the extra factor $q^{-k}$ on $k$-forms, the operator $q^nG_{\mathbf a}$ acts on $C^k(L_0^\dagger)$ as $q^{n-k}G_{\mathbf a}$ acts on the coefficients; so the last sum is precisely $\mathrm{Tr}\big((q^nG_{\mathbf a})^m, C^\cdot(L^\dagger_0)\big)$. For the $L$-function, we have \begin{eqnarray*} L(F(\bar{\mathbf a},\mathbf t),T)&=&\exp\Big(\sum_{m=1}^\infty S_m(F(\bar{\mathbf a},\mathbf t))\frac{T^m}{m}\Big)\\ &=&\exp\Big(\sum_{m=1}^\infty \sum_{k=0}^n (-1)^k \mathrm{Tr}\Big((q^{n-k}G_{\mathbf a})^m, (L_0^{\dagger})^{\oplus{n\choose k}}\Big)\frac{T^m}{m}\Big)\\ &=&\prod_{k=0}^n \exp\Big((-1)^k\sum_{m=1}^\infty \mathrm{Tr}\Big((q^{n-k}G_{\mathbf a})^m, (L_0^{\dagger})^{\oplus{n\choose k}}\Big)\frac{T^m}{m}\Big)\\ &=& \prod_{k=0}^n \mathrm{det}\Big(I-T q^{n-k}G_{\mathbf a}, {(L_0^{\dagger})^{\oplus{n\choose k}}}\Big)^{(-1)^{k+1}}. \end{eqnarray*} This proves Theorem \ref{arithmetic}.
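The mechanism behind the trace formula can be seen in a toy model over $\mathbb C$ with $n=1$: for a polynomial $H(t)=\sum_w c_wt^w$ and the operator $G=\Psi_q\circ H$, where $\Psi_q(\sum_wa_wt^w)=\sum_wa_{qw}t^w$, the matrix of $G$ in the basis $\{t^w\}_{w\geq 0}$ is $(c_{qw-u})$, so $\mathrm{Tr}(G)=\sum_uc_{(q-1)u}$, which matches $\frac{1}{q-1}\sum_{u^{q-1}=1}H(u)$. The following Python sketch checks this numerically (all choices, $q=3$ and a random polynomial of degree $10$, are illustrative):
\begin{verbatim}
# Toy analogue over C of the Dwork trace formula, n = 1: for
# G = Psi_q o H acting on span{t^u : u >= 0}, Tr(G) = sum_u c_{(q-1)u},
# and summing H over the (q-1)-th roots of unity gives (q-1) * Tr(G).
import cmath
import random

q = 3
d = 10
c = [complex(random.uniform(-1, 1)) for _ in range(d + 1)]

def H(t: complex) -> complex:
    return sum(c[w] * t ** w for w in range(d + 1))

# Tr(G): the (u, u) matrix entry of G is c_{(q-1)u}
trace_G = sum(c[(q - 1) * u] for u in range(d // (q - 1) + 1))

# sum of H over the (q-1)-th roots of unity
roots = [cmath.exp(2j * cmath.pi * k / (q - 1)) for k in range(q - 1)]
lhs = sum(H(z) for z in roots)

assert abs(lhs - (q - 1) * trace_G) < 1e-9
print("toy trace formula verified for q =", q)
\end{verbatim}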
\section{Boltzmann's H-functional and (classical) entropy} Let $\Gamma$ be a phase space associated with a system. We fix a reference measure $\lambda$ on $\Gamma$; usually it will be the Lebesgue measure. A function $f$ in the set $\mathfrak{S}\equiv\{ g\in L^1(\Gamma, d\lambda);\ g\geq 0,\ \int_{\Gamma} g\, d\lambda = 1 \}$ defines a probability measure $d\mu = f d \lambda$. In the Boltzmann theory, such an $f$ has the interpretation of a velocity distribution function, cf. \cite{thompson}; see also \cite{ruelle}. On the other hand, we note that $g \in \mathfrak{S}$ can be written as \begin{equation} g = \frac{d\mu}{d\lambda} \end{equation} where $\frac{d\mu}{d\lambda}$ stands for the Radon-Nikodym derivative. Hence, the Boltzmann $H$-functional can be written as \begin{equation} H(g) \equiv \int g \log (g) d\lambda = \int \frac{d\mu}{d\lambda} \log\left(\frac{d\mu}{d\lambda}\right) d\lambda = \mu\left(\log\left(\frac{d\mu}{d\lambda}\right)\right), \end{equation} provided that the above integrals exist. In \cite{[1]}, \cite{[2]}, \cite{[3]} we have argued that for states (probability measures) $\mu$ such that $\frac{d\mu}{d\lambda} \in L\log(L+1) \cap L^1$, the functional $H(\cdot)$ is well defined. \begin{remark} As the (classical) continuous entropy $S$ differs from the functional $H$ only by sign, the above means that the entropy $S(\frac{d\mu}{d\lambda})$ is well defined if $\frac{d\mu}{d\lambda} \in L\log(L+1)\cap L^1$. \end{remark} Let $\mu$ and $\nu$ be probability measures over a set $X$, and assume that $\mu$ is absolutely continuous with respect to $\nu$. The relative entropy (also known as the Kullback-Leibler divergence) is defined as \begin{equation} \label{1.3} S(\mu|\nu) = \int_X \log \left(\frac{d\mu}{d\nu}\right)d \mu = \int_X \frac{d\mu}{d\nu} \log\left(\frac{d\mu}{d\nu}\right) d\nu \equiv \left\langle\log\frac{d\mu}{d\nu}\right\rangle_{\mu}, \end{equation} provided that the integrals in the above formulas exist, where $\frac{d\mu}{d\nu}$ is the Radon-Nikodym derivative of $\mu$ with respect to $\nu$. Assume additionally that $\nu$ (and then also $\mu$) is absolutely continuous with respect to the reference measure $\lambda$. Then \begin{equation} \frac{d\mu}{d\nu} = \frac{d\mu}{d\lambda} \cdot \frac{d\lambda}{d\nu}, \end{equation} and under some additional assumptions one has the more familiar formula for the relative entropy \begin{equation} \label{1.5} S(\mu|\nu) = \int_X p \log \frac{p}{q} d\lambda, \end{equation} where $p = \frac{d\mu}{d\lambda}$ and $q = \frac{d\nu}{d\lambda}$. Intuitively, in the discrete case the entropy of a random variable $f$ on $X$ with probability distribution $p(x)$ measures how much $p(x)$ diverges from the uniform distribution on the support of $f$. In particular, putting $q=1$ in formula (\ref{1.5}) one gets \begin{equation}\label{1.6} S(\mu|\tau) = H(p), \end{equation} where the (non-normalized) functional $\tau$ is defined by the reference measure $\lambda$. As ``uniformity'' can be related to the ``most'' chaotic state (each microstate is equally probable), the basic property of statistical entropy, namely that it measures how far a given state is from the most chaotic one, is recovered. To clarify this point, as well as to gain some intuition for the noncommutative generalization, we turn to the algebraic approach to the concepts just defined. For a fixed measure space $(X, \Sigma, \lambda)$, let $L^{\infty}(X, \Sigma, \lambda) \equiv L^{\infty}$ denote the set of all $\lambda$-measurable, essentially bounded functions on $X$.
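Before passing to the algebraic language, a discrete sanity check of formulas (\ref{1.3})-(\ref{1.6}) may be helpful: with $\lambda$ the counting measure on a finite set one has $H(p)=\sum_ip_i\log p_i$ and $S(\mu|\nu)=\sum_ip_i\log(p_i/q_i)$, and putting $q\equiv 1$ (the non-normalized functional $\tau$) gives $S(\mu|\tau)=H(p)$. The following Python sketch (the probability vector is an arbitrary illustrative choice) verifies this together with Gibbs' inequality $S(\mu|\nu)\geq 0$:
\begin{verbatim}
# Discrete check of (1.3)-(1.6) with lambda the counting measure.
import math

p = [0.5, 0.25, 0.125, 0.125]     # a probability vector
q = [0.25, 0.25, 0.25, 0.25]      # the uniform distribution

H = sum(pi * math.log(pi) for pi in p)                       # H-functional
S_rel = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))  # S(mu|nu)

# against the non-normalized reference functional tau (q = 1 identically):
S_tau = sum(pi * math.log(pi / 1.0) for pi in p)
assert abs(S_tau - H) < 1e-12

# Gibbs' inequality: S(mu|nu) >= 0, with equality iff p = q
assert S_rel >= 0.0
print(f"H(p) = {H:.6f}, S(p|uniform) = {S_rel:.6f}")
\end{verbatim}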
The absolute continuity of $\mu$ with respect to $\lambda$ is equivalent to the condition that $\mu$ can be regarded as a normal functional on $L^{\infty}(X, \Sigma, \lambda)$, cf. \cite{Ber}, Theorem 1, p. 167. Since $L^{\infty}(X, \Sigma, \lambda)$ is the prototype of an abelian von Neumann algebra, one can rewrite the definitions as well as the basic properties of the above concepts in (abelian) von Neumann algebraic terms. To this end, let $\vartheta_{\mu}(f) = \int_X f \cdot (\frac{d\mu}{d \lambda}) d\lambda(x)$ denote the functional over $L^{\infty}(X, \Sigma, \lambda)$ determined by $\mu$ and the reference measure $\lambda$. In particular, the trace $\tau$ over $L^{\infty}(X, \Sigma, \lambda)$ is given by \begin{equation} \tau(f) = \int_X f d\lambda(x). \end{equation} It is worth pointing out that the existence of such a trace affords the possibility of dealing with the uniform distribution (as was indicated above). In other words, such existence affords the possibility of discussing the relation between entropy and relative entropy! Consequently, the entropy formula can be given as \begin{equation} S(\mu) \equiv S(\vartheta_{\mu}) = \tau\left(\left(\frac{D\vartheta_{\mu}}{D\tau}\right) \log \left(\frac{D\vartheta_{\mu}}{D\tau}\right)\right) \equiv \int_X \left(\frac{D\vartheta_{\mu}}{D\tau}\right) \log \left(\frac{D\vartheta_{\mu}}{D\tau}\right) d\lambda(x) \end{equation} $$ = \left\langle \log\left(\frac{D\vartheta_{\mu}}{D\tau}\right)\right\rangle_{\mu},$$ while the relative entropy formula reads \begin{equation}\label{commcase} S(\vartheta_{\mu}|\vartheta_\nu) = \left\langle \log\left(\frac{D\vartheta_{\mu}}{D\vartheta_\nu}\right)\right\rangle_{\mu}, \end{equation} where $\frac{D\vartheta_{\mu}}{D\vartheta_\nu}$ stands for the Radon-Nikodym derivative of the functional $\vartheta_{\mu}$ with respect to the functional $\vartheta_\nu$; see the next section. \begin{remark} \textit{Classical equilibrium thermodynamics.} \label{classical thermodynamics} To get some better intuition, let us consider the specific case when the velocity distribution function $\frac{d\mu}{d\lambda}$ is given by the Maxwell-Boltzmann distribution \begin{equation} \frac{d\mu}{d\lambda} = Z e^{- \beta H} = e^{\log Z - \beta H} \equiv e^K, \end{equation} where $Z$ is the normalization constant, $\beta >0$ (usually interpreted as ``the inverse temperature''), and $H$ is the Hamiltonian of the system under consideration. For such cases, the above formulas for entropies read \begin{equation} \label{1.11} S(\frac{d\mu}{d\lambda}) \equiv S(\frac{D\vartheta_{\mu}}{D\tau}) = \langle \log(e^K)\rangle_{\mu} = \langle K \rangle_{\mu}, \end{equation} while for the relative entropy of $\frac{d\mu}{d\lambda} = Z_1 e^{- \beta_1H_1} \equiv e^{K_1}$, $\frac{d\nu}{d\lambda} = Z_2 e^{- \beta_2H_2} \equiv e^{K_2}$, one has \begin{equation} S\left(\frac{d\mu}{d\lambda}|\frac{d\nu}{d\lambda}\right) = \langle \log \frac{e^{K_1}}{e^{K_2}}\rangle_{\mu} = \langle K_1\rangle_{\mu} - \langle K_2\rangle_{\mu}. \end{equation} It is \textit{important to note that (\ref{1.11}) is in perfect agreement with the second law of thermodynamics}; see section 32 in \cite{chin}.
The above formulas can be rewritten as \begin{equation} S\left(\frac{d\mu}{d\lambda}\right) = -i\frac{d}{dt}\langle e^{itK}\rangle_{\mu}|_{t=0} \end{equation} and \begin{equation} S\left(\frac{d\mu}{d\lambda} | \frac{d\nu}{d\lambda}\right) = -i \frac{d}{dt}\langle e^{itK_1} e^{-itK_2}\rangle_{\mu}|_{t=0}. \end{equation} As will be seen in the next sections, the above recipe can easily be generalized and quantized. \end{remark} To clarify the significance of derivatives and to proceed with our exposition we need some preliminaries, which for the reader's convenience will be given in a separate section. \section{Algebraic preliminaries} As the concepts of entropy and relative entropy involve Radon-Nikodym derivatives, for the reader's convenience we provide here the relevant material on noncommutative Radon-Nikodym and cocycle derivatives, thus making our exposition self-contained. The theory of such cocycles goes back to \cite{Con1}, \cite{Con2}, \cite{ConTak}. In particular, Connes proved the following (see \cite{Con1}). \begin{theorem} \label{connes} Let $\qM$ be a von Neumann algebra and $\phi$, $\psi$ faithful semifinite normal weights on $\qM$. Then there exists a $\sigma$-strongly continuous one-parameter family $\{u_t\}$ of unitaries in $\qM$ with the following properties: \begin{itemize} \item $u_{t+t^{\prime}} = u_t \sigma_t^{\phi}(u_{t^{\prime}}), \mbox{ for all } t, t^{\prime} \in \Rn,$ \item \begin{equation} \label{2.1a} \sigma^{\psi}_t (x) = u_t \sigma_t^{\phi}(x)u_t^*, \mbox{ for all } x \in \qM, t \in \Rn, \end{equation} \item a unitary $u\in \qM$ satisfies $\psi(x)=\phi(uxu^*)$ for all $x\in \qM$ if and only if $u_t=u^*\sigma^\phi_t(u)$ for all $t\in \mathbb{R}$, \end{itemize} where $\sigma_t^{\phi}$ ($\sigma_t^{\psi}$) stands for the modular evolution determined by $\phi$ ($\psi$ respectively). \end{theorem} \begin{definition} The family of unitaries described by the above theorem is called the cocycle derivative of $\phi$ with respect to $\psi$ and is denoted by \begin{equation} (D\phi : D\psi)_t = u_t. \end{equation} \end{definition} To understand the next remark fully, we need the following result, cf. \cite{BR}, Theorem 5.3.10. \begin{theorem} (\textit{Takesaki}) \label{takesaki} Let $\qM$ be a von Neumann algebra, and $\omega$ a normal state on $\qM$. The following are equivalent: \begin{enumerate} \item $\omega$ is faithful as a state on $\pi_{\omega}(\qM)$, i.e. there exists a projector $E \in \qM \cap\qM^{\prime}$ such that $\omega(\jed - E) = 0$ and $\omega|_{\qM E}$ is a faithful state. \item There exists a $\sigma$-weakly continuous one-parameter group $\sigma$ of ${}^*$-automorphisms of $\qM$ such that $\omega$ is a $\sigma$-KMS state. \end{enumerate} Moreover, $\sigma|_{\qM E}$ is the modular group of $\qM E$ associated with $\omega$. \end{theorem} This theorem legitimises the application of KMS theory to our approach to quantum entropy. In other words, our scheme is related to \textit{quantum equilibrium thermodynamics}. Now we are in a position to present \begin{remark} \label{2.4a} We note that $u_0 = \jed$ (see the proof of Theorem 3.3, Chapter VIII in \cite{Tak2}). Further, let us take (formally) the derivative of (\ref{2.1a}) at $t=0$.
Then, denoting the infinitesimal generator of $\sigma_t^{\psi}$ ($\sigma^{\phi}_t$) by $L^{\psi}$ ($L^{\phi}$ respectively) one gets \begin{equation} \label{derivation} L^{\psi}(x) = \left.\frac{du_t}{dt}\right\bracevert_{t=0} x + L^{\phi} (x) + x \left(\left.\frac{du_t}{dt}\right\bracevert_{t=0}\right)^*, \end{equation} or equivalently \begin{equation} L^{\psi}(x) - L^{\phi} (x) = \left.\frac{du_t}{dt}\right\bracevert_{t=0} x + x \left(\left.\frac{du_t}{dt}\right\bracevert_{t=0}\right)^*. \end{equation} Theorem \ref{takesaki} implies that the modular evolution for a fixed faithful normal state $\phi$ on $\qM$ can be interpreted as Hamilton type dynamics for the equilibrium (KMS) state on $\qM$. This means that the derivative of $u_t$ at $t=0$ determines the difference of two ``equilibrium'' type generators $L^{\psi}$ and $L^{\phi}$. The important point to note here is that, in general, $L^{\psi}$ and $L^{\phi}$ are unbounded derivations. Thus, the equality (\ref{derivation}) need not be well defined for each $x$. This clearly indicates that derivatives of $u_t$ should be studied carefully, and this will be done in the ensuing sections. To say more, let $\psi$ be a perturbed $\phi$-state, so $\psi \equiv \phi^P$; for all details see Section 5.4.1 in \cite{BR}. In particular, for $P\in \qM$ there exists an explicit form of $u_t$. Furthermore, it is easy to note that $L^{\phi^P}(x) - L^{\phi}(x) = i[P,x]$, which is well defined. Consequently, comparing two states which differ by a finite energy perturbation does not lead to any problem. Finally, we note that KMS states can be characterized by passivity, see \cite{PW} and/or Section 5.4.4 in \cite{BR}. We recall that, among other things, passivity ensures compatibility with the second law of thermodynamics. Therefore, our scheme, based on Tomita-Takesaki theory, seems to be a natural quantization of the classical case presented in Remark \ref{classical thermodynamics}. \end{remark} The Radon-Nikodym theorem used in the previous section has generalizations to general von Neumann algebras. The first generalization, for traces, is extracted from Pedersen's book \cite{Peder}, see Theorem 5.3.11 and remarks in 5.3.12. \begin{theorem} \label{2.1} Let $\tau$ be a normal semifinite trace over $\mathfrak{M}$. For each normal semifinite weight $\psi$ on $\mathfrak{M}$ which is absolutely continuous with respect to $\tau$ in the sense that for any $a \in \mathfrak{M}$, the fact that $\tau(a^*a) = 0$ implies $\psi(a^*a)=0$, there exists a unique positive operator $h$ on $\cH_{\tau}$ ($\cH_{\tau}$ being the GNS space for $(\mathfrak{M}, \tau)$) such that \begin{equation} \psi(x) = \tau(h x). \end{equation} \end{theorem} For a general von Neumann algebra $\mathfrak{M}$ and two normal faithful semifinite weights such that one dominates the other, one has (see Theorem VIII.3.17 in \cite{Tak2}) \begin{theorem} \label{2.2} For a pair $\vartheta$, $\psi$ of faithful semifinite normal weights on $\qM$, the following conditions are equivalent: \begin{enumerate} \item There exists $M>0$ such that \begin{equation} \vartheta(x) \leq M \psi(x), \quad x \in \qM_+, \end{equation} \item The cocycle derivative $({D\vartheta}\colon{D\psi})_t \equiv u_t$ can be extended to an $\qM$-valued $\sigma$-weakly continuous bounded function on the horizontal strip $\overline{D}_{\frac{1}{2}} = \{ z \in \Cn; -\frac{1}{2}\leq \Im(z) \leq 0 \}$, which is holomorphic in the interior of the strip.
\end{enumerate} If these conditions hold, then \begin{equation} \vartheta (x) = \psi(u^*_{-\frac{i}{2}} x u_{-\frac{i}{2}}), \quad x\in \{ \sum_{i=1}^n y^*_ix_i; \quad x_i,y_i \in n_{\psi}\}, \end{equation} where $n_{\psi} = \{ x\in \qM; \psi(x^*x) < \infty \}$. \end{theorem} \begin{remark} We emphasize that domination of one weight by another is a stronger property than the ``absolute continuity'' described in Theorem \ref{2.1}, though it is in the same vein. Also notice from part (2) of the above theorem that $|u_{-i/2}^*|^2$ in a very real sense fulfills the role of the ``density'' of $\vartheta$ with respect to $\psi$. \end{remark} One may ask whether there is a relation, based on the Connes characterization of unitary Radon-Nikodym cocycles, between cocycle derivatives and the relative modular operator. More precisely, see \cite{ara1}, \cite{Araki1}, let $\phi$, $\vartheta$ be normal semifinite weights on $\qM$, and $\phi$ be faithful. Then \begin{equation} \label{2.4} u_t \equiv \left({D \vartheta}\colon {D\phi}\right)_t = \Delta^{it}_{\vartheta, \phi} \Delta_{\phi}^{-it}. \end{equation} In particular, if $\qM$ is a semifinite von Neumann algebra, $\psi$ and $\vartheta$ faithful semifinite normal weights, and $\tau$ a faithful, normal semifinite trace on $\qM$, then one has (see \cite{connes}, p.470) that there exist positive self-adjoint operators $\varrho_{\psi}$, $\varrho_{\vartheta}$ affiliated with $\qM$ such that $\psi(x) = \tau(\varrho_{\psi} x)$, $\vartheta(x) = \tau(\varrho_{\vartheta}x)$ for each $x \in \qM$, and \begin{equation} \left({D\vartheta}\colon {D\psi}\right)_t = \varrho_{\vartheta}^{it}\varrho_{\psi}^{-it}. \end{equation} Hence, on applying this equality to the abelian von Neumann algebra $L^{\infty}$ (cf. the discussion at the end of the previous section), one has \begin{equation} \frac{d}{dt}\left(D\vartheta_{\mu} \colon D\vartheta_{\nu}\right)_t\Big|_{t = 0} = i \log f_{\mu} - i\log f_{\nu}, \end{equation} where $\mu = f_{\mu}d\lambda$, $\nu = f_{\nu} d \lambda$, and $f_{\mu}>0$, $f_{\nu}>0$. Thus \begin{equation} -i \left\langle \frac{d}{dt}\left.\left(D\vartheta_{\mu} \colon D\vartheta_{\nu}\right)_t\right\bracevert_{t=0}\right\rangle_{\mu} = \int f_{\mu} \log\frac{f_{\mu}}{f_{\nu}} d\lambda = \int\left(f_{\mu}\log f_{\mu} - f_{\mu}\log f_{\nu}\right)d \lambda, \end{equation} which is in perfect agreement with the definition of the relative entropy, cf. formula (\ref{1.3}). \vskip 1cm We remind the reader that the proper basic structure for a description of large quantum systems is a von Neumann algebra of type III. In other words, one is forced to deal with a von Neumann algebra which is not equipped with a nontrivial trace. \textit{Consequently, to be able to study entropy one needs the type of functional calculus required for an effective description of the uniform distribution; such a calculus becomes available on passing to a larger super-algebra, namely the crossed product $\cM$.} It is in this larger super-algebra that we have access to the functional calculus for $\tau$-measurable operators.
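Before passing to the crossed product, we note that formula (\ref{2.4}) admits an elementary finite-dimensional illustration. For $\qM = M_n(\Cn)$ with $\tau = \Tr$, the GNS space may be identified with the Hilbert-Schmidt matrices, so that $\Delta_{\vartheta, \phi}\, x = \varrho_{\vartheta}\, x\, \varrho_{\phi}^{-1}$ and the composition $\Delta^{it}_{\vartheta, \phi} \Delta_{\phi}^{-it}$ acts by left multiplication by $\varrho_{\vartheta}^{it}\varrho_{\phi}^{-it}$. The following sketch (in Python; a numerical check only, with all helper names ours) verifies this identity for random faithful states:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_density(n):
    """A random faithful density matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def power(rho, s):
    """rho^s for complex s, via the spectral theorem."""
    lam, U = eigh(rho)
    return U @ np.diag(lam.astype(complex) ** s) @ U.conj().T

n, t = 3, 0.7
rho, sig = random_density(n), random_density(n)  # densities of theta, phi
x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Delta_{theta,phi}^{it}(y) = rho^{it} y sig^{-it};
# Delta_phi^{-it}(y)        = sig^{-it} y sig^{it}
lhs = power(rho, 1j * t) @ (power(sig, -1j * t) @ x
                            @ power(sig, 1j * t)) @ power(sig, -1j * t)
rhs = (power(rho, 1j * t) @ power(sig, -1j * t)) @ x  # left action of cocycle
print(np.allclose(lhs, rhs))                          # True
\end{verbatim}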
If $\qM$ together with a canonical faithful normal semifinite weight $\omega$ is given on a Hilbert space $\cH$, then $\cM$ is the von Neumann algebra on the Hilbert space $L^2(\bR, \cH)$ generated by the following operators: \begin{equation} (\pi(x)\xi)(t) = \sigma_{-t}(x) \xi(t), \qquad \xi \in L^2(\bR, \cH),\ t \in \bR,\ x \in \qM, \end{equation} \begin{equation} (\lambda(s)\xi)(t) = \xi(t - s), \qquad \xi \in L^2(\bR, \cH),\ t, s \in \bR, \end{equation} where $\sigma_t=\sigma^\omega_t$ stands for the modular automorphism group. \begin{remark} \begin{enumerate} \item $\qM$ can be identified with its image $\pi(\qM)$ in $\cM$. \item If $\qM$ is type III then $\cM$ is semifinite. Thus, on $\cM$ there is a semifinite normal faithful trace! \end{enumerate} \end{remark} We wish to close this section with a deep result of Haagerup, see \cite{haare} Theorem 4.7 and/or \cite{terp} pp. 26-27. Let $\psi$, $\vartheta$ be normal, faithful semifinite weights on $\qM$, and let $\tilde{\psi}$ and $\tilde{\vartheta}$ stand for the corresponding dual weights on $\cM$. Then, for any $t \in \bR$ \begin{equation} \label{2.10} \left({D\tilde{\psi}}\colon {D\tilde{\vartheta}}\right)_t = \left({D\psi}\colon {D\vartheta}\right)_t. \end{equation} \section{The von Neumann entropy and Dirac's formalism} In Dirac's formalism, a (small) quantum system is described by an infinite dimensional Hilbert space $\cH$ and the von Neumann algebra $B(\cH)$. A normal state $\psi$ on $B(\cH)$ has the form $\psi(a) = \Tr \varrho_{\psi} a$, where $\varrho_{\psi}$ is a positive trace class operator with $\Tr \varrho_{\psi} = 1$. Here the set of states $\mathfrak{S}$ is given by $\mathfrak{S} = \{ \varrho \in B(\cH); \varrho^* = \varrho, \varrho \geq 0, \Tr \varrho = 1\}$. Applying the non-commutative Radon-Nikodym theorem, see Theorem 1, pp. 469-470 in \cite{connes}, one has \begin{equation} \varrho_{\psi}^{it} = (D\psi : D\Tr)_t. \end{equation} We recall that the von Neumann entropy $S(\varrho_{\psi})$ was defined as \begin{equation} S(\varrho_{\psi}) = \Tr (\varrho_{\psi} \log \varrho_{\psi}). \end{equation} This definition can be rewritten in Radon-Nikodym terms in the following way \begin{equation} S(\varrho_{\psi}) = -i \Tr \left(\varrho_{\psi}\left.\frac{d}{dt} (D\psi : D\Tr )\right\bracevert_{t= 0}\right) \equiv -i \psi \left(\left.\frac{d}{dt}(D\psi : D\Tr)\right\bracevert_{t=0} \right), \end{equation} and, for $\psi(\cdot) = \Tr (\varrho_{\psi} \cdot)$, $\varphi(\cdot) = \Tr(\varrho_{\varphi} \cdot)$, \begin{equation} \label{3.4} S(\psi|\varphi) = \Tr\left(\varrho_{\psi} \log \varrho_{\psi} - \varrho_{\psi} \log \varrho_{\varphi}\right) = -i \psi\left(\left.\frac{d}{dt}(D\psi \colon D\varphi)\right\bracevert_{t=0} \right), \end{equation} where we assume that the states are faithful. \begin{remark} As was pointed out at the end of \cite[Section 6]{[2]}, within Dirac's formalism the Orlicz space scheme for selecting ``good'' states with well defined entropy agrees with the standard approach to elementary quantum mechanics. Specifically, in this setting the space $L^1\cap L\log(L+1)(B(\cH))$ is precisely the space of trace class operators $L^1(B(\cH))$. In fact for $B(\cH)$ all noncommutative measurable operators are already bounded; for details see \cite{[2]} and \cite{Maj}. This behaviour is not unexpected as, on the one hand, Dirac's formalism is designed for small systems, while on the other hand, when restricted to $B(\cH)$, noncommutative integration theory is oversimplified.
The entropy for large systems will be examined in the next section. \end{remark} \section{General quantum case}\label{sec4} Let us consider a general quantum system and let $\mathfrak{M}$ be a von Neumann algebra associated with the system. In general, for large systems, $\mathfrak{M}$ is a type III von Neumann algebra. Let $\omega$ be a normal semifinite faithful weight on $\mathfrak{M}$. The weight $\omega$ will play the role of a noncommutative probability reference measure. We denote by $\cM$ the crossed product of $\mathfrak{M}$ associated with the modular automorphism group $\sigma_{\omega}$ produced by $\omega$, cf. Section 2. By $\tilde{\omega}$ we denote the dual (and hence normal semifinite faithful) weight on $\cM$, and $\tau$ stands for the canonical trace on $\cM$. We recall that the modular automorphism group $\tilde{\sigma}$ produced by the dual weight $\tilde{\omega}$ has the form $\tilde{\sigma}_t(\cdot) = \lambda(t) \cdot \lambda(t)^*$; for details see \cite{Tak}, \cite{terp} and \cite{LM}. By Stone's theorem one has $\lambda(t) = h^{it}$ for a positive self-adjoint operator $h$. We note that $\log h$ can be identified with $-i\frac{d}{dt}(D\tilde{\omega}:{D\tau})_t|_{t=0}$, where $\tau$ is the canonical trace on $\cM$. Based on the foregoing analysis we propose: \begin{definition}\label{gqdef} Let $\vartheta$, $\psi$ be faithful states on $\qM$. We define the relative entropy $S(\vartheta|\psi)$ to be $S(\vartheta|\psi)=\lim_{t\to 0}\frac{-i}{t}\vartheta[(D{\vartheta} \colon D{\psi})_t-\I]$ if the limit exists, and assign a value of $\infty$ to $S(\vartheta|\psi)$ otherwise. \end{definition} Let $\qM$ be a $\sigma$-finite von Neumann algebra in the standard form described above, and let $\psi$ and $\phi$ be two faithful normal states with unit vector representatives $\Psi, \Phi \in \mathcal{H}$. Basic Tomita-Takesaki theory easily extends to show that the densely defined anti-linear operator $S_{\phi,\psi}(a\Psi)=a^*\Phi$ is in fact closable. The operator $\Delta_{\phi,\psi}$ is then defined to be the modulus of the closure of $S_{\phi,\psi}$. In the same way that the ``standard'' modular operator may be used to generate the modular automorphism group of a given state, this operator in a very real sense encodes the manner in which the dynamics determined by the modular automorphism group of one state differs from that of the other. Using this fact, Araki then defined the relative entropy of $\psi$ and $\phi$ to be $-\langle \Psi,\log(\Delta_{\phi,\psi}) \Psi\rangle$. We refer the interested reader to \cite{ara1, ara2} and the references therein for a survey of the basic properties of this entropy. However, despite the success of Araki's approach, we prefer the above definition, since on the one hand it is more overtly based on modular dynamics, and on the other it more easily allows for the incorporation of crossed product techniques in the study of this entropy, as we shall subsequently see. The two approaches turn out to be equivalent, a fact which is the content of the next theorem. One of the crucial facts helping to establish this link is that the Connes cocycle derivative $(D{\psi} \colon D{\phi})_t$ may be described in terms of $\Delta_{\phi,\psi}$ and $\Delta_\phi$ (see Appendix B of \cite{Araki1} for details). Another is that any normal state $\vartheta$ on a $\sigma$-finite von Neumann algebra $\qM$ in standard form must have a vector representative \cite[Theorem 2.5.31]{BR}.
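Before stating the theorem, it is instructive to see Definition \ref{gqdef} at work in the matrix case, where the limit should reproduce the expression (\ref{3.4}). A minimal numerical sketch (in Python; all names are ours, and the finite difference merely approximates the limit):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh, logm

rng = np.random.default_rng(1)

def random_density(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def power(rho, s):
    lam, U = eigh(rho)
    return U @ np.diag(lam.astype(complex) ** s) @ U.conj().T

n = 4
rho, sig = random_density(n), random_density(n)  # densities of psi, phi

def S_t(t):
    """(-i/t) psi[(D psi : D phi)_t - 1], with the cocycle rho^{it} sig^{-it}."""
    u = power(rho, 1j * t) @ power(sig, -1j * t)
    return ((-1j / t) * np.trace(rho @ (u - np.eye(n)))).real

araki = np.trace(rho @ (logm(rho) - logm(sig))).real
for t in (1e-2, 1e-4, 1e-6):
    print(t, S_t(t), araki)  # S_t(t) -> araki as t -> 0
\end{verbatim}
As $t\to 0$ the printed values converge to $\Tr(\varrho_{\psi}\log\varrho_{\psi} - \varrho_{\psi}\log\varrho_{\varphi})$, in line with the theorem below.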
A version of this theorem appears in the book of Ohya and Petz \cite[Theorem 5.7]{OP}, but under the assumption that $-\langle \Psi,\log(\Delta_{\phi,\psi}) \Psi\rangle$ is finite. Our proof is quite different from the one used by Ohya and Petz, a fact which enables us to make the more general conclusion stated below. In particular we are able to apply the Dominated Convergence theorem directly to the functions $|\frac{1}{t}(\lambda^{it}-1)|$, rather than to $|\log\lambda - \frac{1}{it}(\lambda^{it}-1)|$. \begin{theorem}\label{ar-ent} Let $\qM$ be a $\sigma$-finite von Neumann algebra in the standard form described above, and let $\psi$ and $\phi$ be two faithful normal states with unit vector representatives $\Psi, \Phi \in \mathcal{H}$. Then $S(\psi|\phi)$ as defined in Definition \ref{gqdef} agrees exactly with Araki's definition of relative entropy \cite{ara1}. \end{theorem} \begin{proof} To start the proof we recall that in this case $(D\psi \colon D\phi)_t=\Delta_{\psi, \phi}^{it} \Delta_\phi^{-it}$. The chain rule for cocycle derivatives informs us that $\I= (D\psi \colon D\psi)_t =(D\psi \colon D\phi)_t(D\phi\colon D\psi)_t$, and hence that we also have that $$(D\psi \colon D\phi)_t=(D\phi\colon D\psi)_t^{-1}= (\Delta_{\phi,\psi}^{it} \Delta_\psi^{-it})^{-1} = \Delta_\psi^{it}\Delta_{\phi,\psi}^{-it}.$$ Note that by construction we have that $\Psi$ is an eigenvector of $\Delta_\psi$ corresponding to the eigenvalue 1. Hence $\Psi$ must then be an eigenvector of $\Delta_\psi^{-it}$ corresponding to the eigenvalue $1^{-it}=1$. So for any $t$ we must then have that $$\langle \Psi,(D{\psi} \colon D{\phi})_t \Psi\rangle = \langle \Psi,\Delta_\psi^{it}\Delta_{\phi,\psi}^{-it} \Psi\rangle$$ $$= \langle \Delta_\psi^{-it}\Psi,\Delta_{\phi,\psi}^{-it} \Psi\rangle = \langle \Psi,\Delta_{\phi,\psi}^{-it} \Psi\rangle.$$ It is therefore trivially clear that $$\lim_{t\to 0}\frac{1}{t}\psi[(D{\psi} \colon D{\phi})_{t}-\I] = \lim_{t\to 0}\frac{1}{t}\langle \Psi,(\Delta_{\phi,\psi}^{-it}-\I)\Psi\rangle.$$ Recall that Araki's definition of entropy is $$S(\psi|\phi) = -\langle \Psi,\log(\Delta_{\phi,\psi}) \Psi\rangle,$$ where the latter term is understood to be $$-\int_0^\infty\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle$$ (here $\lambda\to e_\lambda$ is the spectral resolution of $\Delta_{\phi,\psi}$). As Araki points out, the value of this integral is either real (in the case where $\log$ is integrable), or $\infty$ otherwise. In the case where $\log$ is not integrable, it once again follows from \cite{ara1} that in this setting $\log$ is always integrable on $[1,\infty)$, and hence that the non-integrability of $\log$ is derived from the fact that $-\int_0^1\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle=\infty$. First suppose that $\log$ is integrable. For any $t>0$ and any $\lambda>0$, we have that $$|\frac{1}{t}(\lambda^{it}-1)| = |\frac{1}{t}(\lambda^{it/2}-1)(\lambda^{it/2}+1)| \leq |\frac{2}{t}(\lambda^{it/2}-1)|.$$ Carrying on inductively leads to the conclusion that $|\frac{1}{t}(\lambda^{it}-1)| \leq |\frac{2^k}{t}(\lambda^{it/2^k}-1)|$ for any $k\in \mathbb{N}$. If now we let $k\to \infty$, we obtain the inequality $|\frac{1}{t}(\lambda^{it}-1)| \leq |\log(\lambda)|$, which holds for any $t>0$ and any $\lambda>0$.
Hence we may apply the dominated convergence theorem to see that for any sequence $\{t_n\}$ converging to 0, we have that \begin{eqnarray*} \lim_{n\to \infty}\frac{-i}{t_n}\langle \Psi,(\Delta_{\phi,\psi}^{-it_n}-\I)\Psi\rangle &=& \lim_{n\to \infty}\frac{-i}{t_n}\int_0^\infty(\lambda^{-it_n}-1)\,d\langle \Psi,e_\lambda \Psi\rangle\\ &=& -\int_0^\infty\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle. \end{eqnarray*} This fact is enough to enable us to conclude that $$\lim_{t\to 0}\frac{-i}{t}\langle \Psi,(\Delta_{\phi,\psi}^{-it}-\I)\Psi\rangle = -\int_0^\infty\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle.$$ Next suppose that $\log$ is not integrable. As was noted earlier, the fact that in this setting $\log$ is always integrable on $[1,\infty)$ ensures that non-integrability of $\log$ is equivalent to the statement that $-\int_0^1\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle=\int_0^1\log(\lambda^{-1})\,d\langle \Psi,e_\lambda \Psi\rangle=\infty$. In fact a slight modification of the above argument shows that for any sequence $\{t_n\}$ decreasing to 0, we then always have that $$\lim_{n\to \infty}\frac{-i}{t_n}\int_1^\infty(\lambda^{-it_n}-1)\,d\langle \Psi,e_\lambda \Psi\rangle = -\int_1^\infty\log(\lambda)\,d\langle \Psi,e_\lambda \Psi\rangle \in \mathbb{R}.$$ We therefore need to investigate this type of behaviour on the interval $[0,1]$. For any sequence $\{t_n\}$ decreasing to 0, we may use Fatou's lemma to conclude that $$\infty=\liminf_n\int_0^1\frac{\sin(t_n\log(\lambda^{-1}))}{t_n}\,d\langle \Psi,e_\lambda \Psi\rangle.$$ Since for any $n$ we have that $\Re[-i(\lambda^{-it_n}-1)]=\sin(-t_n\log(\lambda))=\sin(t_n\log(\lambda^{-1}))$, this fact ensures that in this case $\lim_{n\to \infty}\int_0^1\Re\left[\frac{-i}{t_n}(\lambda^{-it_n}-1)\right]\,d\langle \Psi,e_\lambda \Psi\rangle$ does not exist as a real number. Hence neither does $\lim_{n\to \infty}\Re\left[\frac{-i}{t_n}\langle \Psi,(\Delta_{\phi,\psi}^{-it_n}-\I)\Psi\rangle\right]= \lim_{n\to \infty}\int_0^\infty\Re\left[\frac{-i}{t_n}(\lambda^{-it_n}-1)\right]\,d\langle \Psi,e_\lambda \Psi\rangle$. This suffices to prove the theorem. \end{proof} \begin{remark} If $\qM$ is commutative, then $\qM = L^{\infty}(X, \mu)$, in which case $\psi$ and $\phi$ correspond to positive measures on $X$. In particular (cf. Theorem \ref{connes}) there exists a Radon-Nikodym derivative $h=\frac{d\psi}{d\phi}$, and $h^{it} = (D\psi : D\phi)_t$. Therefore, the definition of classical relative entropy also stems from Definition \ref{gqdef}. Finally, the definition of relative entropy for Dirac's formalism also follows from Definition \ref{gqdef} (cf. formula (\ref{3.4})). \end{remark} To say more, we are going to invoke some results from the theory of $L^p$-spaces associated with von Neumann algebras. We note, cf. \cite{terp}, Theorem 36, that \begin{equation} \left( \lambda(\qM), L^2(\qM), J, L^2(\qM)_+\right) \end{equation} is the standard form of $\qM$, where $\lambda(\cdot)$ denotes the action by left multiplication, $\lambda(x)a = xa$ for $a \in L^2(\qM)$, and $J$ denotes the conjugate isometric involution $a \mapsto a^*$ of $L^2(\qM)$. Next, we note, cf. \cite{terp} Proposition 4, that there is a bijection $\phi \mapsto h_{\phi}$ of the set of all normal semifinite weights on $\qM$ onto the set of all positive selfadjoint operators $h$ affiliated with $\cM$ satisfying $\theta_s(h) = e^{-s}h$ for any $s \in \Rn$. As we wish to deal with $\tau$-measurable operators we must restrict ourselves to normal functionals on $\qM$.
Then, the mapping $\phi \mapsto h_{\phi}$ is an isometry of $\qM_*$ onto $L^1(\qM)$. Consequently, fixing $\phi \in \qM_{*,+}$, one gets $h_{\phi} \in L^1(\qM)_+$. In particular, $h^{\frac{1}{2}}_{\phi} \in L^2(\qM)_+$, and \begin{equation} \phi(x) = tr(h^{\frac{1}{2}}_{\phi} x h^{\frac{1}{2}}_{\phi}) \equiv \langle h^{\frac{1}{2}}_{\phi},x h^{\frac{1}{2}}_{\phi}\rangle_{L^2(\qM)}, \end{equation} where $tr$ stands for a linear functional (having the trace property) on $L^1(\qM)$, see Definition II.13 and Proposition II.21 in \cite{terp}. In other words, $h^{\frac{1}{2}}_{\phi}$ is a vector in the natural cone $L^2(\qM)_+$, and this vector represents the state $\phi$. Using the above framework, the proposed definition of entropy may be written as the claim that $S(\psi|\phi)= \lim_{t\to 0} \frac{-i}{t}tr(h_\psi^{1/2}[(D{\psi} \colon D{\phi})_t-\I]h_\psi^{1/2})$ whenever the limit exists, with $S(\psi|\phi)= \infty$ otherwise. The next objective in this section is to show that this definition can very concretely be reformulated in a manner which is a faithful noncommutative analogue of the classical formula presented in formula (\ref{1.5}). However for this we will need some additional technology which we now review. The first factor that suggests that such a formula may well be within reach is the fact that in the above framework we have that $$(D{\vartheta} \colon D{\psi})_t = h_\vartheta^{it}h_\psi^{-it},$$ where $h_\vartheta=\frac{D\widetilde{\vartheta}}{D\tau}$ and $h_\psi=\frac{D\widetilde{\psi}}{D\tau}$. To see that the above claim is true, we may use Haagerup's result and the cocycle chain rule to see that $$\I = (D{\widetilde{\psi}} \colon D\widetilde{\psi})_t = (D{\widetilde{\psi}} \colon D\tau)_t (D\tau \colon D\widetilde{\psi})_t.$$ Equivalently $$(D\tau \colon D\widetilde{\psi})_t=(D{\widetilde{\psi}} \colon D\tau)_t^{-1}.$$ But from Section 2 we know that $(D{\widetilde{\psi}} \colon D\tau)_t=h_\psi^{it}$. Hence $(D\tau \colon D\widetilde{\psi})_t = h_\psi^{-it}$. Since also $(D{\widetilde{\vartheta}} \colon D\tau)_t=h_\vartheta^{it}$, we may once again use Haagerup's result and the chain rule to see that \begin{eqnarray}\label{alt-cocycle} (D{\vartheta} \colon D{\psi})_t &=& (D{\widetilde{\vartheta}} \colon D\widetilde{\psi})_t\\ &=& (D{\widetilde{\vartheta}} \colon D\tau)_t (D\tau \colon D\widetilde{\psi})_t\nonumber\\ &=& h_\vartheta^{it}h_\psi^{-it}.\nonumber \end{eqnarray} Another major factor to take into account is that $tr$ is only defined on $L^1(\qM)$. Thus, to proceed with our objective of developing a noncommutative version of formula (\ref{1.5}) we must show that in some sense $h_{\vartheta} \log h_{\vartheta} - h_{\vartheta} \log h_{\phi}$ is in $L^1$. As we shall see below, this can indeed be achieved in a limiting sense. Following Terp's arguments, see \cite{terp} Lemma II.19, we consider the function \begin{equation} \label{terp'sfunction} S^0 \ni \alpha \mapsto h_{\vartheta}^{\alpha} h_{\phi}^{1-\alpha} \in L^1(\qM), \end{equation} where obviously $h_{\vartheta}, h_{\phi} \in L^1(\qM)$, $S$ is the closed complex strip $\{ \alpha \in \Cn; 0\leq \Re(\alpha) \leq 1 \}$, and $S^0$ stands for the corresponding open strip. Terp's Lemma II.19 easily adapts to show that the function (\ref{terp'sfunction}) is analytic on $S^0$.
Taking the derivative, in the Banach space sense, one gets that \begin{equation} \label{4.6} \alpha \mapsto h_{\vartheta}^{\alpha} \cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - \alpha} - h_{\vartheta}^{\alpha} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - \alpha} \in L^1(\qM) \end{equation} inside $S^0$. More importantly, the analyticity ensures that this derivative varies continuously with respect to $\alpha$ in $L^1$-norm. A fact which underlies the above very regular behaviour on this strip is that, for any $s>0$, $x^s\log(x)$ is a very well behaved continuous function which is 0 at 0, and for which $x^s\leq x^s\log(x) \leq x^{s+1}$ whenever $x\geq e$. This fact can be used to show that for any positive $\tau$-measurable operator $g$, $g^s\log(g)$ will again be $\tau$-measurable. Using the above formula and letting $\alpha\to 1$ leads us to the promised noncommutative analogue of formula (\ref{1.5}). To understand how this is achieved, assume that $\alpha=s+it$ where $0< s < 1$. Then the fact that $$h_{\vartheta}^{\alpha} h_{\phi}^{1-\alpha}=h_{\vartheta}^{s}[h_{\vartheta}^{it} h_{\phi}^{-it}] h_{\phi}^{1-s}$$ leads to the conclusion that \begin{eqnarray*} -i\frac{d}{dt}(h_{\vartheta}^{s}(D{\vartheta}:D{\phi})_t h_{\phi}^{1-s})&=&-i\frac{d}{dt}(h_{\vartheta}^{s}[h_{\vartheta}^{it} h_{\phi}^{-it}] h_{\phi}^{1-s})\\ &=&\frac{d}{d\alpha} h_{\vartheta}^{\alpha} h_{\phi}^{1-\alpha}\\ &=& h_{\vartheta}^{\alpha} \cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - \alpha} - h_{\vartheta}^{\alpha} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - \alpha}. \end{eqnarray*} To see this, notice that in computing a limit of the form $\lim_{\Delta\alpha\to 0} \frac{f(\alpha+\Delta\alpha)-f(\alpha)}{\Delta\alpha}$, we may as well assume that $\Delta\alpha=i\Delta t$. In particular when computing the derivative anywhere along the line segment $0< s < 1$, $t=0$, we always have that $$-i\frac{d}{dt}\left.(h_{\vartheta}^{s}(D{\vartheta}:D{\phi})_t h_{\phi}^{1-s})\right\bracevert_{t=0} = h_{\vartheta}^{s} \cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s}\in L^1(\qM).$$ With the groundwork having been done, we are now ready to present the promised result. The concept that is crucial in guaranteeing the validity of the theorem, is the following ordering defined by Takesaki and Connes. (See \cite[Definition 4.1]{ConTak}.) \begin{definition} For two normal weights $\vartheta$ and $\phi$ on a von Neumann algebra $\qM$, and some positive $\delta$, we say that $\vartheta\leq \phi(\delta)$ if the function $t\to (D\vartheta:D\phi)_t=u_t$ extends to an $\qM$-valued map $z\to u_z$ which is point-weak* continuous and bounded on the closed strip $\{z\in\mathbb{C}: -\delta\leq\Im(z)\leq 0\}$, and analytic on the open strip $\{z\in\mathbb{C}: -\delta<\Im(z)< 0\}$. \end{definition} In the above ordering the case $\delta=\frac{1}{2}$ corresponds exactly to Theorem \ref{2.2}. \begin{theorem}\label{thm4.3} Let $\qM$ be a $\sigma$-finite von Neumann algebra in the standard form described above, and let $\vartheta$ and $\phi$ be two faithful normal states with unit vector representatives $h_\vartheta^{1/2}, h_\phi^{1/2} \in L^2(\qM)$. If $\phi\leq \vartheta(\delta)$ for some $\delta>0$, then $S(\vartheta|\phi)$ is finite if and only if the limit $$\lim_{s\nearrow 1} tr(h_{\vartheta}^{s}\cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s})$$ exists, in which case they are equal.
\end{theorem} \begin{proof} First suppose that $S(\vartheta|\phi)$ is finite and let $\epsilon>0$ be given. This means that $\lim_{t\to 0}\frac{-i}{t}\vartheta[(D{\vartheta} \colon D{\phi})_t-\I]= \lim_{t\to 0}\frac{-i}{t}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it} h_{\phi}^{-it}-\I]h_{\vartheta}^{1/2})$ exists. Next let $s$ be given with $\frac{1}{2}<s<1$. So for $t_\epsilon>0$ small enough, we will have that \begin{itemize} \item $\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_{\vartheta}^{1/2})$ is within $\epsilon$ of $S(\vartheta|\phi)$, \item and $\frac{-i}{t_\epsilon}h_{\vartheta}^{s} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s}$ is within $\epsilon$ of $-i\frac{d}{dt}\left.(h_{\vartheta}^{s}(D{\vartheta}:D{\phi})_t h_{\phi}^{1-s})\right\bracevert_{t=0}=h_{\vartheta}^s \cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s}$ with respect to the $L^1$-norm. \end{itemize} Notice that by the properties of the trace functional $tr$ we have that $tr(h_{\vartheta}^{s} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s})=tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s}h_{\vartheta}^{s-1/2})$. By assumption $\phi\leq \vartheta(\delta)$. This means that $t\to(D\phi:D\vartheta)_t$ extends to an $\qM$-valued function $f(z)$ which is point-weak* continuous on the closed strip $\{z\in\mathbb{C}: -\delta\leq\Im(z)\leq 0\}$, and analytic on the open strip $\{z\in\mathbb{C}: -\delta<\Im(z)< 0\}$. For each $z$ the value $f(z)$ is essentially just an extension of $h_\phi^{iz}h_{\vartheta}^{-iz}$. (For details of this construction see \cite{Kos}.) In view of this we will simply write $[h_\phi^{iz}h_{\vartheta}^{-iz}]$ for $f(z)$. So if we set $z=-ir$ where $0\leq r\leq \delta$, we obtain that as $r\searrow 0$ we will have that $[h_\phi^{r}h_{\vartheta}^{-r}]\to \I$ in the weak* topology on $\qM$. Next notice that for $0\leq 1-s\leq\delta$, we have that $h_\phi^{1-s}h_{\vartheta}^{s-1/2}= [h_\phi^{1-s}h_{\vartheta}^{-(1-s)}]h_\vartheta^{1/2}$. Therefore as $s\nearrow 1$ on the interval $[1-\delta, 1]$, we must have that $\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s}h_{\vartheta}^{s-1/2})=\frac{-i}{t_\epsilon}tr(h_{\vartheta} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I][h_\phi^{1-s}h_{\vartheta}^{-(1-s)}])$ converges to $\frac{-i}{t_\epsilon}tr(h_{\vartheta}[h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I])=\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_{\vartheta}^{1/2})$. There must therefore exist a $\widetilde{\delta}>0$ such that for any $s$ with $1-\widetilde{\delta} <s <1$, the term $\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s}h_{\vartheta}^{s-1/2})$ will be within $\epsilon$ of $\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_{\vartheta}^{1/2})$. If we combine all the above observations, it follows that for any $s$ with $1-\widetilde{\delta} <s <1$, $tr(h_{\vartheta}^{s}\cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s})$ will be within $3\epsilon$ of $S(\vartheta|\phi)$. This proves the ``only if'' part of the theorem. Next suppose that $S(\vartheta|\phi)=\infty$.
From the proof of Theorem \ref{ar-ent} it is clear that given $M>0$ we may in this case find some $t_\epsilon>0$ such that $|\frac{-i}{t_\epsilon}tr(h_{\vartheta}^{1/2} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_{\vartheta}^{1/2})|\geq M$, with additionally (as before) $\frac{-i}{t_\epsilon}h_{\vartheta}^{s} [h_{\vartheta}^{it_\epsilon} h_{\phi}^{-it_\epsilon}-\I]h_\phi^{1-s}$ within $\epsilon$ of $-i\frac{d}{dt}\left.(h_{\vartheta}^{s}(D{\vartheta}:D{\phi})_t h_{\phi}^{1-s})\right\bracevert_{t=0}=h_{\vartheta}^s \cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s}$ with respect to the $L^1$-norm. The constant $\widetilde{\delta}$ is selected as in the first part of the proof. Combining these estimates now leads to the conclusion that $|tr(h_{\vartheta}^{s}\cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s})|\geq M-2\epsilon$ for all $s$ with $1-\widetilde{\delta} <s <1$. Since both $M>0$ and $\epsilon>0$ were arbitrary, the limit $\lim_{s\nearrow 1} tr(h_{\vartheta}^{s}\cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s})$ can then clearly not exist. The theorem therefore follows. \end{proof} \section{An alternative approach to the general quantum case}\label{sec5} Here we propose a means for defining the entropy of a single state $\vartheta$. This definition turns out to be equivalent to the von Neumann entropy in the tracial case. Some careful preparation for, and justification of, this definition is required. As a first step in identifying a suitable prescription for defining the entropy of a single state, we take some time to see what Theorem \ref{thm4.3} looks like when the states in question commute. A crucial tool in this endeavour is the recently developed theory of Orlicz spaces for type III algebras (see \cite{L}). A central construct in the development of this theory of ``type III'' Orlicz spaces is the concept of a fundamental function. The fundamental function of a rearrangement invariant Banach function space on $(\mathbb{R},\mathcal{B}_{\mathbb{R}}, \lambda)$, say $L^\rho(\mathbb{R})$, is defined on $[0,\infty)$ by the prescription $\varphi(t) = \|\chi_E\|_\rho$, where $E$ is any measurable subset of $\mathbb{R}$ with $\lambda(E)=t$. The rearrangement invariance of the space in question ensures the well-definedness of the corresponding fundamental function. The interested reader may find a more detailed introduction to fundamental functions on pages 65-73 of \cite{BS}. The two facts regarding fundamental functions that we need are that, for an Orlicz space $L^\Psi(\mathbb{R})$, the fundamental function is given by the prescription $t\to\frac{1}{\Psi^{-1}(1/t)}$ when the Luxemburg norm is in view, and by $t\to t(\Psi^*)^{-1}(1/t)$ when the Orlicz norm is in view. (See \cite[II.5.2, IV.8.15 \& IV.8.17]{BS}.) We will need the following lemma in our investigation. (The proof is contained in the proof of \cite[Theorem 2.2]{L}.) \begin{lemma}\label{mainthm} Let $a, b\, \eta \,\qM^+$ be commuting affiliated operators. Let $\Psi$ be an Orlicz function and let $\varphi_\Psi$ be the fundamental function of $L^\Psi(\mathbb{R})$ equipped with the Luxemburg norm. Then $$\chi_{(1,\infty)}(a\varphi_\Psi(b)) = \chi_{(1,\infty)}(\Psi(a)b).$$ \end{lemma} \begin{proof} Let $\alpha, \beta > 0$ be given. It is a known fact that $\alpha \Psi(\beta) \leq 1 \Leftrightarrow \beta \leq \Psi^{-1}(\frac{1}{\alpha})$.
If we apply this fact to the Borel functional calculus for the commuting positive operators $a$ and $b$, we have that $\chi_{(1,\infty)}(a\varphi_\Psi(b)) = \chi_{(1,\infty)}(\Psi(a)b)$, as required. \end{proof} The above lemma now enables us to make the following conclusion: \begin{proposition}\label{prop5.2} Let $\vartheta$, $\phi$ be faithful normal states on $\qM$ with unit vector representatives $h_\vartheta^{1/2}$, $h_\phi^{1/2}$, which commute in the sense that they satisfy one (and therefore all) of the criteria described in \cite[Corollary VIII.3.6]{Tak2}. Assume in addition that $\phi\leq \vartheta(\delta)$ for some $\delta>0$. With $\varphi_{\log}$ denoting the fundamental function of the space $L\log(L+1)(\mathbb{R})$ (equipped with the Luxemburg norm), we then have that \begin{itemize} \item $h_\vartheta$ and $h_\phi$ are commuting operators affiliated to $\mathcal{M}$, \item $f= h_\vartheta h_\phi^{-1}$ extends uniquely to an element of $\qM$, \item and $S(\vartheta|\phi)= \phi(f\log(f)) =\inf_{\epsilon >0} [\epsilon\tau(\chi_{(\epsilon, \infty)}(\varphi_{\log}(h_\phi)f)) + \log(\epsilon)\|h_\phi f\|_1]$. \end{itemize} \end{proposition} \begin{proof} The first step is to show that $h_\vartheta$ and $h_\phi$ commute. It is clear from the proof of \cite[Corollary VIII.3.6]{Tak2} that the commutation of $\vartheta$ and $\phi$ ensures the existence of an operator $h$ affiliated to the centraliser $\qM_\phi$ of $\phi$ for which we have that $(D\vartheta:D\phi)_t=h^{it}$. But from the discussion preceding Theorem \ref{thm4.3} we know that $(D\vartheta:D\phi)_t=(D\widetilde{\vartheta}:D\widetilde{\phi})_t= h^{it}_\vartheta h^{-it}_\phi$. In other words for each $t$, $h^{it} = h^{it}_\vartheta h^{-it}_\phi$. On appealing to the properties of the cocycle derivative, we may now conclude that \begin{eqnarray*} h^{i(t+s)}&=&(D\widetilde{\vartheta}:D\widetilde{\phi})_{t+s}\\ &=&(D\widetilde{\vartheta}:D\widetilde{\phi})_{s}\sigma^{\widetilde{\phi}}_s((D\widetilde{\vartheta}:D\widetilde{\phi})_{t})\\ &=& h^{is}h_\phi^{is} h^{it} h_\phi^{-is} \end{eqnarray*} or equivalently, $h^{it}=h_\phi^{is} h^{it} h_\phi^{-is}$. So each $h^{it}$ commutes with each $h_\phi^{is}$. But we saw earlier that $h^{it} = h^{it}_\vartheta h^{-it}_\phi$, or equivalently that $h^{it}_\vartheta=h^{it}h^{it}_\phi$. Together these two facts ensure that each $h_\vartheta^{it}$ commutes with each $h_\phi^{is}$. We may now use the Borel functional calculus to conclude that $h_\phi$ and $h_\vartheta$ themselves also commute. This proves the first bullet. To see the second bullet, we note from the proof of Theorem \ref{thm4.3} that the requirement that $\phi\leq \vartheta(\delta)$ ensures that for $r>0$ small enough, $h_\vartheta^r h_\phi^{-r}$ extends uniquely to an element of $\qM$. Since $h_\phi$ and $h_\vartheta$ commute, this clearly ensures that the closure of $(h_\vartheta^r h_\phi^{-r})^{1/r}=h_\vartheta h_\phi^{-1}$ also belongs to $\qM$. For the final bullet, note that by the Borel functional calculus, the commutation of $h_\phi$ and $h_\vartheta$ ensures that we may write the limit formula $\lim_{s\nearrow 1} tr(h_{\vartheta}^{s}\cdot \log h_{\vartheta} \cdot h_{\phi}^{1 - s} - h_{\vartheta}^{s} \cdot \log h_{\phi} \cdot h_{\phi}^{1 - s})$ as $\lim_{s\nearrow 1} tr(f^s\log(f)h_{\phi})$, where $f=h_\vartheta h_\phi^{-1}$. It also follows from the proof of Theorem \ref{thm4.3} that there exists an interval $[\delta, 1]$ on which $s\to f^s$ is point-weak* continuous.
So given $\rho$ with $\delta<\rho<1$, we may write the limit formula as $\lim_{r\nearrow \rho} tr(f^r[f^{(1-\rho)}\log(f)]h_{\phi})$. We may now use the continuous functional calculus to see that since $f\in \qM$, we must have that $f^{(1-\rho)}\log(f) \in \qM$. But then $[f^{(1-\rho)}\log(f)]h_{\phi}\in L^1(\qM)$. The point-weak* continuity of the map $r\to f^r$ on $[\delta, \rho]$ now ensures that $\lim_{r\nearrow \rho} tr(f^r[f^{(1-\rho)}\log(f)]h_{\phi})=tr(f\log(f)h_\phi)=\phi(f\log(f))$. To prove the final equality, one first uses a similar argument to the one in \cite[Proposition 6.8]{[2]} to see that $tr(h_\phi f\log(f))= \inf_{\epsilon >0}[\epsilon tr(h_\phi(f/\epsilon)\log((f/\epsilon)+\I)) +\log(\epsilon)tr(h_\phi f)]$. On combining the preceding Lemma with \cite[Lemma II.5 \& Def II.13]{terp}, we then have that $$tr(h_\phi(f/\epsilon)\log((f/\epsilon)+\I)) = \tau(\chi_{(1,\infty)}(h_\phi(f/\epsilon)\log((f/\epsilon)+\I)))$$ $$=\tau( \chi_{(1,\infty)} (\varphi_{\log}(h_\phi)f/\epsilon))= \tau(\chi_{(\epsilon,\infty)}(\varphi_{\log}(h_\phi)f)).$$ This proves the final claim. \end{proof} \bigskip We are now finally ready to present the definition of the entropy $\tilde{S}(\vartheta)$ of a faithful normal state $\vartheta$. The basic idea is to use the above result as a guide for the kind of technical prescription that might work. Tempting as it may be to simply replace $f=h_\vartheta h_\phi^{-1}$ with $h_\vartheta$, and $\phi$ with $tr$, to obtain $tr(h_\vartheta\log(h_\vartheta))$ as a definition, this cannot possibly work. The problem with this prescription is that $tr$ is only defined on $L^1(\qM)$; in the crossed product setting, all the elements $h$ of $L^1(\qM)$ have to satisfy the requirement that $\theta_s(h)=e^{-s}h$ for all $s$. Since $h_\vartheta\in L^1(\qM)$ we do have that $\theta_s(h_\vartheta)=e^{-s}h_\vartheta$. But then $\theta_s(h_\vartheta\log(h_\vartheta))=e^{-s}h_\vartheta\log(e^{-s}h_\vartheta)\neq e^{-s}h_\vartheta\log(h_\vartheta)$. However the final equality in the third bullet of Proposition \ref{prop5.2} does present us with a means for overcoming this difficulty for a subspace of $L^1(\qM)$. The subspace in question is the noncommutative Orlicz space $L^1\cap L\log(L+1)(\qM)$. Some analysis is necessary before we are able to present the definition. Note that classically $L^1\cap L\log(L+1)$ is an Orlicz space produced by the Young function $$\Psi_{ent}(t)=\max(t,t\log(t+1))=\left\{ \begin{array}{ll} t & 0\leq t\leq e-1\\ t\log(t+1) & e-1\leq t\end{array} \right. $$ We start by describing how to construct the type III analogue of the space $L^1\cap L\log(L+1)$. We will for simplicity of computation assume that each of $L\log(L+1)(0,\infty)$ and $L^1\cap L\log(L+1)(0,\infty)$ is equipped with the Luxemburg norm. It is then an exercise to see that the fundamental function of $L^1\cap L\log(L+1)(0,\infty)$ is of the form $\varphi_{ent}(t)=\max(t,\varphi_{\log}(t))$. It is this fundamental function that one uses to construct the type III analogue of $L^1\cap L\log(L+1)$ in accordance with the prescriptions given in \cite{L, LM}. Let us for the sake of brevity denote this space by $L^{ent}(\mathfrak{M})$. We now show that this space canonically embeds into both $L^1(\qM)$ and $L\log(L+1)(\qM)$. From the above computations, it is clear that the functions $\zeta_1(t)= \frac{t}{\varphi_{ent}(t)}$ and $\zeta_{\log}(t)= \frac{\varphi_{\log}(t)}{\varphi_{ent}(t)}$ are both continuous and bounded above (by 1) on $(0,\infty)$.
Hence for $h=\frac{D\tilde{\omega}}{D\tau}$, the operators $\zeta_1(h)$ and $\zeta_{\log}(h)$ are both contractive elements of $\cM$. It is now an exercise to see that the prescriptions $x\to\zeta_1(h)^{1/2}x\zeta_1(h)^{1/2}$ and $x\to\zeta_{\log}(h)^{1/2}x\zeta_{\log}(h)^{1/2}$ respectively yield continuous embeddings of $L^{ent}(\mathfrak{M})$ into $L^1(\mathfrak{M})$ and $L\log(L+1)(\mathfrak{M})$. Using these embeddings, we now make the following definition: \begin{definition}\label{genentropy} \label{defent} A state $\vartheta$ on the von Neumann algebra $\qM$ is called \emph{regular} if for some element $g$ of $[L\log(L+1)\cap L^1](\mathfrak{M})^+=L^{ent}(\mathfrak{M})^+$, $\frac{D\tilde{\vartheta}}{D\tau}$ is of the form $\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}$. For such a regular state we then define the entropy to be $$\tilde{S}(\vartheta)=\inf_{\epsilon >0}[\epsilon\tau(\chi_{(\epsilon, \infty)}(\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2})) + \log(\epsilon)\|\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}\|_1].$$ (Here $h$ is the density $\frac{D\widetilde{\omega}}{D\tau}$ of the dual weight $\widetilde{\omega}$.) \end{definition} \begin{remark} What we must clarify is the meaning of the regularization implicit in the above definition. The density $h = \frac{D\widetilde{\omega}}{D\tau}$ is related, by the Bisognano-Wichmann results \cite{BW1}, \cite{BW2}, to the equilibrium Hamiltonian, cf. Remark 2.11 in \cite{LM}. This gives a relation to the thermodynamics of equilibrium states, cf. Remark \ref{2.4a}. Further, $\frac{D\widetilde{\omega}}{D\tau}$ is in the space $L^1(\qM)$. This and Definition \ref{defent} imply that the regularization procedure stems from the prescription leading to the construction of the space $L^{ent}(\qM)$, see Definition 3.4 in \cite{L}. On the other hand, it is worth noting that the same procedure was used to define $\tau$-measurability of quantum field operators, see \cite{LM}. Consequently, the regularization procedure is based on selecting those measurable operators which are good candidates for representing states, and this selection is compatible with the new formalism of statistical mechanics. We recall that this new formalism is based on the distinguished pair of Orlicz spaces $\langle L^{\cosh - 1}(\qM), L\log(L+1)(\qM)\rangle$; for details see \cite{[2]}, \cite{[3]}. \end{remark} One has (cf. \cite{LM}) \begin{corollary} If $\vartheta$ is a regular state, then $\tilde{S}(\vartheta)$ is well defined (although possibly infinite valued). \end{corollary} We proceed to prove a result establishing criteria under which a version of Equation (\ref{1.6}) holds in the present setting. Note that in this result, the faithful KMS state $\omega$ plays the role of the reference measure $\lambda$. \begin{theorem} \label{5.6} Let $\vartheta$ be a regular state with $\frac{D\tilde{\vartheta}}{D\tau}$ of the form $\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}$, where $g\in [L\log(L+1)\cap L^1](\mathfrak{M})^+=L^{ent}(\mathfrak{M})^+$ commutes with $h$. Then $\tilde{S}(\vartheta)=S(\vartheta|\omega)$. \end{theorem} \begin{proof} We will write $k$ for $\frac{D\tilde{\vartheta}}{D\tau}$. By assumption $g$ and $h$, and therefore $k$ and $h$, are commuting affiliated operators. Hence so are $p=kh^{-1}$ and $h$. The proof makes extensive use of the kinds of techniques employed in Theorem \ref{ar-ent}.
Given $a\in \qM$ and $b\in L^1(\qM)^+$, we will for this reason once again employ the notational device (validated by Terp's description of the standard form) of writing $\langle b^{1/2}, ab^{1/2}\rangle$ for $tr(ba)$. In this case let $\lambda\to e^\lambda$ be the spectral resolution of $p$. The fact that both $h$ and $k$ belong to $L^1(\qM)$ ensures that $\theta_s(p)=\theta_s(k)\theta_s(h^{-1})=e^{-s}k\,e^{s}h^{-1}=kh^{-1}=p$ for every $s\in \mathbb{R}$. This in turn is enough to ensure that $p$ is actually affiliated to the ``subalgebra'' $\qM$ of the crossed product $\mathcal{M}$. Hence the spectral projections $e^\lambda$ are all elements of $\qM$. The first stage of the proof is to show that in general $S(\vartheta|\omega)$ is finite if and only if the integral $\int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ converges, in which case they are equal. Additionally the only way the integral can diverge is by diverging to $\infty$. To see this observe that it is an easy consequence of the Borel functional calculus that $p\log(p)\chi_{[0,1]}(p)\in \qM$. In other words we will always have that $\int_0^1\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle=\langle h^{1/2},p\log(p)\chi_{[0,1]}(p)h^{1/2}\rangle=\omega(p\log(p)\chi_{[0,1]}(p))$ is a well defined finite number. Thus the integral $\int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ converges if and only if $\int_1^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ converges. In the case of divergence, we may then justifiably assign a value of $\infty$ to the integral. We proceed with justifying the claimed equality. Observe that we may use Equation (\ref{alt-cocycle}) and the Borel functional calculus for commuting affiliated operators to see that for any $t>0$ we have that \begin{eqnarray*} \frac{-i}{t}\vartheta[(D\vartheta:D\omega)_t-\I] &=& \frac{-i}{t}\vartheta[k^{it}h^{-it}-\I]\\ &=& \frac{-i}{t}\vartheta[p^{it}-\I]\\ &=& \frac{-i}{t}tr(k[p^{it}-\I])\\ &=& \frac{-i}{t}\langle k^{1/2},(p^{it}-1)k^{1/2}\rangle\\ &=& \frac{-i}{t}\langle p^{1/2}h^{1/2},(p^{it}-1)p^{1/2}h^{1/2}\rangle\\ &=& \frac{-i}{t}\int_0^\infty\lambda(\lambda^{it}-1)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle. \end{eqnarray*} Using this fact, it is now a not too onerous exercise to modify the proof of Theorem \ref{ar-ent} to obtain the fact that $S(\vartheta|\omega)$ is finite if and only if the integral $\int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ converges. We briefly pause to indicate how this argument works. In the case where $\lambda\log(\lambda)$ is integrable, we once again use the inequality $|\frac{1}{t}(\lambda^{it}-1)|\leq|\log(\lambda)|$ established earlier to invoke an application of the dominated convergence theorem, from which the claim will then follow for this case. In the case where $\lambda\log(\lambda)$ is not integrable, it must as noted earlier fail to be integrable on $[1,\infty)$. As in Theorem \ref{ar-ent}, we may then use Fatou's lemma to see that in this case the limit $\lim_{t\to 0}\frac{-i}{t}\int_1^\infty\lambda(\lambda^{it}-1)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ will fail to exist. On the other hand the integrability of $\lambda\log(\lambda)$ on $[0,1]$ combined with yet another application of the dominated convergence theorem ensures that $\lim_{t\to 0}\frac{-i}{t}\int_0^1\lambda(\lambda^{it}-1)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle = \int_0^1\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$.
Combining these two facts yields the conclusion that the limit $\lim_{t\to 0}\frac{-i}{t}\int_0^\infty\lambda(\lambda^{it}-1)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ will in this case fail to exist. This concludes the first part of the proof. The next part of the proof is to show that (infinite values included) we have that $$\inf_{\epsilon >0}\int_0^\infty\lambda\log(\lambda+\epsilon)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle=\int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle,$$ from which we will then be able to deduce the claim. To see this next fact observe that for any $\lambda>0$ and $\epsilon>0$, we have that $\log(\lambda+\epsilon)>\log(\lambda)$, and that $$0< \lambda(\log(\lambda+\epsilon)-\log(\lambda))=\lambda\log(1+(\epsilon/\lambda))\leq\epsilon.$$ This ensures that \begin{eqnarray*} \int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle &\leq& \int_0^\infty\lambda\log(\lambda+\epsilon)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle\\ &\leq& \int_0^\infty(\lambda\log(\lambda)+\epsilon)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle\\ &=& \int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle +\epsilon\omega(\I)\\ &=& \int_0^\infty\lambda\log(\lambda)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle +\epsilon, \end{eqnarray*} which establishes the claim. If we combine the two facts we have proved thus far, it yields the conclusion that (infinite values included) we always have that $$S(\vartheta|\omega)=\inf_{\epsilon >0}\int_0^\infty\lambda\log(\lambda+\epsilon)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle.$$ We now use this formula to show that $S(\vartheta|\omega)=\tilde{S}(\vartheta)$. This claim will follow if we can show that for any $\epsilon>0$, the equality $\epsilon\tau(\chi_{(\epsilon, \infty)}(\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2})) + \log(\epsilon)\|\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}\|_1= \int_0^\infty\lambda\log(\lambda+\epsilon)\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle$ holds true. Let $\epsilon>0$ be given. Firstly observe that by the definition of $\zeta_1$ and $\zeta_{\log}$, we have that $\zeta_{\log}(h)=\varphi_{\log}(h)\varphi_{ent}(h)^{-1}=\varphi_{\log}(h)h^{-1}\zeta_1(h)$. Thus by the commutation assumption, we have that $\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2}=\varphi_{\log}(h)h^{-1}\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}=\varphi_{\log}(h)h^{-1}k=\varphi_{\log}(h)p$. Hence we may apply Lemma \ref{mainthm} to see that \begin{eqnarray*} &&\epsilon\tau(\chi_{(\epsilon, \infty)}(\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2})) + \log(\epsilon)\|\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}\|_1\\ &=& \epsilon\tau(\chi_{(\epsilon, \infty)}(\varphi_{\log}(h)p)) + \log(\epsilon)tr(hp)\\ &=& \epsilon\tau(\chi_{(1, \infty)}(\varphi_{\log}(h)(p/\epsilon))) + \log(\epsilon)tr(hp)\\ &=& \epsilon\tau (\chi_{(1, \infty )}(h(p/\epsilon )\log ((p/\epsilon )+\I ))) + \log(\epsilon)tr(hp). \end{eqnarray*} Since $h\in L^1(\qM)$ with $p$ affiliated to $\qM$, the operator $b=h(p/\epsilon)\log((p/\epsilon)+\I)$ is a positive operator affiliated to the crossed product for which we have that $\theta_s(b)=e^{-s}b$ for each $s\in \mathbb{R}$. By \cite[Proposition II.4]{terp}, $b$ corresponds to a normal weight $\Phi_b$ on $\qM$. If we now apply \cite[Lemma II.5]{terp}, it follows that $\Phi_b(\I)= \tau (\chi_{(1, \infty)}(h(p/\epsilon)\log ((p/\epsilon)+\I)))$. Writing $e_N$ for $\chi_{[0,N]}(p)$, we once again appeal to \cite[Proposition II.4]{terp} to see that for each $N>0$, the weight $f\to\Phi_b(e_Nfe_N)$ corresponds to $e_Nbe_N$.
All of these observations may now be combined with the normality of $\Phi_b$, and \cite[Definition II.13]{terp}, to see that \begin{eqnarray*} &&\epsilon\tau (\chi_{(1, \infty)}(h(p/\epsilon)\log ((p/\epsilon)+\I))) + \log(\epsilon)tr(hp)\\ &=& \epsilon\Phi_b(\I)+ \log(\epsilon)tr(hp)\\ &=& \epsilon\lim_{N\to\infty}\Phi_b(e_N)+ \log(\epsilon)tr(hp)\\ &=& \epsilon\lim_{N\to\infty}\tau (\chi_{(1, \infty)}(h(e_Np/\epsilon)\log((e_Np/\epsilon)+\I))) + \log(\epsilon)tr(hp)\\ &=& \epsilon\lim_{N\to\infty}tr(h(e_Np/\epsilon)\log((e_Np/\epsilon)+\I)) + \log(\epsilon)tr(hp)\\ &=& \epsilon\lim_{N\to\infty}\langle h^{1/2}, (e_Np/\epsilon)\log((e_Np/\epsilon)+\I)h^{1/2}\rangle + \log(\epsilon)\langle h^{1/2},ph^{1/2}\rangle\\ &=& \lim_{N\to\infty}\int_0^N \lambda\log((\lambda/\epsilon)+1)\,d\langle h^{1/2}, e^\lambda h^{1/2}\rangle + \log(\epsilon)\int_0^\infty\lambda\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle\\ &=& \int_0^\infty \lambda\log((\lambda/\epsilon)+1)\,d\langle h^{1/2}, e^\lambda h^{1/2}\rangle + \log(\epsilon)\int_0^\infty\lambda\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle\\ &=& \int_0^\infty \lambda\log(\lambda+\epsilon)\,d\langle h^{1/2}, e^\lambda h^{1/2}\rangle. \end{eqnarray*} This proves the claim required to establish the theorem. To obtain the final equality, we silently used the facts that $\int_0^\infty \lambda\log((\lambda/\epsilon)+1)\,d\langle h^{1/2}, e^\lambda h^{1/2}\rangle$ either converges, or diverges to $\infty$, and that we always have that $\int_0^\infty\lambda\,d\langle h^{1/2},e^\lambda h^{1/2}\rangle=tr(hp)=tr(k)=\vartheta(\I)=1<\infty$. \end{proof} \begin{remark} The full significance of Theorem \ref{5.6} will be discussed in Section 6. For now the important point to note is that in order to define entropy for large systems (so for type III von Neumann algebras) we were here working within the new formalism, which is based on the distinguished pair of Orlicz spaces $\left\langle L^{\cosh -1}, L\log(L+1)\right\rangle$; for details see \cite{[1]}, \cite{[2]}, \cite{[3]}. In particular, the superalgebra $\cM$ was employed. In that way it is possible to define entropy for non-semifinite von Neumann algebras, and consequently to study thermodynamics for such systems. Furthermore, this should make clear in which way we avoided the problems discussed in \cite{OP}; see Theorem 6.10 of that monograph. Now let $\qM$ be a semifinite von Neumann algebra and $\omega=\tau_\omega$ a tracial state. Let $\vartheta$ be a faithful normal state for which the Radon-Nikodym derivative $a$ described in Theorem \ref{2.1} belongs to the tracial space $[L\log(L+1)\cap L^1](\mathfrak{M},\tau_\omega)$ (see the prescription in, for example, Section 1 of \cite{L} to see how this space is defined). When passing to the crossed product, it is known that in the case of semifinite algebras equipped with a trace (as is the case here), the crossed product $\mathcal{M}$ of $\qM$ with the modular automorphism group of $\tau_\omega$ is essentially just a copy of $\qM\otimes L^\infty(\mathbb{R})$ \cite[Part II, Proposition 4.2]{vD}. In particular, under this correspondence the canonical trace $\tau$ on $\cM$ may be identified with $\tau_\omega \otimes \int_\mathbb{R}\cdot e^{-t}\,dt$ (see Section 2 of \cite{L}). This identification forms the background for the analysis in Section 2 of \cite{L}, where certain quantities described by the pair $(\cM,\tau)$ may alternatively be described by the pair $(\qM,\tau_\omega)$.
By Proposition 2.5 and Definition 3.4 of \cite{L}, $a$ corresponds to an element $g$ of $[L\log(L+1)\cap L^1](\mathfrak{M})^+=L^{ent}(\qM)^+$, which is of the form $g=a\otimes\varphi_{ent}(e^t)$. Again by \cite[Proposition 2.5]{L}, the operators $\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2}$ and $\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}$ are respectively of the form $a\otimes\varphi_{\log}(e^t)$ and $a \otimes e^t$. Using the fact that $\tau$ may be identified with $\tau_\omega \otimes \int_\mathbb{R}\cdot e^{-t}\,dt$, we may therefore apply \cite[Proposition 1.7]{FK} and \cite[Theorem 2.2]{L} to see that we will for any $\epsilon>0$ have that \begin{eqnarray*} &&\epsilon\tau ( \chi_{(\epsilon, \infty )}(\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2})) + \log(\epsilon)\|\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}\|_1\\ &&\qquad =\epsilon\tau(\chi_{(\epsilon, \infty)}(\zeta_{\log}(h)^{1/2}g\zeta_{\log}(h)^{1/2}))+ \log(\epsilon)\tau ( \chi_{(1, \infty )}(\zeta_1(h)^{1/2}g\zeta_1(h)^{1/2}))\\ &&\qquad =\tau_\omega(a\log (a/\epsilon +\I))+ \log(\epsilon)\tau_\omega(a)\\ &&\qquad =\tau_\omega (a\log (a +\epsilon\I )). \end{eqnarray*} So in this case the formula in the preceding definition corresponds exactly to the more familiar formula $\tilde{S}(\vartheta)=\inf_{\epsilon>0}\tau_\omega(a\log(a+\epsilon\I))= \tau_\omega(a\log(a))$. \end{remark} \section{Discussion} As was noted in the introduction, the standard framework of classical statistical mechanics is based on the pair \begin{equation} \label{1d} \langle L^{\infty}(\Gamma, \mu), L^1(\Gamma, \mu)\rangle, \end{equation} for a measure space $(\Gamma, \mu)$. Let us consider this point in detail. There are two ``extremal cases'' of measure spaces which are employed in physics. The first case is a countably totally atomic measure space, while the second one is based on a non-atomic measure. Let us consider the first case. Then (\ref{1d}) reads \begin{equation} \label{2d} \langle l^{\infty}(\Nn), l^1(\Nn)\rangle \end{equation} and then the states are described by \begin{equation} \label{3d} \{ f \equiv (f_1,f_2,...) \in l^1; f\geq 0, \sum_i f_i = 1 \} \subset l^1. \end{equation} It is important to note that in (\ref{3d}) one has pure states and a general state is a convex combination of pure states. Furthermore, if in $l^1$ there are only finite sequences, i.e. when $l^1 \equiv l^1(1,2,...,N)$, then Boltzmann's $W$-entropy follows from the recipe for the $H$-functional, provided that the probability distribution is uniform. On the other hand, the second case leads to \begin{equation} \label{4d} \langle L^{\infty}(\Gamma, d\mu), L^1(\Gamma, d\mu)\rangle \end{equation} with the states then given by \begin{equation} \label{5d} \{ f \in L^1(\Gamma, d\mu); f\geq 0, \int f d\mu = 1 \}, \end{equation} where the reference measure $\mu$ is non-atomic. It is crucial to note that in (\ref{5d}) there do not exist pure normal states. Therefore, if, as in Boltzmann's theory, the reference measure is akin to Lebesgue measure in the sense of being non-atomic, an examination of the behaviour of the H-functional with respect to pure normal states is an example of an ``ill posed'' problem. Turning to quantization, the von Neumann entropy (based on Dirac's formalism) uses pure states and hence is related to (\ref{3d}). Contrariwise, a general quantum system, cf. Sections 4 and 5, needs to allow for type III von Neumann algebras. It is known that a type III factor $\mathfrak{M}$ does not have normal pure states.
Therefore, type III factors have that mathematical feature in common with the abelian von Neumann algebra $L^{\infty}(\Gamma, d\mu)$ given in (\ref{4d}), which also has no pure normal states. Consequently the entropy $\widetilde{S}(\vartheta)$ of Definition \ref{defent} in the previous section has more in common with the H-functional than with the von Neumann entropy. Before proceeding further let us pause to make some important remarks on the nature of states. \begin{remark} \begin{itemize} \item Although in (\ref{5d}) there are no pure normal states, any probability measure is an accumulation point of the convex hull of Dirac measures. This property of classical measure theory (the weak-$^*$ Riemann approximation property) implies that for a continuous classical system all states are separable, see \cite{WAM}. Furthermore, interpreting a Dirac measure as a pure state, one can again say that a convex combination of pure states leads to a state. \item A non-commutative integral does not in general have the weak-$^*$ Riemann approximation property. Thus, there is ``room'' for entangled states, see \cite{WAM}. \item As a type III von Neumann algebra has no pure normal states, the question of whether $\widetilde{S}(\vartheta)$ is zero only for pure states is meaningless. \item Finally, to avoid any confusion, we note that each $W^*$ algebra is also a $C^*$ algebra with unit. So, the set of all states of such an algebra does contain pure states (by the Krein-Milman theorem), but these states are not normal! \end{itemize} \end{remark} Turning to the H-functional, we note that it is an easy observation that $H({\chi}_{{}_{\Gamma_{0}}}) = 0$, where ${\chi}_{{}_{\Gamma_{0}}}$ is the characteristic function of a measurable subset $\Gamma_0 \subset \Gamma$. Clearly, $\chi_{{}_{\Gamma_{0}}}$ is a projector in $L^{\infty}(\Gamma, d\mu)$. However, we are again not able to simplistically translate this property of the H-functional to general quantum systems. To see this, let us assume that a projector $P$ is in $L^1(\qM)$. This means that $\theta_s(P) = e^{-s}P$, for any $s$, where $\theta_s$ stands for the dual action of $\Rn$ on $\cM$. But, one has also $$\theta_s(P) = \theta_s (P \cdot P) = \theta_s(P) \theta_s(P) = e^{-2s}P,$$ which is only possible for $s=0$; hence no nonzero projector can belong to $L^1(\qM)$. The problem here is that the entropy defined in the previous section only makes sense for elements of $L^{ent}(\qM)$. So to make sense of the ``entropy'' of a projector $P$, we first have to embed $P$ into the space $L^{ent}(\qM)$. If indeed $\omega(P)<\infty$ (where $\omega$ is the a priori given faithful normal semifinite weight on $\qM$), then $g=\varphi^{1/2}_{ent}(h)\,P\,\varphi^{1/2}_{ent}(h)$ (where $h=\frac {d\tilde{\omega}}{d\tau}$) belongs to $L^{ent}(\qM)$ \cite[Proposition 3.3]{L}. The quantum analogue of $H({\chi}_{{}_{\Gamma_{0}}})$ would then be given by applying the prescription in Definition \ref{defent} with $g$ as above. Hence, to sum up: \vskip 0.5cm \textit{ Some basic properties of classical entropy S as well as of the H-functional have no quantum counterparts in the theory based on type III von Neumann algebras. In particular, the entropy $\widetilde{S}(\vartheta)$ does not exhibit some of the properties typical of its classical counterparts.
This is not surprising, as entropy, being a function of states, should at some level reflect the structure of the state space of the considered system.} \vskip 0.5cm However, despite the above differences between the classical and quantum descriptions, the new approach presented here offers a solution to old open problems. It is well known that in classical statistical mechanics the Gibbs Ansatz $Z^{-1}e^{-\beta H}$ is designed to describe a classical canonical equilibrium state, and that essential thermodynamical information is contained in the partition function $Z = \int e^{-\beta H}d\Gamma$. Here $H$ stands for the Hamiltonian of the considered system, and $\beta$ for the ``inverse'' temperature. The quantization of $e^{-\beta H}$ means that now $H$ is the Hamiltonian operator, and hence to have a quantum state within Dirac's formalism we require $e^{-\beta H}$ to be a trace class operator. But this is only the case when, at the very least, $H$ has a pure point spectrum with accumulation point at infinity. Unfortunately, even the Hamiltonian of the Hydrogen atom does not fulfill this requirement. To see that this question has an easy solution in the presented framework we note: \begin{enumerate} \item As we have seen in Section 4, there is $h_{\omega} = \frac {d\tilde{\omega}}{d\tau}$, where we are using the ``language'' of non-commutative integration theory, cf the previous sections and/or see \cite{terp}. \item $h_{\omega}^{it}$ can be identified with $\lambda(t)$. \item $\theta_s(\lambda(t)) = e^{-ist} \lambda(t)$. \item Writing $\lambda(t) = e^{-iHt}$ one has: $$ \theta_s(e^{-iHt}) = e^{-ist} e^{-iHt} = e^{-i(H+s\jed)t}$$ \item Thus $\theta_s(H) = H+ s\jed$. \item Consequently $\theta_s(e^{-H}) = e^{-s} e^{-H}$ and $e^{-H} \in L^1(\qM)$! \item In the above $\beta = 1$, which follows from the standard scaling of temperature in the KMS theory, cf Chapter 5 in \cite{BR}. \end{enumerate} Consequently, \textit{the quantum analogue of the Gibbs Ansatz is well defined as an element of $L^1(\qM)$}. Furthermore, as there is a linear bijective isometry between $L^1(\qM)$ and $\qM_*$, we obtain a well defined normal functional on $\qM$. In particular, \textit{the quantum analogue of the partition function is also well defined}. Turning to the entropy $\widetilde{S}(\vartheta)$, we wish to get a better understanding of its nature. To this end we will consider the important case when $\vartheta$ is the reference state $\omega$. We recall that $\omega$ is a faithful normal state, and that by Takesaki's theorem, cf Section 2, $\omega$ is a KMS state with respect to the modular dynamics. In other words, $\omega$ describes the given equilibrium state, and we wish to compute the entropy of such a state. Furthermore, $\widetilde{S}(\omega)$, being related to the equilibrium state $\omega$, is a candidate for quantum thermodynamical entropy. We have just seen that $e^{-H}$ is in $L^1(\qM)$. But $\widetilde{S}(\vartheta)$ was defined for functionals of the form $\zeta_1(h)^{1/2}g\,\zeta_1(h)^{1/2}$, where $h \equiv e^{-H}$ and $g \in L^{ent}(\qM)$. So we must examine what this requirement would mean for $e^{-H}$. We recall, cf. Definition \ref{defent}, that a state ${\vartheta}$ is regular if $\frac{D\tilde{\vartheta}}{D\tau}$ is of this form. Hence we wish to have \begin{equation} e^{-H} \equiv h = \left(\frac{h}{\varphi_{ent}(h)}\right)^{\frac{1}{2}} g \left(\frac{h}{\varphi_{ent}(h)}\right)^{\frac{1}{2}}.
\end{equation} Thus $g = \varphi_{ent}(h)$, and hence \begin{equation} \widetilde{S}(\omega) = \inf_{\epsilon >0}\left[\epsilon\, \tau\left(\chi_{(\epsilon, \infty)}\left(\zeta_{\log}(h)^{\frac{1}{2}} \varphi_{ent}(h)\, \zeta_{\log}(h)^{\frac{1}{2}}\right)\right) + \log \epsilon\, ||\zeta_1(h)^{\frac{1}{2}} \varphi_{ent}(h)\zeta_1(h)^{\frac{1}{2}}||_1 \right]. \end{equation} We note that (see \cite{FK}) \begin{equation} \tau\left(\chi_{(\epsilon, \infty)}(|T|)\right) = \epsilon^{-1} ||T||_1 \end{equation} for $T \in L^1(\qM)$. Thus \begin{eqnarray*} \widetilde{S}(\omega) &=& \inf_{\epsilon >0}\left[\epsilon\, \tau\left(\chi_{(\epsilon, \infty)}\left(\zeta_{\log}(h)^{\frac{1}{2}} \varphi_{ent}(h)\, \zeta_{\log}(h)^{\frac{1}{2}}\right)\right) + \epsilon \log \epsilon \,\tau\left(\chi_{(\epsilon, \infty)}(\zeta_1(h)^{\frac{1}{2}} \varphi_{ent}(h)\zeta_1(h)^{\frac{1}{2}})\right) \right]\\ &=& \inf_{\epsilon >0}\left[\epsilon\,\tau\left(\chi_{(\epsilon, \infty)}(\varphi_{\log}(h))\right) + \epsilon \log \epsilon\, \tau\left(\chi_{(\epsilon, \infty)}(h)\right) \right]. \end{eqnarray*} Now observe that $\varphi_{\log}$ is a continuous strictly increasing function which is 0 at 0. So $t\geq \epsilon>0$ if and only if $\varphi_{\log}(t)\geq \varphi_{\log}(\epsilon)>0$, with $\varphi_{\log}(t)\to\infty$ as $t\to\infty$. If we combine this fact with the Borel functional calculus, it is clear that $\chi_{(\epsilon, \infty)}(\varphi_{\log}(h))=\chi_{(\varphi^{-1}_{\log}(\epsilon), \infty)}(h)$. Consequently \begin{eqnarray} \widetilde{S}(\omega) &=& \inf_{\epsilon >0}\left[\epsilon \,\tau\left(\chi_{(\varphi^{-1}_{\log}(\epsilon), \infty)}(h)\right) + \epsilon \log \epsilon \,\tau\left(\chi_{(\epsilon, \infty)}(h)\right) \right]\nonumber\\ &=& \inf_{\epsilon >0}\left[\frac{\epsilon}{\varphi^{-1}_{\log}(\epsilon)} + \log(\epsilon)\right]\|h\|_1 \end{eqnarray} As $\varphi_{\log}(t) = \frac{1}{\Psi^{-1}_{\log}(\frac{1}{t})}$ where $\Psi_{\log}(t)=t\log(t+1)$, we have \begin{equation} \varphi^{-1}_{\log}(t) = \frac{1}{\Psi_{\log}(\frac{1}{t})} = \frac{t}{\log(\frac{1}{t} + 1)}. \end{equation} So \begin{equation} \widetilde{S}(\omega) = \inf_{\epsilon >0} \left[\log\left(\frac{1}{\epsilon} + 1\right) + \log \epsilon\right] ||h||_1 = \left[\inf_{\epsilon >0} \log(1 + \epsilon)\right] ||h||_1 = 0. \end{equation} In commenting on this result, we note that in classical Physics the entropy is an extensive thermodynamical quantity. The central question then becomes whether the quantum entropy $\widetilde{S}(\vartheta)$ has the same property. To answer this question we begin by taking a closer look at the techniques used in the definition of $\widetilde{S}(\vartheta)$. The first observation is that, from the very beginning, we employed the approach relevant to a description of large systems, i.e. those systems of statistical physics which can be obtained by a thermodynamical limit. The next observation is that Tomita-Takesaki theory was the basic ingredient of our analysis. It is crucial to note that in the representation induced by a KMS state, the basic relations of Tomita-Takesaki theory for finite volume systems survive the thermodynamical limit. In particular, the equilibrium state vector is an eigenvector of $h$ corresponding to eigenvalue $1$ -- for more details see Sections V.1.4 and V.2.3 in \cite{haag}. Furthermore, we have already noted, cf. the remark given prior to Theorem \ref{5.6}, that in the presented approach the state $\omega$ (so a quantum counterpart of a probability measure) was used as a reference measure.
On the other hand, in classical statistical physics, the entropy per unit volume is given by $\frac{S(\varrho_{\Lambda})}{V(\Lambda)}$, where $V(\Lambda)$ stands for the volume of the region $\Lambda$. Note that $V(\Lambda)$ is taken with respect to the reference measure (in classical statistical physics, the Lebesgue measure). However, having a probability measure as the reference measure one gets $V(\Lambda) = 1$. In other words, $\widetilde{S}(\vartheta_{\Lambda})$ can be considered as the entropy per unit volume. Consequently, the definition of entropy proposed in this paper, together with the regularization procedure, incorporates some basic ideas of thermodynamic limits. Thus, the entropy $\widetilde{S}(\vartheta)$, defined in terms suitable for large systems, should share its properties with the density of entropy. All of this points to the fact that $\widetilde{S}(\vartheta)$ can be considered as an intensive quantity. To get some intuition about the properties of the entropy density, it seems useful to note that the entropy density for quantum lattice systems takes its values in a bounded interval $[0, N]$, $N < \infty$, where $N$ is determined by the dimension of the Hilbert space associated with each site of the quantum spin system -- see Section 6.2.4 in \cite{BR}. Finally, the important point to note here is that the result $\widetilde{S}(\omega) = 0$ is compatible with the interpretation of the relative entropy as a ``measure'' of distance between two states, cf Theorem \ref{5.6}. To sum up, we can say that the obtained result $\widetilde{S}(\omega) = 0$ is expected. With a suitable concept of entropy for regular states of general quantum systems thus having been identified, the challenge now is to develop computational algorithms for this entropy. \vskip 0.5cm \section{Conclusions} One of the challenges of contemporary physics is to derive the macroscopic properties of matter from the quantum laws governing the microscopic description of a system. On the other hand, thermodynamics, being a prerequisite for (quantum) statistical physics, provides laws governing the behaviour of macroscopic variables. It is well known that entropy is a crucial concept for this scheme. Knowing that statistical physics deals with large systems (so systems with infinitely many degrees of freedom), we proposed a concise approach to entropy. It was done in operator algebraic language. This language is indispensable as, on the one hand, it is the basis for noncommutative integration theory, and on the other hand von Neumann algebras of type III are acknowledged to be the correct formalism for large quantum systems. Consequently, using the algebraic approach, a consistent dynamical description of entropy was achieved. It is worth pointing out that our results can be considered as the first step in getting genuine quantum thermodynamics for general quantum systems.
{ "timestamp": "2018-08-28T02:11:02", "yymm": "1804", "arxiv_id": "1804.05579", "language": "en", "url": "https://arxiv.org/abs/1804.05579" }
\section{Introduction} \label{sect:intro} The regions where high-mass stars and stellar clusters are born are highly turbulent and inhomogeneous (e.g., Tan et al. \cite{tan14}). The level of turbulence is enhanced in the vicinity of massive stars, where gas, due to various kinds of instabilities, can fragment into small-scale structures down to scales unresolved by modern instruments. There is much indirect evidence for the existence of small-scale unresolved inhomogeneities (fragments, clumps) in regions of high-mass star formation. It follows from the fact that the observed molecular line profiles of different species are close to Gaussian, show no signs of saturation, and have widths much larger than the thermal ones (e.g. Kwan \& Sanders \cite{ks86}). Nearly constant volume densities in clouds with strong column density variations (e.g. Bergin et al. \cite{bergin96}) and the detection of C~I emission over large areas correlated with molecular maps (e.g. White \& Padman \cite{wp91}, Kamegai et al. \cite{kamegai03}) also imply a small-scale clumpy structure. Important evidence for the existence of small thermal fragments in high-mass star-forming regions is provided by the anomalies of the relative intensities of the HCN(1--0) hyperfine components. This effect is connected with an overlap of the thermally broadened profiles of closely spaced hyperfine components in the higher HCN rotational transitions (mainly $J$=2--1) and is efficient at kinetic temperatures $\ga 20$~K (Guilloteau \& Baudry \cite{gb81}). Yet, if the local profiles are broadened by microturbulence and are suprathermal, as in high-mass star-forming cores ($\ga 2$~km~s$^{-1}$), it becomes practically impossible to reproduce the observed HCN(1--0) anomalies in the framework of the microturbulent model (Pirogov \cite{pir99}). By contrast, if the cores consist of small thermal fragments with a low volume filling factor moving randomly with respect to each other, the observed HCN(1--0) profiles with intensity anomalies and high linewidths can easily be reproduced (Pirogov \cite{pir99}). If the observed line profile is a sum of the profiles of randomly moving fragments, one should expect intensity fluctuations due to fluctuations of the number of fragments along the line of sight at distinct velocities. Martin et al. (\cite{martin}) derived an analytical expression for the molecular line emission of a cloud consisting of a large number of small identical fragments, and Tauber (\cite{tauber96}) obtained an expression for the standard deviation of the line intensity fluctuations due to such a structure. Using their approach it is possible to derive from observations the parameters of the small-scale structure, primarily the total number of fragments in the telescope beam. Previously, we performed long-time observations in various molecular lines (HCN(1--0), CS(2--1), $^{13}$CO(1--0), HCO$^+$(1--0) and some others) of high-mass star-forming cores which show the HCN(1--0) hyperfine anomalies (S140, S199, S235) and of PDR regions (Orion, W3) (Pirogov \& Zinchenko \cite{pz08}, Pirogov et al. \cite{pir12}). We detected residual fluctuations on the line profiles and estimated the total number of thermal fragments in the beam using the analytical approach.
By comparing the results of detailed calculations of line emission in the framework of a model consisting of identical thermal fragments (the clumpy model) with the observed nearly Gaussian HCN(1--0) profiles, estimates of the sizes and densities of the fragments were obtained for S140 and S199. Yet, these results suffered from a drawback connected with the arbitrarily chosen parameters of the method used to extract the residual intensity fluctuations from the line profiles. In this paper new observational results of higher quality for the S140 and S199 cores in the HCN(1--0) and HCO$^+$(1--0) lines are presented. The lines in these objects have nearly Gaussian profiles, which is important for comparison with the results of the clumpy model calculations. To estimate the standard deviations of the residual line intensity fluctuations that could be due to small-scale clumpy structure, a new Fourier filtering method is used. This allowed us to recalculate the parameters of the small-scale structure, including the total number of fragments in the beam for S140 and S199 and the physical parameters of the fragments for S140. \section{Analytical Model} Considering a model cloud consisting of identical randomly moving fragments with a low volume filling factor, and assuming that the velocity dispersion of the fragment motions ($\sigma$) is much higher than the inner velocity dispersion ($v_0$), Martin et al. (\cite{martin}) obtained an expression for the cloud's optical depth ($\tau$) which is proportional to $N_c$, the number of fragments in a column with the cross-sectional area of a single fragment. Using this approach, for $N_c\la 10$ Tauber (\cite{tauber96}) derived an expression which can be written as follows: \begin{equation} \frac{\Delta T_{\rm R}}{T_{\rm R}}= \frac{\tau} {(e^{\tau}-1)\sqrt{K\,N_{\rm tot} \frac{v_0}{\sigma}}} \hspace{2mm}, \label{eq:Ntotal} \end{equation} \noindent{where $\Delta T_{\rm R}$ is the standard deviation of the fluctuations of the line radiation temperature in some range near the line center, $T_{\rm R}$ is the peak line radiation temperature, and $K$ is a factor depending on the optical depth distribution within a fragment.} For a Gaussian distribution $K$ is equal to 1; for the case of opaque discs it is equal to $\pi$. Since the contribution of the emission of an ensemble of small fragments is statistically independent of the atmospheric and instrumental noise, the standard deviation of the temperature fluctuations due to small fragments can be calculated as $\Delta T_{\rm R}=\sqrt{\Delta T_{\rm L}^2-\Delta T_{\rm N}^2}$, where $\Delta T_{\rm L}$ and $\Delta T_{\rm N}$ are the observed standard deviations of the temperature fluctuations within and outside the line profile range, respectively. Thus, knowing $\Delta T_{\rm R}$, $T_{\rm R}$, the kinetic temperature and the line optical depth ($\tau$), it is possible to estimate the number of thermal fragments in the beam ($N_{\rm tot}$). Yet, in order to detect radiation temperature fluctuations due to such a structure, observations with a high signal-to-noise ratio and high spectral resolution are needed. Another problem of this approach is connected with the correct measurement of $\Delta T_{\rm R}$. \section{The Results of Observations} \label{sec:observations} We carried out observations of two high-mass star-forming cores, S140 and S199, in the HCN(1--0) line at 88.6~GHz with the IRAM-30m telescope in 2010, and in the HCN(1--0) and HCO$^+$(1--0) lines (at 88.6~GHz and 89.2~GHz, respectively) with the OSO-20m telescope in 2017.
In addition, we observed these sources in the H$^{13}$CN(1--0) and H$^{13}$CO$^+$(1--0) lines with the OSO-20m telescope in 2017. The IRAM-30m beam at these frequencies is $\sim 29''$, the OSO-20m beam is $\sim 41''$. System noise temperatures were $\sim 130-180$~K and $\sim 170-240$~K, and frequency resolutions were 39~kHz and 19~kHz in the IRAM-30m and the OSO-20m observations, respectively. After several hours of integration in the frequency switching mode the noise r.m.s. values were $\sim 0.01$~K and $\sim 0.02$~K for the IRAM-30m and the OSO-20m observations, respectively. The observed profiles towards S140 and S199 contain a ``quiet'' nearly Gaussian component (line widths $\sim 2.5$~km~s$^{-1}$) and high-velocity wing emission of lower amplitude. For the purpose of our analysis the high-velocity components were subtracted. The source coordinates, distances and linear resolutions at $\sim 89$~GHz are given in Table~\ref{sources}. \begin{table}[hbtp] \caption{Source list} \smallskip \begin{tabular}{lrrcc}\hline\noalign{\smallskip} Source & $\alpha$(2000) & $\delta$(2000) & $D$ & Linear Resolution\\ &${\rm (^h)\ (^m)\ (^s)\ }$ &$(^o)$ $(^{\prime})$ $(^{\prime\prime}$) & (pc) & (pc)\\ \noalign{\smallskip}\hline\noalign{\smallskip} S140 (L1204) & 22 19 18.4 & 63 18 45 & 764(27) (Hirota et al. \cite{hirota08}) & $\sim 0.11$ (IRAM-30m) \\ & & & & $\sim 0.15$ (OSO-20m) \\ S199 (IC1848) & 03 01 32.3 & 60 29 12 & 2200(200) (Lim et al. \cite{lim14}) & $\sim 0.3$ (IRAM-30m) \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \label{sources} \end{table} \begin{figure}[h] \begin{minipage}[t]{0.9\textwidth} \centering \includegraphics[width=3cm,angle=-90]{0024fig1_1.eps} \end{minipage} \vskip 1mm \begin{minipage}[t]{0.9\textwidth} \centering \includegraphics[width=3cm,angle=-90]{0024fig1_2.eps} \end{minipage} \vskip 1mm \begin{minipage}[t]{0.9\textwidth} \centering \includegraphics[width=3cm,angle=-90]{0024fig1_3.eps} \end{minipage} \vskip 1mm \begin{minipage}[t]{0.9\textwidth} \centering \includegraphics[width=3.3cm,angle=-90]{0024fig1_4.eps} \end{minipage} \caption{{\small The observed HCN(1--0) and HCO$^+$(1--0) profiles in S140 and in S199 (left panels) and the corresponding power spectra (Fourier transform) for low amplitudes (right panels). The residual noise obtained after filtering the power spectra with the arbitrary value $F_{\rm eff}$=1~(km~s$^{-1})^{-1}$, multiplied by a factor of 10, is shown under the observed profiles. Red dashed curves correspond to the fits by overlapping Gaussian functions (triplets in the case of HCN(1--0)) and their power spectra.}} \label{spectra} \end{figure} In order to estimate the standard deviations of radiation temperature fluctuations on line profiles that could be due to small-scale structure ($\Delta T_{\rm R}$), it is necessary to correctly remove the main component from the line profiles. As was pointed out by Tauber (\cite{tauber96}), one possible method is Fourier high-pass filtering. This method is based on the idea that the small-scale structure should produce a much broader Fourier (power) spectrum than the main line profile. After filtering, a noise-like residual spectrum is obtained, possibly with different standard deviations within and outside the line range. Previously (Pirogov \& Zinchenko \cite{pz08}; Pirogov et al. \cite{pir12}) we used this method with arbitrarily chosen filter boundaries to reject the harmonics of the power spectra corresponding to the main line profile.
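To make the filtering procedure concrete, a minimal sketch (in Python, using NumPy) is given below; it high-pass filters a line profile in Fourier space and combines the residual standard deviations within and outside the line range into $\Delta T_{\rm R}=\sqrt{\Delta T_{\rm L}^2-\Delta T_{\rm N}^2}$. This is a schematic illustration rather than the actual data-reduction code; the function and variable names and the way the channel spacing enters are our illustrative assumptions. \begin{verbatim}
import numpy as np

def residual_sigma(spectrum, dv, f_eff, line_mask, noise_mask):
    """High-pass filter a line profile in Fourier space and return the
    residual fluctuation level attributable to small-scale structure.

    spectrum   : 1D array of T_R per velocity channel [K]
    dv         : channel spacing [km/s]
    f_eff      : high-pass cutoff in inverse velocity [(km/s)^-1]
    line_mask  : boolean array selecting channels within the line range
    noise_mask : boolean array selecting line-free channels
    """
    n = spectrum.size
    ft = np.fft.rfft(spectrum)
    freq = np.fft.rfftfreq(n, d=dv)     # inverse velocities [(km/s)^-1]
    ft[freq < f_eff] = 0.0              # reject the broad line component
    residual = np.fft.irfft(ft, n)
    dT_L = residual[line_mask].std()    # fluctuations within the line
    dT_N = residual[noise_mask].std()   # atmospheric/instrumental noise
    # contribution attributable to small-scale structure
    return np.sqrt(max(dT_L**2 - dT_N**2, 0.0))
\end{verbatim}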
In Fig.~\ref{spectra} the observed profiles and the corresponding power spectra are shown on the left and the right panels, respectively. The power spectra contain features with low amplitudes at inverse velocities higher than the main Gaussian profile ranges ($> 0.4-0.5$~(km~s$^{-1})^{-1}$), implying small deviations from the Gaussians. By fitting the observed profiles with 2--3 overlapping Gaussians (or triplets in the case of HCN) with suprathermal widths, it is possible to reproduce some of the low-amplitude features at inverse velocities up to $\sim 0.7$~(km~s$^{-1})^{-1}$ (Fig.~\ref{spectra}, right panels). The spectral features at higher inverse velocities could be attributed to the small-scale clumpy structure as well as to atmospheric and instrumental noise. \section{Fourier Filtering and the $\Delta T_{\rm R}(F_{\rm eff})$ Dependencies} \label{sec:model} In order to select an optimal boundary of the high-pass Fourier filter ($F_{\rm eff}$), the filtering has been done for different values of $F_{\rm eff}$ from 0.7 to 2~(km~s$^{-1})^{-1}$, and the $\Delta T_{\rm R}$ values have been calculated for the 3~km~s$^{-1}$ line range of the observed HCN(1--0) and HCO$^+$(1--0) profiles (23 and 46 velocity channels for the IRAM-30m and OSO-20m data, respectively). The results for S140 (OSO-20m) are shown in Fig.~\ref{dtr} (left). There is a sharp decrease of $\Delta T_{\rm R}$ with increasing $F_{\rm eff}$. For $F_{\rm eff}\ga 1.3$~(km~s$^{-1})^{-1}$ the dependencies become nearly linear. Similar behavior is found for the IRAM-30m data. For comparison we performed test calculations of the HCN and HCO$^+$ excitation in the framework of a model cloud consisting of identical thermal fragments with small volume filling factors moving randomly with respect to each other, with random velocities following a Gaussian distribution. The line profile from each fragment is a Gaussian with thermal width. A simplified version of the 1D clumpy model described previously (Pirogov \cite{pir99}, Appendix; Pirogov \& Zinchenko \cite{pz08}) is used. The model matches the conditions of the analytical approach, and the model line intensities and widths are close to the observed ones for S140. In order to speed up the test calculations, the number of fragments in the models was reduced. This led to higher model $\Delta T_{\rm R}$ values compared with the observed ones. By varying the seed of the random number generator one can change the spatial distribution and the velocities of the fragments. We performed a hundred runs with different seeds of the random number generator for two HCN and one HCO$^+$ test models and processed the results in the same way as the observational data, namely, by filtering the corresponding power spectra for different $F_{\rm eff}$ values and calculating $\Delta T_{\rm R}$ for the 3~km~s$^{-1}$ line range. \begin{figure}[hbtp] \centering \includegraphics[width=55mm,angle=-90]{0024fig2.eps} \caption{{\small The standard deviations $\Delta T_{\rm R}$ calculated for the observed (left) and model (right) HCN(1--0) and HCO$^+$(1--0) profiles for different $F_{\rm eff}$ values. The model $\Delta T_{\rm R}$ are the mean values over a hundred model runs, and error bars denote their standard deviations. The dashed lines on the right panel correspond to the analytical estimates of $\Delta T_{\rm R}$ derived from equation (\ref{eq:Ntotal}).
Stars on the left panel denote the $\Delta T_{\rm R}$ values taken for the calculations of the total number of fragments in the beam.}} \label{dtr} \end{figure} There is a scatter in the model $\Delta T_{\rm R}$ values from one model run to another. For each $F_{\rm eff}$ the mean $\Delta T_{\rm R}$ value and its dispersion have been calculated, and the resulting $\langle\Delta T_{\rm R}\rangle(F_{\rm eff})$ dependencies are plotted in Fig.~\ref{dtr} (right). They are close to linear, and the $\langle\Delta T_{\rm R}\rangle$ value at $F_{\rm eff}=0.7$~(km~s$^{-1})^{-1}$ is nearly equal to the analytical estimate calculated from equation (\ref{eq:Ntotal}) for the case of opaque discs. The $\Delta T_{\rm R}(F_{\rm eff})$ dependencies for individual model runs are also found to be more or less linear. Therefore, it is probable that the values of $\Delta T_{\rm R}$ for $F_{\rm eff}\la 1.3$~(km~s$^{-1})^{-1}$ for S140 and $\la 1$~(km~s$^{-1})^{-1}$ for S199 are enhanced by some structures (processes) other than randomly distributed thermal fragments (e.g. gravitationally bound compact cores, ``tangled structures'' (Hacar et al. \cite{hacar16}) or ``cloudlets'' (Tachihara et al. \cite{tachihara12})). In order to get an unbiased estimate of the $\Delta T_{\rm R}$ associated with thermal fragments, we calculated linear regressions for the observed $\Delta T_{\rm R}(F_{\rm eff})$ dependencies in the range where the dependencies are nearly linear and extrapolated them to lower $F_{\rm eff}$. The regression lines are shown in Fig.~\ref{dtr} (left). We took the $\Delta T_{\rm R}$ values calculated from the regression lines at $F_{\rm eff}$=0.7~(km~s$^{-1})^{-1}$ as the standard deviations of line temperature fluctuations produced by randomly distributed thermal fragments in the beam. The uncertainty of these estimates is assumed to be the same as the uncertainty in the model calculations ($\sim 25$\%). \section{Total Number of Fragments in the Beam} \label{sec:ntotal} Knowing $\Delta T_{\rm R}$, $T_{\rm R}$, the line width, the optical depth and the kinetic temperature, it is possible to get the total number of thermal fragments ($N_{\rm tot}$) within the telescope beam from equation~(\ref{eq:Ntotal}). Kinetic temperatures ($T_{\rm KIN}$) for S140 and S199 are taken close to the estimates from Malafeev et al. (\cite{mal05}) and Zinchenko et al. (\cite{zin97}), respectively. Optical depths ($\tau$) are calculated from a comparison of the HCN(1--0) and HCO$^+$(1--0) line widths with the optically thin H$^{13}$CN(1--0) and H$^{13}$CO$^+$(1--0) line widths. For HCN(1--0) the value corresponds to the $F$=2--1 hyperfine component. For the S140 HCN(1--0) data obtained at IRAM-30m the $\tau$ value is taken to be the same as for the OSO-20m observations. Yet, this value is probably underestimated. Detailed model calculations (Section~\ref{sec:phys}) reproduce the IRAM-30m HCN(1--0) profile with $\tau\sim 1$, which leads to an $\sim 2.5$ times lower value of $N_{\rm tot}$. The results are given in Table~\ref{Ntotal}. The uncertainties of $N_{\rm tot}$, defined mainly by the $\Delta T_{\rm R}$ and $\tau$ uncertainties, are at least $\sim 50$\%.
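As a quick numerical cross-check of equation~(\ref{eq:Ntotal}), the short sketch below inverts it for $N_{\rm tot}$ using the S140 HCN(1--0) OSO-20m entries of Table~\ref{Ntotal}. The shape factor $K=1$ and the adopted $v_0$ (the thermal dispersion of HCN, 27 a.m.u., at 30~K) and $\sigma$ (from the $\sim 2.5$~km~s$^{-1}$ line widths) are our illustrative assumptions rather than the exact inputs of the analysis; with them the result lands at the order of magnitude quoted in the table. \begin{verbatim}
import numpy as np

K_B, M_H = 1.380649e-23, 1.6726e-27      # J/K, kg

def n_tot(dT_R, T_R, tau, v0, sigma, K=1.0):
    """Invert equation (1) for the total number of fragments in the beam.
    dT_R, T_R in K; v0, sigma in km/s; K is the profile shape factor
    (1 for Gaussian fragments, pi for opaque discs)."""
    root = tau * T_R / ((np.exp(tau) - 1.0) * dT_R)  # sqrt(K*N_tot*v0/sigma)
    return root**2 * sigma / (K * v0)

# S140, HCN(1-0), OSO-20m values from Table 2; v0 is the HCN thermal
# dispersion at T_KIN = 30 K, sigma from the ~2.5 km/s FWHM.
v0 = np.sqrt(K_B * 30.0 / (27.0 * M_H)) / 1e3        # ~0.10 km/s
sigma = 2.5 / (2.0 * np.sqrt(2.0 * np.log(2.0)))     # ~1.06 km/s
print(f"N_tot ~ {n_tot(0.023, 15.9, 0.15, v0, sigma):.1e}")  # order 10^6
\end{verbatim}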
\begin{table}[hbtp] \caption{Total number of thermal fragments in the beam} \begin{tabular}{ccccccr}\hline\noalign{\smallskip} Source & Line & $\Delta T_{\rm R}$(K) & $T_{\rm R}$(K) & $\tau$ & $T_{\rm KIN}$(K) & $N_{\rm tot}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} S140 & HCN(1--0) & 0.023 & 15.9(0.1) & 0.15(0.04) & 30 & $\sim 4\cdot 10^6$ \\ OSO-20m \\ \noalign{\smallskip}\hline\noalign{\smallskip} S140 & HCN(1--0) & 0.017 & 17.6(0.1) & & & $\sim 10^7$ \\ IRAM-30m \\ \noalign{\smallskip}\hline\noalign{\smallskip} S140 & HCO$^+$(1--0) & 0.013 & 21.9(0.1) & 0.42(0.04) & & $\sim 2\cdot 10^7$ \\ OSO-20m \\ \noalign{\smallskip}\hline\noalign{\smallskip} S199(0,0) & HCN(1--0) & 0.016 & 8.0(0.1) & 0.7(0.2) & 30 & $\sim 10^6$ \\ IRAM-30m \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \label{Ntotal} \end{table} \section{Physical Parameters of Fragments in S140} \label{sec:phys} We used the 1D clumpy model (see Section~\ref{sec:model}) for detailed modeling of the HCN(1--0) profile observed in S140 at IRAM-30m (the ``quiet'' component). The model calculations reproduce the observed HCN(1--0) profile in S140 very well (Fig.~\ref{S140_profiles}). By varying the density and kinetic temperature of the fragments, the product of the HCN abundance and the cloud size, and the velocity dispersion of the relative motions of the fragments, it is possible to fit the intensities of the hyperfine components and the line widths. By varying the size of the fragments and their volume filling factor it is possible to fit the standard deviation of the residual temperature fluctuations ($\Delta T_{\rm R}$). The observed and model profiles and the residuals after Fourier filtering with $F_{\rm eff}=1.3$~(km~s$^{-1})^{-1}$, multiplied by a factor of 40, are shown in Fig.~\ref{S140_profiles}. We added synthetic noise with a dispersion equal to the observed one to the model profile. \begin{figure}[hbtp] \includegraphics[width=4.5cm,angle=-90]{0024fig3.eps} \caption{\small{The S140 HCN(1--0) profile observed with IRAM-30m (left) and the profile calculated in the framework of the 1D clumpy model (right). The residuals obtained by Fourier filtering with $F_{\rm eff}$=1.3~(km~s$^{-1})^{-1}$ and multiplied by a factor of 40 are shown under each profile. Dashed vertical lines mark the range for which $\Delta T_{\rm R}$ is calculated. }} \label{S140_profiles} \end{figure} The model parameters of the fragments are the following: $T_{\rm KIN}$=30~K, a number density of $1.5\cdot 10^6$~cm$^{-3}$, and a size and volume filling factor of $\sim 40$~a.u. and $\sim 0.014$, respectively. The optical depth of the central component ($F$=2--1) is about 1. The total number of fragments is $\sim 2\cdot 10^6$. This is comparable with the analytical estimate for HCN(1--0) in S140 IRAM-30m (Table~\ref{Ntotal}) if one takes $\tau$=1. \section{Discussion} New high-sensitivity observations of the S140 and S199 cores confirmed the existence of the residual intensity fluctuations on line profiles found previously (Pirogov \& Zinchenko \cite{pz08}, Pirogov et al. \cite{pir12}). Using a new method of Fourier filtering and comparing the data with the results of detailed calculations of the HCN and HCO$^+$ excitation in the framework of the clumpy model, it is shown that the intensity fluctuations can be associated with a large number of randomly distributed identical thermal fragments moving randomly with respect to each other with suprathermal velocities.
The estimates of the total number of fragments in the beam for S140 derived from the IRAM-30m HCN(1--0) data and from the OSO-20m data agree with each other if one takes $\tau$=1 for the IRAM-30m data. The same value derived from the HCO$^+$(1--0) data is several times higher (Table~\ref{Ntotal}). This could imply the existence of interfragment gas of lower density, which effectively absorbs the HCO$^+$ emission and reduces the corresponding $\Delta T_{\rm R}$ value. In order to test this assumption, calculations in the framework of a model with interfragment gas are needed. So far, the value $\sim 4\cdot 10^6$ is assumed to be a reasonable estimate of the total number of thermal fragments in S140. The uncertainty of this estimate is at least 50\%. Following the analysis of Pirogov \& Zinchenko (\cite{pz08}), it can be shown that such fragments are unstable and short-lived density enhancements which most probably arise due to turbulence in high-mass star-forming cores. The differences between the new and the previous (Pirogov \& Zinchenko \cite{pz08}) results for S140 and S199 are connected mainly with the new method of estimating $\Delta T_{\rm R}$ based on regression analysis, while the previous estimates were based on Fourier filtering at the arbitrarily chosen value $F_{\rm eff}=0.7$~(km~s$^{-1})^{-1}$. The difference in the line ranges for which $\Delta T_{\rm R}$ was calculated and the difference in $\tau$ also increase the value of the total number of fragments in S199. In general, the considered model is oversimplified, and the estimates obtained should be treated as mean values for regions with linear sizes $\sim 0.1-0.3$~pc in the considered cores. More realistic models should be implemented, combining 3D molecular line radiative transfer in a clumpy medium (e.g. Juvela \cite{j97}, Park \& Hong \cite{ph98}) with the inhomogeneous turbulent cloud structures that follow from modern MHD models (e.g. Haugbolle et al. \cite{h18}). On the other hand, resolving the considered small-scale structure with an interferometer is not straightforward (interferometric observations usually reveal compact objects in the field of view and miss more diffuse and extended emission; see, e.g., Maud et al. \cite{maud13}, Palau et al. \cite{palau18}). Long-time observations with high angular resolution on single-dish telescopes still seem to be important for searching for intensity fluctuations on line profiles. The increasing sensitivity of modern receivers and the implementation of new broadband spectrometers make it possible to detect residual intensity fluctuations on the line profiles of various molecules in a reasonable time for different positions in these objects, which, together with modeling results, should help to obtain more information about their small-scale spatial and kinematic structure. \section{Conclusions} Long-time observations of the S140 and S199 high-mass star-forming cores in the HCN(1--0) and HCO$^+$(1--0) lines were carried out. In order to detect intensity fluctuations on the line profiles that could be due to inner small-scale structure, the profiles were processed by the Fourier filtering method. The residual fluctuations of the line radiation temperature imply the existence of a large number of randomly moving thermal fragments in these objects. Using the analytical method, the total number of fragments was calculated to be $\sim 4\cdot 10^6$ for the region with linear size $\sim 0.1$~pc in S140 and $\sim 10^6$ for the region with linear size $\sim 0.3$~pc in S199.
Physical parameters of the thermal fragments in S140 were obtained from detailed modeling of the HCN excitation in the framework of the clumpy model, including their density ($\sim 1.5\cdot 10^6$~cm$^{-3}$), size ($\sim 40$~a.u.) and volume filling factor ($\sim 0.014$). Such fragments should be unstable and short-lived objects and are probably connected with an enhanced level of turbulence in the core. \begin{acknowledgements} I am grateful to the anonymous referee for the critical reading of the manuscript and for valuable comments and questions which improved the paper. I would like to thank Olga Ryabukhina for her help with the OSO-20m observations. I would also like to thank Igor Zinchenko for helpful discussions. The OSO-20m observations and the paper preparation were done with the support of the RFBR grants (projects 15-02-06098, 16-02-00761, 18-02-00660); data processing and analysis were done with the support of the Russian Science Foundation grant (project 17-12-01256). \end{acknowledgements}
{ "timestamp": "2018-04-17T02:15:36", "yymm": "1804", "arxiv_id": "1804.05600", "language": "en", "url": "https://arxiv.org/abs/1804.05600" }
\subsection{Accuracy results for context fixed- and floating-point} As shown in Table \ref{table:results_context}, the proposed $context$-$fixed[6,6]$ format introduces no degradation in accuracy at all; surprisingly, $context$-$float[4,7]$ with stochastic rounding, with just 12 bits of representation, even surpasses the 32-bit floating-point model. In addition, $context$-$float[4,7]$ achieves decent results even without using stochastic rounding. \begin{table}[H] \small \centering \renewcommand{\arraystretch}{0.5} \begin{tabular}{l c c c} \toprule \textbf{Representation } & \textbf{ Accuracy } & \textbf{ Accuracy no rounding } & \textbf{ Epochs to $\geq$ 70\% } \\ \midrule \midrule \textbf{32 bits:} &&&\\ \midrule Floating-point & $-$ & 75,60\% $\rpm$ 0,4 & 4,8 \\ \midrule \textbf{12 bits:} &&&\\ \midrule Fixed-point & 32,10$\%$ $\rpm$ 1,6 & 10$\%$ & $-$ \\ Scaled fixed-point & 63,03$\%$ $\rpm$ 0,3 & 10$\%$ & $-$ \\ Floating-point & 74,20$\%$ $\rpm$ 0,4 & 10$\%$ & 5,7 \\ \rowcolor{Gray} Context-fixed & 76,32$\%$ $\rpm$ 0,5 & 10$\%$ & 5 \\ \rowcolor{Gray} Context-float & 78,02\% $\rpm$ 0,3 & 71,88\% $\rpm$ 0,4 & 5 \\ \bottomrule \end{tabular} \caption{Training results for the \textit{Context-fixed[6,6]} and the \textit{Context-float[4,7]} representations.} \label{table:results_context} \end{table} We attribute the high accuracy of the $context$-$float[4,7]$ representation to the following two reasons. The first reason is trivial: the range of representation and the precision increase; this factor allowed the network to train even with no rounding algorithm. The second and not so obvious reason is that trimming bits in an intelligent manner may be a valid regularization technique. In the specific case of the weights, they have a tendency to follow a Gaussian distribution which gets wider as the network learns: the network distinguishes relevant features and thus increases their assigned weights, or decreases the weights otherwise (Figure \ref{fig:context_weights}). Allocating fewer bits in the scaled exponent limits the width of the distribution and keeps the network from learning too much (Figure \ref{fig:context_training}). { \begin{figure}[] \centering \includegraphics[width=8cm, height=5.5cm]{figures/weights.png} \caption{\small{In \textbf{red} the distribution of weights of the fully connected layer when training the network with a 32-bit floating point representation. In \textbf{blue} the distribution of weights of the same layer in the same iteration, when training the network with the $context$-$float[4,7]$ representation.}} \label{fig:context_weights} \end{figure} } { \begin{figure}[] \centering \includegraphics[width=\textwidth, height=8cm]{figures/accuracy.png} \caption{\small{Training evolution of the CNN.}} \label{fig:context_training} \end{figure} } \subsection{Accuracy results for 12-bit fixed-point} As seen in Table \ref{table:accuracy_fixed}, 12-bit fixed-point arithmetic is not enough to train the network, with the obtained accuracy stalling at 32$\%$. The incapacity of the 12-bit fixed-point format for training resides in the fact that 12 bits of fraction are not sufficient for some parameters of the network, even when using stochastic rounding. The gradients obtained from the back-propagation stage have small magnitudes, which seem to diminish slowly epoch after epoch as the network trains.
As a result, when constraining weight updates and gradients to the $fixed[0,12]$ format, a substantial fraction of them is rounded to 0, thus halting the network's learning. Instead of placing the point of the fixed-point representation at 0, the representation can also be scaled using a global scaling factor if the overall values in the network are sufficiently small. Enhanced results are obtained if the previous 12-bit fixed-point representation is globally scaled by $2^{-4}$ during the training phase, achieving a mean accuracy of 63\% (see Table \ref{table:accuracy_fixed}). Despite improving the results, the model is still far from the baseline performance. \begin{table}[] \small \centering \renewcommand{\arraystretch}{0.5} \begin{tabular}{l c c c} \toprule \textbf{Representation } \quad & \textbf{ Accuracy } \quad & \textbf{ Accuracy no rounding } \quad & \textbf{ Epochs to $\geq$ 70$\%$ } \\ \midrule \midrule \textbf{32 bits:} &&&\\ \midrule Floating-point & - & 75,60$\%$ $\rpm$ 0,4 & 4,8 epochs \\ \midrule \textbf{12 bits:} &&&\\ \midrule \rowcolor{Gray} Fixed-point & 32,10$\%$ $\rpm$ 1,6 & 10$\%$ & - \\ \rowcolor{Gray} Scaled Fixed-Point & 63,03$\%$ $\rpm$ 0,3 & 10$\%$ & - \\ \bottomrule\\ \end{tabular} \caption{Results of the model trained with the 12-bit fixed-point formats. The table shows the mean \textbf{Accuracy} employing the stochastic rounding algorithm, \textbf{Accuracy no rounding} refers to the model mean accuracy when not applying any rounding algorithm, and lastly \textbf{Epochs $\geq$ 70\%} is the number of epochs taken to reach at least 70\% accuracy.} \label{table:accuracy_fixed} \end{table} \subsection{Accuracy results for 12-bit floating-point} As shown in Table \ref{table:accuracy_float}, compared to the 32-bit floating-point baseline, the 12-bit floating-point representation with stochastic rounding suffers almost no degradation. The advantage of the exponent in the floating-point representations is clearly noticeable, since it allows a wider range of representation and more precision than the fixed-point counterpart (see Figures \ref{fig:gradients} and \ref{fig:outputs}). Besides, unlike fixed-point, there is no need to adjust the format of the 12-bit floating-point representation to the network. These results also highlight the importance of using stochastic rounding; otherwise the 12-bit floating-point representation would not be able to train the network.
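To make the contrast behind this comparison concrete, the minimal sketch below quantizes a typical small gradient with round-to-nearest under both formats; the helper names and the exponent-clamping convention are our illustrative assumptions (the actual experiments additionally use stochastic rounding, discussed later). \begin{verbatim}
import math

def to_fixed(x, frac_bits=12):
    """Round-to-nearest on the fixed[0,12] grid (step 2**-frac_bits),
    saturating at +/- (1 - step)."""
    step = 2.0 ** -frac_bits
    q = round(x / step) * step
    return max(-1.0 + step, min(q, 1.0 - step))

def to_float(x, exp_bits=5, man_bits=6):
    """Round-to-nearest quantization to a float[exp_bits, man_bits]
    value with a clamped exponent range."""
    if x == 0.0:
        return 0.0
    bias = 2 ** (exp_bits - 1) - 1
    _, e = math.frexp(abs(x))              # abs(x) = m * 2**e, m in [0.5, 1)
    e = max(min(e, bias + 1), -bias + 2)   # clamp exponent
    step = 2.0 ** (e - 1 - man_bits)       # local spacing of the grid
    return math.copysign(round(abs(x) / step) * step, x)

g = 1e-4                 # a typical small gradient
print(to_fixed(g))       # 0.0   -> underflows, learning stalls
print(to_float(g))       # ~1e-4 -> the exponent preserves the magnitude
\end{verbatim}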
\begin{table}[] \small \centering \renewcommand{\arraystretch}{0.5} \begin{tabular}{l c c c} \toprule \textbf{Representation } & \textbf{ Accuracy } & \textbf{ Accuracy no rounding } & \textbf{ Epochs to $\geq$ 70\% } \\ \midrule \midrule \textbf{32 bits:} &&&\\ \midrule Floating-point & - & 75,60$\%$ $\rpm$ 0,4 & 4,8 epochs \\ \midrule \textbf{12 bits:} &&&\\ \midrule Fixed-point & 32,10$\%$ $\rpm$ 1,6 & 10\% & - \\ Scaled Fixed-Point & 63,03\% $\rpm$ 0,3 & 10\% & - \\ \rowcolor{Gray} Floating-point & 74,20$\%$ $\rpm$ 0,4 & 10$\%$ & 5,7 \\ \bottomrule\\ \end{tabular} \caption{Results of the model trained with the $float[5,6]$ format.} \label{table:accuracy_float} \end{table} { \begin{figure}[] \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth,height=40mm]{figures/fixed_gradient.png} \centering \caption{12-bit fixed-point} \label{fig:sub1} \end{subfigure}% \hspace{.07\textwidth} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=\textwidth,height=40mm]{figures/float_gradient.png} \caption{12-bit floating-point} \label{fig:sub2} \end{subfigure} \caption{\small Gradient values in the fully connected layer with respect to the cost function.} \label{fig:gradients} \end{figure} } { \begin{figure}[] \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth ,height=40mm]{figures/outs_fixed.png} \centering \caption{12-bit fixed-point} \label{fig:sub1} \end{subfigure}% \hspace{.07\textwidth} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=\textwidth,height=40mm]{figures/outs_float.png} \centering \caption{12-bit floating-point} \label{fig:sub2} \end{subfigure} \caption{\small Comparing the 12-bit fixed-point and floating-point formats with a set of output values obtained from the 2nd convolutional layer of the network.} \label{fig:outputs} \end{figure} } Within the network, it is common to observe learning parameters differing by several orders of magnitude from their corresponding gradients or updates. As a consequence, the 12-bit fixed-point representation is unable to train the CNN. Furthermore, the 12-bit floating-point approach, despite having a 5-bit exponent and thus a larger range of magnitude representation, is unable to reach the low magnitudes that gradients and weight updates may have (see Figure \ref{fig:distributions}), slowing down the training. \section{Introduction} \input{introduction.tex} \input{training.tex} \section{12-bit Fixed-Point} \label{fixed_point} \input{fixed_point.tex} \section{12-bit Floating-Point} \label{section:floating_point} \input{floating_point.tex} \section{Context Representation} \label{section:context} \input{context.tex} \section{Power-of-Two Neural Network} \label{section:poweroftwo} \input{power.tex} \section{Time Results and Memory Requirements} \label{sec:time} \input{time.tex} \section{Related Work} \input{relatedwork.tex} \section{Conclusions} \input{conclusions.tex} \section*{Acknowledgments} This work is partially supported by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through TIN2015-65316-P project and by the Generalitat de Catalunya (contracts 2017-SGR-1414 and 2017-SGR-1328). \bibliographystyle{IEEEtran} \subsubsection{a) Forward propagation\\\\} In the forward propagation, the output of each neuron, $Out$, is computed by applying an activation function $f$ over the potential of the neuron, $P$.
The potential of the neurons is determined by a costly dot product between the outputs of the neurons of the previous layer, $X$, in $float[6,0]$ format, and the associated weights $W$ in $fixed[0,12]$ format: \begin{equation} Out = f(P)\\ \end{equation} \begin{equation} P = \sum_{j=1}^{n} X_jW_{j} \approx \sum_{j=1}^{n} 2^yW_{j} = \sum_{j=1}^{n} W_{j} << y\\ \end{equation} With ReLU as the activation function $f$, max pooling in the convolutional layers and the outputs constrained to the form $2^y$, the multiplications and divisions in the forward propagation of the CNN are replaced by simple shifts. While the dot product is being computed, the intermediate values of $P$ and $Out$ are stored in higher-precision variables. Moreover, $Out$ has to be formatted to $float[6,0]$ in order to avoid multiplications and divisions in the following layers of the network (Figure \ref{fig:poweroftwo_sample}).\\ { \begin{figure}[] \centering \includegraphics[width=7cm]{figures/poweroftwo_example.png} \caption{\small{Simulation of the forward-propagation algorithm in the Power-of-Two model with 1 neuron and 3 connections. Each parameter displays its corresponding format in the model.}} \label{fig:poweroftwo_sample} \end{figure} } \subsubsection{b) Backward propagation\\\\} The network learns by updating each of the learning parameters so that they bring the actual output closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole. Each learning parameter $w$ is updated by modifying it with its gradient towards the local minimum of the cost function. The gradient of a learning parameter can be computed by performing a partial derivative as follows: \begin{equation}\label{eq:chain} \frac{\partial Cost}{\partial w} = \frac{\partial Cost}{\partial f(x)} \times \frac{\partial f(x)}{\partial P} \times \frac{\partial P}{\partial w} \end{equation} where $f(x)$ is the activation function and $P$ the potential of the neuron; $\frac{\partial f(x)}{\partial P}=1$ with ReLU activation functions. The gradient of the potential of the neuron with respect to the cost function is constrained to $float[6,0]$, a power-of-two value: \begin{equation} \frac{\partial Cost}{\partial P} = \frac{\partial Cost}{\partial f(x)} \times \frac{\partial f(x)}{\partial P} \approx 2^x \end{equation} Therefore, the gradient of a learning parameter required to compute the parameter update (Equation \ref{eq:weight_update}) and the propagation of gradients to other connected hidden neurons, also known as error propagation (Equation \ref{eq:backprop}), now become: \begin{equation} \label{eq:weight_update} \frac{\partial Cost}{\partial w} = \frac{\partial Cost}{\partial f(x)} \times \frac{\partial f(x)}{\partial P} \times \frac{\partial P}{\partial w} \approx 2^x \times \frac{\partial P}{\partial w} \end{equation} \begin{equation} \label{eq:backprop} \frac{\partial Cost}{\partial f(x)_i^{l-1}} = \sum_{j=1}^{n} \frac{\partial Cost}{\partial P_j^{l}} w_{ji}^{l} \approx \sum_{j=1}^{n} 2^y w_{ji}^{l} \\ \end{equation} where $l$ is a layer in the network, $i,j$ are neurons, and $w_{ji}$ is the weight of the connection between neurons $i$ and $j$.
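As a concrete illustration of how the multiplications in these expressions collapse into shifts when one factor is a power of two, consider the following minimal integer sketch; the names are ours, and a real implementation would also handle zero activations and the formatting of the result to $float[6,0]$. \begin{verbatim}
# Minimal sketch: dot product where the activations are powers of two.
# Weights are integer mantissas on a fixed[0,12] grid (value = w / 2**12),
# so x * w = (2**y) * w becomes a shift of the integer mantissa.

def pot_dot(exps, weights_q, frac_bits=12):
    """exps: exponents y of the activations x = 2**y (float[6,0] format).
    weights_q: integer weight mantissas; returns the potential as float."""
    acc = 0  # higher-precision accumulator, as described in the text
    for y, w in zip(exps, weights_q):
        acc += w << y if y >= 0 else w >> -y   # shift replaces multiply
    return acc / (1 << frac_bits)

# x = (2**3, 2**0, 2**2), w = (0.25, -0.5, 0.125) on the 12-bit grid
print(pot_dot([3, 0, 2], [1024, -2048, 512]))  # 2.0 - 0.5 + 0.5 = 2.0
\end{verbatim}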
Although not pursued in this study, in order to replace multiplications and divisions by shifts in the weight update computations, the learning rate ($\alpha$), the momentum ($\mu$) and other hyperparameters could also be expressed as powers of two: \begin{equation} \text{New }w = w + \mu\Delta w^{-1} - \alpha \Delta w \hspace{0.1cm}\approx w + \hspace{0.1cm} 2^y\Delta w^{-1} - 2^z \Delta w \end{equation} where $w$ is a learning parameter and $\Delta w$ is the gradient of the learning parameter with respect to the cost function. The Power-of-Two neural network is based on previous works~\cite{ternary,binary} where the reduction in resource consumption is pushed to the limit when training networks. The results obtained in those studies are impressive, yet those super-simplified models are not able to train on their own and need a high-precision side model to be quantized; thus, the memory requirements are not lowered. The Power-of-Two proposal in this paper does reduce the memory requirements during both inference and training, has a lower quantization overhead, trains with no auxiliary regularization techniques, and is able to avoid all multiplications and divisions during training and inference. \subsection{Accuracy results for Power-of-Two} As shown in Table \ref{table:results_pot}, the simplified power-of-two neural network brings only a 2\% average accuracy degradation with respect to the baseline 32-bit floating-point model, while achieving drastic reductions in training time, memory requirements and energy consumption, as shown in Section~\ref{sec:time}. If the 2\% degradation of the power-of-two network is considered excessive, it would be possible, for instance, to use it to accelerate the training up to a certain accuracy and later switch to a more accurate floating-point arithmetic. \begin{table}[H] \small \centering \renewcommand{\arraystretch}{0.1} \begin{tabular}{l c c c} \toprule \textbf{Representation } & \textbf{ Accuracy } & \textbf{ Accuracy no rounding } & \textbf{ Epochs to $\geq$ 70\% } \\ \midrule \midrule \textbf{32 bits:} &&&\\ \midrule Floating-point & $-$ & 75,60\% $\rpm$ 0,4 & 4,8\\ \midrule \textbf{12 bits:} &&&\\ \midrule Fixed-point & 32,10$\%$ $\rpm$ 1,6 & 10$\%$ & $-$\\ Scaled fixed-point & 63,03$\%$ $\rpm$ 0,3 & 10$\%$ & $-$\\ Floating-point & 74,20$\%$ $\rpm$ 0,4 & 10$\%$ & 5,7\\ Context-fixed & 76,32$\%$ $\rpm$ 0,5 & 10$\%$ & 5\\ Context-float & 78,02\% $\rpm$ 0,3 & 71,88\% $\rpm$ 0,4 & 5\\ \midrule \textbf{7 or 12 bits}\footnotemark\\ \midrule \rowcolor{Gray} Power-Of-Two & 73,42\% $\rpm$ 0,3 & 10\% & 18,6\\ \bottomrule \end{tabular} \caption{Training results for the power-of-two model.} \label{table:results_pot} \end{table} \footnotetext{7 bits for outputs and gradients, 12 bits for the rest of the network parameters.} \section{Experimental setup} \subsection{Simulation framework} All the experimental evaluation in this paper is done using the deep-learning framework Caffe\footnote{Available at http://caffe.berkeleyvision.org/}. To evaluate the performance of the low-precision arithmetics and representations, we constrain the values of the network model down to 12 bits. Specifically, the network parameters and intermediate values reduced to 12 bits are the weights, biases, outputs, weight updates, bias updates, and gradients. However, the 12-bit representation is simulated using Caffe's double-precision floating-point implementation, ensuring that the network parameters stored in higher-precision registers are always constrained to 12 bits.
An arithmetic operation between two already formatted 12-bit values can produce a non-representable value. Therefore, in order to convert the result back to a 12-bit value, we saturate it if it exceeds the largest magnitude of the representation, and we make use of the stochastic rounding algorithm, which has shown great performance in previous studies \cite{fixed1,fixed2}, along with 64-bit precision registers in which we store the numerical value to be rounded: \begin{equation} StochasticRound(x)= \begin{cases} \floor{x} + \epsilon & w.p. \hspace{0.2cm} \frac{x - \floor{x}}{\epsilon} \\ \floor{x} & w.p. \hspace{0.2cm} 1 - \frac{x - \floor{x}}{\epsilon} \\ \end{cases} \end{equation} \vspace{0.5cm} \noindent where $x$ is the number to be rounded, $\floor{x}$ is the closest 12-bit value smaller than $x$, and $\epsilon$ is the distance to the next representable 12-bit value. When utilizing stochastic rounding, the value $x$ is more likely to be rounded to the closest 12-bit value, although there is also a smaller probability of rounding to the second closest 12-bit value, thus preserving the information at least statistically (a minimal sketch of this scheme is given at the end of this section). One of the most executed operations in neural network training is the dot product: $A \cdot B = \sum_{i=0}^{n}a_i \cdot b_i$ where $A$ and $B$ are vectors such that each component is represented in a 12-bit format. In this study, when performing the dot product, the result of each multiplication $a_i \cdot b_i$ is accumulated in a 64-bit higher-precision variable. Only at the end of the dot product operation are the stochastic rounding and saturation methods applied to the result. \subsection{CNN model} To test the performance of the different data representations, we consider a widely used image classification benchmark: the CIFAR-10 dataset\footnote{Available at https://www.cs.toronto.edu/$\sim$kriz/cifar.html}. We construct a Convolutional Neural Network (CNN) similar to the topology proposed in \cite{fixed2}. The CNN is made of 3 Convolutional layers followed by their corresponding Max Pooling layers and a Fully connected layer of 1000 units with a dropout probability of 0.4, which is then connected to a 10-way softmax Output layer for classification. The first two convolutional layers consist of 32 Kernels with 5x5 dimensions and the third convolutional layer consists of 64 Kernels with 5x5 dimensions. All convolutional layers have stride=1 and padding=2. The pooling layers have dimensions 3x3 with stride=2. The activation function in the convolutional layers and the fully connected layer is ReLU, and we define Cross-Entropy as the cost function of the model. We employ Stochastic Gradient Descent as the minimiser for the model with a fixed learning rate of 0.001, a momentum of 0.9 to speed up the convergence, a weight decay of 0.004 in all layers and a batch size of 100 images during the 40 epochs of training.
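Finally, the promised sketch of the stochastic rounding step and the accumulate-then-round dot product is given below for a uniform grid of step $\epsilon$; it is a schematic illustration with names of our choosing, not the Caffe-integrated implementation. \begin{verbatim}
import random

def stochastic_round(x, eps):
    """Round x to a multiple of the grid step eps, with probability
    proportional to the distance to the two neighbouring grid points."""
    lo = (x // eps) * eps                # closest grid value below x
    p_up = (x - lo) / eps                # P(round up) = (x - lo)/eps
    return lo + eps if random.random() < p_up else lo

def dot12(a, b, eps, max_mag):
    """Dot product of 12-bit-formatted vectors: accumulate in full
    precision, then saturate and stochastically round the result."""
    acc = sum(x * y for x, y in zip(a, b))     # 64-bit accumulator
    acc = max(-max_mag, min(acc, max_mag))     # saturation
    return stochastic_round(acc, eps)

eps = 2.0 ** -12                               # fixed[0,12] step
vals = [stochastic_round(0.3 * eps, eps) for _ in range(10000)]
print(sum(vals) / len(vals) / eps)             # ~0.3: unbiased on average
\end{verbatim}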
{ "timestamp": "2018-04-17T02:08:13", "yymm": "1804", "arxiv_id": "1804.05267", "language": "en", "url": "https://arxiv.org/abs/1804.05267" }
\section{Introduction} The notion of {\sl non-tangential} limit is very important in geometric function theory. A sequence $\{z_n\}\subset \mathbb D:=\{z\in \mathbb C: |z|<1\}$ converges non-tangentially to a point $\sigma\in \partial \mathbb D$ if it converges to $\sigma$ and it is eventually contained in a Stolz region with vertex $\sigma$, that is, if it is eventually contained in the set $\{z\in \mathbb D: |\sigma-z|<R(1-|z|)\}$ for some $R>1$. \medskip {\bf Question.} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and let $f:\mathbb D \to \Delta$ be a Riemann map. Let $\{z_n\}\subset \Delta$ be a compactly divergent sequence, {\sl i.e.}, a sequence with no accumulation points in $\Delta$. How can one decide whether $\{f^{-1}(z_n)\}$ converges non-tangentially to a point $\sigma$ by looking only at geometric properties of $\Delta$? \medskip The first aim of this paper is to give an answer to this question using the hyperbolic distance (a similar question for orthogonal convergence has been settled in \cite{BCDG}). The first observation is that, if $\gamma:[0,+\infty)\to \mathbb D$ is a geodesic for the hyperbolic distance $\omega$ of $\mathbb D$ parameterized by arc length, then there exists $\sigma\in \partial \mathbb D$ such that $\lim_{t\to+\infty}\gamma(t)=\sigma$. Moreover, for every $R>0$, the set $S_\mathbb D(\gamma, R):=\{z\in \mathbb D: \omega(z, \gamma([0,+\infty)))<R\}$, which we call a {\sl hyperbolic sector} around $\gamma$ of amplitude $R$, is equivalent to a Stolz region at $\sigma$. Therefore, a compactly divergent sequence $\{z_n\}\subset \mathbb D$ converges non-tangentially to $\sigma$ if and only if it is eventually contained in a hyperbolic sector around a geodesic converging to $\sigma$. Given a simply connected domain $U\subset \mathbb C$ and a biholomorphism $f:\mathbb D \to U$, the map $f$ is an isometry between the hyperbolic distance $k_U$ of $U$ and $\omega$; thus the previous property is invariant under biholomorphisms and gives a first answer to the question above. However, such a conclusion is not useful in practice, because knowing the geodesics and the hyperbolic distance of a simply connected domain is almost equivalent to knowing the Riemann map of that domain. Nonetheless, using the Gromov hyperbolicity theory, we prove the following result (see Theorem \ref{Thm:nec-suff-non-tg}): \begin{theorem}\label{Thm:nec-suff-non-tg-intro} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and let $f:\mathbb D\to \Delta$ be a Riemann map. Let $\{z_n\}\subset \Delta$ be a compactly divergent sequence, {\sl i.e.}, a sequence with no accumulation points in $\Delta$. Then $\{f^{-1}(z_n)\}$ converges non-tangentially to a point $\sigma\in \partial \mathbb D$ if and only if there exist a simply connected domain $U\subsetneq \mathbb C$, a geodesic $\gamma:[0,+\infty)\to U$ of $U$ such that $\lim_{t\to+\infty}k_U(\gamma(t), \gamma(0))=+\infty$ and $R>R_0>0$ such that \begin{enumerate} \item $S_U(\gamma, R):=\{w\in U:\, k_{U}(w,\gamma ([0,+\infty)))<R\}\subset\Delta\subseteq U$, \item there exists $n_0\geq 0$ such that $z_n\in S_U(\gamma, R_0)$ for all $n\geq n_0$. \end{enumerate} \end{theorem} A simple consequence of the previous theorem is that if $\Delta$ is a simply connected domain contained in an upper half-plane and containing a {\sl vertical Euclidean sector} $p+\{z\in \mathbb C: {\sf Im}\, z> k| {\sf Re}\, z|\}$, for some $p\in \mathbb C$ and $k>0$, then $f^{-1}(p+it)$ converges non-tangentially to a boundary point of $\mathbb D$ as $t\to+\infty$.
Another interesting consequence is that if $\Delta$ is a simply connected domain starlike at infinity (that is, $\Delta+it\subset \Delta$ for all $t\geq 0$) which contains a vertical Euclidean sector, then the curve $[0,+\infty)\ni t\mapsto f^{-1}(f(z)+it)$ converges non-tangentially to some point of $\partial \mathbb D$ as $t\to+\infty$ for every $z\in \mathbb D$. In fact, it can be shown that such a curve is a uniform quasi-geodesic in the sense of Gromov. The latter fact has an interesting application to the study of {\sl one-parameter continuous semigroups} of holomorphic self-maps of $\mathbb D$---or, for short, semigroups in $\mathbb D$. A semigroup in $\mathbb D$ is a continuous homomorphism of the real semigroup $[0,+\infty)$, endowed with the Euclidean topology, to the semigroup under composition of holomorphic self-maps of $\mathbb D$, endowed with the topology of uniform convergence on compacta. Semigroups in $\mathbb D$ have been intensively studied (see, {\sl e.g.}, \cite{Abate,AhElReSh99,Berkson-Porta, ES,Shb,Siskakis-tesis}). It is known that, if $(\phi_t)$ is a semigroup in $\mathbb D$ which is not a group of hyperbolic rotations, then there exists a unique $\tau\in \overline{\mathbb D}$, the {\sl Denjoy-Wolff point} of $(\phi_t)$, such that $\lim_{t\to +\infty}\phi_t(z)=\tau$ for every $z\in \mathbb D$, and the convergence is uniform on compacta. In case $\tau\in \mathbb D$, the semigroup is called elliptic. Non-elliptic semigroups can be divided into three types: hyperbolic, parabolic of positive hyperbolic step and parabolic of zero hyperbolic step. It is known (see \cite{CD, CDP, EYRS}) that if $(\phi_t)$ is a hyperbolic semigroup then the trajectory $t\mapsto \phi_t(z)$ always converges non-tangentially to its Denjoy-Wolff point as $t\to +\infty$ for every $z\in \mathbb D$, while, if it is parabolic of positive hyperbolic step, then $\phi_t(z)$ always converges tangentially to its Denjoy-Wolff point as $t\to +\infty$ for every $z\in \mathbb D$. In the case of parabolic semigroups of zero hyperbolic step, the behavior of trajectories can be rather wild. All the trajectories have the same {\sl slope}, that is, the cluster set of $\mathrm{Arg}(1-\overline{\tau}\phi_t(z))$ as $t\to +\infty$---which is a compact subset of $[-\pi/2,\pi/2]$---does not depend on $z\in \mathbb D$ (see \cite{CD, CDP}). In many cases this slope is just a point, but in \cite{Bet, CDG} examples are constructed such that the slope is the full interval $[-\pi/2,\pi/2]$. Recall (see, {\sl e.g.}, \cite{Abate, BrAr, Cowen, BCD, EKRS}) that $(\phi_t)$ is a parabolic semigroup in $\mathbb D$ of zero hyperbolic step if and only if there exists a univalent function $h$, the {\sl Koenigs function} of $(\phi_t)$, such that $h(\mathbb D)$ is starlike at infinity, $h(\phi_t(z))=h(z)+it$ for all $t\geq 0$ and $z\in \mathbb D$, and for every $w\in \mathbb C$ there exists $t_0\geq 0$ such that $w+it_0\in h(\mathbb D)$. The triple $(\mathbb C, h, z\mapsto z+it)$ is called a {\sl canonical model} for $(\phi_t)$ and it is essentially unique. A straightforward consequence of our previous discussion is the following (see Proposition \ref{sector-implies-convergnt}): \begin{corollary}\label{non-tg-intro} Let $(\phi_t)$ be a parabolic semigroup of zero hyperbolic step with Denjoy-Wolff point $\tau\in \partial \mathbb D$ and Koenigs function $h$. Assume that $h(\mathbb D)$ contains a vertical Euclidean sector. Then for every $z\in \mathbb D$ the trajectory $\phi_t(z)$ converges non-tangentially to $\tau$ as $t\to+\infty$.
In other words, the slope of $(\phi_t)$ is a set $[a,b]$ with $-\pi/2<a\leq b<\pi/2$. \end{corollary} The condition that $h(\mathbb D)$ contains a vertical Euclidean sector is not necessary for non-tangential convergence of the orbits: let $\alpha>1$ and let $Z_\alpha:=\{z\in \mathbb C: |{\sf Re}\, z|^\alpha<{\sf Im}\, z\}$, a parabola-like open set. Since $Z_\alpha$ is simply connected and starlike at infinity, if $f:\mathbb D \to Z_\alpha$ is a Riemann map, $\phi_t(z):=f^{-1}(f(z)+it)$, $z\in \mathbb D$, is a semigroup whose canonical model is $(\mathbb C, f, z\mapsto z+it)$, hence it is parabolic of zero hyperbolic step. Since $Z_\alpha$ is symmetric with respect to the imaginary axis, it follows that $f^{-1}(it)$, $t>0$, is a geodesic in $\mathbb D$. Therefore $\phi_t(f^{-1}(i))$ converges radially to the Denjoy-Wolff point as $t\to +\infty$. Despite the previous example, it turns out that for every $\alpha>1$ there exists a parabolic semigroup $(\phi_t^\alpha)$ of zero hyperbolic step with Koenigs function $h_\alpha$ such that $Z_\alpha\subset h_\alpha(\mathbb D)$ but $\phi_t^\alpha(z)$ does not converge non-tangentially to the Denjoy-Wolff point (see Proposition~\ref{Prop:tang-conve-with-para}). Furthermore, using Corollary \ref{non-tg-intro}, we are able to construct an example, rather explicit in terms of the canonical model, of a parabolic semigroup of zero hyperbolic step whose trajectories converge non-tangentially to the Denjoy-Wolff point but are oscillating (see Proposition \ref{Prop:example-non-tg-osc}): \begin{proposition}\label{main-para} There exists a parabolic semigroup $(\phi_t)$ of zero hyperbolic step in $\mathbb D$ such that for every $z\in \mathbb D$, the slope of $(\phi_t)$ is $[a,b]$ with $-\pi/2<a<b<\pi/2$. \end{proposition} Our technique does not allow us to prescribe the exact values of $a, b$. In \cite{Bet} (see also \cite{K} for details) it is remarked that, with a slight modification of the technique of harmonic measure theory used by the author in order to construct parabolic semigroups with slope $[-\pi/2,\pi/2]$, it is possible to construct examples of parabolic semigroups having slope $[a,b]$ {\sl for every} $-\pi/2<a<b<\pi/2$. The proof of the latter proposition is quite involved and the techniques we use might be interesting in their own right. The main new tools we introduce and exploit in our construction are ``good boxes'' (see Section \ref{good}). These are open subsets of simply connected domains where one can estimate the hyperbolic distance and the displacement of geodesics using the corresponding objects for strips. The plan of the paper is the following. In Section \ref{geo}, we recall the notion of geodesics and Gromov's quasi-geodesics in simply connected domains and state some results we need in the paper. In Section \ref{loc}, we collect some known (and some possibly new) localization results for the hyperbolic metric and the hyperbolic distance in simply connected domains. In Section \ref{hypse}, we prove Theorem \ref{Thm:nec-suff-non-tg-intro} and Corollary \ref{non-tg-intro}. In Section \ref{good} we introduce ``good boxes'' and prove the results about geodesics and hyperbolic metric we mentioned above. Finally, in Section \ref{Traj}, we prove Proposition \ref{main-para} and Proposition \ref{Prop:tang-conve-with-para}. \smallskip We thank the referees for many useful comments which improved the original manuscript.
\medskip {\bf Notations.} In this paper we will freely make use of Carath\'eodory's prime end theory (see, {\sl e.g.}, \cite{CL, Pommerenke, Pommerenke2}). In particular, recall that every simply connected domain $\Delta\subsetneq \mathbb C$ has a Carath\'eodory boundary $\partial_C\Delta$ given by the set of all prime ends of $\Delta$. The set $\widehat{\Delta}:=\Delta\cup \partial_C\Delta$ can be endowed with the Carath\'eodory topology. For an open set $U\subset \Delta$, we let $U^\ast$ be the union of $U$ with every prime end of $\Delta$ for which there exists a representing null chain which is eventually contained in $U$. The Carath\'eodory topology is the topology generated by all open sets $U$ of $\Delta$ and the sets $U^\ast$. It is known that $\overline{\mathbb D}$ with the Euclidean topology is homeomorphic to $\widehat{\mathbb D}$ and, if $f:\mathbb D \to \Delta$ is a Riemann map, then $f$ extends to a homeomorphism $\hat{f}: \widehat{\mathbb D}\to \widehat{\Delta}$. In this way, every point $\sigma\in \partial \mathbb D$ corresponds to a unique prime end $\underline{x}_\sigma\in \partial_C\mathbb D$ and, via $f$, to a unique prime end $\hat{f}(\underline{x}_\sigma)\in \partial_C\Delta$. \smallskip We denote by $\mathbb C_\infty$ the Riemann sphere. If $\Delta\subset \mathbb C$ is a domain, we denote by $\partial_\infty \Delta$ its boundary in $\mathbb C_\infty$. Note that $\partial_\infty \Delta=\partial \Delta$ in case $\Delta$ is bounded, otherwise $\partial_\infty \Delta=\partial \Delta\cup\{\infty\}$. Finally, we denote by $\omega(z,w)$ the hyperbolic distance between two points $z, w\in \mathbb D$. \section{Geodesics and quasi-geodesics in simply connected domains}\label{geo} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. We denote by $\kappa_\Delta$ the infinitesimal metric in $\Delta$, that is, for $z\in \Delta$, $v\in \mathbb C$, we let \[ \kappa_\Delta(z;v):=\frac{|v|}{f'(0)}, \] where $f:\mathbb D\to \Delta$ is the Riemann map such that $f(0)=z$, $f'(0)>0$. The hyperbolic distance $k_\Delta$ in $\Delta$ is defined for $z, w\in \Delta$ as \[ k_\Delta(z,w):=\inf \int_0^1 \kappa_\Delta(\gamma(t);\gamma'(t))dt, \] where the infimum is taken over all piecewise $C^1$-smooth curves $\gamma:[0,1]\to \Delta$ such that $\gamma(0)=z, \gamma(1)=w$. It is well known that, for all $z,w\in \mathbb D$, \[ \omega(z,w):=k_\mathbb D(z,w)=\frac{1}{2}\log \frac{1+\left|\frac{z-w}{1-\overline{z}w} \right|}{1-\left|\frac{z-w}{1-\overline{z}w} \right|}. \] Let $-\infty<a<b<+\infty$ and let $\gamma:[a,b]\to \Delta$ be a piecewise $C^1$-smooth curve. For $a\leq s\leq t\leq b$, we define the {\sl hyperbolic length of $\gamma$ in $\Delta$} between $s$ and $t$ as \[ \ell_\Delta(\gamma;[s,t]):=\int_s^t \kappa_\Delta(\gamma(u);\gamma'(u))du. \] In case $s=a$ and $t=b$, we will simply write \[ \ell_\Delta(\gamma):=\ell_\Delta(\gamma;[a,b]). \] \begin{definition} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. A $C^1$-smooth curve $\gamma:(a,b)\to \Delta$, $-\infty\leq a<b\leq +\infty$, such that $\gamma'(t)\neq 0$ for all $t\in (a,b)$ is called a {\sl geodesic} of $\Delta$ if for every $a< s\leq t< b$, \[ \ell_\Delta(\gamma;[s,t])=k_\Delta(\gamma(s), \gamma(t)). \] Moreover, if $z,w\in \Delta$ and there exist $a<s<t<b$ such that $\gamma(s)=z$ and $\gamma(t)=w$, we say that $\gamma|_{[s,t]}$ is a geodesic which joins $z$ and $w$. With a slight abuse of notation, we also call the image of $\gamma$ in $\Delta$ a geodesic.
\end{definition} Using Riemann maps and the invariance of hyperbolic metric and distance under the action of biholomorphisms, we have the following result: \begin{proposition}\label{Prop:geodesic-in-simply} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Let $-\infty\leq a<b\leq +\infty$. \begin{enumerate} \item If $\eta:(a,b) \to \Delta$ is a geodesic, then \[ \eta(a):=\lim_{t\to a^+}\eta(t), \quad \eta(b):=\lim_{t\to b^-}\eta(t) \] exist as limits in the Carath\'eodory topology of $\Delta$. Moreover, if $\eta(a), \eta(b)\in \Delta$ then \[ k_\Delta(\eta(a),\eta(b))=\lim_{\epsilon\to 0^+}\ell_{\Delta}(\eta;[a+\epsilon,b-\epsilon]). \] \item If $\eta:(a,b) \to \Delta$ is a geodesic such that $\eta(a), \eta(b)\in \partial_C \Delta$, then $\eta(a)\neq \eta(b)$. \item For any $z,w\in \widehat{\Delta}$, $z\neq w$, there exists a real analytic geodesic $\gamma:(a,b)\to \Delta$ such that $\gamma(a)=z$ and $\gamma(b)=w$. Moreover, such a geodesic is essentially unique, namely, if $\eta:(\tilde a, \tilde b)\to \Delta$ is another geodesic joining $z$ and $w$, then $\gamma([a,b])=\eta([\tilde a,\tilde b])$ in $\widehat{\Delta}$. \item If $\gamma:(a,b)\to \Delta$ is a geodesic such that either $\gamma(a)\in \Delta$ or $\gamma(b)\in \Delta$ (or both), then there exists a geodesic $\eta:(\tilde a,\tilde b)\to \Delta$ such that $\eta(\tilde a), \eta(\tilde b)\in \partial_C \Delta$ and such that $\gamma([a,b])\subset \eta([\tilde a, \tilde b])$ in $\widehat{\Delta}$. \item If $\gamma:(a,b)\to \Delta$ is a geodesic such that $\gamma(a)\in \partial_C\Delta$ then the cluster set $\Gamma(\gamma,a)$ is equal to $\Pi(\gamma(a))$, the principal part of the prime end $\gamma(a)$ (and similarly for $b$ in case $\gamma(b)\in \partial_C\Delta$). \end{enumerate} \end{proposition} Given a simply connected domain, it is in general a hard task to find geodesics. The aim of this section is to recall a powerful method due to Gromov to localize geodesics via simpler curves, called quasi-geodesics. \begin{definition} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Let $A\geq 1$ and $B\geq 0$. A piecewise $C^1$-smooth curve $\gamma:[a,b]\to \Delta$, $-\infty<a<b<+\infty$, is an {\sl $(A,B)$-quasi-geodesic} if for every $a\leq s\leq t\leq b$, \[ \ell_\Delta(\gamma; [s,t])\leq A k_\Delta(\gamma(s),\gamma(t))+B. \] \end{definition} The importance of quasi-geodesics is partly justified by the following shadowing lemma (see, {\sl e.g.}, \cite{Ghys}), known also as the ``geodesics' stability lemma'': \begin{theorem}[Gromov's shadowing lemma]\label{Gromov} For every $A\geq 1$ and $B\geq 0$ there exists $\delta=\delta(A,B)>0$ with the following property. Let $\Delta\subsetneq \mathbb C$ be any simply connected domain. If $\gamma:[a,b]\to \Delta$ is an $(A,B)$-quasi-geodesic, then there exists a geodesic $\tilde\gamma:[\tilde a, \tilde b]\to \Delta$ such that $\tilde\gamma(\tilde a)=\gamma(a)$, $\tilde\gamma(\tilde b)=\gamma(b)$ and for every $u\in [a,b]$ and $v\in [\tilde a, \tilde b]$, \[ k_\Delta(\gamma(u), \tilde\gamma([\tilde a, \tilde b]))<\delta, \quad k_\Delta(\tilde\gamma(v),\gamma([ a, b]))< \delta. \] \end{theorem} The following result is a consequence of Gromov's shadowing lemma and follows by standard normality arguments: \begin{corollary}\label{Cor:shadow} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain.
Let $\gamma:[0,+\infty)\to \Delta$ be a piecewise $C^1$-smooth curve such that $\lim_{t\to +\infty}k_\Delta(\gamma(0), \gamma(t))=+\infty$ and suppose there exist $A\geq 1$, $B\geq 0$ such that for every fixed $T>0$ the curve $[0,T]\ni t\mapsto \gamma(t)$ is an $(A,B)$-quasi-geodesic. Then there exists a prime end $\underline{x}\in \partial_C\Delta$ such that $\gamma(t)\to \underline{x}$ in the Carath\'eodory topology of $\Delta$ as $t\to +\infty$. Moreover, there exists $\epsilon>0$ such that, if $\eta:[0,+\infty)\to \Delta$ is the geodesic of $\Delta$ parameterized by arc length such that $\eta(0)=\gamma(0)$ and $\lim_{t\to+\infty}\eta(t)=\underline{x}$ in the Carath\'eodory topology of $\Delta$, then, for every $t\in [0,+\infty)$, \begin{equation*} k_\Delta(\gamma(t), \eta([0,+\infty)))<\epsilon, \quad k_\Delta(\eta(t), \gamma([0,+\infty)))<\epsilon. \end{equation*} \end{corollary} \section{Localization of hyperbolic metric and hyperbolic distance}\label{loc} In this section we prove a localization result which allows one to obtain information on the hyperbolic metric and the hyperbolic distance of a simply connected domain from a portion of the domain itself. We start with the notion of totally geodesic subsets: \begin{definition} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. A domain $U\subset \Delta$ is said to be {\sl totally geodesic} in $\Delta$ if for every $z, w\in U$ the geodesic of $\Delta$ joining $z$ and $w$ is contained in $U$. \end{definition} We need the following lemma: \begin{lemma}\label{Lem:total-geo-disc} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Let $\gamma:\mathbb R\to \Delta$ be a geodesic parameterized by arc length. Then $\Delta\setminus\gamma(\mathbb R)$ consists of two simply connected components which are totally geodesic in $\Delta$. \end{lemma} \begin{proof} Let $f:\mathbb D\to \Delta$ be a biholomorphism. Then, $f^{-1}\circ \gamma:\mathbb R \to \mathbb D$ is a geodesic parameterized by arc length. Up to pre-composing with an automorphism of $\mathbb D$, we can assume that $f^{-1}(\gamma(\mathbb R))=(-1,1)$. Consider $\mathbb D^{+}:=\{\zeta\in \mathbb D: {\sf Re}\, \zeta>0\}$. Since the geodesic in $\mathbb D$ joining two points $z,w\in \mathbb D$ is the arc of a circle (possibly a diameter) containing $z, w$ and meeting $\partial \mathbb D$ orthogonally, it is easy to see that $\mathbb D^+$ is totally geodesic in $\mathbb D$. A similar argument shows that $\mathbb D^{-}:=\{\zeta\in \mathbb D: {\sf Re}\, \zeta<0\}$ is totally geodesic. Moving back to $\Delta$ via $f$ and recalling that $f$ is an isometry for the hyperbolic distance, we have the result. \end{proof} Now we can state and prove a localization result for the hyperbolic metric and the hyperbolic distance: \begin{theorem}[Localization Lemma]\label{Thm:localiz} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Let $p\in \partial_C\Delta$ and let $U^\ast$ be an open set in $\widehat{\Delta}$ which contains $p$. Assume that $U^\ast\cap \Delta$ is simply connected. Let $C>1$. Then there exists an open neighborhood $V^\ast\subset U^\ast$ of $p$ such that for every $z, w\in V^\ast\cap \Delta$ and all $v\in \mathbb C$, \begin{equation}\label{Eq:localization1} \kappa_\Delta(z;v)\leq \kappa_{U^\ast\cap \Delta}(z;v)\leq C \kappa_{\Delta}(z;v), \end{equation} \begin{equation}\label{Eq:localization2} k_\Delta(z,w)\leq k_{U^\ast\cap \Delta}(z,w)\leq C k_\Delta(z,w).
\end{equation} In particular, if $\Delta$ is a Jordan domain then for every $\sigma\in \partial_\infty\Delta$, for every $U\subset \mathbb C_\infty$ open set such that $\sigma\in U$ and $U\cap \Delta$ is simply connected, and every $C>1$, there exists an open neighborhood $V\subset U$ of $\sigma$ such that \eqref{Eq:localization1} and \eqref{Eq:localization2} hold (with $U^\ast=U$ and $V^\ast=V$). \end{theorem} \begin{proof} The inequalities on the left in \eqref{Eq:localization1} and \eqref{Eq:localization2} follow immediately from the decreasing properties of the infinitesimal metric and of the distance. As for the inequalities on the right, it is enough to prove them for $\Delta=\mathbb D$. The identity map extends to a homeomorphism $\Phi$ between $\widehat{\mathbb D}$ and $\overline{\mathbb D}$. Hence, there exists an open set (in the Euclidean topology) $W\subset \mathbb C$ such that $\sigma:=\Phi(p)\in W$ and $\Phi(U^\ast)=W\cap \overline \mathbb D$. Since by hypothesis $U^\ast\cap \mathbb D$ is simply connected, then $\Phi(U^\ast\cap \mathbb D)=W\cap \mathbb D$ is simply connected as well. Now we use ideas probably well known to experts. However, we give a sketch here for the reader's convenience. First, one can prove that given $R>0$ such that $(\tanh R)^{-1}<C$, there exists an open set $X\subset W$, $\sigma\in X$, such that for every $z\in X\cap \mathbb D$ the hyperbolic disc $D^{hyp}(z,R):=\{w \in \mathbb D:k_{\mathbb D}(z,w) < R\}$ is contained in $W\cap \mathbb D$. This implies immediately that for all $z\in X\cap \mathbb D$ and $v\in \mathbb C$, \begin{equation}\label{Eq:prima-mericaW} \kappa_{W\cap \mathbb D}(z;v)\leq \kappa_{D^{hyp}(z,R)}(z;v)=(\tanh R)^{-1} \kappa_\mathbb D(z;v)<C \kappa_\mathbb D(z;v). \end{equation} Then, one can find $\epsilon\in (0,\pi/4)$ in such a way that, if $\gamma:(-\infty,+\infty)\to \mathbb D$ is the geodesic in $\mathbb D$ parameterized by arc length such that $\lim_{t\to-\infty}\gamma(t)=e^{\epsilon i}\sigma $ and $\lim_{t\to+\infty}\gamma(t)=e^{-\epsilon i}\sigma$ then $\gamma(\mathbb R)\subset X$. By Lemma \ref{Lem:total-geo-disc}, $\mathbb D\setminus \gamma(\mathbb R)$ is the union of two simply connected components. Since $\overline{\gamma(\mathbb R)}$ does not contain $\sigma$, it follows that $\sigma$ belongs to the closure of one and only one of the connected components of $\mathbb D\setminus \gamma(\mathbb R)$. Call $Y$ such a component. By Lemma \ref{Lem:total-geo-disc}, $Y$ is totally geodesic in $\mathbb D$. Therefore, for every $z,w\in Y$, the geodesic $\eta:[0,1]\to \mathbb D$ of $\mathbb D$ such that $\eta(0)=z, \eta(1)=w$ is contained in $Y\subset X\cap \mathbb D$. Hence, by \eqref{Eq:prima-mericaW}, \begin{equation*} \begin{split} k_{W\cap \mathbb D}(z,w)&\leq \ell_{W\cap \mathbb D}(\eta;[0,1])=\int_0^1\kappa_{W\cap \mathbb D}(\eta(t);\eta'(t))dt\\&\leq C\int_0^1\kappa_\mathbb D(\eta(t);\eta'(t))dt=Ck_\mathbb D(z,w). \end{split} \end{equation*} By the arbitrariness of $z,w$, setting $V^\ast:=\Phi^{-1}(\tilde{Y}\cap \overline{\mathbb D})$, where $\tilde{Y}$ is any open set in $\mathbb C$ such that $\tilde{Y}\cap \mathbb D=Y$, we are done. Finally, if $\Delta$ is a Jordan domain, the result follows since $\overline{\Delta}^\infty$ and $\widehat{\Delta}$ are homeomorphic. \end{proof} If $\Omega\subset \mathbb C$ is a domain, for $z\in \Omega$, we let \[ \delta_\Omega(z):=\inf_{w\in \mathbb C\setminus \Omega} |z-w|, \] the Euclidean distance from $z$ to the boundary $\partial \Omega$. 
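For later use, we record two elementary examples: for the right half-plane $\mathbb H:=\{z\in \mathbb C: {\sf Re}\, z>0\}$ one has
\[
\delta_{\mathbb H}(z)={\sf Re}\, z,
\]
while for a vertical strip $\{z\in \mathbb C: 0<{\sf Re}\, z<\rho\}$ of width $\rho>0$,
\[
\delta(z)=\min\{{\sf Re}\, z,\ \rho-{\sf Re}\, z\}.
\]
Both computations are immediate from the definition and will be used repeatedly in the sequel.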
\begin{theorem}\label{Thm:Distance-Lemma-inf} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Then for every $z\in \Delta$ and $v\in \mathbb C$, \[ \frac{|v|}{4\delta_\Delta(z)}\leq \kappa_\Delta(z;v)\leq \frac{|v|}{\delta_\Delta(z)}. \] Moreover, if $\Delta$ is convex, $ \kappa_\Delta(z;v)\geq \frac{|v|}{2\delta_\Delta(z)}$ for every $z\in \Delta$ and $v\in \mathbb C$. \end{theorem} \begin{proof}[Sketch of the proof] The lower estimate follows from the Koebe $1/4$-Theorem. The upper estimate follows at once since the Euclidean disc of center $z$ and radius $\delta_\Delta(z)$ is contained in $\Delta$. In case $\Delta$ is convex, take $z\in \Delta$ and let $p\in \partial \Delta$ be a point such that $|p-z|=\delta_\Delta(z)$. By convexity, $\Delta$ is contained in a half-plane whose boundary is a separating line for $\Delta$ at $p$. From this the lower estimate follows at once. \end{proof} Integrating the previous estimates, one has: \begin{theorem}[Distance Lemma]\label{Thm:Distance-Lemma} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Then for every $w_1, w_2\in \Delta$, \[ \frac{1}{4} \log \left(1+\frac{|w_1-w_2|}{\min\{\delta_\Delta(w_1), \delta_\Delta(w_2)\}} \right)\leq k_\Delta(w_1,w_2)\leq \int_{\Gamma}\frac{|dw|}{\delta_\Delta(w)}, \] where $\Gamma$ is any piecewise $C^1$-smooth curve in $\Delta$ joining $w_1$ to $w_2$. In case $\Delta$ is convex, one can replace $1/4$ with $1/2$ in the left-hand side of the previous inequality. \end{theorem} \section{Hyperbolic sectors and non-tangential limits}\label{hypse} The aim of this section is to provide an intrinsic way to define non-tangential limits in simply connected domains. More precisely, the question we settle here is the following: let $\Delta \subsetneq \mathbb C$ be a simply connected domain and $f:\mathbb D\to \Delta$ a Riemann map. Let $\{z_n\}\subset \Delta$ be a sequence such that $\{f^{-1}(z_n)\}$ converges to $\sigma\in \partial \mathbb D$. How is it possible to determine whether $\{f^{-1}(z_n)\}$ converges non-tangentially to $\sigma$ by looking at the geometry of $\Delta$? We start with a definition which allows us to extend the notion of non-tangential limit to any simply connected domain: \begin{definition} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain. Let $\gamma:(a,+\infty)\to \Delta$, $a\geq -\infty$, be a geodesic with the property that $\lim_{t\to +\infty}k_\Delta(\gamma(t),\gamma(t_{0}))=+\infty$, for some $t_{0}\in (a,+\infty)$. A {\sl hyperbolic sector around $\gamma$ of amplitude $R>0$} is \[ S_\Delta(\gamma, R):=\{w\in \Delta: k_\Delta(w, \gamma((a,+\infty)))<R\}. \] \end{definition} Now we aim to give a description of hyperbolic sectors. To this aim, it is useful to move our considerations to the right half-plane, where actual computations turn out to be easier. Let $\mathbb H:=\{z \in \mathbb C : {\sf Re}\,(z) > 0\}$. The map $z\mapsto \frac{1+z}{1-z}$ is a biholomorphism between $\mathbb D$ and $\mathbb H$. Hence, a direct computation shows that for $z,w\in \mathbb H$, $v\in \mathbb C$, \[ \kappa_\mathbb H(z;v)=\frac{|v|}{2{\sf Re}\, z}, \quad k_\mathbb H(z,w)=\frac{1}{2}\log \frac{1+\left|\frac{z-w}{z+\overline{w}} \right|}{1-\left|\frac{z-w}{z+\overline{w}} \right|}.
\] Moreover, since $\mathbb D$ and $\mathbb H$ are biholomorphic via a Moebius transformation, it follows easily that the geodesics in $\mathbb H$ are either intervals contained in semi-lines in $\mathbb H$ parallel to the real axis, or arcs in $\mathbb H$ of circles intersecting orthogonally the imaginary axis. \begin{lemma}\label{Lem:hyper-semipiano} Let $\beta\in (-\frac{\pi}{2},\frac{\pi}{2})$. \begin{enumerate} \item Let $0<\rho_0<\rho_1$ and let $\Gamma:=\{\rho e^{i\beta}: \rho_0\leq \rho\leq \rho_1\}$. Then, $\displaystyle{\ell_{\mathbb H}(\Gamma)=\frac{1}{2\cos \beta}\log\frac{\rho_1}{\rho_0}}$. \item Let $\rho_0, \rho_1>0$. Then, $\displaystyle{k_{\mathbb H}(\rho_0,\rho_1e^{i\beta})-k_{\mathbb H}(\rho_0,\rho_1)\geq \frac{1}{2}\log\frac{1}{\cos \beta}.}$ \item Let $\rho_0>0$ and $\alpha\in (-\frac{\pi}{2},\frac{\pi}{2})$. Then, $(0,+\infty)\ni \rho\mapsto k_{\mathbb H}(\rho e^{i\alpha},\rho_0e^{i\beta})$ has a minimum at $\rho=\rho_0$, it is increasing for $\rho>\rho_0$ and decreasing for $\rho<\rho_0$. \item Let $\theta_0, \theta_1\in (-\frac{\pi}{2},\frac{\pi}{2})$ and $\rho>0$. Then $k_{\mathbb H}(\rho e^{i\theta_0},\rho e^{i\theta_1})=k_{\mathbb H}(e^{i\theta_0},e^{i\theta_1})$. Moreover, $k_\mathbb H(1,e^{i\theta})=k_\mathbb H(1,e^{-i\theta})$ for all $\theta\in [0,\pi/2)$ and $[0, \pi/2)\ni \theta\mapsto k_\mathbb H(1,e^{i\theta})$ is strictly increasing. \item Let $\beta_0,\beta_1\in (-\frac{\pi}{2},\frac{\pi}{2})$ and $0<\rho_0<\rho_1$. Then $\displaystyle{ k_{\mathbb H}(\rho_0e^{i\beta_0},\rho_1e^{i\beta_1})\geq k_{\mathbb H}(\rho_0,\rho_1)}$. \end{enumerate} \end{lemma} \begin{proof} (1) Setting $\gamma(\rho):=\rho e^{i\beta}$, we have \[ \ell_\mathbb H(\Gamma)=\ell_\mathbb H(\gamma;[\rho_0,\rho_1])=\int_{\rho_0}^{\rho_1}\frac{1}{2{\sf Re}\, \rho e^{i\beta}}d\rho=\frac{1}{2\cos \beta}\log\frac{\rho_1}{\rho_0}. \] In particular, since for $\beta=0$, $\Gamma$ is a geodesic of $\mathbb H$, $\ell_\mathbb H(\Gamma;[\rho_0, \rho_1])=k_\mathbb H(\rho_0, \rho_1)$. (2) We have, \begin{equation*} \begin{split} k_{\mathbb H}(\rho_0,\rho_1e^{i\beta})&=\frac{1}{2}\log \frac{\left(1+\left\vert \frac{\rho_1e^{i\beta}-\rho_0}{\rho_1e^{i\beta}+\rho_0}\right\vert\right)^2}{1-\left\vert \frac{\rho_1e^{i\beta}-\rho_0}{\rho_1e^{i\beta}+\rho_0}\right\vert^2}=\frac{1}{2}\log \frac{\left(\left\vert{\rho_1e^{i\beta}+\rho_0}\right\vert+\left\vert \rho_1e^{i\beta}-\rho_0\right\vert\right)^2}{\left\vert {\rho_1e^{i\beta}+\rho_0}\right\vert^2-\left\vert {\rho_1e^{i\beta}-\rho_0}\right\vert^2}\\& = \frac{1}{2}\log \frac{\rho_0^2+\rho_1^2+\sqrt{\rho_1^4+\rho_0^4-2\rho_0^2\rho_1^2\cos(2\beta)}}{2\rho_0\rho_1\cos\beta}. \end{split} \end{equation*} Assume $\rho_0\leq \rho_1$ and set $x=\frac{\rho_0}{\rho_1}$ (in case $\rho_0>\rho_1$, set $x=\frac{\rho_1}{\rho_0}$). Hence, \[ k_{\mathbb H}(\rho_0,\rho_1e^{i\beta})-k_{\mathbb H}(\rho_0,\rho_1)= \frac{1}{2}\log \frac{1+x^2+\sqrt{1+x^4-2x^2\cos(2\beta)}}{2\cos \beta}. \] Since the numerator inside the logarithm is strictly increasing in $x$ and $x\in (0,1]$, the estimate follows. (3) We have \begin{equation*} k_{\mathbb H}(\rho e^{i\alpha},\rho_0e^{i\beta})=\frac{1}{2}\log \frac{1+\left\vert \frac{\rho_0e^{i\beta}-\rho e^{i\alpha}}{\rho_0e^{i\beta}+\rho e^{-i\alpha}}\right\vert}{1-\left\vert \frac{\rho_0e^{i\beta}-\rho e^{i\alpha}}{\rho_0e^{i\beta}+\rho e^{-i\alpha}}\right\vert}. 
\end{equation*} Since the derivative of $[0,1)\ni x\mapsto \frac{1}{2}\log \frac{1+x}{1-x}$ is strictly positive, it is enough to prove the statement for the function \[ (0,+\infty)\ni \rho\mapsto \frac{|\rho_0e^{i\beta}-\rho e^{i\alpha}|^2}{|\rho_0e^{i\beta}+\rho e^{-i\alpha}|^2}=\frac{\rho_0^2+\rho^2-2\rho\rho_0\cos(\beta-\alpha)}{\rho_0^2+\rho^2+2\rho\rho_0\cos(\beta+\alpha)}, \] and this follows immediately from a direct computation. (4) A straightforward computation from the very definition of the hyperbolic distance in $\mathbb H$ gives \[ k_{\mathbb H}(\rho e^{i\theta_0},\rho e^{i\theta_1})=\frac{1}{2}\log \frac{1+\frac{|e^{i\theta_0}-e^{i\theta_1}|}{|e^{i\theta_0}+e^{-i\theta_1}|}}{1-\frac{|e^{i\theta_0}-e^{i\theta_1}|}{|e^{i\theta_0}+e^{-i\theta_1}|}}=k_{\mathbb H}( e^{i\theta_0}, e^{i\theta_1}). \] This proves the first part of the statement. Alternatively, this follows from the fact that multiplication by $\rho$ is a biholomorphism of $\mathbb H$. Next, since \[ [0,\pi/2)\ni \theta\mapsto \left|\frac{e^{i\theta}-1}{e^{i\theta}+1}\right|=\sqrt{\frac{1-\cos \theta}{1+\cos \theta}} \] is strictly increasing, using the fact that $(0,1)\ni x\mapsto \frac{1}{2}\log \frac{1+x}{1-x}$ is strictly increasing in $x$ and from the very definition of $k_{\mathbb H}(1, e^{i\theta})$ it follows that $[0, \pi/2)\ni \theta\mapsto k_\mathbb H(1,e^{i\theta})$ is strictly increasing. Moreover, the previous formula also shows that $k_{\mathbb H}(1, e^{i\theta})=k_{\mathbb H}(1, e^{-i\theta})$ for all $\theta\in [0,\pi/2)$. (5) Using the fact that $(0,1)\ni x\mapsto \frac{1}{2}\log \frac{1+x}{1-x}$ is strictly increasing in $x$ and from the very definition of $k_{\mathbb H}$, it is enough to prove that \[ \frac{|e^{i\beta_0}\rho_0-e^{i\beta_1}\rho_1|}{|e^{i\beta_0}\rho_0+e^{-i\beta_1}\rho_1|}\geq \frac{\rho_1-\rho_0}{\rho_0+\rho_1}. \] Setting $a:=\rho_0^2+\rho_1^2$ and $b=2\rho_0\rho_1$, and taking squares in the previous inequality, this amounts to showing that \[ \frac{a-b\cos(\beta_0-\beta_1)}{a+b\cos(\beta_0+\beta_1)}\geq \frac{a-b}{a+b}. \] After simple computations, this is equivalent to \[ \cos\beta_1\cos\beta_0+\frac{b}{a}\sin\beta_1\sin\beta_0\leq 1. \] Since $b\leq a$, the result follows. \end{proof} Now we describe the shape of a hyperbolic sector in the half-plane. We need a definition: \begin{definition} For $\beta\in (0,\pi)$ and $r_0\in [0,+\infty)$, let \[ V(\beta, r_0):=\{\rho e^{i\theta}: \rho>r_0, |\theta|< \beta\} \] be the {\sl horizontal sector} of angle $2\beta$, symmetric with respect to the real axis and truncated at distance $r_0$ from the origin. \end{definition} \begin{lemma}\label{Lem:hyper-sector-inH} Let $\gamma:[0,+\infty)\to \mathbb H$ be a geodesic such that $\gamma([0,+\infty))=[r_0, +\infty)$ and $\gamma(0)=r_0$ for some $r_0>0$. Then for every $R>0$ there exists $\beta\in (0,\pi/2)$, with $k_\mathbb H(1,e^{i\beta})=R$, such that \begin{equation}\label{Stolz-hyperb} S_\mathbb H(\gamma, R)=V(\beta, r_0)\cup D^{hyp}_\mathbb H(r_0, R), \end{equation} where $D^{hyp}_\mathbb H(r_0, R):=\{w\in \mathbb H: k_\mathbb H(r_0, w)<R\}$ is the hyperbolic disc in $\mathbb H$ of center $r_0$ and radius $R$. \end{lemma} \begin{proof} Let $w\in \mathbb H$, $w=\rho e^{i\theta}$ for some $\rho>0$ and $\theta\in (-\pi/2,\pi/2)$. Hence, by Lemma \ref{Lem:hyper-semipiano}(3) and (4), \[ k_\mathbb H(w,(0,+\infty))=k_\mathbb H(\rho e^{i\theta}, \rho)=k_\mathbb H(e^{i\theta},1). \] Let $\beta\in (0,\pi/2)$ be such that $k_\mathbb H(1,e^{i\beta})=R$.
Therefore, given $\rho>0$, by Lemma \ref{Lem:hyper-semipiano}(4) and the previous equalities, $k_\mathbb H(\rho e^{i\theta}, (0,+\infty))<R$ if and only if $|\theta|<\beta$. This implies at once that $V(\beta, r_0)\subset S_\mathbb H(\gamma, R)$. Moreover, let $w\in D^{hyp}_\mathbb H(r_0, R)$. Hence, $M:=k_\mathbb H(r_0, w)<R$. Let $r\in (r_0, +\infty)$ be such that $k_\mathbb H(r,r_0)<R-M$. Hence, by the triangle inequality, \[ k_\mathbb H(w, r)\leq k_\mathbb H (w,r_0)+k_\mathbb H(r_0,r)<M+R-M=R, \] proving that $w\in S_\mathbb H(\gamma, R)$. Therefore, $V(\beta, r_0)\cup D^{hyp}_\mathbb H(r_0, R)\subset S_\mathbb H(\gamma, R)$. Now, let $w=\rho e^{i\theta}\in S_\mathbb H(\gamma, R)$ with $\rho>0$ and $\theta\in (-\pi/2, \pi/2)$. If $\rho>r_0$, by Lemma \ref{Lem:hyper-semipiano}(3) and (4), it follows immediately that $w\in V(\beta, r_0)$. If $\rho\leq r_0$, the condition $w\in S_\mathbb H(\gamma, R)$ implies that there exists $r\geq r_0$ such that $k_\mathbb H(w,r)<R$. Hence, by Lemma \ref{Lem:hyper-semipiano}(3), $k_\mathbb H(\rho e^{i\theta}, r_0)\leq k_\mathbb H(\rho e^{i\theta}, r)<R$ and $w\in D^{hyp}_\mathbb H(r_0, R)$. This proves that $S_\mathbb H(\gamma, R)\subset V(\beta, r_0)\cup D^{hyp}_\mathbb H(r_0, R)$. \end{proof} As a consequence, we have the following characterization of non-tangential convergence: \begin{proposition}\label{Prop:conv-nt-sc} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and let $f:\mathbb D \to \Delta$ be a Riemann map. Let $\{z_n\}\subset \Delta$ be a compactly divergent sequence. Then $\{f^{-1}(z_n)\}$ converges non-tangentially to $\sigma\in \partial \mathbb D$ if and only if there exist $R>0$ and a geodesic $\gamma:[0,+\infty)\to \Delta$ such that $\lim_{t\to +\infty}\gamma(t)=\hat{f}(\underline{x}_\sigma)$ in the Carath\'eodory topology of $\Delta$ and $\{z_n\}$ is eventually contained in $S_\Delta(\gamma, R)$. Here, $\hat{f}:\widehat{\mathbb D}\to \widehat{\Delta}$ is the homeomorphism induced by $f$ and $\underline{x}_\sigma\in \partial_C \mathbb D$ is the prime end of $\mathbb D$ corresponding to $\sigma$. \end{proposition} \begin{proof} Since the condition that $\{z_n\}$ is eventually contained in $S_\Delta(\gamma, R)$ is invariant under isometries for the hyperbolic distance and $f$ is an isometry between $\omega$ and $k_\Delta$, it is enough to prove the statement for $\Delta=\mathbb H$ and a Cayley transform $f:\mathbb D\to \mathbb H$ which maps $\sigma$ to $\infty$. Hence, $\{f^{-1}(z_n)\}$ converges non-tangentially to $\sigma$ if and only if $\{z_n\}$ is eventually contained in a horizontal sector in $\mathbb H$. The result then follows at once from Lemma \ref{Lem:hyper-sector-inH}. \end{proof} The previous result allows one to speak of non-tangential limits in simply connected domains, but, from a practical point of view, it is not very useful, since the characterization of hyperbolic sectors in a general simply connected domain is a very hard task. Still, we will see how, using localization, one can obtain useful conclusions. We start with the following localization result for hyperbolic sectors: \begin{lemma}\label{Lem:Stolz-qg} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and let $\gamma:[0,+\infty)\to \Delta$ be a geodesic such that $\lim_{t\to +\infty}k_\Delta(\gamma(0), \gamma(t))=+\infty$. Let $R>0$.
Then, for every $0<R_0<R$ there exists $C>1$ such that \begin{enumerate} \item for every $z\in S_\Delta(\gamma, R_0)$ and $v\in \mathbb C$, \begin{equation}\label{eq:estima-sector} \kappa_\Delta(z;v)\leq \kappa_{S_\Delta(\gamma, R)}(z;v)\leq C \kappa_\Delta(z;v), \end{equation} \item for every $z, w\in S_\Delta(\gamma, R_0)$, \begin{equation}\label{eq:estima-sector2} k_\Delta(z, w)\leq k_{S_\Delta(\gamma, R)}(z, w)\leq C k_\Delta(z,w). \end{equation} \end{enumerate} \end{lemma} \begin{proof} The left hand side inequalities follow at once since $S_\Delta(\gamma, R)\subset \Delta$. As for the right hand side inequalities in \eqref{eq:estima-sector} and \eqref{eq:estima-sector2}, since univalent maps are isometries for the hyperbolic distance, we can assume that $\Delta=\mathbb H$ and $\gamma([0,+\infty))=[1,+\infty)$. By Lemma \ref{Lem:hyper-sector-inH}, \[ S:=S_\mathbb H(\gamma, R)=V(\beta, 1)\cup D^{hyp}_\mathbb H(1, R), \] for some $\beta\in (0,\pi/2)$, and $S_\mathbb H(\gamma, R_0)=V(\beta', 1)\cup D^{hyp}_\mathbb H(1, R_0)$ for some $\beta'\in (0,\beta)$. Taking into account that $\overline{D^{hyp}_\mathbb H(1, R_0)}\subset D^{hyp}_\mathbb H(1, R)$, it follows at once that $S_\mathbb H(\gamma, R_0)\cap\{w\in S: |w|\leq M\}$ is relatively compact in $S$ for every $M>1$. Therefore, given $M>1$ there exists $C$ (which depends on $M$) such that \eqref{eq:estima-sector} holds for every $z\in S_\mathbb H(\gamma, R_0)\cap\{w\in S: |w|\leq M\}$ and every $v\in \mathbb C$. Fix $M>1$ such that $\delta_S(z)=\delta_{V(\beta, 1)}(z)$ for all $z\in V(\beta',M)$. By the previous argument, we only need to prove that \eqref{eq:estima-sector} holds for $z\in V(\beta', M)$. Let $z\in V(\beta', M)$ and $v\in \mathbb C\setminus\{0\}$. By Theorem \ref{Thm:Distance-Lemma-inf}, \begin{equation}\label{Eq:estima-k-hyp-sect1} \frac{\kappa_S(z;v)}{\kappa_\mathbb H(z;v)}\leq 4\frac{\delta_\mathbb H(z)}{\delta_S(z)}=4\frac{\delta_\mathbb H(z)}{\delta_{V(\beta, 1)}(z)}. \end{equation} Now, let $z\in V(\beta', M)$ and let $q_z\in \partial V(\beta,1)$ be such that $|z-q_z|=\delta_{V(\beta, 1)}(z)$. If we write $z=\rho e^{i\theta}$ with $\rho>M$ and $|\theta|<\beta'$, assuming $\theta\geq 0$ (the case $\theta<0$ is similar), a simple computation shows that \[ q_z-z=\rho\cos \beta \cos \theta (\tan\theta-\tan\beta)(\sin\beta-i\cos\beta). \] Hence, \[ \delta_{V(\beta, 1)}(z)=|q_z-z|=\rho \cos \beta \cos \theta (\tan\beta-\tan\theta)\geq \rho \cos \beta \cos \theta (\tan\beta-\tan\beta'). \] Since $\delta_\mathbb H(z)={\sf Re}\, z=\rho\cos\theta$, we have \[ \frac{\delta_\mathbb H(z)}{\delta_{V(\beta, 1)}(z)}\leq \frac{1}{\cos\beta(\tan\beta-\tan\beta')}, \] and the right hand side inequality in \eqref{eq:estima-sector} follows at once from \eqref{Eq:estima-k-hyp-sect1}. We are left to prove the right hand side inequality in \eqref{eq:estima-sector2}. To this aim, we claim that $S_\mathbb H(\gamma, R_0)$ is totally geodesic in $\mathbb H$. Assuming the claim for the moment, let $z,w \in S_\mathbb H(\gamma, R_0)$ and let $\eta:[0,1]\to \mathbb H$ be a geodesic such that $\eta(0)=z$ and $\eta(1)=w$. By the claim, $\eta([0,1])\subset S_\mathbb H(\gamma, R_0)$. Hence, by \eqref{eq:estima-sector}, \[ k_{S_\mathbb H(\gamma, R)}(z,w)\leq \int_0^1\kappa_{S_\mathbb H(\gamma, R)}(\eta(t);\eta'(t))dt \leq C \int_0^1\kappa_{\mathbb H}(\eta(t);\eta'(t))dt=C k_\mathbb H(z,w), \] and the right hand side inequality in \eqref{eq:estima-sector2} follows. Let us prove the claim.
Since geodesics of $\mathbb H$ are either contained in half lines parallel to the real axis or in arcs of circles which intersect orthogonally the imaginary axis, it is clear that $V(\beta',0)$ is totally geodesic in $\mathbb H$. Next, since $\{\zeta\in \mathbb C: |\zeta|=r\}\cap \mathbb H$ is a geodesic in $\mathbb H$ for all $r>0$, it follows by Lemma \ref{Lem:total-geo-disc} that $\{w\in \mathbb H: |w|>1\}$ is totally geodesic in $\mathbb H$, hence, $V(\beta',1)=V(\beta',0)\cap \{w\in \mathbb H: |w|>1\}$ is totally geodesic in $\mathbb H$. Moreover, $D^{hyp}_\mathbb H(1, R_0)$ is totally geodesic in $\mathbb H$ --- this can be easily seen by proving that any hyperbolic disc in $\mathbb D$ centered at $0$ is totally geodesic and using a Cayley transform to move to $\mathbb H$. Therefore, we only have to show that if $z\in D^{hyp}_\mathbb H(1, R_0)\setminus V(\beta',1)$ and $w\in V(\beta',1)\setminus D^{hyp}_\mathbb H(1, R_0)$, the geodesic $\eta:[0,1]\to \mathbb H$ for $\mathbb H$ such that $\eta(0)=z$ and $\eta(1)=w$ is contained in $V(\beta',1)\cup D^{hyp}_\mathbb H(1, R_0)$. To this aim, we first observe that $D^{hyp}_\mathbb H(1, R_0)\subset V(\beta',0)$. Indeed, if $\rho e^{i\theta}\in D^{hyp}_\mathbb H(1, R_0)$ for some $\rho>0$ and $\theta\in (-\pi/2, \pi/2)$, then by Lemma \ref{Lem:hyper-semipiano}(3), \[ k_\mathbb H(\rho, \rho e^{i\theta})\leq k_\mathbb H(1, \rho e^{i\theta})<R_0. \] This, together with Lemma \ref{Lem:hyper-semipiano}(4) and Lemma \ref{Lem:hyper-sector-inH}, proves that \[ k_\mathbb H(1, e^{i\theta})=k_\mathbb H(\rho,\rho e^{i\theta})<R_0=k_\mathbb H(1, e^{i\beta'}), \] and hence $|\theta|<\beta'$. That is, $\rho e^{i\theta}\in V(\beta',0)$. Therefore, since $V(\beta',0)$ is totally geodesic in $\mathbb H$, \begin{equation}\label{Eq:sta-inV-eq1} \eta([0,1])\subset V(\beta',0). \end{equation} Hence, if $\eta([0,1])\not\subset V(\beta',1)\cup D^{hyp}_\mathbb H(1, R_0)$, then, by \eqref{Eq:sta-inV-eq1}, there exists $s\in (0,1)$ such that $|\eta(s)|<1$ and $\eta(s)\not\in D^{hyp}_\mathbb H(1, R_0)$. Now, the arc $(-\beta', \beta')\ni \theta\mapsto e^{i\theta}$ is contained in $D^{hyp}_\mathbb H(1, R_0)$ by Lemma \ref{Lem:hyper-semipiano}(4), and divides $V(\beta',0)$ into two connected components, which are $V(\beta',1)$ and $V(\beta',0)\setminus \overline{V(\beta',1)}$. Since $\eta([0,1])$ is connected, there exists $s'\in (s,1)$ such that $|\eta(s')|=1$---hence, $\eta(s')\in D^{hyp}_\mathbb H(1, R_0)$. But then, $\eta|_{[0,s']}$ is a geodesic in $\mathbb H$ which joins $z, \eta(s')\in D^{hyp}_\mathbb H(1, R_0)$ but it is not contained in $D^{hyp}_\mathbb H(1, R_0)$, contradicting the fact that $D^{hyp}_\mathbb H(1, R_0)$ is totally geodesic in $\mathbb H$. Therefore, $\eta([0,1])\subset V(\beta',1)\cup D^{hyp}_\mathbb H(1, R_0)$ and the claim follows. \end{proof} \begin{remark}\label{Rem:disco-in-sector-qg} The last part of the proof of the previous lemma shows in particular that $D^{hyp}_\mathbb H(1, R_0)\subset V(\beta',0)$, where $k_\mathbb H(1, e^{i\beta'})=R_0$. Note that, by Lemma \ref{Lem:hyper-semipiano}(4), \[ V(\beta',0)=\{z\in \mathbb H: k_\mathbb H(z, (0,+\infty))<R_0\}. 
\] Making use of Riemann mappings, we conclude that if $\Delta\subsetneq \mathbb C$ is a simply connected domain and $\gamma:(0,+\infty)\to \Delta$ is a geodesic such that $\lim_{t\to 0^+}k_\Delta(\gamma(t), \gamma(1))=\lim_{t\to +\infty}k_\Delta(\gamma(t), \gamma(1))=+\infty$, then for every $t\in (0,+\infty)$, \[ D^{hyp}_\Delta(\gamma(t), R_0)\subset \{z\in \Delta: k_\Delta(z, \gamma((0,+\infty)))<R_0\}. \] \end{remark} We next present two consequences of Lemma \ref{Lem:Stolz-qg}: \begin{proposition}\label{Prop:suff-qc-cont} Let $\Delta, U\subsetneq \mathbb C$ be two simply connected domains. Let $R>0$ and let $\gamma:[0,+\infty)\to U$ be a geodesic in $U$ such that $\lim_{t\to+\infty}k_U(\gamma(t), \gamma(0))=+\infty$. Suppose \[ S_U(\gamma, R)\subset\Delta\subseteq U. \] Then there exists $C>1$ such that for every $0\leq T<+\infty$ the curve $[0,T]\ni t\mapsto \gamma(t)$ is a $(C,0)$-quasi-geodesic in $\Delta$. In particular, if $f:\mathbb D \to \Delta$ is a Riemann map, then $f^{-1}(\gamma(t))$ converges non-tangentially to a point $\sigma\in \partial \mathbb D$. \end{proposition} \begin{proof} Fix $R_0\in (0, R)$. Note that $\gamma(t)\in S_U(\gamma, R_0)$ for all $t\in [0,+\infty)$. Hence, by Lemma \ref{Lem:Stolz-qg}, and taking into account that $\gamma$ is a geodesic in $U$, for every $0\leq s\leq t<+\infty$, we have \begin{equation*} \begin{split} \ell_\Delta(\gamma;[s,t])&=\int_s^t\kappa_{\Delta}(\gamma(u);\gamma'(u))du\leq \int_s^t\kappa_{S_U(\gamma, R)}(\gamma(u);\gamma'(u))du \\&\leq C\int_s^t\kappa_{U}(\gamma(u);\gamma'(u))du=C\ell_U(\gamma;[s,t])\\&=Ck_U(\gamma(s),\gamma(t))\leq C k_\Delta(\gamma(s), \gamma(t)), \end{split} \end{equation*} which shows that $\gamma:[0, T]\to \Delta$ is a $(C,0)$-quasi-geodesic in $\Delta$ for all $T>0$. Since $\Delta\subset U$, we have \[ \lim_{t\to+\infty}k_\Delta(\gamma(0), \gamma(t))\geq \lim_{t\to+\infty}k_U(\gamma(0), \gamma(t))=+\infty. \] Hence, by Corollary \ref{Cor:shadow}, there exist a geodesic $\eta:[0,+\infty)\to \Delta$ for $\Delta$ and $\delta>0$ such that $\eta(0)=\gamma(0)$, $\lim_{t\to +\infty}k_\Delta(\eta(0), \eta(t))=+\infty$ and $k_\Delta(\gamma(s), \eta([0,+\infty)))<\delta$ for every $s\in [0,+\infty)$. Proposition \ref{Prop:conv-nt-sc} then implies the final statement. \end{proof} \begin{theorem}\label{Thm:nec-suff-non-tg} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and let $f:\mathbb D\to \Delta$ be a Riemann map. Let $\{z_n\}\subset \Delta$ be a compactly divergent sequence. Then $\{f^{-1}(z_n)\}$ converges non-tangentially to a point $\sigma\in \partial \mathbb D$ if and only if there exist a simply connected domain $U\subsetneq \mathbb C$, a geodesic $\gamma:[0,+\infty)\to U$ of $U$ such that $\lim_{t\to+\infty}k_U(\gamma(t), \gamma(0))=+\infty$ and $R>R_0>0$ such that \begin{enumerate} \item $S_U(\gamma, R)\subset\Delta\subseteq U$, \item there exists $n_0\geq 0$ such that $z_n\in S_U(\gamma, R_0)$ for all $n\geq n_0$. \end{enumerate} \end{theorem} \begin{proof} If $\{f^{-1}(z_n)\}$ converges non-tangentially to a point in $\partial \mathbb D$, then the result follows trivially by taking $U=\Delta$ and appealing to Proposition \ref{Prop:conv-nt-sc}.
Conversely, since $S_U(\gamma, R)\subset \Delta\subset U$, by Proposition \ref{Prop:suff-qc-cont}, there exists $C>1$ such that the curve $[0,T]\ni r\mapsto \gamma(r)$ is a $(C,0)$-quasi-geodesic in $\Delta$ for all $T>0$ and, arguing as in the last part of the proof of Proposition \ref{Prop:suff-qc-cont}, we find a geodesic $\eta:[0,+\infty)\to \Delta$ for $\Delta$ and $\delta>0$ such that $\eta(0)=\gamma(0)$, $\lim_{t\to +\infty}k_\Delta(\eta(0), \eta(t))=+\infty$ and $k_\Delta(\gamma(s), \eta([0,+\infty)))<\delta$ for every $s\in [0,+\infty)$. Fix $n\geq n_0$. By hypothesis, there exists $s_n\in [0,+\infty)$ such that $k_U(z_n, \gamma(s_n))<R_0$. Hence, by Lemma \ref{Lem:Stolz-qg}, \[ k_\Delta(z_n, \gamma(s_n))\leq k_{S_U(\gamma, R)}(z_n, \gamma(s_n))\leq C k_U(z_n, \gamma(s_n))<CR_0. \] Let $u_n\in [0,+\infty)$ be such that $k_\Delta(\gamma(s_n), \eta(u_n))<\delta$. Then, \[ k_\Delta(z_n, \eta(u_n))\leq k_\Delta(z_n, \gamma(s_n))+k_\Delta(\gamma(s_n), \eta(u_n))<CR_0+\delta. \] By the arbitrariness of $n$, this proves that for $n\geq n_0$, $z_n\in S_\Delta(\eta, CR_0+\delta)$. Proposition \ref{Prop:conv-nt-sc} then implies the statement. \end{proof} The previous results have practical applications. For instance, if $\Delta\subset \mathbb C$ is a simply connected domain such that $V(\beta, 0)\subset \Delta\subset \mathbb H$, for some $\beta\in (0, \pi/2)$, then the curve $(0,+\infty)\ni t\mapsto t$ is a quasi-geodesic in $\Delta$ and its pre-image via a Riemann map converges non-tangentially to the boundary. When dealing with the slope problem for semigroups, it is useful to consider other simple domains, and we conclude this section with a corollary which will be useful later on. \begin{definition} The {\sl Koebe domain with base point $p\in \mathbb C$} is \[ \mathcal K_p:=\mathbb C\setminus\{\zeta\in \mathbb C: {\sf Re}\, \zeta={\sf Re}\, p, {\sf Im}\,\zeta\leq {\sf Im}\, p\}. \] \end{definition} Since $\mathcal K_p$ is symmetric with respect to the line $\{\zeta\in \mathbb C: {\sf Re}\, \zeta={\sf Re}\, p\}$, it follows that the curve $\gamma_p:(0,+\infty)\ni t\mapsto p+it$ is a geodesic in $\mathcal K_p$. A simple direct computation, using the Riemann map $f:\zeta\mapsto \sqrt{-i\zeta}$ from $\mathcal K_0$ to $\mathbb H$, shows the following: \begin{lemma}\label{Lem:sector-koebe} Let $p\in \mathbb C$ and let $R>0$. Fix $t_0>0$ and let $\gamma_p:[t_0,+\infty)\to \mathcal K_p$ be given by $\gamma_p(t)=p+it$, $t\geq t_0$. Then there exists $\beta\in (0,\pi)$ such that \[ S_{\mathcal K_p}(\gamma_p, R)=\left((p+iV(\beta, 0))\setminus \{\zeta\in \mathbb C: |\zeta-p|\leq t_0\}\right)\cup D^{hyp}_{\mathcal K_p}(it_0+p, R). \] \end{lemma} The following corollary gives a simple geometric condition for the preimage of a line to converge non-tangentially to the boundary of the disc: \begin{corollary}\label{Cor:sector-implies-nt} Let $\Delta\subsetneq \mathbb C$ be a simply connected domain and $f:\mathbb D \to \Delta$ a Riemann map. Suppose there exists $p\in \mathbb C$ such that $\{p-it, t\geq 0\}\subset \mathbb C\setminus \Delta$ and $\{p+it, t> 0\}\subset \Delta$. If there exist $N\geq 0$ and $\beta\in (0,\pi)$ such that $p+iN+iV(\beta,0)\subset \Delta$, then there exist $C>1$, $N'>0$, such that for every $T>N'$ the curve $[N', T]\ni t\mapsto p+it$ is a $(C,0)$-quasi-geodesic in $\Delta$. In particular, there exists $\sigma\in \partial \mathbb D$ such that $(0,+\infty)\ni t\mapsto f^{-1}(p+it)$ converges non-tangentially to $\sigma$ as $t\to +\infty$.
\end{corollary} \begin{proof} By hypothesis, $\Delta\subseteq \mathcal K_p$ and for every $a\geq 0$ the curve $\gamma_a:(a,+\infty)\to \mathcal K_p$, $\gamma_a(t)=p+it$, is a geodesic in $\mathcal K_p$. Therefore, according to Proposition \ref{Prop:suff-qc-cont}, it is enough to show that there exist $a\geq 0$ and $R>0$ such that $S_{\mathcal K_p}(\gamma_a, R)\subset \Delta$. Let $\beta'\in (0,\beta)$. Let $w'$ be the point of intersection between $\{z=p+i\rho e^{i\beta'}, \rho>0\}$ and $\{z=p+iN+i\rho e^{i\beta}, \rho>0\}$. Let $R:=\inf_{\rho>0}k_{\mathcal K_p}(w', p+i\rho)$. By Remark \ref{Rem:disco-in-sector-qg}, for all $t>0$, \begin{equation}\label{Eq:palla-koebe-insect} D^{hyp}_{\mathcal K_p}(it+p, R)\subset \{z\in \mathcal K_p: k_{\mathcal K_p}(z, p+i(0,+\infty))<R\}=p+iV(\beta',0), \end{equation} where the last equality follows at once using the biholomorphism $\mathcal K_p \ni \zeta\mapsto \sqrt{-i\zeta}\in \mathbb H$. Let $t_0>N$. Since $\lim_{t\to +\infty}k_{\mathcal K_p}(p+it_0, p+it)=+\infty$ and \[ \left(p+iV(\beta',0)\right)\cap \{z\in \mathbb C: |z-p|>|w'-p|\}\subset p+iN+iV(\beta, 0), \] there exists $N'>0$ such that $D^{hyp}_{\mathcal K_p}(iN'+p, R)\subset p+iN+iV(\beta, 0)$. Hence, \begin{equation*} \begin{split} &\left((p+iV(\beta', 0))\setminus \{\zeta\in \mathbb C: |\zeta-p|\leq N'\}\right)\cup D^{hyp}_{\mathcal K_p}(p+iN', R)\\& \subset p+iN+iV(\beta,0) \subset \Delta, \end{split} \end{equation*} which, by Lemma \ref{Lem:sector-koebe}, implies that $S_{\mathcal K_p}(\gamma_{N'}, R)\subset \Delta$, and we are done. \end{proof} As a corollary of the previous results we have: \begin{proposition}\label{sector-implies-convergnt} Let $(\phi_t)$ be a non-elliptic semigroup in $\mathbb D$, let $h$ be its Koenigs function, $\Omega=h(\mathbb D)$ and $\tau\in \partial \mathbb D$ its Denjoy-Wolff point. If there exist $\beta\in (0,\pi/2)$ and $q\in \Omega$ such that $q+iV(\beta,0)\subset \Omega$, then $\phi_t(z)$ converges non-tangentially to $\tau$ for all $z\in \mathbb D$. \end{proposition} \begin{proof} Let $p\in \partial \Omega$. Then $\Omega\subset \mathcal K_p$. Let $w_0$ be the point of intersection of the line $\{\zeta\in \mathbb C: {\sf Re}\, \zeta={\sf Re}\, p\}$ and the boundary of the sector $q+iV(\beta,0)$. Since the domain is starlike at infinity, it follows at once that $w_0+iV(\beta,0)\subset \Omega$. Let $\alpha\in (0,\beta)$. Let $w'$ be the point of intersection between $\{z=p+i\rho e^{i\alpha}, \rho>0\}$ and $\{z=w_0+i\rho e^{i\beta}, \rho>0\}$. Let $R:=\inf_{\rho>0}k_{\mathcal K_p}(w', p+i\rho)$. Finally, since $D^{hyp}_{\mathcal K_p}(it+p, R)\subset p+iV(\alpha, 0)$ for all $t>0$, there exists $t_0\in \mathbb R$ such that $D^{hyp}_{\mathcal K_p}(p+it_0, R)\subset w_0+iV(\beta,0)$. Hence, \begin{equation}\label{eq:insidekoe} \left((p+iV(\alpha, 0))\setminus \{\zeta\in \mathbb C: |\zeta-p|\leq t_0\}\right)\cup D^{hyp}_{\mathcal K_p}(p+it_0, R)\subset w_0+iV(\beta,0)\subset \Omega. \end{equation} Let $\gamma:[0,+\infty)\to \mathcal K_p$ be given by $\gamma(t)=p+i(t_0+t)$. The curve $\gamma$ is a geodesic for $\mathcal K_p$ such that $\lim_{t\to+\infty}k_{\mathcal K_p}(\gamma(0), \gamma(t))=+\infty$. By \eqref{eq:insidekoe} and Lemma \ref{Lem:sector-koebe}, $S_{\mathcal K_p}(\gamma, R)\subset \Omega$. Moreover, if $z_0\in \mathbb D$ is such that $h(z_0)=p+it_0$, then $h(z_0)+it\in S_{\mathcal K_p}(\gamma, R)$ for all $t\geq 0$. Hence, by Theorem \ref{Thm:nec-suff-non-tg}, $\phi_t(z_0)$ converges non-tangentially to $\tau$, and hence $\phi_t(z)$ converges non-tangentially to $\tau$ for all $z\in \mathbb D$.
\end{proof} As a direct corollary of the previous proposition we have: \begin{corollary} Let $(\phi_t)$ be a non-elliptic semigroup in $\mathbb D$, let $h$ be its Koenigs function, $\Omega=h(\mathbb D)$ and $\tau\in \partial \mathbb D$ its Denjoy-Wolff point. Suppose $w_0\in \Omega$. If \[ \liminf_{t\to+\infty}\frac{\delta_\Omega(w_0+it)}{t}>0 \] then $\phi_t(z)$ converges to $\tau$ non-tangentially for all $z\in \mathbb D$. \end{corollary} \section{Good boxes and localization}\label{good} The goal of this section is to prove that if a simply connected domain contains a rectangle whose height is much larger than its base---a ``good box''---then the hyperbolic geometry of the domain inside the rectangle is similar to that of a strip. We start by discussing the hyperbolic geometry of the strip. \begin{definition} For $\rho>0$ we define the {\sl strip of width $\rho$} \index{Strip} \[ \mathbb{S}_\rho:=\{\zeta\in \mathbb C: 0<{\sf Re}\, \zeta<\rho\}. \] For $\rho=1$ we simply write $\mathbb{S}:=\mathbb{S}_1$. \end{definition} \begin{proposition}\label{Prop:strip} Let $a\in \mathbb R$ and $R>0$. \begin{enumerate} \item The curve $\gamma_0:\mathbb R\ni t\mapsto a+\frac{R}{2}+it$ is a geodesic of $\mathbb{S}_R+a$ and, for every $s<t$, \[ k_{\mathbb{S}_R+a}(a+\frac{R}{2}+is,a+\frac{R}{2}+it)=\frac{\pi (t-s)}{2R}. \] \item For every $z\in \mathbb{S}_R+a$, the orthogonal projection $\pi_{\gamma_0}(z)$ of $z$ onto $\gamma_0$, {\sl i.e.}, the (only) point $\pi_{\gamma_0}(z)$ such that $k_{\mathbb{S}_R+a}(z,\gamma_0)= k_{\mathbb{S}_R+a}(z,\pi_{\gamma_0}(z))$, is \[ \pi_{\gamma_0}(z)=a+\frac{R}{2}+i{\sf Im}\, z. \] \item For every $y\in \mathbb R$, the curve $(-\frac{R}{2},\frac{R}{2})\ni s\mapsto s+a+\frac{R}{2}+iy$ is a geodesic of $\mathbb{S}_R+a$ and for all $s_1, s_2\in (-\frac{R}{2},\frac{R}{2})$, $0<s_2<s_1$ or $s_1<s_2<0$, \[ \frac{1}{2}\log \frac{R-2|s_2|}{R-2|s_1|}\leq k_{\mathbb{S}_R+a}(s_1+a+\frac{R}{2}+iy, s_2+a+\frac{R}{2}+iy)\leq \log \frac{R-2|s_2|}{R-2|s_1|}. \] \item For every $\delta>0$, the hyperbolic sector $S_{\mathbb{S}_R+a}(\gamma_0, \delta)=\mathbb{S}_r+a+\frac{R-r}{2}$ for some $r<R$. Moreover, if $z\in S_{\mathbb{S}_R+a}(\gamma_0, \delta)$, then $|{\sf Re}\, z-a-\frac{R}{2}|<\frac{R}{2}(1-e^{-2\delta})$. If instead $z\not\in S_{\mathbb{S}_R+a}(\gamma_0, \delta)$, then, setting $u(z)=\hbox{sgn}({\sf Re}\, z-a-\frac{R}{2})$, \[ k_{\mathbb{S}_R+a}(z,S_{\mathbb{S}_R+a}(\gamma_0, \delta))=k_{\mathbb{S}_R+a}(z, a+\frac{R}{2}+u(z) r+i{\sf Im}\, z). \] \item For every $M_1,M_2\in \mathbb R$ with $M_2>M_1$, let \[ Q(M_1,M_2):=\inf \{k_{\mathbb{S}_R+a}(z,w): z,w\in \mathbb{S}_R+a,{\sf Im}\, z\leq M_1, {\sf Im}\, w\geq M_2\}. \] Then \[ Q(M_1,M_2) =\frac{\pi(M_2-M_1)}{2R}. \] \item For every $\delta>0$ and $N_0>0$ there exists $N>N_0$, which does not depend on $R, a$, such that for every $M_1, M_2\in \mathbb R$ with $M_2-M_1> RN$ there exists $q\in (M_1, M_2-RN_0)$ such that every geodesic $\gamma$ in $\mathbb{S}_R+a$ joining two points $z, w\in \mathbb{S}_R+a$ with ${\sf Im}\, w>M_2$ and ${\sf Im}\, z<M_1$ satisfies $\gamma\cap \{\zeta\in \mathbb C: q<{\sf Im}\, \zeta< q+RN_0\}\subset S_{\mathbb{S}_R+a}(\gamma_0, \delta)$. \item For every $z, w\in \mathbb{S}_R+a$ with ${\sf Im}\, w\geq {\sf Im}\, z$, the geodesic joining $z$ and $w$ is contained in $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, z\leq {\sf Im}\, \zeta\leq {\sf Im}\, w\}$.
\end{enumerate} \end{proposition} \begin{proof} The holomorphic function $f:\mathbb H\ni z\mapsto \frac{Ri}{\pi}\log z+\frac{R}{2}+a$ is a biholomorphism from $\mathbb H$ to $\mathbb{S}_R+a$. (1) Since $\mathbb{S}_R+a$ is symmetric with respect to the line $\{z\in \mathbb C: {\sf Re}\, z=a+\frac{R}{2}\}$, it follows that $\gamma_0$ is a geodesic. The formula for the hyperbolic distance follows at once by a direct computation using $f$ and the corresponding expression of $k_\mathbb H$. (2) Using the biholomorphism $f$, this amounts to proving that for every $\rho_1>\rho_2>0$ and $\theta_1, \theta_2\in (-\pi/2,\pi/2)$ we have \[ k_{\mathbb H}(\rho_1 e^{i\theta_1}, \rho_2 e^{i\theta_2})\geq k_\mathbb H(\rho_1,\rho_2), \] which follows directly from Lemma \ref{Lem:hyper-semipiano}(3). (3) By symmetry, for every $y\in \mathbb R$, the curve $\eta:(-\frac{R}{2},\frac{R}{2})\ni s\mapsto s+a+\frac{R}{2}+iy$ is a geodesic. Let $-\frac{R}{2}<s_1<s_2<\frac{R}{2}$. By Theorem \ref{Thm:Distance-Lemma}, \[ \frac{1}{2}\int_{s_1}^{s_2}\frac{|\eta'(s)|}{\delta_{\mathbb{S}_R+a}(\eta(s))}ds\leq k_{\mathbb{S}_R+a}(\eta(s_1), \eta(s_2))\leq \int_{s_1}^{s_2}\frac{|\eta'(s)|}{\delta_{\mathbb{S}_R+a}(\eta(s))}ds. \] A simple geometric consideration shows that $\delta_{\mathbb{S}_R+a}(\eta(s))=\frac{R}{2}-|s|$. Hence, the estimates follow from a direct computation. (4) Using again the biholomorphism $f$, it is easy to see that $\gamma_0$ corresponds to the geodesic $(0,+\infty)$ in $\mathbb H$ and by Lemma \ref{Lem:hyper-sector-inH}, \[ f^{-1}(S_{\mathbb{S}_R+a}(\gamma_0, \delta))=S_\mathbb H((0,+\infty), \delta)=V(\beta,0) \] for some $\beta\in (0,\pi/2)$. Hence, $S_{\mathbb{S}_R+a}(\gamma_0, \delta)=\mathbb{S}_r+a+\frac{R-r}{2}$, for some $r<R$. Next, assume $z\in S_{\mathbb{S}_R+a}(\gamma_0, \delta)$ and let $s_1:={\sf Re}\, z-a-\frac{R}{2}$. Hence, by (2), \begin{equation*} \begin{split} \delta&>k_{\mathbb{S}_R+a}(z, \gamma_0)=k_{\mathbb{S}_R+a}(z, a+\frac{R}{2}+i{\sf Im}\, z)\\&=k_{\mathbb{S}_R+a}(s_1+a+\frac{R}{2}+i{\sf Im}\, z, a+\frac{R}{2}+i{\sf Im}\, z), \end{split} \end{equation*} and from the lower estimate in (3) we obtain \[ \frac{1}{2}\log \frac{R}{R-2|s_1|}<\delta. \] A direct computation shows that this is equivalent to $|{\sf Re}\, z-a-\frac{R}{2}|<\frac{R}{2}(1-e^{-2\delta})$. Finally, if $z\not\in S_{\mathbb{S}_R+a}(\gamma_0, \delta)$, using again $f$, the problem reduces to showing that, given $\rho e^{i\theta}=f^{-1}(z)$, with $\rho>0$ and $\theta\in (\beta, \pi/2)$ (the case $\theta\in (-\pi/2,\beta)$ is analogous), \[ k_\mathbb H(\rho e^{i\theta}, V(\beta,0))=k_\mathbb H(\rho e^{i\theta}, \rho e^{i\beta}). \] This follows at once from Lemma \ref{Lem:hyper-semipiano}(3). (5) It is clear that $Q(M_1,M_2)=\inf \{k_{\mathbb{S}_R+a}(z,w): z,w\in \mathbb{S}_R+a,{\sf Im}\, z= M_1, {\sf Im}\, w= M_2\}$. Using the biholomorphism $f$, we see that $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, \zeta=M_j\}$ is mapped by $f^{-1}$ onto $\{\rho_j e^{i\theta}: \theta\in (-\pi/2,\pi/2)\}$ for some $0<\rho_1<\rho_2$. Hence, the statement is equivalent to \[ \inf_{\theta, \tilde\theta\in (-\pi/2,\pi/2)}k_{\mathbb H}(\rho_1 e^{i\theta}, \rho_2 e^{i\tilde\theta})\geq k_\mathbb H(\rho_1, \rho_2), \] which follows from Lemma \ref{Lem:hyper-semipiano}(5). (6) Fix $\delta>0, N_0>0$. We already saw that $f^{-1}(S_{\mathbb{S}_R+a}(\gamma_0, \delta))=V(\beta,0)$ for some $\beta\in (0,\pi/2)$.
Now (see Figure \ref{fig:slop}), \begin{figure}[h] \centering \begin{tikzpicture}[scale = 0.8] \draw [dashed] (0,-7.5) -- (0,7.5) ; \draw [dashed] (0,0) -- (7,0) ; \draw (0,-6) arc (-90:90:6) ; \draw (0,-1) arc (-90:90:1) ; \draw (0,1) arc (-90:90:2.5) ; \draw (0,-6) arc (-90:90:2.5) ; \draw (0,-4.5) arc (-90:90:4.5) ; \draw (0,0) -- (4.975,7.5) ; \draw (4.7,7) node[scale=1][right]{$L^+$}; \draw (0,0) -- (4.975,-7.5) ; \draw (4.7,-7) node[scale=1][right]{$L^-$}; \draw (0,1) arc (-90:90:1.18) ; \draw (0,-1) arc (90:-90:1.18) ; \draw (0,-1.33) arc (-90:90:1.33) ; \draw (2.487,3.75) node[scale=1]{$\bullet$} ; \draw (2.487,3.8) node[scale=1][right]{$q_2^+$}; \draw (2.487,-3.75) node[scale=1]{$\bullet$} ; \draw (2.487,-3.75) node[scale=1][right]{$q_2^-$}; \draw (0.731,1.101) node[scale=1]{$\bullet$} ; \draw (0.77,1.101) node[scale=1][right]{$q_1^+$}; \draw (0.731,-1.101) node[scale=1]{$\bullet$} ; \draw (0.78,-1.01) node[scale=1][right]{$q_1^-$}; \draw (0,1) node[scale=1][left]{$i$}; \draw (0,-1) node[scale=1][left]{$-i$}; \draw (0,6) node[scale=1][left]{$pi$}; \draw (0,-6) node[scale=1][left]{$-pi$}; \draw (0,0.5) node[scale=1][left]{$A^-$}; \draw[->,>=latex] (-0.2,0.4) to (0.5,0.2) ; \draw (6.9,3) node[scale=1][left]{$A^+$}; \draw (4,-0.65) node[scale=1][left]{$Q$}; \draw (-0.2,2.5) node[scale=1][left]{$C_0^+$}; \draw[->,>=latex] (-0.25,2.5) to (0.9,2.9) ; \draw (-0.2,5) node[scale=1][left]{$F_0^+$}; \draw[->,>=latex] (-0.25,5) to (1.55,5.45) ; \fill [color=gray!20, pattern=north east lines] (0,-1) arc (-90:90:1) ; \draw [dashed] (3.3,-3) -- (1.8,-1.5) ; \draw [dashed] (3.1,-2.1) -- (1.6,-0.6) ; \draw [dashed] (2.8,-1.1) -- (1.3,0.4) ; \draw [dashed] (4.1,-1.6) -- (2.6,-0.2) ; \draw [dashed] (3.3,-0.3) -- (1.8,1.2) ; \draw [dashed] (4.1,-0.3) -- (2.6,1.2) ; \draw [dashed] (3.7,0.7) -- (2.2,2.2) ; \draw [dashed] (3.9,1.2) -- (2.4,2.7) ; \draw [dashed] (4.1,1.7) -- (2.6,3.2) ; \draw [dashed] (0.8,6.2) -- (2.5,7.9) ; \draw [dashed] (2.5,6) -- (4.2,7.7) ; \draw [dashed] (4.65,4.9) -- (6.35,6.6) ; \draw [dashed] (5.2,3.2) -- (6.9,4.9) ; \draw [dashed] (6,1.7) -- (7,2.7) ; \draw [dashed] (6.2,0.5) -- (7,1.3) ; \draw [dashed] (0.6,-7.7) -- (2.3,-6) ; \draw [dashed] (2.1,-7.7) -- (3.8,-6) ; \draw [dashed] (4.65,-6.6) -- (6.35,-4.9) ; \draw [dashed] (5,-5.1) -- (6.9,-3.2) ; \draw [dashed] (6,-2.7) -- (7,-1.7) ; \draw [dashed] (6.2,-1.3) -- (7,-0.5) ; \end{tikzpicture} \caption{}\label{fig:slop} \end{figure} let $C_0$ be the circle with center $\frac{i}{1-\cos \beta}$ and radius $\frac{\cos \beta}{1-\cos\beta}$ and let $C_0^+=C_0\cap \mathbb H$. Note that, since the center of $C_0$ is on the imaginary axis, $C_0$ intersects orthogonally $i\mathbb R$. Hence, $C_0^+$ is a geodesic in $\mathbb H$. Moreover, it is easy to see that for $x>0$, the Euclidean distance from $ix$ to $L^+:=\{\rho e^{i\beta}:\rho>0\}$ is $x\cos\beta$, so that $C_0$ is tangent to $L^+$. Also, the end points of $C_0^+$ are $i$ and $\frac{1+\cos\beta}{1-\cos \beta} i$. Let now $F_0^+=F_0\cap \mathbb H$, where $F_0$ is the circle orthogonal to $i\mathbb R$ and passing through $i$ and $p i$, for some $p>\frac{1+\cos \beta}{1-\cos\beta}$ to be chosen later. Note that by construction $F_0^+$ intersects $L^+$ into two points, $q_1^+$ and $q_2^+$, $|q_1^+|<|q_2^+|$. Let $F_0^-$ be the reflection of $F_0^+$ about the real axis, that is, $F_0^-$ is the circle orthogonal to $i\mathbb R$ passing through $-i$ and $-p i$. Let $U^\pm$ be the unbounded connected component of $\mathbb H\setminus F_0^\pm$. 
By Lemma \ref{Lem:total-geo-disc}, $U^+$, $U^-$ are totally geodesic. Let $U:=U^+\cap U^-$. Then $U$ is totally geodesic as well, since for every two points of $U$ the geodesic joining them is contained in both $U^+$ and $U^-$. Let $A^-:=\{\rho e^{i\theta}: 0<\rho <1, |\theta|<\pi/2\}$, $A^+:=\{\rho e^{i\theta}: \rho>p, |\theta|<\pi/2\}$, $\tilde Q:=\{\rho e^{i\theta}: |q_1^+|<\rho<|q_2^+|, |\theta|<\pi/2\}$ and $Q=\tilde Q\cap U$. Note that $A^-, A^+\subset U$, hence, if $\zeta_0, \zeta_1\in \mathbb H$ are such that $\zeta_0\in A^-$ and $\zeta_1\in A^+$, the geodesic $\eta:[0,1]\to \mathbb H$ of $\mathbb H$ joining $\zeta_0$ and $\zeta_1$ is contained in $U$ and, by construction, it necessarily crosses $V(\beta,0)$. Moreover, by construction, for all $t\in (0,1)$ such that $\eta(t)\in \tilde Q$, the point $\eta(t)\in Q$. Since $|q_2^+|/|q_1^+|\to 1$ for $p\to \frac{1+\cos \beta}{1-\cos\beta}$ and $|q_2^+|/|q_1^+|\to +\infty$ for $p\to+\infty$, given $N_0>0$ we can find $p$ such that $\frac{1}{\pi}[\log |q_2^+|-\log |q_1^+|]= N_0$. Let $p$ be such a number and let $N:=\frac{1}{\pi}\log p$. Note that $N$ depends only on $\beta$ (hence on $\delta$) and on $N_0$, and that $N>N_0$. A simple computation shows that $f(A^-)=\{z\in \mathbb{S}_R+a: {\sf Im}\, z<0\}$ and $f(A^+)=\{z\in \mathbb{S}_R+a: {\sf Im}\, z>\frac{R}{\pi}\log p \}$. Moreover, $f(\tilde Q)=\{z\in \mathbb{S}_R+a: \frac{R\log |q_1^+|}{\pi}<{\sf Im}\, z<\frac{R\log |q_2^+|}{\pi}\}$. Therefore, since $f$ maps geodesics of $\mathbb H$ onto geodesics of $\mathbb{S}_R+a$, the previous argument shows that for every $z\in \{z\in \mathbb{S}_R+a: {\sf Im}\, z<0\}$ and $w\in \{z\in \mathbb{S}_R+a: {\sf Im}\, z>\frac{R}{\pi}\log p \}$ the geodesic $\gamma$ joining $z$ and $w$ satisfies \[ \gamma \cap \{z\in \mathbb{S}_R+a: \frac{R\log |q_1^+|}{\pi}<{\sf Im}\, z<\frac{R\log |q_2^+|}{\pi}\}\subset S_{\mathbb{S}_R+a}(\gamma_0, \delta). \] Finally, given $M_1, M_2\in \mathbb R$ such that $M_2-M_1>RN$, one can reduce to the previous case using automorphisms of $\mathbb{S}_R+a$ of the form $z\mapsto z-ik$, $k\in \mathbb R$, and taking into account that such automorphisms are isometries for $k_{\mathbb{S}_R+a}$ and map $\gamma_0$ onto $\gamma_0$. (7) If ${\sf Im}\, z={\sf Im}\, w$, the result follows from (2). If ${\sf Im}\, w>{\sf Im}\, z$, we saw in (6) that for all $\epsilon>0$, $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, \zeta< {\sf Im}\, w+\epsilon\}$ and $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, \zeta> {\sf Im}\, z-\epsilon\}$ are totally geodesic in $\mathbb{S}_R+a$. Hence, their intersection is totally geodesic as well. Therefore, for all $\epsilon>0$ the geodesic of $\mathbb{S}_R+a$ joining $z$ and $w$ is contained in $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, z-\epsilon<{\sf Im}\, \zeta< {\sf Im}\, w+\epsilon\}$. By the arbitrariness of $\epsilon$, we get the result. \end{proof} We now present several localization results which will be useful in the subsequent constructions. Let $a, b\in \mathbb R$ and $R>0$. Let \[ \Omega_{a,b,R}:=\mathbb C\setminus \{z\in \mathbb C: {\sf Re}\, z\in \{a, a+R\}, {\sf Im}\, z\leq b\}. \] \begin{proposition}\label{Prop:local-strip-bound} Let $c>1$. Then there exists $D(c)>0$ with the following properties. Let $D\geq D(c)$ and $R>0$, $a,b\in \mathbb R$. Then for all $v\in \mathbb C$ and $z\in (\mathbb{S}_R+a)$ such that ${\sf Im}\, z\leq b-RD$, \[ \kappa_{\Omega_{a,b,R}}(z;v)\leq \kappa_{\mathbb{S}_R+a}(z;v)\leq c \kappa_{\Omega_{a,b,R}}(z;v).
\] Moreover, for every $z, w\in (\mathbb{S}_R+a)$ such that ${\sf Im}\, z, {\sf Im}\, w\leq b-RD$ \[ k_{\Omega_{a,b,R}}(z,w)\leq k_{\mathbb{S}_R+a}(z,w)\leq c k_{\Omega_{a,b,R}}(z, w). \] \end{proposition} \begin{proof} The inequalities on the left hand side follow immediately since $\mathbb{S}_R+a\subset \Omega_{a,b,R}$. Assume now $R=1, a=b=0$ and let $\Omega:=\Omega_{0,0,1}$. For $n\in \mathbb N$ let $C_n:=\{\zeta\in \mathbb C: 0\leq {\sf Re}\, \zeta\leq 1, {\sf Im}\, \zeta=-n\}$. Clearly, $(C_n)$ is a null chain in $\Omega$, which represents a prime end $\underline{x}$ of $\Omega$. Let $\mathbb{S}^\ast$ be the open set in $\widehat{\Omega}$ defined by $\mathbb{S}$ (that is, $\mathbb{S}^\ast$ is the union of $\mathbb{S}$ and all prime ends for which a representing null chain is eventually contained in $\mathbb{S}$). Hence, $\mathbb{S}^\ast$ is an open neighborhood of $\underline{x}$, since, by construction, the interior part of $C_n$ belongs to $\mathbb{S}$ for all $n\geq 1$. Moreover, $\mathbb{S}^\ast\cap \Omega=\mathbb{S}$, which is simply connected. Therefore, we can apply Theorem \ref{Thm:localiz} to $\underline{x}$ and $\mathbb{S}^\ast$ and come up with an open set $V^\ast\subset \mathbb{S}^\ast$ in $\widehat{\Omega}$ which contains $\underline{x}$ and such that \begin{equation}\label{Eq:estima-banda-out} \kappa_{\mathbb{S}}(z;v)\leq c \kappa_{\Omega}(z;v), \quad k_{\mathbb{S}}(z,w)\leq c k_{\Omega}(z, w), \end{equation} for all $z, w \in V:=V^\ast\cap \Omega$ and $v\in \mathbb C$. Note that since $V^\ast$ is an open neighborhood of $\underline{x}$, there exists $n_0\in \mathbb N$ such that the interior part of $C_n$ is contained in $V$ for all $n\geq n_0$. In particular, \eqref{Eq:estima-banda-out} holds for every $\zeta\in \mathbb{S}$ such that ${\sf Im}\, \zeta\leq -(n_0+1)$. Hence, we have proved the result with $D:=n_0+1$ for $R=1, a=b=0$. Now, assume $R>0$ and $a, b\in \mathbb R$. Using the map $\mathbb C\ni z\mapsto \frac{1}{R}(z-a-ib)\in \mathbb C$, which is a biholomorphism from $\Omega_{a,b,R}$ to $\Omega$ and maps $(\mathbb{S}_R+a)$ onto $\mathbb{S}$ and $\{\zeta\in \mathbb{S}_R+a: {\sf Im}\, \zeta\leq b-RD\}$ onto $\{\zeta\in \mathbb{S}: {\sf Im}\, \zeta\leq -D\}$, the result follows at once from \eqref{Eq:estima-banda-out}. \end{proof} The next localization result is a sort of converse of the previous one: we choose the part we want to localize and come up with a constant for the localization. We start with a definition: \begin{definition} Let $M\in \mathbb R, R>0$. The {\sl semi-strip} of {\sl width $R$ and height $M$} is \[ \mathbb{S}^M_R:=\{\zeta\in \mathbb C: 0<{\sf Re}\, \zeta<R, {\sf Im}\, \zeta>M\}. \] \end{definition} \begin{proposition}\label{Prop:estim-strip-var2} For every $E>0$ there exists $c'=c'(E)>1$ with the following properties. Let $a\in \mathbb R$, $M\in \mathbb R$ and $R>0$. Then for all $v\in \mathbb C$ and $z\in (\mathbb{S}_R+a)$ such that ${\sf Im}\, z\geq RE+M$, \[ \kappa_{\mathbb{S}_R+a}(z;v)\leq \kappa_{\mathbb{S}^M_R+a}(z;v)\leq c'\kappa_{\mathbb{S}_R+a}(z;v). \] Moreover, for every $z, w\in (\mathbb{S}_R+a)$ such that ${\sf Im}\, z, {\sf Im}\, w> RE+M$, \[ k_{\mathbb{S}_R+a}(z,w)\leq k_{\mathbb{S}^M_R+a}(z,w)\leq c' k_{\mathbb{S}_R+a}(z,w). \] \end{proposition} \begin{proof} The left-hand side estimates follow immediately since $\mathbb{S}^M_R+a\subset \mathbb{S}_R+a$.
In order to prove the right-hand side estimates, arguing as in Proposition \ref{Prop:local-strip-bound}, it is enough to prove the result for $R=1$, $a=0$, $M=0$ and then use the affine map $z\mapsto \frac{1}{R}(z-a-iM)$ to pass to the general case. Fix $E>0$. Let $K:=\{z\in \mathbb C: E\leq {\sf Re}\, z\leq 1-E, E\leq {\sf Im}\, z\leq 1\}$ (note that $K$ is empty if $E>1/2$). For $z\in \mathbb{S}_1^0$ such that ${\sf Im}\, z\geq E$ and $z\not\in K$, we have $\delta_{\mathbb{S}_1^0}(z)=\delta_{\mathbb{S}_1}(z)$, hence, from Theorem \ref{Thm:Distance-Lemma-inf}, \[ \kappa_{\mathbb{S}_1^0}(z;v)\leq \frac{|v|}{\delta_{\mathbb{S}_1^0}(z)}=2 \frac{|v|}{2\delta_{\mathbb{S}_1}(z)}\leq 2\kappa_{\mathbb{S}_1}(z;v). \] In case $K$ is non-empty, $K$ is compact in $\mathbb{S}_1^0$ and in $\mathbb{S}_1$. Since the hyperbolic metric is continuous in $z$, the following numbers are well defined: \[ q:=\min_{z\in K}\kappa_{\mathbb{S}_1}(z;1), \quad Q:= \max_{z\in K}\kappa_{\mathbb{S}^0_1}(z;1). \] Moreover, $q>0$ (for otherwise the hyperbolic norm of $1$ would be $0$ at an interior point). Hence, for $z\in K$ and $v\in \mathbb C$, \[ \kappa_{\mathbb{S}_1^0}(z;v)=|v|\kappa_{\mathbb{S}_1^0}(z;1)\leq |v|Q=\frac{Q}{q}\,|v|q\leq \frac{Q}{q}\,|v|\kappa_{\mathbb{S}_1}(z;1)=\frac{Q}{q}\kappa_{\mathbb{S}_1}(z;v). \] Taking $c'=\max\{2, \frac{Q}{q}\}$ we have the first estimate. In order to prove the second inequality, note that $(0,1)+iE$ is a geodesic in $\mathbb{S}_1$ by symmetry. Then, Lemma \ref{Lem:total-geo-disc} guarantees that $\mathbb{S}_1^E$ is totally geodesic in $\mathbb{S}_1$. Therefore, given $z, w\in \mathbb{S}_1^E$, let $\gamma:[0,1]\to \mathbb{S}_1$ be the geodesic for $\mathbb{S}_1$ which joins $z$ and $w$. Hence, $\gamma([0,1])\subset \mathbb{S}_1^E$. Therefore, by what we have already proved, \begin{equation*} \begin{split} k_{\mathbb{S}^0_1}(z,w)&\leq \ell_{\mathbb{S}^0_1}(\gamma;[0,1])=\int_0^1\kappa_{\mathbb{S}^0_1}(\gamma(t);\gamma'(t))dt \\&\leq c' \int_0^1\kappa_{\mathbb{S}_1}(\gamma(t);\gamma'(t))dt = c' k_{\mathbb{S}_1}(z,w), \end{split} \end{equation*} and we are done. \end{proof} The next result allows us to estimate the hyperbolic distance and the displacement of geodesics in simply connected domains which contain ``good boxes'': \begin{proposition}\label{good box} Let $c>1$, let $D\geq D(c)$, where $D(c)>0$ is given by Proposition \ref{Prop:local-strip-bound}, and fix $E\in(0, D)$. Then there exist $\epsilon=\epsilon(c,D,E)>0$ and $C=C(c,D,E)>1$ with the following property. Let $\Omega\subset \mathbb C$ be any simply connected domain such that \begin{enumerate} \item $\Omega\subset \Omega_{a,b,R}$ for some $a, b\in \mathbb R$ and $R>0$, \item $\mathbb{S}^M_R+a\subset \Omega$ for some $-\infty\leq M<b$, \item $b-M>2RD$. \end{enumerate} Let \begin{equation}\label{Eq:good-box-def} B:=\{\zeta\in (\mathbb{S}_R+a): M+RE < {\sf Im}\, \zeta < b-RD\}. \end{equation} Then, if $\gamma:[u_0,u_1]\to \Omega$ is a geodesic for $\Omega$ contained in $B$, and $\eta:[v_0,v_1]\to \mathbb{S}_R+a$ is the geodesic in $\mathbb{S}_R+a$ such that $\gamma(u_j)=\eta(v_j)$, $j=0,1$, then for every $u\in [u_0,u_1]$ and $v\in [v_0,v_1]$, \begin{equation}\label{Eq:box-strip-Om} k_{\mathbb{S}_R+a}(\gamma(u), \eta)< \epsilon, \quad k_{\mathbb{S}_R+a}(\eta(v), \gamma)< \epsilon.
\end{equation} Moreover, if $\eta:[v_0,v_1]\to B$ is a geodesic for $\mathbb{S}_{R}+a$ and $\gamma:[u_0,u_1]\to \Omega$ is the geodesic for $\Omega$ such that $\gamma(u_j)=\eta(v_j)$, $j=0,1$, then for every $u\in [u_0,u_1]$ and $v\in [v_0,v_1]$, \begin{equation}\label{Eq:box-Om-strip} k_{\Omega}(\gamma(u), \eta)< \epsilon, \quad k_{\Omega}(\eta(v), \gamma)< \epsilon. \end{equation} In addition, for every $z,w \in B$, \[ \frac{1}{C} k_{\mathbb{S}_R+a}(z,w)\leq k_\Omega(z,w)\leq C k_{\mathbb{S}_R+a}(z,w). \] \end{proposition} \begin{proof} Let $\Omega\subset\mathbb C$ be a simply connected domain which satisfies conditions (1)--(3). Condition (3) implies that $M+RE<M+RD<b-RD$, therefore $B\neq\emptyset$. Let $\gamma:[u_0,u_1]\to B$ be a geodesic for $\Omega$. Then for every $u_0\leq s\leq t\leq u_1$, by Proposition \ref{Prop:local-strip-bound} and by (1), \begin{equation*} \begin{split} \ell_{\mathbb{S}_R+a}(\gamma;[s,t])&=\int_s^t \kappa_{\mathbb{S}_R+a}(\gamma(r);\gamma'(r))dr \leq c \int_s^t \kappa_{\Omega_{a,b,R}}(\gamma(r);\gamma'(r))dr \\&\leq c \int_s^t \kappa_{\Omega}(\gamma(r);\gamma'(r))dr=c k_\Omega(\gamma(s), \gamma(t)). \end{split} \end{equation*} On the other hand, by (2) and Proposition \ref{Prop:estim-strip-var2}, \[ k_\Omega(\gamma(s), \gamma(t))\leq k_{\mathbb{S}_R^M+a}(\gamma(s), \gamma(t))\leq c' k_{\mathbb{S}_R+a}(\gamma(s), \gamma(t)). \] Hence, $\ell_{\mathbb{S}_R+a}(\gamma;[s,t])\leq cc' k_{\mathbb{S}_R+a}(\gamma(s), \gamma(t))$. This proves that every geodesic $\gamma$ for $\Omega$ which is contained in $B$ is a $(cc',0)$-quasi-geodesic in $\mathbb{S}_R+a$. Hence, by Theorem~\ref{Gromov}, there exists $\epsilon=\epsilon(c,c')=\epsilon(c,D,E)>0$ such that \eqref{Eq:box-strip-Om} holds. In order to prove \eqref{Eq:box-Om-strip}, we argue similarly. Given $\eta:[v_0,v_1]\to B$ a geodesic for $\mathbb{S}_{R}+a$, for all $v_0<s<t<v_1$, using Proposition \ref{Prop:local-strip-bound} and Proposition \ref{Prop:estim-strip-var2}, we have \begin{equation*} \begin{split} \ell_{\Omega}(\eta;[s,t])&=\int_s^t \kappa_{\Omega}(\eta(r);\eta'(r))dr \leq \int_s^t \kappa_{\mathbb{S}_R^M+a}(\eta(r);\eta'(r))dr \\&\leq c' \int_s^t \kappa_{\mathbb{S}_R+a}(\eta(r);\eta'(r))dr=c' k_{\mathbb{S}_R+a}(\eta(s), \eta(t))\\ &\leq c'ck_{\Omega_{a,b,R}}(\eta(s),\eta(t))\leq c'ck_{\Omega}(\eta(s),\eta(t)). \end{split} \end{equation*} Hence $\eta$ is a $(cc',0)$-quasi-geodesic in $\Omega$. Theorem \ref{Gromov} implies \eqref{Eq:box-Om-strip} with the same $\epsilon$ as before. In order to prove the last inequalities, we note that $\mathbb{S}^M_R+a\subset \Omega\subset \Omega_{a,b,R}$, hence, for every $z,w\in B$, \[ k_{ \Omega_{a,b,R}}(z,w)\leq k_\Omega(z,w)\leq k_{\mathbb{S}^M_R+a}(z,w). \] Therefore, the result follows at once from Proposition \ref{Prop:local-strip-bound} and Proposition \ref{Prop:estim-strip-var2} by taking $C=\max\{c,c'\}$. \end{proof} \begin{definition} The set $B$ defined in \eqref{Eq:good-box-def} is a {\sl good box for $\Omega$ for the data $(c,D,E)$}\index{Good box for a simply connected domain}. Its {\sl width}\index{Width of a good box} is $R$ and its {\sl height}\index{Height of a good box} is $b-M-R(D+E)$. The segment $\{z=a+\frac{R}{2}+it, M+RE < t < b-RD\}$ is called the {\sl vertical bisectrix}\index{Vertical bisectrix of a good box} of $B$ and we denote it by $\hbox{bis}(B)$. \end{definition} With the help of the previous results, we prove now that ``long'' geodesics for $\Omega$ in a good box get close to the vertical bisectrix of the good box in a controlled way.
\begin{corollary}\label{cor:good-box-estim} Let $c>1$ and let $D\geq D(c)$, where $D(c)>0$ is given by Proposition \ref{Prop:local-strip-bound}, and fix $E\in(0, D)$. Let $\epsilon>0$ and $C>1$ be given by Proposition \ref{good box} and let $N_0> \frac{4(C+1)\epsilon}{\pi}$. Let $\delta>0$ and let $N>N_0>0$ be the constant given by Proposition \ref{Prop:strip}(6). Finally, let $N_1:=N_0-\frac{4\epsilon}{\pi}$ and $N_2:=\frac{N_0}{2}-\frac{ 2(C+1)\epsilon}{\pi}$. Let $\Omega\subset \mathbb C$ be a simply connected domain and assume $B\subset \Omega$ is a good box of $\Omega$ for the data $(c,D,E)$ of width $R>0$, height $h>0$ and vertical bisectrix $\hbox{bis}(B)=\{z=a+\frac{R}{2}+it, r_0 < t < r_0+h\}$, where $a, r_0\in \mathbb R$. Suppose $h>NR$. Let $I\subset \mathbb R$ be an open interval and let $\gamma: I\to \Omega$ be a geodesic of $\Omega$. Suppose there exists an interval $[u_0,u_1]\subset I$ such that $\gamma([u_0,u_1])\subset B$ and ${\sf Im}\, \gamma(u_1)-{\sf Im}\, \gamma(u_0)>NR$. Then, there exists $r_1\in (r_0, r_0+h-R(N_1+\frac{2\epsilon}{\pi}))$ such that for every $t\in [u_0,u_1]$ for which $r_1<{\sf Im}\, \gamma(t)<r_1+RN_1$, \begin{equation}\label{Eq:metr-2} \gamma(t)\in \{z\in B: |{\sf Re}\, z-a-\frac{R}{2}|< \frac{R}{2}(1-e^{-2(\epsilon+\delta)})\}. \end{equation} Moreover, let $u_0':=\min \{t\in [u_0,u_1]: {\sf Im}\, \gamma(t)\geq r_1\}$ and $u_1':=\max \{t\in [u_0,u_1]: {\sf Im}\, \gamma(t)\leq r_1+RN_1\}$. Then for every $t\in I\setminus [u'_0,u'_1]$ it follows that $\gamma(t)\not\in \{z\in B: r_1+(\frac{N_1}{2}-N_2)R<{\sf Im}\, z<r_1+(\frac{N_1}{2}+N_2)R\}$. \end{corollary} \begin{proof} Let $\gamma: I\to \Omega$ be a geodesic of $\Omega$. Suppose there exists an interval $[u_0,u_1]\subset I$ such that $\gamma([u_0,u_1])\subset B$ and ${\sf Im}\, \gamma(u_1)-{\sf Im}\, \gamma(u_0)>NR$. Let $\eta:[v_0,v_1]\to \mathbb{S}_R+a$ be the geodesic of $\mathbb{S}_R+a$ such that $\eta(v_j)=\gamma(u_j)$, $j=0,1$. Note that $\eta([v_0,v_1])\subset B$ by Proposition \ref{Prop:strip}(7). Moreover, since ${\sf Im}\, \eta(v_1)-{\sf Im}\, \eta(v_0)>NR$, by Proposition \ref{Prop:strip}(6), there exists $q\in ({\sf Im}\, \eta(v_0),{\sf Im}\, \eta(v_1)-RN_0)$ such that $\eta(v)\in S_{\mathbb{S}_R+a}(\gamma_0, \delta)$ for all $v\in [v_0,v_1]$ for which $q<{\sf Im}\, \eta(v)<q+RN_0$. By Proposition \ref{Prop:strip}(4), in fact, $k_{\mathbb{S}_R+a} (a+\frac{R}{2}+i{\sf Im}\, \eta(v), \eta(v))=k_{\mathbb{S}_R+a}(\gamma_0, \eta(v))<\delta$. Let \[ r_1:=q+\frac{2\epsilon R}{\pi}. \] Note that \[ r_0<q<r_1=q+\frac{2\epsilon R}{\pi}<r_0+h-RN_0+\frac{2\epsilon R}{\pi}=r_0+h-R(N_1+\frac{2\epsilon}{\pi}) \] and that $r_1+RN_1<q+RN_0$. Let \[ B^G:=\{z\in \mathbb{S}_R+a: r_1<{\sf Im}\, z<r_1+RN_1\}. \] By Proposition \ref{good box}, for every $u\in [u_0,u_1]$, there exists $v_u\in [v_0,v_1]$ such that $k_{\mathbb{S}_R+a}(\eta(v_u), \gamma(u))< \epsilon$. Let $u\in [u_0,u_1]$ be such that $\gamma(u)\in B^G$. We claim that $q<{\sf Im}\, \eta(v_u)<q+RN_0$. Indeed, from Proposition \ref{Prop:strip}(5), one sees that for every $z\in \mathbb{S}_R+a$ such that ${\sf Im}\, z\geq q+RN_0$ or ${\sf Im}\, z\leq q$, the hyperbolic distance in $\mathbb{S}_R+a$ of $z$ from $B^G$ is at least $\epsilon$.
For instance, if ${\sf Im}\, z\leq q$, then \begin{equation*} \begin{split} k_{\mathbb{S}_R+a}(z, B^G)&\geq k_{\mathbb{S}_R+a}(a+\frac{R}{2}+i{\sf Im}\, z, a+\frac{R}{2}+ir_1)\\&\geq k_{\mathbb{S}_R+a}(a+\frac{R}{2}+iq, a+\frac{R}{2}+ir_1)=\epsilon, \end{split} \end{equation*} where the last equality follows from a direct computation using Proposition \ref{Prop:strip}(1). The claim we have just proved implies that $k_{\mathbb{S}_R+a}(\eta(v_u), a+\frac{R}{2}+i{\sf Im}\, \eta(v_u))<\delta$, hence, by the triangle inequality, \begin{equation*} \begin{split} k_{\mathbb{S}_R+a}(\gamma(u), a+\frac{R}{2}+i{\sf Im}\, \eta(v_u))&\leq k_{\mathbb{S}_R+a}(\gamma(u), \eta(v_u))\\&\quad +k_{\mathbb{S}_R+a}(\eta(v_u), a+\frac{R}{2}+i{\sf Im}\, \eta(v_u))<\epsilon+\delta. \end{split} \end{equation*} Then \eqref{Eq:metr-2} follows from Proposition \ref{Prop:strip}(4). In order to prove the last statement, suppose there exists $t\in I\setminus [u'_0,u'_1]$ such that \[ \gamma(t)\in B_1:=\{z\in B: r_1+(\frac{N_1}{2}-N_2)R<{\sf Im}\, z<r_1+(\frac{N_1}{2}+N_2)R\}. \] We assume that $t> u_1'$ (the case $t<u_0'$ is similar). Note that, by definition, ${\sf Im}\, \gamma(u_0')=r_1$, ${\sf Im}\, \gamma(u_1')=r_1+RN_1$. Let $\xi:[0,1]\to \mathbb{S}_R+a$ be the geodesic of $\mathbb{S}_R+a$ such that $\xi(0)=\gamma(u_0')$ and $\xi(1)=\gamma(t)$. By Proposition \ref{Prop:strip}(7), for all $s\in [0,1]$, \begin{equation}\label{Eq:xi-stima-alto} r_1\leq {\sf Im}\, \xi(s)\leq {\sf Im}\, \gamma(t)<r_1+(\frac{N_1}{2}+N_2)R. \end{equation} Since $\gamma|_{[u_0', t]}$ is the geodesic of $\Omega$ which joins $\xi(0)$ with $\xi(1)$, by \eqref{Eq:box-Om-strip}, there exists $s\in [0,1]$ such that $k_\Omega(\xi(s), \gamma(u'_1))<\epsilon$. Hence, by Proposition \ref{good box}, $k_{\mathbb{S}_R+a}(\xi(s), \gamma(u'_1))<C\epsilon$. On the other hand, by Proposition \ref{Prop:strip}(5) and \eqref{Eq:xi-stima-alto}, \begin{equation*} \begin{split} k_{\mathbb{S}_R+a}(\xi(s), \gamma(u'_1))&\geq k_{\mathbb{S}_R+a}(a+\frac{R}{2}+i{\sf Im}\,\xi(s), a+\frac{R}{2}+i(r_1+RN_1))\\&=\frac{\pi}{2R}(r_1+RN_1-{\sf Im}\, \xi(s))>\frac{\pi}{2}(\frac{N_1}{2}-N_2)=C\epsilon, \end{split} \end{equation*} a contradiction, and the proof is concluded. \end{proof} \section{Trajectories oscillating to the Denjoy-Wolff point}\label{Traj} Using the tools developed in the previous sections, we can construct examples of different slope behavior for semigroups. \begin{proposition}\label{Prop:example-non-tg-osc} There exists a parabolic semigroup $(\phi_t)$ in $\mathbb D$ with zero hyperbolic step such that $\phi_t(z)$ converges non-tangentially to its Denjoy-Wolff point $\tau\in\partial\mathbb D$ but $\lim_{t\to+\infty}\mathrm{Arg} (1-\overline{\tau}\phi_t(0))$ does not exist. \end{proposition} \begin{proof} Let $c>1$, let $D\geq D(c)$ where $D(c)$ is the constant given by Proposition \ref{Prop:local-strip-bound} and let $E\in(0, D)$. Let $\epsilon>0$ and $C>1$ be given by Proposition \ref{good box} and let $N_0> \frac{4(C+1)\epsilon}{\pi}$. Let $\delta>0$ and let $N>N_0>0$ be the constant given by Proposition \ref{Prop:strip}(6). Finally, let $N_1:=N_0-\frac{4\epsilon}{\pi}$ and $N_2:=\frac{N_0}{2}-\frac{ 2(C+1)\epsilon}{\pi}$. Let $\alpha_0\in (0,\pi/2)$ be such that \begin{equation}\label{tan-alp-r} \tan \alpha_0\leq \min\{\frac{1}{8D}, \frac{1}{8N}\}.
\end{equation} Let $\chi:= 1-e^{-2(\epsilon+\delta)}<1$ and choose $\alpha_1\in (0,\alpha_0)$ such that \begin{equation}\label{Eq:good-alpa1} \tan \alpha_1<(1-\chi) \tan \alpha_0. \end{equation} In order to construct the example, we will define a domain $\Omega\subset \mathbb C$ starlike at infinity given by $\Omega=\bigcap_{j=1}^\infty \Omega_j$, where $\Omega_j:=\Omega_{-a_j, b_j, R_j}$, where $\{b_j\}$ is an increasing sequence of positive real numbers converging to infinity with the property that $b_1=1$, \begin{equation}\label{Eq:choice-b} b_{j}>b_{j-1}\max\left\{\frac{2\tan \alpha_0-\tan\alpha_1}{\tan\alpha_1}, \frac{\tan\alpha_1}{2\tan \alpha_0-\tan\alpha_1}, 4\right\}, \quad j=2,3,\ldots, \end{equation} and for $j=1,2,\ldots,$ \begin{equation}\label{Eq:good-choice-Ra} \begin{split} &R_j=2b_j\tan \alpha_0, \\ &a_{2j} =b_{2j}\tan \alpha_1,\\ &a_{2j-1}= R_{2j-1}-b_{2j-1}\tan\alpha_1=b_{2j-1}(2\tan\alpha_0-\tan\alpha_1). \end{split} \end{equation} Note that, by \eqref{Eq:choice-b} and \eqref{Eq:good-choice-Ra}, the sequences $\{a_{j}\}$ and $\{R_{j}-a_{j}\}$ are increasing. In particular, the domain $\Omega$ is starlike at infinity (see Figure \ref{fig:T}). \begin{figure}[h] \centering \begin{tikzpicture} \draw [dotted] (4.3,-3) -- (4.3,0) node[scale=0.85]{$\bullet$}; \draw (3.5,0) node[above][scale=0.75]{$-a_{2j-1}+ib_{2j-1}$} ; \draw [dotted] (5.2,-3) -- (5.2,0) node[scale=0.85]{$\bullet$}; \draw (6.7, 0) node[above][scale=0.75]{$-a_{2j-1}+ R_{2j-1}+ib_{2j-1}$}; \draw [dotted] (3.6,-3) -- (3.6,4) node[scale=0.85]{$\bullet$} node[above][scale=0.75]{$-a_{2j}+ib_{2j}$} ; \draw [dotted] (9,-3) -- (9,4) node[scale=0.85]{$\bullet$} node[above][scale=0.75]{$-a_{2j}+ R_{2j}+ib_{2j}$}; \draw [dotted] (-3,-3) -- (-3,8.5) node[scale=0.85]{$\bullet$} node[above][scale=0.8]{$-a_{2j+1}+ib_{2j+1}$} ; \draw [dotted] (10.5,-3) -- (10.5,8.5) node[scale=0.85]{$\bullet$} node[above][scale=0.75]{$-a_{2j+1}+ R_{2j+1}+ib_{2j+1}$};; \draw [dashed] [->] (-3,-1.5) -- (11,-1.5); \draw (5,-1.5) node[scale=0.85]{$\bullet$} ; \draw (4.9, -1.7) node[scale=0.85][below, left]{$0$}; \draw [black,fill=gray!20] (3.6,1) -- (9,1) -- (9,3) -- (3.6,3) -- cycle; \draw [black,fill=gray!40] (5.5,1) -- (5.5,3) -- (7.1,3) -- (7.1,1) -- cycle; \draw [black,fill=gray!20] (-3,5) -- (10.5,5) -- (10.5,7.5) -- (-3,7.5) -- cycle; \draw [black,fill=gray!40] (2.8,5) -- (2.8,7.5) -- (4.7,7.5) -- (4.7,5) -- cycle; \draw [dashed] [->] (5,-2) -- (5,9) node[scale=0.85][left]{$it$}; \draw[->,>=latex] (10.8,4.5) to (9.8,5.5) ; \draw (11.1,4.2) node[scale=0.85]{$B_{2j+1}$}; \draw[->,>=latex] (2.5,4.4) to (3.4,5.4) ; \draw (2.2,4.3) node[scale=0.85]{$B_{2j+1}^+$}; \draw[->,>=latex] (7.7,0.8) to (6.9,1.4) ; \draw (8,0.7) node[scale=0.85]{$B_{2j}^+$}; \draw (3.6, 2.8) -- (9,2.8) ; \draw[->,>=latex](2.5, 2.2) -- (3.8,2.8); \draw (2.2, 2.2) node[scale=0.85]{$L_{2j}^+$}; \draw (5,2.2) node{$\bullet$}; \draw (4.6,2.3) node[scale=0.85]{$iy_{2j}$} ; \draw[->,>=latex] (2.8,1.4) to (3.9,1.4) ; \draw (2.4,1.4) node[scale=0.85]{$B_{2j}$}; \draw (1.6,8.1) node[scale=0.85]{$\gamma([0,1))$}; \draw[->,>=latex] (2.3,8.1) to (3.4,8) ; \draw (5,4.84) node[scale=0.85]{$\bullet$}; \draw (4.7,4.6) node[scale=0.85]{$it_{2j}$}; \draw (5,0.2) node[scale=0.85]{$\bullet$}; \draw (4.5,0.5) node[scale=0.85]{$it_{2j-1}$}; \let\pcoord\relax \let\tcoord\relax \foreach [count=\num] \coord in { (3.4,8.3), (3.7,5.8), (5.9,4.5), (5.9,3), (4.8,0) } { \ifx\pcoord\relax \global\let\pcoord\coord \path \pcoord coordinate (c1); \else \ifx\tcoord\relax \global\let\tcoord\coord \else 
\path \pcoord coordinate (p); \path \tcoord coordinate (t); \path \coord coordinate (n); \path ($(p)!.75!(n)$) coordinate (m); \path ($(t)!1cm!90:(m)$) coordinate (r); \path ($(t)-(p)$); \pgfgetlastxy{\xx}{\yy} \pgfmathsetmacro{\len}{.5*veclen(\xx,\yy)} \path ($(t)!(p)!(r)$) coordinate (rp); \path ($(t)!\len pt!(rp)$) coordinate (c2); \draw (p) .. controls (c1) and (c2) .. (t); \path ($(t)-(n)$); \pgfgetlastxy{\xx}{\yy} \pgfmathsetmacro{\len}{.5*veclen(\xx,\yy)} \path ($(t)!(n)!(r)$) coordinate (rn); \path ($(t)!\len pt!(rn)$) coordinate (c1); \global\let\pcoord\tcoord \global\let\tcoord\coord \fi \fi } \draw (t) .. controls (c1) and (n) .. (n); \draw [dashed] (3.4,8.3) -- (3.2,9); \draw [dashed] (4.8,0) -- (4.4,-0.5); \end{tikzpicture} \caption{}\label{fig:T} \end{figure} Moreover, the point $-a_{2j}+ib_{2j}$ belongs to $\{\zeta\in \mathbb C: \zeta=i\rho e^{\alpha_1 i}, \rho>0\}$, the part of the boundary of $iV(\alpha_1,0)$ contained in the left half-plane $\{z \in \mathbb C: {\sf Re}\, z<0\}$, while $R_{2j-1}-a_{2j-1}+ib_{2j-1}$ belongs to $\{\zeta\in \mathbb C: \zeta=i\rho e^{-\alpha_1 i}, \rho>0\}$, the part of the boundary of $iV(\alpha_1,0)$ contained in the right half-plane $\{z \in \mathbb C: {\sf Re}\, z>0\}$. Since $\min\{a_j, R_j-a_{j}\}\geq b_j\tan\alpha_1$ for every $j$, this implies that $iV(\alpha_1,0)\subset \Omega$. Hence, let $h:\mathbb D \to \Omega$ be the Riemann map such that $h(0)=0$, $\lim_{t\to+\infty}h^{-1}(it)=1$. The semigroup $(\phi_t)$, defined by $\phi_t(z):=h^{-1}(h(z)+it)$, has Koenigs function $h$, $\Omega=h(\mathbb D)$ and Denjoy-Wolff point $1$. Moreover, by Proposition \ref{sector-implies-convergnt}, $\phi_t(z)$ converges non-tangentially to its Denjoy-Wolff point. Let $\tilde\gamma:[0,1)\to \mathbb D$, given by $\tilde\gamma(t)=t$, be the geodesic joining $0$ to $1$. Let $\gamma:=h\circ \tilde\gamma:[0,1)\to \Omega$. The curve $\gamma$ is a geodesic in $\Omega$, $\gamma(0)=0$, with the property that for every $M>0$ there exists $s_M\in [0,1)$ such that ${\sf Im}\, \gamma(s)>M$ for all $s\geq s_M$. \smallskip {\sl Claim A}: \begin{enumerate} \item there exists a sequence $\{t_m\}$ converging to $+\infty$ such that $it_m\in \gamma([0,1))$, \item there exist $\beta>0$ and a sequence $\{t_k\}$ converging to $+\infty$ such that $it_k\not\in S_\Omega(\gamma, \beta)$. \end{enumerate} \smallskip Assume that Claim A is true. Transferring to the unit disc via $h$, this implies that $\{\phi_{t_k}(0)\}$ is in the complement in $\mathbb D$ of the hyperbolic sector $S_\mathbb D(\tilde\gamma, \beta)$, thus, it is outside a fixed Stolz region with vertex $1$, while $\{\phi_{t_m}(0)\}$ converges radially to $1$. Therefore, $\lim_{t\to+\infty}\mathrm{Arg}(1-\phi_t(0))$ does not exist. {\sl Proof of Claim A}. First of all notice that, since $\tan\alpha_0\leq \frac{1}{8D}$ by \eqref{tan-alp-r}, and $b_{j}>4b_{j-1}$, \begin{equation}\label{Eq:go-good-box} b_j-b_{j-1}-2R_jD=b_j-b_{j-1}-4b_jD\tan\alpha_0\geq \frac{1}{2}b_j-b_{j-1}>0. \end{equation} Let \[ B_j:=\{\zeta\in (\mathbb{S}_{R_j}-a_j): b_{j-1}+R_jE < {\sf Im}\, \zeta < b_j-R_jD\}. \] By \eqref{Eq:go-good-box}, Proposition \ref{good box} implies that $B_j$ is a good box in $\Omega$ for the data $(c, D,E)$. Moreover, $B_j$ has width $R_j=2b_j\tan \alpha_0$ and, by \eqref{tan-alp-r}, its height is \begin{equation*} \begin{split} h_j&:=b_j-b_{j-1}-R_j(D+E)>b_j-b_{j-1}-2R_j D\\&=b_j-b_{j-1}-4b_j D \tan \alpha_0\geq \frac{1}{2} b_j-b_{j-1}.
\end{split} \end{equation*} In particular, since $b_j>4b_{j-1}$, we have by \eqref{tan-alp-r} \begin{equation}\label{Eq:good-height-in-box} \frac{h_j}{R_j}>\frac{1-\frac{2b_{j-1}}{b_j}}{4\tan \alpha_0}>\frac{1}{8\tan \alpha_0}\geq N. \end{equation} Since ${\sf Im}\, \gamma(s)$ converges to $+\infty$ as $s\to 1$, and $\gamma(0)=0$, it follows that there exist $0<s^0_j<t^0_j<1$ such that $\gamma(s)\in B_j$ for all $s\in (s^0_j, t^0_j)$ and ${\sf Im}\, \gamma(s^0_j)=b_{j-1}+R_jE$, ${\sf Im}\, \gamma(t^0_j)=b_j-R_jD$. Therefore, by \eqref{Eq:good-height-in-box}, we can find $s_j^0<s_j<t_j<t_j^0$ such that ${\sf Im}\, \gamma(t_j)-{\sf Im}\, \gamma(s_j)>NR_j$. Hence, by Corollary \ref{cor:good-box-estim} (with $B=B_j, a=-a_j, R=R_j$) there exists $r_j\in (b_{j-1}+R_{j}E, b_j-R_j(D+N_1))$ such that for all $u\in (s_j,t_j)$ with $r_j<{\sf Im}\, \gamma(u)<r_j+N_1R_j$ (recalling that $\chi= 1-e^{-2(\epsilon+\delta)}$), \begin{equation}\label{Eq:gamma-in_Bntosc} \gamma(u)\in B_j^+:=\{z\in B_j: |{\sf Re}\, z+a_j-\frac{R_j}{2}|\leq \frac{\chi}{2}R_j\}. \end{equation} Using \eqref{Eq:good-choice-Ra}, it is easy to see that $|a_j-\frac{R_j}{2}|=b_j(\tan \alpha_0-\tan \alpha_1)$, while $R_j/2=b_j \tan \alpha_0$. Hence, by \eqref{Eq:good-alpa1}, \begin{equation*} |a_j-\frac{R_j}{2}|-\frac{\chi}{2}R_j=b_j((1-\chi)\tan \alpha_0-\tan \alpha_1)>0. \end{equation*} Therefore, $it\not\in B^+_j$ for all $t>0$ such that $it\in B_j$. Moreover, since $-a_{j}+\frac{R_j}{2}=(-1)^jb_{j}(\tan \alpha_0-\tan \alpha_1)$, it follows that $B_j^+\subset \{z\in \mathbb C: (-1)^j{\sf Re}\, z>0\}$. Hence, if $u_j\in (s_j,t_j)$ is such that $r_j<{\sf Im}\, \gamma(u_j)<r_j+N_1R_j$, we have ${\sf Re}\, \gamma(u_{2j})>0$ and ${\sf Re}\, \gamma(u_{2j-1})<0$, $j=1,2,\ldots$. Since $\gamma$ is continuous, it follows that there exists a sequence $\{t_m\}$ converging to $+\infty$ such that $it_m\in \gamma([0,1))$ for all $m\in \mathbb N$: Part (1) of Claim A is proved. As for Part (2) of Claim A, let $u_{2j}\in (s_{2j},t_{2j})$ be such that ${\sf Im}\, \gamma(u_{2j})=r_{2j}+\frac{N_1}{2}R_{2j}$. Note that $r_{2j}<{\sf Im}\, \gamma(u_{2j})<r_{2j}+N_1R_{2j}$. Let \[ y_{2j}:={\sf Im}\, \gamma(u_{2j})=r_{2j}+\frac{N_1}{2}R_{2j}. \] Let $x_{2j}\in (0,R_{2j}/2)$ be such that $-x_{2j}-a_{2j}+\frac{R_{2j}}{2}=\frac{\chi}{2}R_{2j}$. Note that by \eqref{Eq:good-choice-Ra} and \eqref{Eq:good-alpa1}, \[ x_{2j}=b_{2j} [(1-\chi)\tan\alpha_0-\tan \alpha_1]>0. \] By Proposition \ref{Prop:strip}(3) (with $R=R_{2j}, a=-a_{2j}$), the curve $\eta:(-\frac{R_{2j}}{2},\frac{R_{2j}}{2})\ni s\mapsto s-a_{2j}+\frac{R_{2j}}{2}+iy_{2j}$ is a geodesic of $\mathbb{S}_{R_{2j}}-a_{2j}$, and by Proposition \ref{Prop:strip}(4), \begin{equation*} \begin{split} k_{\mathbb{S}_{R_{2j}}-a_{2j}}(B_{2j}^+, iy_{2j})&= k_{\mathbb{S}_{R_{2j}}-a_{2j}}(\eta(-x_{2j}), iy_{2j})=k_{\mathbb{S}_{R_{2j}}-a_{2j}}(\eta(-x_{2j}), \eta(a_{2j}-\frac{R_{2j}}{2}))\\ &\geq \frac{1}{2}\log\frac{R_{2j}-2x_{2j}}{R_{2j}-2 |a_{2j}-\frac{R_{2j}}{2}|}=\frac{1}{2}\log\frac{\chi \tan\alpha_0+\tan\alpha_1}{\tan\alpha_1}=:\tilde\beta_1>0, \end{split} \end{equation*} where the inequality follows from the left inequality in Proposition \ref{Prop:strip}(3) and the last equality follows from \eqref{Eq:good-choice-Ra}. By Proposition \ref{good box}, $k_{\Omega}(B_{2j}^+, iy_{2j})\geq \frac{1}{C}k_{\mathbb{S}_{R_{2j}}-a_{2j}}(B_{2j}^+, iy_{2j})$. Let $\beta_1:=\frac{\tilde\beta_1}{C}$.
The previous estimate and \eqref{Eq:gamma-in_Bntosc} then imply that for all $t\in (s_{2j}, t_{2j})$ such that $r_{2j}<{\sf Im}\, \gamma(t)<r_{2j}+N_1R_{2j}$, \begin{equation}\label{Eq:good-part-estimabeta} k_{\Omega}(\gamma(t), iy_{2j})\geq \beta_1. \end{equation} Now, let $W_{2j}^+:=\{z\in \Omega: {\sf Im}\, z\geq r_{2j}+(\frac{N_1}{2}+N_2)R_{2j}\}$ and $W_{2j}^-:=\{z\in \Omega: {\sf Im}\, z\leq r_{2j}+(\frac{N_1}{2}-N_2)R_{2j}\}$. Assume $t\in [0,1)$ and $\gamma(t)\in W^+_{2j}$. Let $L_{2j}^+:=\{z\in B_{2j}: {\sf Im}\, z=r_{2j}+(\frac{N_1}{2}+N_2)R_{2j}\}$. Hence, by Proposition \ref{good box} and Proposition \ref{Prop:strip}(5), \begin{equation*} \begin{split} k_\Omega(\gamma(t), iy_{2j})&\geq k_\Omega(W^+_{2j}, iy_{2j})= k_{\Omega}(L^+_{2j}, iy_{2j})\geq \frac{1}{C}k_{\mathbb{S}_{R_{2j}}-a_{2j}}(L^+_{2j}, iy_{2j})\\&\geq \frac{1}{C}\,\frac{\pi}{2R_{2j}}\Big(r_{2j}+(\frac{N_1}{2}+N_2)R_{2j}-y_{2j}\Big)=\frac{\pi N_2}{2C}, \end{split} \end{equation*} where the last inequality follows from Proposition \ref{Prop:strip}(5), since $y_{2j}=r_{2j}+\frac{N_1}{2}R_{2j}$. A similar computation shows that $k_\Omega(\gamma(t), iy_{2j})\geq \frac{\pi N_2}{2C}$ for all $t\in [0,1)$ such that $\gamma(t)\in W^-_{2j}$. Let $\beta:=\min\{\beta_1, \frac{\pi N_2}{2C}\}$ and let $u_1^{2j}:=\min \{t\in [s_{2j},t_{2j}]: {\sf Im}\, \gamma(t)\geq r_{2j}\}$ and $u_2^{2j}:=\max \{t\in [s_{2j},t_{2j}]: {\sf Im}\, \gamma(t)\leq r_{2j}+N_1R_{2j}\}$. By Corollary \ref{cor:good-box-estim}, for every $t\in [0,1)\setminus [u_1^{2j},u_2^{2j}]$ it follows that $\gamma(t)\in W_{2j}^+\cup W_{2j}^-$, hence $k_\Omega(\gamma(t), iy_{2j})\geq \beta$ by what we already proved. On the other hand, if $t\in [s_{2j},t_{2j}]$ then either $\gamma(t)\in W_{2j}^+\cup W_{2j}^-$, and hence $k_\Omega(\gamma(t), iy_{2j})\geq \beta$, or $r_{2j}<{\sf Im}\, \gamma(t)<r_{2j}+N_1R_{2j}$, and hence $k_\Omega(\gamma(t), iy_{2j})\geq \beta$ by \eqref{Eq:good-part-estimabeta}. Therefore the sequence $\{iy_{2j}\}$ satisfies the requirement of Part (2) of Claim A. \end{proof} For $\alpha\geq 1$, let \[ Z_\alpha:=\{z\in \mathbb C: |{\sf Re}\, z|^\alpha<{\sf Im}\, z\}. \] As is shown in Proposition \ref{sector-implies-convergnt}, if $(\phi_t)$ is a non-elliptic semigroup in $\mathbb D$ with universal model $(\mathbb C, h, z+it)$ and $Z_1+p\subset h(\mathbb D)$ for some $p \in h(\mathbb D)$, then $\phi_t(z)$ converges non-tangentially to its Denjoy-Wolff point. In the following proposition, we show that $\alpha=1$ is the best we can have: \begin{proposition}\label{Prop:tang-conve-with-para} Let $\alpha>1$. Then there exists a parabolic semigroup $(\phi_t)$ of $\mathbb D$ such that $\phi_t$ does not converge non-tangentially to its Denjoy-Wolff point, and such that if $h$ is its Koenigs function, then $Z_\alpha\subset h(\mathbb D)$. \end{proposition} \begin{proof} Let $c>1$, let $D\geq D(c)$ be the constant given by Proposition \ref{Prop:local-strip-bound} and let $E\in(0, D)$. Let $\epsilon>0$ and $C>1$ be given by Proposition \ref{good box}. Let $\{N_j^0\}$ be an increasing sequence of positive numbers, converging to $+\infty$, such that $N_j^0>\max\{\frac{4(C+1)\epsilon}{\pi}, 2D\}$ for all $j$. Let $\delta>0$. For every $j$, let $N_j>N_j^0$ be the constant given in Proposition \ref{Prop:strip}(6), relative to the pair $(N_0=N_j^0, \delta)$. Let $\beta\in (\frac{1}{\alpha},1)$. Let $\{R_j\}$ be a sequence of positive real numbers, converging to $\infty$, such that for all $j=1,2,\ldots$, \begin{equation*} \begin{split} & R_0^{\beta-1}<\frac{1}{2},\\ & R_j>R_{j-1}^{\beta\alpha},\\ & R_j^{\alpha\beta-1}>N_j+1.
\end{split} \end{equation*} For $j=0,1,2,\ldots$ we set \begin{equation*} \begin{split} & a_j:=R_j^\beta,\\ & b_j:=a_j^\alpha=R_j^{\alpha\beta}. \end{split} \end{equation*} Note that \begin{equation}\label{Eq:behavior-constants} \begin{split} & \lim_{j\to +\infty} \frac{R_j}{a_j}=+\infty,\\ & \frac{b_j-b_{j-1}}{R_j}> R_j^{\alpha\beta-1}-1>N_j>N_j^0>2D. \end{split} \end{equation} Let $\Omega_j:=\Omega_{-a_j, b_j, R_j}$ and $\Omega:=\bigcap_{j=0}^\infty \Omega_j$. Note that $\Omega$ is starlike at infinity and $a_j=R_j^\beta>R_{j-1}^{\beta}=a_{j-1}$. Moreover, \begin{equation}\label{Eq:R_j-tiene-testa-aj} R_j-2a_j=R_j-2R_j^\beta=R_j(1-2R_j^{\beta-1})>R_j(1-2R_0^{\beta-1})>0. \end{equation} Therefore, $Z_\alpha\subset \Omega$. We let $h:\mathbb D \to \Omega$ be the Riemann map such that $h(0)=i$. We define the semigroup $\phi_t(z):=h^{-1}(h(z)+it)$. We can assume that $1$ is its Denjoy-Wolff point. We prove that there exists a sequence $\{t_k\}$, converging to $+\infty$, such that $k_\mathbb D([0,1), \phi_{t_k}(0))\to +\infty$ when $k$ tends to $\infty$, which means that the sequence $\{\phi_{t_k}(0)\}$ converges tangentially to $1$. Let $\gamma:=h|_{[0,1)}$. The previous condition is equivalent to finding a sequence $\{t_k\}$ converging to $+\infty$ such that $k_{\Omega}(\gamma([0,1)), it_k)\to +\infty$ when $k$ tends to $\infty$. In order to prove this, we note that by \eqref{Eq:behavior-constants}, $b_j-b_{j-1}>R_j N_j>2R_jD$. Hence, \[ B_j:=\{\zeta\in (\mathbb{S}_{R_j}-a_j): b_{j-1}+R_j E<{\sf Im}\, \zeta < b_j-R_j D\}, \] is a good box for $\Omega$ for the data $(c,D,E)$, with width $R_j$ and height $b_j-b_{j-1}-R_j(D+E)$ (see Proposition~\ref{good box}). Since $\gamma:[0,1)\to \Omega$ has the property that $\gamma(0)=i$ and ${\sf Im}\, \gamma(t)\to +\infty$ as $t\to 1^-$, we can find $0<s_j^0<t_j^0<1$ such that $\gamma(s)\in B_j$ for all $s\in (s_j^0, t_j^0)$ and ${\sf Im}\, \gamma(s_j^0)=b_{j-1}+R_jE$, ${\sf Im}\, \gamma(t_j^0)=b_{j}-R_jD$. Let \[ N^1_j:=N^0_j-\frac{4\epsilon}{\pi}, \quad N^2_j:=\frac{N^0_j}{2}-\frac{2(C+1)\epsilon}{\pi}, \quad \chi:= 1-e^{-2(\epsilon+\delta)}. \] By Corollary \ref{cor:good-box-estim} (with $B=B_j, a=-a_j, R=R_j$) there exists $r_j\in (b_{j-1}+R_{j}E, b_j-R_j(D+N_j^1))$ such that for all $u\in (s_j^0,t_j^0)$ with $r_j<{\sf Im}\, \gamma(u)<r_j+N^1_jR_j$, \begin{equation}\label{Eq:gamma-in_Bntosc-bis} \gamma(u)\in B_j^+:=\{z\in B_j: |{\sf Re}\, z+a_j-\frac{R_j}{2}|\leq \frac{\chi}{2}R_j\}. \end{equation} Moreover, let $u_0^{j}:=\min \{t\in [s_{j}^0,t_{j}^0]: {\sf Im}\, \gamma(t)\geq r_{j}\}$ and $u_1^{j}:=\max \{t\in [s_{j}^0,t_{j}^0]: {\sf Im}\, \gamma(t)\leq r_{j}+N_j^1R_{j}\}$. It follows from Corollary \ref{cor:good-box-estim} that for every $t\in [0,1)\setminus [u^j_0,u^j_1]$ \begin{equation}\label{Eq:fuera-buena-caja-j} \gamma(t)\not\in \{z\in B_j: r_j+(\frac{N_j^1}{2}-N_j^2)R_j<{\sf Im}\, z<r_j+(\frac{N_j^1}{2}+N_j^2)R_j\}. \end{equation} Note that, if $z\in B_j^+$ then \[ {\sf Re}\, z\geq \frac{R_j}{2}(1-\chi)-a_j=\frac{1-\chi}{2}R_j-R^\beta_j, \] and, since $\frac{1-\chi}{2}>0$ and $\beta<1$, for every $M>0$ there exists $j_M$ such that $B_j^+\subset \{\zeta\in \mathbb C: {\sf Re}\, \zeta>M\}$ for all $j\geq j_M$. Let $u_{j}\in (u_0^{j},u_1^{j})$ be such that ${\sf Im}\, \gamma(u_{j})=r_{j}+\frac{N^1_j}{2}R_{j}$. Note that $r_{j}<{\sf Im}\, \gamma(u_{j})<r_{j}+N^1_jR_{j}$. Let \[ y_{j}:={\sf Im}\, \gamma(u_{j})=r_{j}+\frac{N^1_j}{2}R_{j}. \] Let $x_{j}\in (0,R_{j}/2)$ be such that $-x_{j}-a_{j}+\frac{R_{j}}{2}=\frac{\chi}{2}R_{j}$.
Note that, for $j$ sufficiently large, $x_j>0$. By Proposition \ref{Prop:strip}(3) (with $R=R_{j}, a=-a_{j}$), the curve $\eta:(-\frac{R_{j}}{2},\frac{R_{j}}{2})\ni s\mapsto s-a_{j}+\frac{R_{j}}{2}+iy_{j}$ is a geodesic of $\mathbb{S}_{R_{j}}-a_{j}$, and by Proposition \ref{Prop:strip}(4), \begin{equation*} \begin{split} k_{\mathbb{S}_{R_{j}}-a_{j}}(B_j^+, iy_{j})&= k_{\mathbb{S}_{R_{j}}-a_{j}}(\eta(-x_{j}), iy_{j})=k_{\mathbb{S}_{R_{j}}-a_{j}}(\eta(-x_{j}), \eta(a_{j}-\frac{R_{j}}{2}))\\ &\geq \frac{1}{2}\log\frac{R_{j}-2x_{j}}{R_{j}-2 (\frac{R_{j}}{2}-a_{j})}=\frac{1}{2}\log \frac{\chi R_j+2a_j}{2a_j}\simeq \frac{1}{2}\log \frac{R_j}{a_j}, \end{split} \end{equation*} where the inequality follows from the left inequality in Proposition \ref{Prop:strip}(3) and the asymptotic behaviour follows from \eqref{Eq:R_j-tiene-testa-aj} and \eqref{Eq:behavior-constants}. By Proposition \ref{good box}, $k_{\Omega}(B_j^+, iy_{j})\geq \frac{1}{C}k_{\mathbb{S}_{R_{j}}-a_{j}}(B_j^+, iy_{j})$. The previous estimate and \eqref{Eq:gamma-in_Bntosc-bis} then imply that for all $M>0$ there exists $j_0^M$ such that for all $j\geq j_0^M$ and $t\in (s_{j}^0, t_{j}^0)$ such that $r_{j}<{\sf Im}\, \gamma(t)<r_{j}+N_j^1R_{j}$, \begin{equation}\label{Eq:good-part-estimainf} k_{\Omega}(\gamma(t), iy_{j})>M. \end{equation} Now, let $W_{j}^+:=\{z\in \Omega: {\sf Im}\, z\geq r_{j}+(\frac{N_j^1}{2}+N_j^2)R_{j}\}$ and $W_{j}^-:=\{z\in \Omega: {\sf Im}\, z\leq r_{j}+(\frac{N_j^1}{2}-N_j^2)R_{j}\}$. Assume $t\in [0,1)$ and $\gamma(t)\in W^+_{j}$. Let $L_{j}^+:=\{z\in B_{j}: {\sf Im}\, z=r_{j}+(\frac{N_j^1}{2}+N_j^2)R_{j}\}$. Hence, by Proposition \ref{good box} and Proposition \ref{Prop:strip}(5), \begin{equation*} \begin{split} k_\Omega(\gamma(t), iy_{j})&\geq k_\Omega(W^+_{j}, iy_{j})= k_{\Omega}(L^+_{j}, iy_{j})\geq \frac{1}{C}k_{\mathbb{S}_{R_j}-a_{j}}(L^+_{j}, iy_{j})\\&\geq \frac{1}{C}\,\frac{\pi}{2R_j}\Big(r_{j}+(\frac{N_j^1}{2}+N_j^2)R_{j}-y_{j}\Big)=\frac{\pi N_j^2}{2C}, \end{split} \end{equation*} where the last inequality follows from Proposition \ref{Prop:strip}(5), since $y_{j}=r_{j}+\frac{N_j^1}{2}R_{j}$. A similar computation shows that $k_\Omega(\gamma(t), iy_{j})\geq \frac{\pi N_j^2}{2C}$ for all $t\in [0,1)$ such that $\gamma(t)\in W^-_{j}$. Note that $\lim_{j\to +\infty}\frac{\pi N_j^2}{2C}=+\infty$. Hence, for every $M>0$ there exists $j_1^M$ such that for all $j\geq j_1^M$ and all $t\in [0,1)$ such that $\gamma(t)\in W_j^+\cup W_j^-$, we have $k_\Omega(iy_j, \gamma(t))>M$. By \eqref{Eq:fuera-buena-caja-j} and \eqref{Eq:good-part-estimainf}, this means that for every $M>0$ and every $j\geq \max\{j_0^M, j_1^M\}$ we have $k_{\Omega}(iy_j, \gamma(t))>M$ for all $t\in [0,1)$. The proof is complete. \end{proof}
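\medskip The parameter regime used in the last proof is easy to verify numerically. The following Python sketch checks, for sample values $\alpha=2$, $\beta=3/4$, a starting width $R_0=100$ and a growth factor $1.1$ (all of which are illustrative choices, not prescribed by the proof), that the sequences $a_j=R_j^\beta$, $b_j=R_j^{\alpha\beta}$ satisfy \eqref{Eq:R_j-tiene-testa-aj} and the inequality $\frac{b_j-b_{j-1}}{R_j}>R_j^{\alpha\beta-1}-1$ from \eqref{Eq:behavior-constants}, while $R_j/a_j\to+\infty$.
\begin{verbatim}
# Illustrative check of the parameter choices in the previous proof.
# alpha > 1, beta in (1/alpha, 1), R0 with R0**(beta - 1) < 1/2 and the
# growth factor 1.1 are sample values, not prescribed by the proof.
alpha, beta = 2.0, 0.75
R = [100.0]
for _ in range(5):
    R.append(1.1 * R[-1] ** (alpha * beta))  # ensures R_j > R_{j-1}**(alpha*beta)
a = [r ** beta for r in R]                   # a_j = R_j**beta
b = [r ** (alpha * beta) for r in R]         # b_j = a_j**alpha
for j in range(1, len(R)):
    assert R[j] - 2 * a[j] > 0               # inequality (Eq:R_j-tiene-testa-aj)
    assert (b[j] - b[j - 1]) / R[j] > R[j] ** (alpha * beta - 1) - 1
    print(j, R[j] / a[j])                    # R_j / a_j -> +infinity
\end{verbatim}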
{ "timestamp": "2019-08-27T02:07:03", "yymm": "1804", "arxiv_id": "1804.05553", "language": "en", "url": "https://arxiv.org/abs/1804.05553" }
\section{Introduction} Let $\Omega\subset \mathbb{R}^N$ be a bounded domain with smooth boundary. An elementary result from the theory of ODEs establishes that if a smooth function $G:\overline\Omega\to \mathbb{R}^N$ is inwardly pointing over $\partial\Omega$, that is \begin{equation} \label{hart-weak} \langle G(x),\nu(x)\rangle <0 \qquad x\in \partial\Omega, \end{equation} where $\nu(x)$ denotes the outer normal at $x$, then the solutions of the autonomous system of ordinary differential equations $$u'(t)=G(u(t))$$ with initial data $u(0)=u_0\in \overline \Omega$ are defined and remain inside $\Omega$ for all $t>0$. Now, let us denote the space of $T$-periodic continuous functions by $$ C_T:=\{u\in C(\mathbb{R},\mathbb{R}^N):u(t+T)=u(t)\} $$ and, for given $p\in C_{T}$, consider the non-autonomous system $$u'(t)=G(u(t)) + p(t).$$ If $\overline\Omega$ has the fixed point property, then the above system has at least one $T$-periodic orbit, provided that $\|p\|_\infty$ is small. This is a straightforward consequence of the fact that the time-dependent vector field $G(x)+ p(t)$ is still inwardly pointing for all $t$; hence, the set $\overline \Omega$ is invariant for the associated flow and thus the Poincar\'e operator given by $Pu_0:=u(T)$ is well defined for $u_0\in \overline\Omega$ and satisfies $P(\overline\Omega)\subset \overline\Omega$. More generally, observe that, when (\ref{hart-weak}) is assumed, the homotopy defined by $h(x,s):= sG(x) - (1-s)\nu(x)$ with $s\in [0,1]$ does not vanish on $\partial\Omega$; whence $$ deg_B(G,\Omega,0) = deg_B(-\nu,\Omega,0), $$ where $deg_B$ stands for the Brouwer degree. Thus, it follows from \cite{hopf} that $deg_B(G,\Omega,0)=(-1)^N\chi(\Omega)$, where $\chi(\Omega)$ denotes the Euler characteristic of $\Omega$. It is worth recalling (see, \emph{e.g.}, \cite{wecken}) that if $\overline \Omega$ has the fixed point property, then $\chi(\Omega)$ is different from $0$. This follows easily in the present setting from the fact that if $\chi(\Omega)=0$ then one can construct a field $G$ satisfying (\ref{hart-weak}) that does not vanish in $\Omega$. If $\overline\Omega$ has the fixed point property, then there exist (non-constant) $T$-periodic solutions of all periods, which, in turn, implies that $G$ vanishes, a contradiction. Interestingly, the converse of the result in \cite{wecken} is not true; that is, one can easily find $\Omega$ with nonzero Euler characteristic such that $\overline \Omega$ does not have the fixed point property. For such a domain, the Poincar\'e map obviously has a fixed point (because $G$ vanishes in $\Omega$). This yields the conclusion that a fixed point-free map in $C(\overline \Omega,\overline\Omega)$ cannot belong to the closure of the set of all the Poincar\'e maps associated to the homotopy class of $-\nu$. Now suppose, independently of the value of $\chi(\Omega)$, that $G$ vanishes at some point $e\in \Omega$, namely, that $e$ is an equilibrium point of the autonomous system. It is well known that if $M:=DG(e)$ is nonsingular, then the degree of $G$ over any small neighbourhood $V$ of $e$ is well defined and coincides with $s(M)$, where \begin{equation} \label{sM} s(M):= sgn ({\rm det}(M)). \end{equation} Thus, if $s(M)$ is different from $(-1)^N\chi(\Omega)$, then the excision property of the degree implies that the system has at least another equilibrium point in $\Omega\setminus \overline V$.
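To illustrate the role of $s(M)$, the following Python sketch computes $s(M)={\rm sgn}({\rm det}\,DG(e))$ for a hypothetical planar field; the field $G$, the equilibrium $e$ and the finite-difference Jacobian are illustrative choices and not objects taken from this paper.
\begin{verbatim}
import numpy as np

# Hypothetical planar field with an equilibrium at the origin.
def G(x):
    return np.array([x[0]**2 - x[1], x[0]*x[1] - x[0]])

def jacobian(f, x, h=1e-6):
    # central finite differences, one column per coordinate direction
    J = np.zeros((x.size, x.size))
    for j in range(x.size):
        e_j = np.zeros(x.size); e_j[j] = h
        J[:, j] = (f(x + e_j) - f(x - e_j)) / (2 * h)
    return J

e = np.array([0.0, 0.0])             # G(e) = 0
M = jacobian(G, e)                   # here M = [[0, -1], [-1, 0]]
s = int(np.sign(np.linalg.det(M)))   # s(M) = -1
print(M, s)
\end{verbatim}
For this field, with $N=2$ and $\Omega$ a disc containing the three zeros of $G$, one has $\chi(\Omega)=1$ and $s(M)=-1\neq(-1)^N\chi(\Omega)$; accordingly, $G$ also vanishes at $(\pm 1,1)$, in agreement with the count $\Gamma$ defined in (\ref{Gamma}) below.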
Furthermore, it follows from Sard's lemma that, for almost all values $\overline p$ in a neighbourhood of $0\in \mathbb{R}^N$, the mapping $G + \overline p$ has at least $\Gamma$ different zeros in $\Omega$, with \begin{equation} \label{Gamma} \Gamma=\Gamma(M):=|\chi(\Omega)- (-1)^{N} s(M)| + 1. \end{equation} Thus, one might expect that if $p\in C(\mathbb{R},\mathbb{R}^N)$ is $T$-periodic and $\|p\|_\infty$ is small, then the number of $T$-periodic solutions of the non-autonomous system is generically greater than or equal to $\Gamma$. Here, `generically' should be understood in the sense of Baire category, that is, the property is valid for all $p$ (close to the origin) in the space of continuous $T$-periodic functions, except for a meager set. It can be shown, indeed, that the fixed point index of the Poincar\'e map $P$ at $e$ is equal to $(-1)^Ns(M)$ and, moreover, a homotopy argument shows that the degree of $P$ over $\Omega$ is equal to $\chi(\Omega)$. Details are omitted because the result follows from the main theorem of the present paper. For several reasons, the situation is different for the delayed system \begin{equation} \label{ec} u'(t) = g(u(t),u(t-\tau)) \end{equation} where, for simplicity, we shall assume that $g:\overline\Omega\times \overline\Omega\to \mathbb{R}^N$ is continuously differentiable. In the first place, observe that, due to the delay, the condition that the field $G(x):=g(x,x)$ is inwardly pointing does not necessarily prevent solutions with initial data $x_0:=\phi\in C([-\tau,0],\overline\Omega)$ from eventually abandoning $\overline\Omega$. However, taking into account that $$ |u(t_0-\tau)- u(t_0)| \le \tau \max_{t\in [t_0-\tau,t_0]} |u'(t)|,$$ it follows that the flow-invariance property, now over the set $C([-\tau,0],\overline\Omega)$, is retrieved under the stronger assumption \begin{equation} \label{hart} \langle g(x,y),\nu(x)\rangle < 0 \qquad (x,y)\in \mathcal A_\tau (\Omega) \end{equation} where $$ \mathcal A_\tau (\Omega):= \{ (x,y)\in \partial\Omega\times \overline\Omega: |y-x|\le \tau\|g\|_{\infty}\}. $$ In the second place, the previous considerations regarding the Poincar\'e map become less obvious, since the latter is now defined not over $\overline\Omega$ but over the metric space $C([-\tau,0],\overline\Omega)$. In connection with this fact, we recall that the characteristic equation for autonomous linear delayed systems is transcendental (a so-called quasipolynomial equation), so there typically exist infinitely many complex characteristic values. \medskip Throughout the paper, we shall assume as before that system (\ref{ec}) has an equilibrium point $e\in \Omega$, that is, such that $g(e,e)=0$. This necessarily occurs when $\chi(\Omega)\neq 0$, although this latter condition shall not be imposed. Denote by $A,B\in \mathbb{R}^{N\times N}$ the respective matrices $D_xg(e,e)$ and $D_yg(e,e)$. Again, if $A+B$ is nonsingular and $s(A+B)$ is different from $(-1)^N\chi(\Omega)$, then the system has at least one extra equilibrium point in $\Omega$; furthermore, the number of equilibria in $\Omega$ is generically greater than or equal to $\Gamma$.
This is readily verified by writing the set of all the functions $g\in C^1(\overline\Omega\times\overline\Omega,\mathbb{R}^N)$ satisfying (\ref{hart}) as the union of the closed sets $$X_n:=\left\{g\in C^1(\overline\Omega\times\overline\Omega,\mathbb{R}^N): \langle g(x,y),\nu(x)\rangle \le -\frac 1n \quad\hbox{for $(x, y)\in \mathcal A_\tau (\Omega)$} \right\}$$ and noticing that $X_n\cap \mathcal C$ is nowhere dense, where $ \mathcal C$ denotes the set of those functions $g$ such that $0$ is a critical value of the corresponding $G$. Our goal in this work is to extend the preceding ideas to non-autonomous periodic perturbations of (\ref{ec}), namely the problem \begin{equation} \label{nonaut} u'(t) = g(u(t),u(t-\tau)) + p(t) \end{equation} with $p\in C_{T}$. As a basic hypothesis, we shall assume that the linearisation at the equilibrium, that is, the system \begin{equation} \label{linear} u'(t) = Au(t)+ Bu(t-\tau) \end{equation} has no nontrivial $T$-periodic solutions. This clearly implies, in particular, the above-mentioned condition that $A+B$ is invertible. From the Floquet theory for DDEs, it is known that the latter condition is also sufficient for nearly all positive values of $T$ (\emph{i.e.}, except for at most a countable set). For the sake of completeness, this specific consequence of the Floquet theory shall be shown below (see Remark \ref{remark1}). Our main result reads as follows. \begin{thm} \label{main} Let the equilibrium $e$ and the matrices $A$ and $B$ be as before and assume that the linear system (\ref{linear}) has no nontrivial $T$-periodic solutions. Then: \begin{itemize} \item[(a)] There exists $r>0$ such that for any $p\in C_{T}$ with $\|p\|_\infty<r$ the non-autonomous problem (\ref{nonaut}) has at least one $T$-periodic solution. \item[(b)] If moreover (\ref{hart}) holds and $ s(A+B) \neq (-1)^N\chi(\Omega) $ with $s$ defined as in (\ref{sM}), then (\ref{nonaut}) has at least two $T$-periodic solutions. \item[(c)] Furthermore, there exists a residual set $\Sigma_r\subset C_T$ such that if $p\in \Sigma_r\cap B_r(0)$, then the number of $T$-periodic solutions is at least $\Gamma(A+B)$, where $\Gamma$ is given by (\ref{Gamma}). \end{itemize} \end{thm} The next result is an immediate consequence of Theorem \ref{main} combined with the preceding comments. \begin{cor} \label{corol} Let $e, A$ and $B$ be as before and assume that $A+B$ is invertible. Then for nearly all $T>0$ there exists $r=r(T)>0$ such that if $p\in C_{T}$ with $\|p\|_\infty<r$, then the non-autonomous problem (\ref{nonaut}) has at least one $T$-periodic solution. If moreover (\ref{hart}) holds and $ s(A+B) \neq (-1)^N\chi(\Omega), $ then the number of $T$-periodic solutions is at least $2$ and generically $\Gamma(A+B)$. \end{cor} For small delays, the condition that (\ref{linear}) has no nontrivial $T$-periodic solutions can be formulated explicitly in terms of the matrix $A+B$: \begin{cor} \label{smalldelay} Let $e, A$ and $B$ be as before and assume that $\frac{2k\pi}Ti$ is not an eigenvalue of the matrix $A+B$ for all $k\in\mathbb{N}_0$. Then for each $\tau$ small enough there exists $r=r(\tau)$ such that the non-autonomous problem (\ref{nonaut}) has at least one $T$-periodic solution for any $p\in C_{T}$ with $\|p\|_\infty<r$. If moreover (\ref{hart-weak}) holds for $G(x):=g(x,x)$ and $s(A+B)\ne (-1)^N\chi(\Omega)$, then (\ref{nonaut}) has at least two $T$-periodic solutions and generically $\Gamma(A+B)$.
\end{cor} It is worth mentioning that if $\Omega$ is, for example, a ball, then the condition $s(A+B)\neq (-1)^N\chi(\Omega)$ implies that the equilibrium is unstable. As we shall see, this can be regarded as a consequence of the fact that the Leray-Schauder index of the fixed point operator defined in the proof of our main theorem is $(-1)^{N+1}$. This connection can be deduced from a version of the Krasnoselskii relatedness principle, which implies that the mentioned index coincides, except for a $(-1)^N$ factor, with that of the Poincar\'e operator. As shown in Proposition \ref{poinc-stab}, this implies, in turn, that the equilibrium cannot be stable. The paper is organised as follows. In the next section, we prove some basic facts concerning the linearised problem (\ref{linear}); in particular, we give a necessary and sufficient condition in order to ensure that it has no nontrivial $T$-periodic solutions. In section \ref{dem} we present a proof of Theorem \ref{main} by means of an appropriate fixed point operator. In section \ref{sec-delay}, we give a proof of Corollary \ref{smalldelay}. In section \ref{poincare}, we make some considerations on the stability of the equilibrium and on the indices of, on the one hand, the fixed point operator defined in section \ref{dem} and, on the other hand, the Poincar\'e map. Finally, a simple application of the main results to a singular system is introduced in section \ref{exam}. \section{Linearised system} In this section, we shall prove some basic facts concerning the linear system (\ref{linear}). To this end, let us introduce some notation. For $k\in \mathbb N_0$, define $$\lambda_k:= \frac{2k\pi}T$$ and $$\varphi_k(t):= \cos(\lambda_k t) \qquad \psi_k(t):= \sin (\lambda_k t).$$ It is readily verified that $$ \varphi_k(t-\tau)= \varphi_k(t)\varphi_k(\tau) + \psi_k(t)\psi_k(\tau) $$ $$ \psi_k(t-\tau)= \psi_k(t)\varphi_k(\tau) - \varphi_k(t)\psi_k(\tau) $$ and $$ \varphi_k'= -\lambda_k \psi_k,\qquad \psi_k'=\lambda_k\varphi_k. $$ For an element $u\in C_T$, we may consider its Fourier series, namely $$u = a_0 + \sum_{k=1}^\infty (\varphi_k a_k +\psi_k b_k) $$ in the $L^2$ sense, with $a_k, b_k\in \mathbb{R}^N$. Furthermore, recall that if $u$ is smooth (\emph{e.g.}, of class $C^2$) then the series and its term-by-term derivative converge uniformly to $u$ and $u'$ respectively. \begin{lem} \label{lema} Let $u\in C_T$ and define \begin{equation} \label{matrices} X_k:=A+\varphi_k(\tau)B, \qquad Y_k:=\lambda_kI + \psi_k(\tau) B. \end{equation} Then $u$ is a solution of (\ref{linear}) if and only if \begin{equation} \label{matr-ident} \left(\begin{array}{cc} X_k & -Y_k\\ Y_k & X_k \end{array} \right) \left(\begin{array}{c} a_k\\ b_k \end{array} \right) = \left(\begin{array}{c} 0\\ 0 \end{array} \right) \end{equation} for all $k\in \mathbb N_0$. \end{lem} \begin{proof} Since $\varphi_k'(t), \varphi_k(t-\tau), \psi_k'(t)$ and $\psi_k(t-\tau)$ belong to ${\rm\bf span}\{ \varphi_k(t),\psi_k(t)\}$, it follows that $u$ is a solution of (\ref{linear}) if and only if $$ (A+B)a_0=0 $$ and $$ \varphi_k'(t) a_k +\psi_k'(t) b_k = A(\varphi_k(t) a_k +\psi_k(t) b_k) + B(\varphi_k(t-\tau) a_k + \psi_k(t-\tau) b_k) $$ for all $k>0$. The latter identity, in turn, is equivalent to $$ \begin{array}{ccc} \lambda_k b_k & = & [A+ \varphi_k(\tau) B] a_k - \psi_{k}(\tau) Bb_k \\ {} \\ -\lambda_k a_k & = & \psi_{k}(\tau)Ba_k + [A+ \varphi_k(\tau) B] b_k, \end{array} $$ that is, $$X_ka_{k} -Y_kb_{k}= Y_ka_{k} + X_kb_{k}= 0.
$$ Because $X_0=A+B$ and $Y_0=0$, we deduce that $u$ is a solution of (\ref{linear}) if and only if (\ref{matr-ident}) holds for all $k\in \mathbb N_0$. \end{proof} \begin{cor} \label{no-nontrivial} (\ref{linear}) has no nontrivial $T$-periodic solutions if and only if \begin{equation} \label{nec-suf} h_k:={\rm det}\left(\begin{array}{cc} X_k & -Y_k\\ Y_k & X_k \end{array} \right)\neq 0 \end{equation} for all $k\in \mathbb N_0$. \end{cor} \begin{rem} \label{remark1} \ \begin{enumerate} \item Because $A+B$ is invertible, it is clear that for nearly all $T>0$ condition (\ref{nec-suf}) is satisfied for all $k$. Indeed, it suffices to observe that $h_k$, regarded as a function of $T\in (0,+\infty)$, is an analytic function which does not vanish identically and, consequently, it has at most a countable number of zeros. \item It can be shown that $h_k\ge 0$; in particular, its roots have even multiplicity. The proof is straightforward when $A$ and $B$ commute, since in this case $$ {\rm det}\left(\begin{array}{cc} X_k & -Y_k\\ Y_k & X_k \end{array} \right) = {\rm det}(X_k ^2+Y_k ^2). $$ The conclusion then follows, because for any pair of square real matrices $X, Y$ such that $XY=YX$ it is verified that $$ {\rm det}(X ^2+Y^2)= {\rm det}[(X+iY)(X-iY)] = {\rm det}(X+iY)\overline{{\rm det}(X+iY)}\ge 0. $$ A proof for the non-commutative case is given below in section \ref{dem}, step \ref{directo-fourier}. Notice that (\ref{nec-suf}) may hold for non-invertible matrices $X_k$ and $Y_k$: for instance, observe that $$ \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)^2 + \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right)^2 = I. $$ \item \label{k_0} Since $\lambda_k\to +\infty$ it follows, for $k$ large, that $$h_k={\rm det}(Y_k){\rm det}(Y_k + X_kY_k^{-1}X_k)\simeq \lambda_k^{2N} > 0.$$ In particular, there exists $k_0$ such that if $u$ is a $T$-periodic solution of (\ref{linear}) then $a_k=b_k=0$ for $k>k_0$. This means that $u$ is a (vector) trigonometric polynomial. Incidentally, observe that, because the family $\{\varphi_k, \psi_k\}$ is uniformly bounded, the constant $k_0$ may be chosen independent of $\tau$. In other words, if we consider the linear operator $L:C_T\to C_T$, given by $Lu(t):=u'(t) - Au(t) - Bu(t-\tau)$, then $ {\rm ker}(L)\subset {\rm \bf span}\{\varphi_k,\psi_k\}_{0\le k\le k_0}$. Observe furthermore that ${\rm Im}(L)$ consists of all the Fourier series $a_0+ \sum_{k>0}(\varphi_ka_k + \psi_kb_k)$ such that $a_0\in {\rm Im}(A+B)$ and $(a_k,b_k)\in {\rm Im}(M_k)$, where $M_k$ is the matrix defined in (\ref{matr-ident}). This yields a direct proof of the well-known fact that $L$ is a zero-index Fredholm operator. Moreover, it is verified that $(a_k,b_k)\in {\rm ker}(M_k)\iff (-b_k,a_k) \in {\rm ker}(M_k)$, a fact that will be of relevance in the proofs of our results. \end{enumerate} \end{rem} \section{Proof of the main theorem} \label{dem} For convenience, a little extra notation shall be introduced. For a function $u\in C_T$, let us write $$\mathcal Iu(t):= \int_0^t u(s)\, ds, \qquad \overline u:= \frac 1T {\mathcal Iu (T)}. $$ Moreover, denote by $\mathcal N$ the Nemitskii operator associated to the problem, namely $$ \mathcal Nu(t):= g(u(t),u(t-\tau)). $$ Without loss of generality we may assume $e=0$ and fix $T>0$ such that (\ref{linear}) has no nontrivial $T$-periodic solutions.
For simplicity, we shall assume from the beginning that all the assumptions are satisfied; it shall be easy for the reader to deduce the existence of one solution near the equilibrium when (\ref{hart}) is not satisfied. Define the open bounded set $U=\{u\in C_T:u(t)\in \Omega\,\hbox{ for all $t$} \}$ and the compact operator $K:\overline U\to C_T$ given by $$ Ku(t):= \overline u -t\, \overline {\mathcal Nu} + \mathcal I\mathcal Nu(t) - \overline{\mathcal I\mathcal Nu}. $$ We shall prove that the Leray-Schauder degree of $I-K$ is equal to $(-1)^N\chi(\Omega)$ over $U$ and to $s(A+B)$ over $B_\rho(0)$ for small values of $\rho>0$. To this end, let us proceed in several steps: \begin{enumerate} \item Let $K_0u:= \overline u -\frac T2 \overline {\mathcal Nu}$ and define, for $s\in [0,1]$, the operator given by $K_s:=s K +(1-s)K_0$. We claim that $K_s$ has no fixed points on $\partial U$. Indeed, for $s>0$ it is clear that $u\in\overline U$ is a fixed point of $K_s$ if and only if $u'(t)=s\mathcal Nu(t)$, that is: $$ u'(t)= sg(u(t),u(t-\tau)). $$ Suppose that there exists $t_0$ such that $u(t_0)\in\partial\Omega$; then we deduce, as before, $$|u(t_0-\tau)- u(t_0)| \le \tau \max_{t\in [t_0-\tau,t_0]} |u'(t)| \le \tau \|g\|_ \infty $$ and by (\ref{hart}) we obtain $$0= \langle u'(t_0),\nu(u(t_0))\rangle = s\langle g(u(t_0),u(t_0-\tau)),\nu(u(t_0))\rangle <0, $$ a contradiction. On the other hand, we observe that the range of $K_0$ is contained in the set of constant functions, which can be identified with $\mathbb{R}^N$; thus, the Leray-Schauder degree of $I-K_0$ can be computed as the Brouwer degree of its restriction to $\overline U\cap \mathbb{R}^N = \overline \Omega$. Furthermore, for $u(t)\equiv u\in \overline \Omega$ it is clear that $(I-K_0)u= \frac T2 G(u)$, which does not vanish on $\partial \Omega=\partial U\cap \mathbb{R}^N$. By the homotopy invariance of the degree, we conclude that $$deg(I-K,U,0)=deg \left(\frac T2G,\Omega,0\right)=(-1)^N\chi(\Omega).$$ \item Let $K_L$ be the operator associated to the linearised problem, defined by $$ K_Lu(t):= \overline u -t\,\overline {\mathcal N_Lu} + \mathcal I\mathcal N_Lu(t) - \overline{\mathcal I\mathcal N_Lu}, $$ with $\mathcal N_Lu(t):= Au(t) + Bu(t-\tau).$ As before, it is seen that $K_Lu=u$ if and only if $u$ is a solution of (\ref{linear}); hence, it follows from the assumptions that $K_L$ has no nontrivial fixed points. Furthermore, the degree of $I-K_L$ coincides with the degree of $I-K$ on $B_\rho(0)$ when $\rho$ is small. This is a well-known fact but, for the reader's convenience, a simple proof is sketched as follows. Since the degree is locally constant, we may assume that $g$ is of class $C^2$ near $(0,0)$; then, for some $C>0$, $$ \|Kv-K_Lv\|_{\infty} \le C\|\mathcal Nv-\mathcal N_Lv\|_\infty = o(\rho). $$ Because $K_L$ is compact, it is verified that, for some $\theta>0$, $$ \|v-K_Lv\|_{\infty}\ge \theta \rho $$ for all $v\in \partial B_\rho(0)$. Indeed, due to linearity, it suffices to prove the claim for $\rho=1$. By contradiction, suppose that there exists a sequence $\{v_n\}\subset \partial B_1(0)$ such that $\|v_n-K_Lv_n\|_{\infty}\to 0$; then, passing to a subsequence, we may assume that $\{K_Lv_n\}$ converges to some $v$. Then $v_n\to v$ which, in turn, implies that $\|v\|_{\infty}=1$ and $v=K_Lv$, a contradiction.
It follows that if $\rho>0$ is small then $sK + (1-s)K_L$ has no fixed points on $\partial B_\rho(0)$ for $s\in [0,1]$ because $$ \|v - sKv - (1-s)K_Lv\|_{\infty} \ge \|v - K_Lv\|_{\infty} - \|K_Lv-Kv\|_{\infty} \ge \theta \rho - o(\rho)>0$$ for $v\in \partial B_\rho(0)$. Thus, the degree of $I-K$ is well defined and coincides with the degree of $I-K_L$ over $B_\rho(0)$. \item Claim: $deg(I-K_L,B_\rho(0),0) = s(A+B)$. \label{directo-fourier} Indeed, for $u$ as before it is seen by direct computation that $$u-K_Lu=\tilde a_0 + \sum_{k\ge 1} (\varphi_k\tilde a_k + \psi_k\tilde b_k)$$ where $$\tilde a_0= \mathcal M_0 a_0 $$ and $$ \left( \begin{array}{c} \tilde a_k \\ \tilde b_k \end{array}\right) = \mathcal M_k \left( \begin{array}{c} a_k \\ b_k \end{array}\right) $$ with $$\mathcal M_0:= \frac T2(A+B)\qquad \hbox{and }\,\, \mathcal M_k:= \frac 1{\lambda_k} \left( \begin{array}{cc} Y_k & X_k \\ -X_k & Y_k \end{array}\right)\quad \hbox{for}\, k>0. $$ Hence, the degree coincides with the sign of the determinant of the block matrix $$ \left( \begin{array}{ccccc} \mathcal M_0 & 0 & 0 & \ldots & 0 \\ 0 & \mathcal M_1 & 0 & \ldots & 0 \\ 0 & 0 & \mathcal M_2 & \ldots & 0\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & 0 & 0 & \ldots & \mathcal M_J \end{array}\right) $$ for $J$ sufficiently large. Thus, the proof follows in a straightforward manner from the fact that ${\rm det}(\mathcal M_k) >0$ for all $k>0$. We remark that the latter property holds even when $A$ and $B$ do not commute (see Remark \ref{remark1}). Indeed, identifying the pairs $(a,b)\in \mathbb{R}^N\times \mathbb{R}^N$ with vectors $a+ib\in \mathbb C^N$, a matrix of the form $\left( \begin{array}{cc} X & -Y \\ Y & X \end{array}\right)$ may be called a {$\mathbb C$-linear matrix}. Thus, we need to prove that if $\mathcal M$ is an arbitrary invertible $\mathbb C$-linear matrix, then the algebraic multiplicity of each eigenvalue $\sigma<0$ of $\mathcal M$ is even. It is known that this value can be computed as the dimension of the kernel of the matrix $(\mathcal M-\sigma I)^m$, where $m$ is the minimum integer such that ${\rm ker}(\mathcal M-\sigma I)^m = {\rm ker}(\mathcal M-\sigma I)^{m+1}$. Now observe that the set of $\mathbb C$-linear matrices is a subring of $\mathbb{R}^{2N\times 2N}$; thus, $(\mathcal M-\sigma I)^m$ is again a $\mathbb C$-linear matrix. In particular, if $(a,b)\in {\rm ker}(\mathcal M-\sigma I)^m$ then $(-b,a)\in {\rm ker}(\mathcal M-\sigma I)^m$ and the result follows. \item \textit{Existence of two solutions for small $p$}. From the previous steps and the fact that the degree is locally constant we deduce that $$deg(I-K,U,\hat p)=(-1)^N\chi(\Omega),\qquad deg(I-K,B_\rho(0),\hat p)=s(A+B)$$ when $\|\hat p\|_\infty$ is small. Now the excision property of the Leray-Schauder degree implies $$deg(I-K,B_\rho(0),\hat p)=s(A+B)\ne 0,$$ and $$ deg(I-K,U\backslash B_\rho(0),\hat p)=(-1)^N\chi(\Omega)- s(A+B) \ne 0.$$ Thus, there exists $\hat r>0$ such that the equation $(I-K)u=\hat p$ has at least two solutions for $\|\hat p \|_\infty <\hat r$. Finally, for each $p\in C_T$ define $$\hat p (t):= \mathcal I p(t) - \overline{\mathcal Ip} - t\overline p, $$ then clearly $\|\hat p\|_\infty\le c\|p\|_\infty$ for {some $c>0$}. The result is then deduced from the fact that if $u-Ku=\hat p$, then $u$ is a $T$-periodic solution of {(\ref{nonaut})}. 
\item \textit{Genericity.} The last part of the proof follows as a consequence of the following particular case of the Sard-Smale Theorem \cite{smale}: \begin{thm} Let $\mathcal F:X \to Y$ be a $C^1$ Fredholm map of index $0$ between Banach manifolds, i.e. such that $D\mathcal F(x):T_x X \to T_{\mathcal F(x)} Y$ is a Fredholm operator of index $0$ for every $x\in X$. Then the set of regular values of $\mathcal F$ is residual in $Y$. \end{thm} At this point, we notice that the argument is a bit subtle: when applied to $\mathcal F:=I-K$, the Sard-Smale Theorem implies the existence of a residual set $\Sigma\subset C_T$ such that the mapping $\mathcal F-\hat p$ has at least $\Gamma - 1$ zeros in $U\setminus B_\rho(0)$ for $\hat p\in \Sigma\cap B_{ \hat r}(0)$. Indeed, it is readily seen that $K$ is of class $C^1$ and $DK(u)$ is compact for all $u$. Thus, $\mathcal F=I-K$ is a zero-index Fredholm operator. If $\hat p$ is a regular value, that is, $D\mathcal F(u)$ is surjective for every preimage $u \in \mathcal F^{-1}(\hat p)$, then, since the index is $0$, it is also injective and from the open mapping theorem we conclude that $D\mathcal F(u)$ is an isomorphism. Hence, the number of such preimages in $U\setminus B_{\rho}(0)$ is greater than or equal to $|deg(I-K,U\setminus B_{\rho}(0),0)|$. This follows by taking small neighbourhoods $N_u$ around each of these values $u$ such that $\mathcal F:N_u\to \mathcal F(N_u)$ is a diffeomorphism. Because there are no other zeros of $\mathcal F -\hat p$ in $U\setminus B_{\rho}(0)$, the degree is the sum of the degrees $d_u$ over each of these neighbourhoods. The claim then follows from the fact that $d_u=\pm 1$ for each $u$. However, although the mapping $p\mapsto \hat p$ defined before establishes an isomorphism $J:C_T\to C_T^1$, it might happen that $J^{-1}(\Sigma\cap C^1_T)$ is not a residual set. The difficulty is overcome, for example, by considering the same operator $K$ as before, now defined over the set $$ \hat U:= \{u\in C^1_T: u(t)\in \Omega,\, \|u'\|_\infty < \|g\|_{\infty} + 1\} \subset C^1_T. $$ Details are left to the reader. \end{enumerate} \begin{rem} Notice that: \begin{enumerate} \item The existence of a solution near the equilibrium can also be proved in a direct way by the Implicit Function Theorem. \item Condition (\ref{hart}) alone implies, generically, the existence of at least $|\chi(\Omega)|$ solutions. \item Analogous conclusions are obtained if the sign of (\ref{hart}) is reversed. In this case, $G$ is homotopic to $\nu$ and hence $deg(I-K,U,0)= \chi(\Omega)$. However, in this latter case the considerations about the Poincar\'e operator become less clear, because it is not guaranteed that solutions with initial values $\phi$ with $\phi(t)\in \overline \Omega$ remain inside $\Omega$. \end{enumerate} \end{rem} \section{Small delays} \label{sec-delay} As mentioned in the introduction, condition (\ref{hart}) implies that the vector field $G(x)=g(x,x)$ is inwardly pointing over $\partial \Omega$, although the converse is not true; the need for a condition stronger than (\ref{hart-weak}) is due to the presence of the delay. However, if only (\ref{hart-weak}) is assumed, then Theorem \ref{main} is still valid for all $\tau<\tau ^*$, where $\tau ^*$ depends only on $\|g\|_\infty$. More precisely, by continuity we may fix $\varepsilon>0$ such that (\ref{hart}) holds for all $x\in \partial \Omega$ and all $y\in\overline\Omega$ with $|y-x|<\varepsilon$, and take $\tau^* := \frac \varepsilon{\|g\|_\infty}$: indeed, if $\tau<\tau^*$, then every $(x,y)\in \mathcal A_\tau(\Omega)$ satisfies $|y-x|\le \tau\|g\|_\infty<\varepsilon$, so (\ref{hart}) holds.
In this section, we show that the problem for small $\tau$ can be seen as a perturbation of the non-delayed case, thus giving the explicit sufficient condition for the non-existence of nontrivial $T$-periodic solutions of (\ref{linear}) expressed in Corollary \ref{smalldelay}. We shall make use of the following lemmas: \begin{lem} \label{lambdas} $1$ is a Floquet multiplier of the system $u'(t)=Mu(t)$ if and only if $-\lambda_k^2$ is an eigenvalue of $M^2$ for some $k\in\mathbb{N}_0$, that is, if and only if $\pm i\lambda_k$ are eigenvalues of $M$ for some $k$. \end{lem} \begin{proof} The result follows by direct computation, or from Lemma \ref{lema} with $\tau=0$. \end{proof} For example, when $M$ is triangularizable (or, equivalently, when all its eigenvalues are real), $1$ is not a Floquet multiplier of the system $u'(t)=Mu(t)$ if and only if $M$ is nonsingular; in this particular case, the conclusion follows directly, because the system uncouples and the result is obviously true for a scalar equation. \begin{lem} \label{Floq} Assume that $1$ is not a Floquet multiplier of the linear ODE system $u'(t)=(A+B)u(t)$. Then the DDE system (\ref{linear}) has no nontrivial $T$-periodic solutions, provided that $\tau$ is small. \end{lem} \begin{proof} Suppose that $u_n\in C_T$ is a nontrivial solution for $\tau_n\to 0$. Without loss of generality, it may be assumed that $\|u_n\|_\infty=1$ and hence $\|u_n'\|_\infty\le C$ for some constant $C$. Thus, we may assume that $u_n$ converges uniformly to some $u\in C_T$ with $\|u\|_\infty=1$. Because $|u_n(t-\tau_n)-u_n(t)|\le C\tau_n\to 0$, it becomes clear that $u_n'$ converges uniformly to $(A+B)u$ which, in turn, implies $u'=(A+B)u$, a contradiction. \end{proof} \begin{rem} A more direct proof of Lemma \ref{Floq} follows just by considering Remark \ref{remark1}.\ref{k_0} and Lemma \ref{lambdas}. Indeed, in the context of Lemma \ref{lema} it suffices to check that $h_k\ne 0$ only for a finite number of values of $k$. By continuity, this is true for small $\tau$, because ${\rm det} [(A+B)^2 + \lambda_k^2 I]\neq 0$ for all $k$. However, the previous proof is of interest in its own right because it can be extended in a straightforward manner to the non-autonomous case. \end{rem} \medskip \underline{\textit{Proof of Corollary \ref{smalldelay}}}: As a consequence of the preceding lemma, the conclusions of Theorem \ref{main} hold for small $\tau$, provided that the linearisation has no nontrivial $T$-periodic solutions in the non-delayed case. Thus, in view of Lemma \ref{lambdas}, the proof is complete. \hfill{}$\square$ \section{Poincar\'e operator} \label{poincare} In this section, we shall make some considerations regarding the Poincar\'e operator associated to the system. Let us first observe that if $\chi(\Omega)=1$ (for example, if $\Omega$ is homeomorphic to a ball), then the condition $s(A+B) \neq (-1)^N\chi(\Omega)$ in Theorem \ref{main} simply reads $(-1)^N {\rm det}(A+B)<0$. This, in turn, implies that the equilibrium is unstable. Indeed, consider the characteristic function $h(\lambda)= {\rm det}\left(\lambda I - A - Be^{-\lambda\tau} \right)$; then $h(0)= (-1)^N {\rm det}(A+B)<0$ and $h(\lambda) \simeq \lambda^N$ for $\lambda\gg 0$. In particular, this implies the existence of a characteristic value $\lambda>0$. We shall show that, in the present context, the instability of the equilibrium when $(-1)^N {\rm det}(A+B)<0$ is due to the fact, proved in section \ref{dem}, that the index of the fixed point operator $K$ at $e$ (i.e.
the degree of $I-K$ over small balls around $e$) is equal to $(-1)^{N+1}$. When $\tau=0$, this can be regarded as a direct consequence of the following properties: \begin{enumerate} \item $deg(I-K, B_\rho(e),0)$ with $B_\rho(e)\subset C_T$ is equal to $(-1)^Ndeg_B(I-P, B_\rho(e),0)$ with $B_\rho(e)\subset\mathbb{R}^N$, where $P$ is the Poincar\'e map. \item If the equilibrium is stable, then the index of $P$ is $1$. \end{enumerate} The first property is a particular case of a \textit{relatedness principle} due to Krasnoselskii (see \cite{krasno}). The second property is well-known and can be found for example in \cite{K}. For more details see \cite{rafa}, where sufficient conditions for the validity of the converse statement are also obtained. Our goal in this section consists in understanding the connections between the instability of the equilibrium and the index of the fixed point operator defined in the proof of the main theorem. With this aim, let us define the Poincar\'e operator for the delayed case as follows. Let $\tau\le T$ and consider a general autonomous system \begin{equation} \label{general} u'(t)=F(u_t) \end{equation} with $F: C([-\tau,0])\to \mathbb{R}^N$ locally Lipschitz, \emph{i.e.}: for all $R>0$ there exists a constant $L$ such that $$ |F(\phi)-F(\psi)|\le L\|\phi-\psi\|_\infty $$ for all $\phi,\psi\in \overline {B_R(0)}\subset C([-\tau,0],\mathbb R^N)$. The notation $u_t$ expresses, as usual, the mapping defined by $u_t(\theta):=u(t+\theta)$ for $\theta\in [-\tau,0]$. Denote by ${\rm dom}(P)\subset C([-\tau,0])$ the set of those functions $\phi$ such that the unique solution $u=u(\phi)$ of the problem with initial condition $\phi$ is defined up to $t=T$; then $P:{\rm dom}(P)\to C([-\tau,0])$ is defined by $$ P\phi(s):=u(T+s). $$ Clearly, the $T$-periodic solutions of the problem can be identified with the fixed points of $P$. We shall see that, as in the non-delayed case, if the linearisation has no nontrivial $T$-periodic solutions then the index $i(P)$ of the operator $P$ at a stable equilibrium is equal to $1$. To this end, assume without loss of generality that $e=0$ and observe that stability implies that ${\rm dom}(P)$ is a neighbourhood of $0$. It is worth noticing that, in the general setting, extra conditions are required in order to prove the compactness of $P$ (see \emph{e.g}. \cite{liu}), so the Leray-Schauder degree may not be well defined; however, it is verified that the stability assumption implies that $P$ is compact over small neighbourhoods of $0$. More precisely: \begin{lem} \label{compact} Let $F$ be as before and assume that for some open $U\subset C([-\tau,0])$ there exists $R>0$ such that if $\phi\in U$ then the solution $u$ with initial condition $\phi$ is defined and satisfies $|u(t)| <R$ for all $t\in [0,T]$. Then $P$ is well defined and compact over $U$. \end{lem} \begin{proof} Let $B\subset U$ be bounded and observe, in the first place, that $P(B)$ is bounded. Moreover, if $u$ is a solution with initial condition $\phi\in B$, then $$u(t)= \phi(0) + \int_0^t F(u_s)\, ds. $$ Enlarging $R$ if necessary, we may assume $B\subset B_R(0)$; then $\|u_s\|_\infty <R$ for all $s\in [0,T]$. Given $t_1<t_2$ in $[-\tau,0]$, since $\tau\le T$ it is verified that $$|P\phi(t_2)-P\phi(t_1)| \le \int_{T+t_1}^{T+t_2} |F(u_s)|\, ds. $$ Let $L$ be the Lipschitz constant corresponding to $R$; then $$|F(\phi)|\le |F(0)| + L\|\phi\|_\infty\le C + LR, $$ where $C:=|F(0)|$.
Hence $|P\phi(t_2)-P\phi(t_1)|\le (C+LR)(t_2-t_1)$ and the result follows from the Arzel\`{a}-Ascoli Theorem. \end{proof} \begin{rem} For example, the assumptions of the previous lemma are satisfied if $F$ has linear growth, that is, $$|F(\phi)|\le \gamma\|\phi\|_\infty + \delta $$ for some constants $\gamma,\delta\ge 0$. \end{rem} Furthermore, extra assumptions are required to ensure the non-existence of nontrivial periodic solutions near $0$; this is why we shall impose this fact as an extra condition (see Proposition \ref{poinc-stab} below), which is clearly satisfied, for example, when the stability is asymptotic. For simplicity, we shall also assume that $F$ is Fr\'echet differentiable at $0$, that is, $$F(\phi)= DF(0)\phi + \mathcal R(\phi) $$ with $|\mathcal R(\phi)| = o(\|\phi\|_\infty)$. Thus, it is readily verified that the linearisation of $P$ at the origin coincides with the Poincar\'e operator associated to the linearised system $u'(t)=DF(0)u_t$. \begin{prop} \label{poinc-stab} In the previous setting, assume that $0$ is a stable equilibrium of (\ref{general}) such that its linearisation has no nontrivial $T$-periodic solutions. Then $i(P)=1$. \end{prop} \begin{proof} Without loss of generality, we may assume that $P$ is compact on $\overline V$ for some neighbourhood $V$ of $0$. It follows from the assumptions that the index of $P$ is well defined and coincides with the index of its linearisation $P_L$. According to Theorem 13.8 in \cite{brown}, $deg(I-P_L,B_\rho(0),0)$ is equal to $(-1)^\alpha$, where $\alpha$ is the sum of the (finite) algebraic multiplicities of the (finitely many) eigenvalues $\sigma$ of $P_L$ satisfying $\sigma>1$. If $deg(I-P_L,B_\rho(0),0) = -1$, then $P_L$ has an eigenfunction $\phi$ with eigenvalue $\sigma>1$. If $u$ is the corresponding solution of the linearised problem with initial condition $u=\phi$ on $[-\tau,0]$, then $u$ can be extended to $\mathbb{R}$ in a $(T,\sigma)$-periodic fashion, that is, with $u(t+T)=\sigma u(t)$ for all $t$ (see \cite{pinto}). In particular, $u(t)$ is unbounded for $t>0$. In other words, $0$ is unstable for the linearised problem which, in turn, implies that it cannot be stable for the original problem (see \emph{e.g.} \cite{hale}). \end{proof} In order to complete the picture for system (\ref{ec}), it would be interesting to prove that, indeed, the index of the Poincar\'e operator at the equilibrium when the linearisation has no nontrivial $T$-periodic solutions is $(-1)^Ns(A+B)= (-1)^Ni(K)$. Here, we shall simply verify that the claim holds when the delay is small; the analysis of the general case and a version of the Krasnoselskii relatedness principle for delayed systems shall be the subject of a forthcoming paper. To this end, let us start with a direct computation for the non-delayed case: \begin{lem} \label{degP} Let $M\in \mathbb R^{N\times N}$ and let $P_M$ be the Poincar\'e operator associated to the linear ODE system $u'(t)=Mu(t)$ for some fixed $T$. If $1$ is not a Floquet multiplier, then $$deg_B(I-P_M,V,0) = (-1)^Ns(M)$$ for any neighbourhood $V\subset \mathbb{R}^N$ of the origin. \end{lem} \begin{proof} By definition, $$(I-P_M)(u)= \left(I-e^{TM}\right)u.$$ Write $M$ in its (possibly complex) Jordan form $M=C ^{-1}JC$, where $J$ is upper triangular. Then $${\rm det}\left(I- e^{TM}\right) = {\rm det}\left(I- e^{TJ}\right) = \prod_{j=1}^N \left(1-e ^{\lambda_jT}\right), $$ where $\lambda_j$ are the eigenvalues of $M$.
Now observe that if $\lambda=a+ib\notin \mathbb{R}$, then $$ \left(1-e ^{\lambda T}\right)\left(1-e ^{\overline{\lambda}T}\right) = 1 + e ^{aT}\left(e^{aT} -2\cos (bT)\right) >0. $$ Thus, complex eigenvalues do not affect the sign of ${\rm det}\left(I- e^{TM}\right)$, just as they do not affect the sign of ${\rm det}(M)$, because $\lambda\overline\lambda =|\lambda|^2$. The result follows now from the fact that, for $\lambda\in\mathbb{R}$, $$sgn\left(1 - e^{\lambda T}\right) = -sgn (\lambda).$$ \end{proof} \begin{rem} An alternative (somewhat exotic) proof follows from the relatedness principle. Indeed, we may consider the operator $K_L$ in the proof of Theorem \ref{main} with $A=M$ and $B=0$; then $deg_B(I-P,V,0)=(-1)^Ndeg(I-K_L, V,0) = (-1)^Ns(M)$. \end{rem} The conclusion for small $\tau$ is obtained now by a continuity argument. Indeed, fix $r>0$ and $P_L$ as before. The solutions of (\ref{linear}) with initial value $\phi\in B_r(0)$ are uniformly bounded; thus, by Gronwall's lemma we deduce that $\|P_L-P_0\| = O(\tau)$, where the operator $P_0$ is defined by $P_0(\phi)(t)\equiv v(T)$, with $v$ the unique solution of the system $v'(t)=(A+B)v(t)$ satisfying $v(0)=\phi(0)$. Moreover, it follows that if $\tau$ is small then $P_L$ is homotopic to $P_0$; thus, the result follows from Lemma \ref{degP}. \section{Example: a system of DDEs with singularities} \label{exam} A simple example is presented here in order to illustrate our main results. Let $0\le J_0\le J \ne 0$ and $$ g(x,y):= -dx + |y|^2\left( \sum_{j=1}^{J_0} a_j\frac{x-v_j}{|x-v_j|^{\alpha_j}} + \sum_{j=J_0+1}^{J} a_j\frac{y-v_j}{|y-v_j|^{\alpha_j}} \right)$$ where $d,a_j>0$, $\alpha_j>2$ and $v_j\in \mathbb{R}^N\backslash\{0\}$ are pairwise different vectors. A simple computation shows that $$\langle g(x,x),x\rangle < 0 \qquad |x|\gg 0$$ and $$\langle g(x,x),v_j-x\rangle < 0 \qquad |x-v_j|\ll 1 $$ for $j=1,\ldots, J$. Moreover, $g(0,0)=0$ and $$A=D_xg(0,0)=-dI, \quad B=D_yg(0,0)=0. $$ Thus, taking $\Omega:=B_R(0)\backslash \cup_{j=1}^J B_\eta(v_j)$ where $R\gg 0$ and $\eta\ll 1$, Corollary \ref{smalldelay} applies. Since $\chi(\Omega)= 1-J < 1 = (-1)^Ns(A+B)$, we conclude that the number of $T$-periodic solutions of (\ref{nonaut}) for small $\tau$ and $\|p\|_\infty$ is generically at least $J+1$. \section*{Acknowledgements} The first two authors were partially supported by projects CONICET PIP 11220130100006CO and UBACyT 20020160100002BA. The first author wants to thank Prof. J. Barmak for his thoughtful comments regarding the fixed point property and the Euler characteristic.
{ "timestamp": "2018-04-17T02:16:03", "yymm": "1804", "arxiv_id": "1804.05616", "language": "en", "url": "https://arxiv.org/abs/1804.05616" }
\textbf{Acknowledgement} This document was written in \href{http://rmarkdown.rstudio.com/}{Rmarkdown} inside \texttt{RStudio} and converted with \texttt{knitr} (Version 1.17) by Xie (\protect\hyperlink{ref-yihui}{2015}) and \texttt{pandoc}. All visualisations in this document were created using the \texttt{ggplot2} package (Version 2.2.1) by Wickham (\protect\hyperlink{ref-hadley}{2016}) in \texttt{R} 3.4.1 (R Core Team \protect\hyperlink{ref-R}{2017}). \newpage \tableofcontents \newpage \chapter{Introduction}\label{introduction} Recent data storage technology has enabled improved data collection and capacity, which leads to ever larger volumes of collected data and higher data dimensionality. Therefore, searching for an optimal set of predictive features within a noisy dataset has become an indispensable process in supervised machine learning for extracting useful patterns. Excluding irrelevant and redundant features for problems with large and noisy datasets provides the following benefits: (1) reduction in overfitting, (2) better generalisation of models, (3) superior prediction performance, and (4) fast, CPU- and memory-efficient prediction models. There are two major techniques for reducing data dimensionality: Feature Selection (FS) and Feature Extraction (FE). The core principle of Feature Extraction (FE) is to generate new features, usually by combining the original ones, and then map the acquired features into a new (lower-dimensional) space. The well-known FE techniques are Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and ISO-Container Projection (ISOCP) (Z. Zheng, Chenmao, and Jia \protect\hyperlink{ref-iso}{2010}). The most critical drawback of the FE method is that the reduced set of newly generated features might lose the interpretability of the original dataset. Relating new features to original features proves difficult in FE, and consequently, any further analysis of transformed features becomes limited. Feature Selection (FS) is loosely defined as selecting a subset of available features in a dataset that is associated with the response variable by excluding irrelevant and redundant features (Aksakalli and Malekipirbazari \protect\hyperlink{ref-vural}{2016}). In contrast with the FE method, FS preserves the physical meaning of the original features and provides better readability and interpretability. Suppose the dataset has \(p\) original features; then the FS problem has \(2^p-1\) possible solutions, and it is thus NP-hard due to the exponential increase in computational complexity with \(p\). FS methods fall into four categories: wrapper, filter, embedded and hybrid methods. A wrapper method uses a predictive score of a given learning algorithm to appraise selected features. While wrappers usually give the best performing set of features, they are very computationally intensive as they train a new model for each subset. In contrast with wrapper methods, filter methods evaluate features without utilising any learning algorithms. Filter methods measure statistical characteristics of data such as correlation, distance and information gain to eliminate insignificant features.
The commonly used filter methods include Relief (Sikonia and Kononenko \protect\hyperlink{ref-sikonia}{2003}) and information-gain based models (H. Peng, Long, and Ding \protect\hyperlink{ref-peng}{2005}). Senawi, Wei, and Billings (\protect\hyperlink{ref-senawi}{2017}) proposed a filter method that applies a straightforward hill-climbing search and correlation characteristics such as conditional variance and orthogonal projection. Bennasar, Hicks, and Setchi (\protect\hyperlink{ref-bennasar}{2015}) introduced two filter methods which aim to mitigate the issue of overestimated feature significance by using mutual information and the maximin criterion. Embedded methods aim to improve a predictive score like wrapper methods but carry out the FS process during learning (Cadenas, Garrido, and Martinez \protect\hyperlink{ref-cadenas}{2013}). Their computational complexity tends to fall between filters and wrappers. Some of the popular embedded methods are the Least Absolute Shrinkage and Selection Operator (LASSO) (Tibshirani \protect\hyperlink{ref-lasso}{1996}), Support Vector Machine recursive feature elimination (SVM-RFE) (Guyon et al. \protect\hyperlink{ref-guyon}{2002}) and Random Forest (Breiman \protect\hyperlink{ref-rf}{2001}). Hybrid methods are a combination of filter and wrapper methods. Hsu, Hsieh, and Ming-Da (\protect\hyperlink{ref-hsu}{2011}) combined F-score and information gain with the sequential forward and backward searches to solve bioinformatics problems. Cadenas, Garrido, and Martinez (\protect\hyperlink{ref-cadenas}{2013}) presented the blending of a Fuzzy Random Forest and a discretisation process (filter). Apolloni, Leguizamón, and Alba (\protect\hyperlink{ref-apolloni}{2016}) developed two hybrid algorithms which combine a rank-based filter FS method with the Binary Differential Evolution (BDE) algorithm. Furthermore, the MIMAGA-Selection method (Lu et al. \protect\hyperlink{ref-lu}{2017}) is a mix of the adaptive genetic algorithm (AGA) and mutual information maximisation (MIM). A wrapper method typically begins by subsetting the feature set, evaluates the subset by the performance criterion of the classifier, and repeats until the desired quality is obtained (Kohavi and John \protect\hyperlink{ref-kohavi}{1997}). Wrapper FS algorithms are classified into four categories: complete, heuristic and meta-heuristic search, and optimisation-based. As complete search algorithms become infeasible with an increasing number of features, they are not appropriate for big-data problems (L. Wang, Wang, and Chang \protect\hyperlink{ref-wang2}{2016}). Heuristic algorithms seek good local optima while avoiding barren regions of the search space. Moreover, a suitable heuristic search algorithm could determine a global optimum given sufficient computation time. Heuristic wrapper methods include branch-and-bound techniques, beam search, best-first and greedy hill-climbing algorithms. Prominent greedy hill-climbing algorithms are Sequential Forward Search (which starts with an empty feature set and gradually adds the most significant features) and Sequential Backward Search (which starts with the full feature set and gradually eliminates insignificant features). However, these models do not re-evaluate eliminated features and thus unwittingly exclude features which might be predictive within other feature sets (Guyon and Elisseeff \protect\hyperlink{ref-guyon2}{2003}).
Thus, the Sequential Forward Floating Selection (SFFS) and Sequential Backward Floating Selection (SBFS) methods were developed to deal with the early-elimination issue (Pudil, Novovicová, and Kittler \protect\hyperlink{ref-pudil}{1994}). For instance, whenever a feature is added to the feature set, the SFFS method checks the feature set and removes any worse feature if the conditions are satisfied. Such a process can correct wrong decisions made in previous steps. Meta-heuristics are problem-independent methods, and so they can surpass complete search and heuristic sequential feature selection methods. However, meta-heuristic algorithms might be computationally expensive. They do not guarantee optimality for the resulting feature sets and require additional parameter tuning in order to provide more robust results. Several meta-heuristic methods have been applied to FS problems, including genetic algorithms (GA) (Oluleye, Armstrong, and Diepeveen \protect\hyperlink{ref-oluleye}{2014}), (Raymer et al. \protect\hyperlink{ref-raymer}{2000}), (Tsai, Eberle, and Chu \protect\hyperlink{ref-tsai}{2013}), ant colony optimization (Al-Ani \protect\hyperlink{ref-ani}{2005}), (Wan et al. \protect\hyperlink{ref-wan}{2006}), binary flower pollination (Sayed, Nabil, and Badr \protect\hyperlink{ref-sayed}{2016}), simulated annealing (Debuse and Rayward-Smith \protect\hyperlink{ref-debuse}{1997}), forest optimization (Ghaemi and Feizi-Derakhshi \protect\hyperlink{ref-ghaemi}{2016}), tabu search (Tahir, Bouridane, and Kurugollu \protect\hyperlink{ref-tahir}{2007}), bacterial foraging optimization (Y. Chen et al. \protect\hyperlink{ref-chen}{2017}), particle swarm optimization (X. Wang et al. \protect\hyperlink{ref-wang3}{2007}), binary black hole (Pashaei and Aydin \protect\hyperlink{ref-aydin}{2017}) and hybrid whale optimization (Mafarja and Mirjalili \protect\hyperlink{ref-mafarja}{2017}). Optimisation-based methods treat the FS task as a mathematical optimisation problem. One of the optimisation-based methods is the Nested Partitions (NP) method introduced by Ólafsson and Yang (\protect\hyperlink{ref-olafsson}{2005}). The NP method randomly searches the entire space of possible feature subsets by partitioning the search space into regions. It analyses each regional result and then aggregates the results to determine the search direction. Another optimisation-based method is the Binary Simultaneous Perturbation Stochastic Approximation (BSPSA) algorithm (Aksakalli and Malekipirbazari \protect\hyperlink{ref-vural}{2016}). BSPSA simultaneously approximates the gradient with respect to each feature and eventually determines a feature subset that yields the best performance measurement. Although it outperforms most of the wrapper algorithms, its computational cost is higher. This study introduces the Simultaneous Perturbation Stochastic Approximation for Feature Selection (SPSA-FS) algorithm, which improves BSPSA using the non-monotone Barzilai \& Borwein (BB) search method. The SPSA-FS algorithm improves BSPSA to be computationally faster and more decisive by incorporating the following: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item Non-monotone step size calculated via the BB method, \item Averaging of \(n\) gradient approximations, and \item \(m\)-period gain smoothing. \end{enumerate} The rest of this document is organised as follows. Chapter \ref{section2} gives an overview of both the SPSA and BSPSA algorithms and provides a simple two-iteration computational example to illustrate the concept.
Chapter \ref{section3} introduces the SPSA-FS algorithm, which utilises the BB method to mitigate the slow convergence of BSPSA at the cost of a minimal decline in performance accuracy. With SPSA-FS as a benchmark, Chapter \ref{section4} compares the performance of other wrappers in various classification and regression problems using open datasets. Chapter \ref{section5} concludes. \chapter{Background}\label{section2} \section{Stochastic Pseudo-Gradient Descent Algorithms}\label{stochastic-pseudo-gradient-descent-algorithms} Introduced by J. Spall (\protect\hyperlink{ref-spall}{1992}), SPSA is a pseudo-gradient descent stochastic optimisation algorithm. It starts with a random solution (vector) and moves toward the optimal solution in successive iterations in which the current solution is perturbed simultaneously by random offsets generated from a specified probability distribution. Let \(\mathcal{L}: \mathbb{R}^{p} \mapsto \mathbb{R}\) be a real-valued objective function. The gradient descent approach starts searching for a local minimum of \(\mathcal{L}\) with an initial guess of a solution. Then, it evaluates the gradient of the objective function, i.e.~the first-order partial derivatives \(\nabla \mathcal{L}\), and moves in the direction of \(-\nabla \mathcal{L}\). The gradient descent algorithm attempts to converge to a local optimum where the gradient is zero. In the context of a supervised machine learning problem, \(\mathcal{L}\) is sometimes known as a ``loss'' function in the case of a minimisation problem. However, the gradient descent approach is not applicable in situations where the loss function, and hence its gradient, is not explicitly known. Here, stochastic pseudo-gradient descent algorithms come to the rescue. These algorithms, including \textbf{SPSA}, approximate the gradient from noisy loss function measurements and hence do not need information about the (unobserved) functional form of the loss function. \section{SPSA Algorithm}\label{spsa-algorithm} Given \(w \in D \subset \mathbb{R}^{p}\), let \(\mathcal{L}(w): D \mapsto \mathbb{R}\) be the loss function whose functional form is unknown but of which one can observe noisy measurements: \begin{center} \begin{align} y(w) & := \mathcal{L}(w) + \varepsilon(w) \label{lossfunction} \end{align} \end{center} where \(\varepsilon\) is the noise and \(y\) is the noisy measurement. Let \(g(w)\) denote the gradient of \(\mathcal{L}\): \[g(w): = \nabla \mathcal{L} = \frac{\partial \mathcal{L} }{\partial w}\] SPSA starts with an initial solution \(\hat{w}_0\) and iterates following the recursion below in search of a local minimum \(w^{*}\): \[\hat{w}_{k+1} := \hat{w}_{k} - a_k \hat{g}(\hat{w}_{k})\] where: \begin{itemize} \tightlist \item \(a_{k}\) is a nonnegative iteration gain sequence, and \item \(\hat{g}(\hat{w}_{k})\) is the approximate gradient at \(\hat{w}_{k}\). \end{itemize} Let \(\Delta_k \in \mathbb{R}^p\) be a \textbf{simultaneous perturbation vector} at iteration \(k\). SPSA imposes certain regularity conditions on \(\Delta_k\) (J. Spall \protect\hyperlink{ref-spall}{1992}): \begin{itemize} \tightlist \item The components of \(\Delta_{k}\) must be mutually independent, \item Each component of \(\Delta_{k}\) must be generated from a symmetric zero mean probability distribution, \item Each component must have finite inverse moments, and \item \(\{ \Delta_k\}_{k=1}\) must be a mutually independent sequence which is independent of \(\hat{w}_0, \hat{w}_1,...\hat{w}_k\).
\end{itemize} The finite-inverse-moment requirement precludes generating \(\Delta_k\) from uniform or normal distributions. A good candidate is a symmetric zero mean Bernoulli distribution, say \(\pm 1\) with 0.5 probability each. SPSA ``perturbs'' the current iterate \(\hat{w}_k\) by an amount \(c_k \Delta_k\) in each direction, giving \(\hat{w}_k + c_k \Delta_k\) and \(\hat{w}_k - c_k \Delta_k\) respectively. Hence, the \textbf{simultaneous perturbations} around \(\hat{w}_{k}\) are defined as: \[\hat{w}^{\pm}_k := \hat{w}_{k} \pm c_k \Delta_k\] where \(c_k\) is a nonnegative gradient gain sequence. The noisy measurements of \(\hat{w}^{\pm}_k\) at iteration \(k\) become: \[y^{+}_k:=\mathcal{L}(\hat{w}_k + c_k \Delta_k) + \varepsilon_{k}^{+}\] \[y^{-}_k:=\mathcal{L}(\hat{w}_k - c_k \Delta_k) + \varepsilon_{k}^{-}\] where \(\mathbb{E}( \varepsilon_{k}^{+} - \varepsilon_{k}^{-}|\hat{w}_0, \hat{w}_1,...\hat{w}_k, \Delta_k) = 0\) for all \(k\). Therefore, \(\hat{g}_k\) is computed as: \[\hat{g}_k(\hat{w}_k):=\bigg[ \frac{y^{+}_k-y^{-}_k}{w^{+}_{k1}-w^{-}_{k1}},...,\frac{y^{+}_k-y^{-}_k}{w^{+}_{kp}-w^{-}_{kp}} \bigg]^{T} = \bigg[ \frac{y^{+}_k-y^{-}_k}{2c_k \Delta_{k1}},...,\frac{y^{+}_k-y^{-}_k}{2c_k \Delta_{kp}} \bigg]^{T} = \frac{y^{+}_k-y^{-}_k}{2c_k}[\Delta_{k1}^{-1},...,\Delta_{kp}^{-1}]^{T}\] At each iteration \(k\), SPSA evaluates \textbf{three noisy measurements of the loss function}: \(y^{+}_k\), \(y^{-}_k\), and \(y(\hat{w}_{k+1})\). \(y^{+}_k\) and \(y^{-}_k\) are used to approximate the gradient whereas \(y(\hat{w}_{k+1})\) is used to measure the performance of the next iterate, \(\hat{w}_{k+1}\). J. C. Spall (\protect\hyperlink{ref-spall2}{2003}) states that if certain conditions hold, \(\hat{w}_k \rightarrow w^{*}\) as \(k \rightarrow \infty\). See J. C. Spall (\protect\hyperlink{ref-spall2}{2003}) for more information about the theoretical aspects of SPSA. J. Spall (\protect\hyperlink{ref-spall}{1992}) proposed the following functions for the tuning parameters: \begin{center} \begin{align} a_k & := \frac{a}{(A+k)^{\alpha}} \label{spsaStepSize} \\ c_k & := \frac{c}{\gamma^{k}} \end{align} \end{center} \(A\), \(a\), \(\alpha\), \(c\) and \(\gamma\) are pre-defined; these parameters must be fine-tuned properly. SPSA does not have automatic stopping rules, so we can specify a maximum number of iterations as a stopping criterion. In addition, the gain sequence in Equation \ref{spsaStepSize} must be monotone and satisfy: \[\lim_{k\rightarrow\infty} a_k = 0\] \section{Binary SPSA Algorithm}\label{bspsa} J. Spall and Qi (\protect\hyperlink{ref-wang}{2011}) provided a discrete version of SPSA where \(w \in \mathbb{Z}^{p}\). A binary version of SPSA (BSPSA) is then a special case of the discrete SPSA with fixed perturbation parameters. The loss function becomes \(\mathcal{L}: \{0,1\}^{p} \mapsto \mathbb{R}\). BSPSA differs from the conventional SPSA in two ways: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item The gain sequence \(c_k\) is constant, \(c_k=c\); \item \(\hat{w}_k^{\pm}\) are bounded and rounded before \(y_k^{\pm}\) are evaluated. \end{enumerate} Algorithm \ref{pseudoCodes} illustrates the pseudo-code for the BSPSA algorithm.
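To make the recursion concrete, the following is a minimal \texttt{R} sketch of the BSPSA loop (an illustrative sketch only, not a reference implementation; the black-box loss \texttt{yC}, e.g. a cross-validated error rate, is assumed to be supplied by the user, and the parameter names mirror Algorithm \ref{pseudoCodes}):

\begin{verbatim}
# Minimal BSPSA sketch; yC is an assumed black-box noisy loss
# (e.g. a cross-validated error rate) taking a 0/1 vector.
bspsa <- function(yC, p, a = 0.75, A = 100, alpha = 0.6,
                  c = 0.05, M = 100) {
  w <- rep(0.5, p)                                  # initial guess w_0
  for (k in 0:(M - 1)) {
    delta   <- sample(c(-1, 1), p, replace = TRUE)  # Bernoulli +/-1
    w.plus  <- pmin(pmax(w + c * delta, 0), 1)      # bound to [0,1]
    w.minus <- pmin(pmax(w - c * delta, 0), 1)
    y.plus  <- yC(round(w.plus))                    # noisy measurements
    y.minus <- yC(round(w.minus))
    g.hat <- (y.plus - y.minus) / (2 * c) / delta   # gradient estimate
    w <- w - a / (A + k)^alpha * g.hat              # monotone gain a_k
  }
  round(w)                                          # final 0/1 feature vector
}
\end{verbatim}

A real implementation would also guard against the all-zero subset and track the best iterate via \(y(\hat{w}_{k+1})\); those details are omitted here.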
\begin{algorithm} \caption{BSPSA Algorithm} \label{pseudoCodes} \begin{algorithmic}[1] \Procedure{\underline{BSPSA}($\hat{w}_0$, $a$, $A$, $\alpha$, $c$, $M$)}{} \State\hskip-\ALG@thistlm Initialise $k = 0$ \State\hskip-\ALG@thistlm \textbf{do}: \State Simulate $\Delta_{k, j} \sim \text{Bernoulli}(-1, +1)$ with $\mathbb{P}(\Delta_{k, j}=1) = \mathbb{P}(\Delta_{k, j}=-1) = 0.5$ for $j=1,..p$ \State $\hat{w}^{\pm}_k = \hat{w}_{k} \pm c \Delta_k$ \State $\hat{w}^{\pm}_k = B(\hat{w}^{\pm}_k)$ \Comment{$B( \bullet)$ = component-wise $[0,1]$ bounding operator } \State $\hat{w}^{\pm}_k = R(\hat{w}^{\pm}_k)$ \Comment{$R( \bullet)$ = component-wise rounding operator} \State $y^{\pm}_k =\mathcal{L}(\hat{w}^{\pm}_k) + \varepsilon_{k}^{\pm}$ \State $\hat{g}_k(\hat{w}_k) =\bigg( \frac{y^{+}_k-y^{-}_k}{2c}\bigg)[\Delta_{k1}^{-1},...,\Delta_{kp}^{-1}]^{T}$ \Comment{$\hat{g}_k(\hat{w}_k)$ = the gradient estimate} \State $\hat{w}_{k+1} = \hat{w}_{k} - a_k \hat{g}_k(\hat{w}_k)$ \Comment{$a_k = \frac{a}{(A+k)^{\alpha}}$} \State $k = k + 1$ \State\hskip-\ALG@thistlm \textbf{while} ($k < M$) \State\hskip-\ALG@thistlm \textbf{Output}: $R(\hat{w}_{M})$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Illustration of BSPSA Algorithm in Feature Selection}\label{fbspsa} Let \textbf{X} be the \(n \times p\) data matrix of \(p\) features and \(n\) observations, whereas \(Y\) denotes the \(n \times 1\) response vector. Do not confuse it with \(y\), which denotes the noisy measurement of the loss function (see Equation \ref{lossfunction}). \(\{\)\textbf{X}\(, Y \}\) constitute the dataset. Let \(X:= \{ X_1, X_2, ....X_p \}\) denote the feature set where \(X_j\) represents the \(j^{th}\) feature in \textbf{\(X\)}. For a nonempty subset \(X' \subset X\), we define \(\mathcal{L}_{C}(X', Y)\) as the true value of the performance criterion of a wrapper classifier (the model) \(C\) on the dataset. As \(\mathcal{L}_{C}\) is not known, we train the classifier \(C\) and compute the error rate, which is denoted by \(y_C(X', Y)\). Therefore, \(y_C = \mathcal{L}_C + \varepsilon\). The wrapper FS problem is defined as determining the non-empty feature set \(X^{*}\): \[X^{*} := \arg \min_{X' \subset X}y_C(X', Y)\] It is best to use an example to illustrate how the binary SPSA method works. With a block diagram (\protect\hyperlink{ref-vural}{2016}, Figure 1, p.~6), Aksakalli and Malekipirbazari (\protect\hyperlink{ref-vural}{2016}) provided a one-iteration example with a hypothetical dataset of four features. In this section, for completeness, the following example depicts how the SPSA algorithm runs for two iterations. Suppose we have: \begin{itemize} \tightlist \item 6 features, i.e. \(p=\) 6; \item \(y_C\) as a cross-validated error rate of a classifier; \item Parameters: \(c=\) 0.05, \(a=\) 0.75, \(A=\) 100, and \(\alpha=\) 0.6; \item A maximum of 2 iterations, i.e. \(M=\) 2; and \item An initial guess \(\hat{w}_0:=\) {[}0.5, 0.5, 0.5, 0.5, 0.5, 0.5{]}. \end{itemize} \newpage At the \textbf{first} iteration, i.e. \(k=\) 0: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item Generate \(\Delta_0\) as {[}-1, -1, 1, 1, -1, 1{]} from a Bernoulli Distribution.
\item Compute \(\hat{w}^{\pm}_0 = \hat{w}_{0} \pm c \Delta_0\) \end{enumerate} \begin{center} \begin{align*} \hat{w}^{+}_0 & =\hat{w}_{0} + c \Delta_0 = [0.45, 0.45, 0.55, 0.55, 0.45, 0.55]\\ \hat{w}^{-}_0 & =\hat{w}_{0} - c \Delta_0 = [0.55, 0.55, 0.45, 0.45, 0.55, 0.45] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Bound \(w^{\pm}_0\): \end{enumerate} \begin{center} \begin{align*} w^{+}_0 & = B(w^{+}_0)= [0.45, 0.45, 0.55, 0.55, 0.45, 0.55]\\ w^{-}_0 & = B(w^{-}_0)= [0.55, 0.55, 0.45, 0.45, 0.55, 0.45] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{3} \tightlist \item Round \(w^{\pm}_0\) \end{enumerate} \begin{center} \begin{align*} w^{+}_0 & = R(w^{+}_0)= [0, 0, 1, 1, 0, 1]\\ w^{-}_0 & = R(w^{-}_0)= [1, 1, 0, 0, 1, 0] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{4} \item Evaluate \(y^{+}_{1} := y(\){[}0, 0, 1, 1, 0, 1{]}\()\) and \(y^{-}_{1} := y(\){[}1, 1, 0, 0, 1, 0{]}\()\). Assume \(y^{+}_{1} =\) 0.32 and \(y^{-}_{1} =\) 0.53. \item Compute \(\hat{g}_0 (\hat{w}_0):=\bigg(\frac{y_1^{+}-y_1^{-}}{2c} \bigg) \Delta_0^{-1} =\) {[}2.1, 2.1, -2.1, -2.1, 2.1, -2.1{]}. \item Calculate \(a_0=\frac{a}{(100+0)^\alpha}\approx\) 0.047. Compute \(\hat{w}_1=\hat{w}_0-a_0\hat{g}_0 (\hat{w}_0)\) \begin{center} \begin{align*} \hat{w}_1 & = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] - 0.047[2.1, 2.1, -2.1, -2.1, 2.1, -2.1]\\ & = [0.4013, 0.4013, 0.5987, 0.5987, 0.4013, 0.5987]. \end{align*} \end{center} \end{enumerate} \newpage At the \textbf{second} iteration, i.e. \(k=\) 1: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item Generate \(\Delta_1\) as {[}-1, 1, 1, -1, 1, 1{]} from a Bernoulli Distribution. \item Compute \(\hat{w}^{\pm}_1 = \hat{w}_{1} \pm c \Delta_1\) \end{enumerate} \begin{center} \begin{align*} \hat{w}^{+}_1 & =\hat{w}_{1} + c \Delta_1 = [0.351, 0.451, 0.649, 0.549, 0.451, 0.649]\\ \hat{w}^{-}_1 & =\hat{w}_{1} - c \Delta_1 = [0.451, 0.351, 0.549, 0.649, 0.351, 0.549] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Bound \(w^{\pm}_1\): \end{enumerate} \begin{center} \begin{align*} w^{+}_1 & = B(w^{+}_1)= [0.351, 0.451, 0.649, 0.549, 0.451, 0.649]\\ w^{-}_1 & = B(w^{-}_1)= [0.451, 0.351, 0.549, 0.649, 0.351, 0.549] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{3} \tightlist \item Round \(w^{\pm}_1\) \end{enumerate} \begin{center} \begin{align*} w^{+}_1 & = R(w^{+}_1)= [0, 0, 1, 1, 0, 1]\\ w^{-}_1 & = R(w^{-}_1)= [0, 0, 1, 1, 0, 1] \end{align*} \end{center} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \setcounter{enumi}{4} \item Evaluate \(y^{+}_{2} := y(\){[}0, 0, 1, 1, 0, 1{]}\()\) and \(y^{-}_{2} := y(\){[}0, 0, 1, 1, 0, 1{]}\()\). Assume \(y^{+}_{2} =\) 0.53 and \(y^{-}_{2} =\) 0.38. \item Compute \(\hat{g}_1 (\hat{w}_1):=\bigg(\frac{y_2^{+}-y_2^{-}}{2c} \bigg) \Delta_1^{-1} =\) {[}-1.5, 1.5, 1.5, -1.5, 1.5, 1.5{]}. \item Calculate \(a_1=\frac{a}{(100+1)^\alpha}\approx\) 0.047. Compute \(\hat{w}_2=\hat{w}_1-a_1\hat{g}_1 (\hat{w}_1)\) \begin{center} \begin{align*} \hat{w}_2 & = [0.401, 0.401, 0.599, 0.599, 0.401, 0.599] - 0.047[-1.5, 1.5, 1.5, -1.5, 1.5, 1.5]\\ & = [0.471, 0.33, 0.529, 0.67, 0.33, 0.529]. \end{align*} \end{center} \end{enumerate} In the \textbf{final} step, let's round \(\hat{w}_2\) to the solution vector {[}0, 0, 1, 1, 0, 1{]}; a short \texttt{R} check of this arithmetic is given below.
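The following \texttt{R} lines verify the two iterations mechanically (illustrative only; the perturbation vectors and loss values are hard-coded exactly as assumed in the example above):

\begin{verbatim}
w <- rep(0.5, 6); c <- 0.05; a <- 0.75; A <- 100; alpha <- 0.6

# iteration k = 0 (loss values assumed as in the text)
d0 <- c(-1, -1, 1, 1, -1, 1)
g0 <- (0.32 - 0.53) / (2 * c) / d0   # [2.1, 2.1, -2.1, -2.1, 2.1, -2.1]
w  <- w - a / (A + 0)^alpha * g0     # a_0 ~ 0.047

# iteration k = 1
d1 <- c(-1, 1, 1, -1, 1, 1)
g1 <- (0.53 - 0.38) / (2 * c) / d1   # [-1.5, 1.5, 1.5, -1.5, 1.5, 1.5]
w  <- w - a / (A + 1)^alpha * g1

round(w, 3)  # 0.471 0.330 0.529 0.670 0.330 0.529
round(w)     # 0 0 1 1 0 1
\end{verbatim}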
This means that the best performing feature set includes features 3, 4 and 6. \chapter{SPSA-FS Algorithm}\label{section3} \section{Barzilai-Borwein (BB) Method}\label{barzilai-borwein-bb-method} The philosophy of the non-monotone methods involves remembering the data provided by previous iterations. As the first non-monotone search method, the Barzilai-Borwein method (Barzilai and Borwein \protect\hyperlink{ref-bb}{1988}) is described as a gradient method with a two-point step size. Motivated by Newton's method, the BB method aims to approximate the Hessian matrix instead of computing it directly. As a result, it computes a sequence of objective values that is not necessarily monotonically decreasing. As shown by Barzilai and Borwein (\protect\hyperlink{ref-bb}{1988}), the BB method significantly outperforms the classical steepest descent method in terms of both performance and computational cost. Consider an unconstrained optimisation problem expressed in Equation \ref{3.1}: \begin{center} \begin{align} \min_{x\in\mathbb{R}^p} f(x) \label{3.1} \end{align} \end{center} The steepest descent method, also known as the Cauchy method (Cauchy \protect\hyperlink{ref-cauchy}{1847}), uses the negative of the gradient as the search direction to locate the next point, with the step size determined by an exact or backtracking line search. The next point in the search direction is given by \[x_{k+1} = x_{k} + \alpha_{k}d_{k}\] The negative gradient of \(f\) at \(x_{k}\) is defined as: \[d_{k} = -\nabla f(x_{k})\] The step size \(\alpha_{k}\) is defined as: \begin{center} \begin{align} \alpha_k = \arg \min_\alpha f(x_k + \alpha d_k) \label{3.2} \end{align} \end{center} The gradient will be denoted by \(g_k = g(x_k) = \nabla f(x_k)\). Suppose \(f\) has the quadratic form \[f(x) = \frac{1}{2} x^T Qx - b^Tx + c\] then the exact line search (\ref{3.2}) becomes explicit and the step size \(\alpha_k\) of the steepest descent method can be derived as: \begin{center} \begin{align} \alpha_k = \frac{g_k^Tg_k}{g_k^T Qg_k} \label{3.3} \end{align} \end{center} Although Cauchy's method is simple and uses the optimality property (\ref{3.2}), it does not use second-order information. As a result, it tends to perform poorly and suffers from the ill-conditioning problem. The alternative is Newton's Method (Nocedal and Wright \protect\hyperlink{ref-no}{2006}, 44--46), which finds the next trial point by \[x_{k+1} = x_k - (F_{k})^{-1} g_k\] where \(F_k = \nabla^2 f(x_k)\), which is computationally very expensive to evaluate and sometimes requires modification if \(F_k \nsucc 0\). On the other hand, the BB method chooses the step size \(\alpha_k\) by solving either of the following least-squares problems so that \(\alpha_k g_k\) approximates \((F_k)^{-1}g_k\): \begin{center} \begin{align} & \min_\alpha ||\nabla{x} - \alpha\nabla{g}||^2 \label{eqn3.4} \\ & \min_\alpha ||\alpha^{-1}\nabla{x} - \nabla{g}||^2 \label{eqn3.5} \end{align} \end{center} where \(\nabla x = x_k - x_{k-1}\), and \(\nabla g = g_k - g_{k-1}\). The respective solutions to Problem \ref{eqn3.4} and Problem \ref{eqn3.5} are: \begin{center} \begin{align} & \alpha_k = \frac{\nabla{x^T}\nabla{g}}{\nabla{g^T}\nabla{g}} \label{eqn3.6} \\ & \alpha_k = \frac{\nabla x^T\nabla x} {\nabla x^T\nabla g} \label{eqn3.7} \end{align} \end{center} Raydan (\protect\hyperlink{ref-raydan}{1993}), Molina and Raydan (\protect\hyperlink{ref-molina}{1996}), and Y.
Dai and Liao (\protect\hyperlink{ref-dai2}{2002}) studied the convergence of the BB method and showed that it converges R-linearly for strictly convex quadratic functions. In the literature, well-known BB variants include Cauchy BB and Cyclic BB. Cauchy BB (Raydan and Svaiter \protect\hyperlink{ref-raydan2}{2002}) combines the BB and Cauchy methods and roughly halves the computational work compared to BB; it outperforms BB for quadratic problems when \(g_k\) is not almost an eigenvector of \(Q\). Meanwhile, the Cyclic BB method (Y. Dai et al. \protect\hyperlink{ref-dai}{2006}) specifies a predetermined cycle length and reuses the step size (\ref{eqn3.6}) until the cycle length is reached before computing the next one. Consequently, the computation time is very sensitive to the choice of cycle length. Furthermore, Cauchy BB incorporates the steepest descent method, whereas Cyclic BB requires an extra process to determine the appropriate cycle length. Given these shortcomings, the original BB method with a smoothing effect is implemented in the SPSA-FS algorithm.

\section{BB Method in SPSA-FS}\label{bb-method-in-spsa-fs}

Since it relies on a monotone step size \(a_k\), the BSPSA algorithm has a slow convergence rate, which limits its usefulness in time-critical situations. The slow convergence issue becomes more acute as the data size grows. To reduce the convergence time, we propose using the non-monotone BB method. It is important to note the difference in the notation used for the step size in the literature. For the BB method it is typically denoted by \(\alpha_k\), while in SPSA it is \(a_k\), also known as the iteration gain sequence (see Equation \ref{spsaStepSize}). To be consistent with BSPSA, we shall write the latter as \(\hat{a}_{k}\) and express the BB method's step size (\ref{eqn3.6}) as:
\begin{center}
\begin{align}
\hat{a}_k &= \frac{\nabla{\hat{w}}^T\nabla{\hat{g}(\hat{w})}}{\nabla{\hat{g}^T(\hat{w})}\nabla{\hat{g}(\hat{w})}} \label{eqn3.8}
\end{align}
\end{center}
We use \(\hat{a}_{k}\) to indicate that it is an estimate rather than a closed form like Equation \ref{spsaStepSize}. Sometimes the gain can be negative, such that \(\nabla{\hat{w}^T}\nabla{\hat{g}(\hat{w})} < 0\). This is possible because the Hessian of \(f\) might have negative eigenvalues at a point between \(\hat{w}_k\) and \(\hat{w}_{k-1}\) (Y. Dai et al. \protect\hyperlink{ref-dai}{2006}). Consequently, it is necessary to set closed boundaries around the gain to keep it positive and within a sensible range. Therefore, the current gain (Equation \ref{eqn3.8}) becomes:
\begin{center}
\begin{align}
\hat{a}_k^{'} &= \max\{a_{\min},\min\{\hat{a}_k,a_{\max}\}\} \label{eqn3.9}
\end{align}
\end{center}
where \(a_{\min}\) and \(a_{\max}\) are the minimum and maximum of the gain sequence \(\{\hat{a}_k\}_{k}\) up to the current iteration \(k\), respectively.

\textbf{Gain Smoothing}

Tan et al. (\protect\hyperlink{ref-tan}{2016}) propose to smooth the gain as follows:
\begin{center}
\begin{align}
\hat{b}_k = \frac{\sum_{n=k-t}^k{\hat{a}_{n}^{'}}}{t+1} \label{eqn3.10}
\end{align}
\end{center}
The role of \(\hat{b}_k\) is to damp erratic fluctuations in the gains and ensure the stability of the SPSA-FS algorithm. SPSA-FS averages the gains at the current and last two iterations, i.e. \(t=2\). Gain smoothing results in a decrease in convergence time.
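For concreteness, the gain computation of Equations \ref{eqn3.8}--\ref{eqn3.10} can be sketched in a few lines of Python. This is a minimal illustration rather than our implementation; the history of bounded gains is assumed to be kept by the caller.

\begin{verbatim}
import numpy as np

def bb_gain(w_prev, w_curr, g_prev, g_curr):
    """Raw BB gain (Eq. 3.8): (dw . dg) / (dg . dg)."""
    dw, dg = w_curr - w_prev, g_curr - g_prev
    return float(dw @ dg) / float(dg @ dg)

def bound_gain(a_k, a_min, a_max):
    """Bounded gain (Eq. 3.9): clip a_k into [a_min, a_max]."""
    return max(a_min, min(a_k, a_max))

def smooth_gain(bounded_gains, t=2):
    """Smoothed gain (Eq. 3.10): average the last t+1 bounded gains
    (fewer at the first iterations, i.e. t = min(2, k))."""
    window = bounded_gains[-(t + 1):]
    return sum(window) / len(window)
\end{verbatim}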
\textbf{Gradient Averaging}

Due to its stochastic nature and noisy measurements, the gradient estimates \(\hat{g}(\hat{w})\) can be approximated wrongly and hence distort the convergence direction of the SPSA-FS algorithm. To mitigate this side effect, the current and the previous \(m\) gradients are averaged to form the gradient estimate at the current iteration:
\begin{center}
\begin{align}
\hat{g}_k(\hat{w}_k) = \frac{\sum_{n=k-m}^k{\hat{g}_{n}(\hat{w}_{k})}}{m+1} \label{eqn3.11}
\end{align}
\end{center}
SPSA-FS is designed to converge much faster than BSPSA at the cost of a small increase in the loss function. Algorithm \ref{pseudoCodes2} summarises the pseudocode for the SPSA-FS algorithm, which is a modification of the BSPSA algorithm (see Algorithm \ref{pseudoCodes}). Note that in Algorithm \ref{pseudoCodes2}, Steps 13 and 14 correspond to Equation \ref{eqn3.9} whereas Step 15 corresponds to Equation \ref{eqn3.10}.

\begin{algorithm}
\caption{SPSA-FS Algorithm}
\label{pseudoCodes2}
\begin{algorithmic}[1]
\Procedure{\underline{SPSA-FS}($\hat{w}_0$, $c$, $M$)}{}
\State\hskip-\ALG@thistlm Initialise $k = 0$, $m=0$
\State\hskip-\ALG@thistlm \textbf{do}:
\State Simulate $\Delta_{k, j} \sim \text{Bernoulli}(-1, +1)$ with $\mathbb{P}(\Delta_{k, j}=1) = \mathbb{P}(\Delta_{k, j}=-1) = 0.5$ for $j=1,\ldots,p$
\State $\hat{w}^{\pm}_k = \hat{w}_{k} \pm c \Delta_k$
\State $\hat{w}^{\pm}_k = B(\hat{w}^{\pm}_k)$ \Comment{$B( \bullet)$ = component-wise $[0,1]$ bounding operator}
\State $\hat{w}^{\pm}_k = R(\hat{w}^{\pm}_k)$ \Comment{$R( \bullet)$ = component-wise rounding operator}
\State $y^{\pm}_k =\mathcal{L}(\hat{w}^{\pm}_k) + \varepsilon_{k}^{\pm}$
\State $\hat{g}_k(\hat{w}_k) =\bigg( \frac{y^{+}_k-y^{-}_k}{2c}\bigg)[\Delta_{k1}^{-1},...,\Delta_{kp}^{-1}]^{T}$ \Comment{$\hat{g}_k(\hat{w}_k)$ = the gradient estimate}
\State $\hat{g}_k(\hat{w}_k) = \frac{1}{m+1}\sum_{n=k-m}^k{\hat{g}_{n}(\hat{w}_{k})}$ \Comment{Gradient Averaging}
\State $\hat{a}_k = \frac{\nabla{\hat{w}}^T\nabla{\hat{g}(\hat{w})}}{\nabla{\hat{g}^T(\hat{w})}\nabla{\hat{g}(\hat{w})}}$ \Comment{$\hat{a}_k$ = BB Step Size}
\If{$\hat{a}_{k} \notin [a_{\min}, a_{\max}]$}
\State $\hat{a}_k = \max\{a_{\min},\min\{\hat{a}_k, a_{\max}\}\}$
\EndIf
\State $\hat{a}_k = \frac{1}{t+1}\sum_{n=k-t}^k{\hat{a}_{n}}$ for $t = \min\{2, k\}$ \Comment{Gain Smoothing}
\State $\hat{w}_{k+1} = \hat{w}_{k} - \hat{a}_k \hat{g}_k(\hat{w}_k)$
\State $k = k + 1$, $m = k$
\State\hskip-\ALG@thistlm \textbf{while} ($k < M$)
\State\hskip-\ALG@thistlm \textbf{Output}: $w_M = R(\hat{w}_M)$
\EndProcedure
\end{algorithmic}
\end{algorithm}

\section{SPSA-FS vs BSPSA Algorithms}\label{spsa-fs-vs-bspsa-algorithms}

SPSA-FS can locate a solution around 400\% faster than BSPSA while losing only about 2\% in prediction accuracy on the same dataset. In other words, the SPSA-FS algorithm is five times faster than the BSPSA algorithm in reaching the same loss function value or accuracy rate. In practice, the difference might ratchet up to 20 times with a minimal drop in accuracy of around 2\%. For illustration, we experimented with the decision (or recursive partitioning) tree as a wrapper on the Arrhythmia dataset provided by Guvenir et al. (\protect\hyperlink{ref-guvenir1997supervised}{1997}), accessible at the \href{http://archive.ics.uci.edu/ml}{UCI Machine Learning Repository} (Lichman \protect\hyperlink{ref-UCI}{2013}). This dataset contains 279 features.
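To make the wrapper setup concrete, the loss \(y(\cdot)\) in such an experiment is the cross-validated misclassification rate of the classifier restricted to the selected features. The following is an illustrative Python sketch based on scikit-learn's \texttt{DecisionTreeClassifier} and \texttt{cross\_val\_score}; the function name \texttt{wrapper\_loss} is hypothetical and not part of our released code.

\begin{verbatim}
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def wrapper_loss(w_rounded, X, y_target, cv=5):
    """y(w): mean cross-validated misclassification rate of a decision
    tree trained on the feature subset encoded by the 0/1 vector w."""
    mask = np.asarray(w_rounded, dtype=bool)
    if not mask.any():          # empty feature subset: worst-case loss
        return 1.0
    clf = DecisionTreeClassifier(random_state=0)
    accuracy = cross_val_score(clf, X[:, mask], y_target, cv=cv).mean()
    return 1.0 - accuracy
\end{verbatim}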
Figure \ref{figA} compares the performance of the two algorithms by evaluating their inaccuracy rates (loss function values) at each iteration. BSPSA found the lowest, and hence the best, loss function value, but it was five times slower than the SPSA-FS algorithm, and around 20 times slower in terms of the overall calculation period.

\newpage

\begin{figure}
{\centering \includegraphics{ArXiv5_files/figure-latex/figA-1} }
\caption{\label{figA}Convergence Time Comparison between SPSA-FS and BSPSA on the Arrhythmia dataset. Using a decision tree, SPSA-FS hit its lowest loss function value before reaching 250 iterations. BSPSA required around 500 iterations to achieve the same loss function value, although it outperformed the SPSA-FS algorithm if more iterations were allowed.}\label{fig:figA}
\end{figure}

\chapter{Wrapper Comparison}\label{section4}

Supervised learning problems fall into two broad categories: classification and regression. The target or dependent feature is a binary, nominal or ordinal variable in a classification task, while it is a continuous variable in a regression problem. Apart from feature selection, classification problems have another important aspect: feature ranking. While feature selection aims to determine the optimal subset of predictive features, feature ranking measures how much each feature in the specified set contributes to explaining the target feature. For comparability, we divided the wrapper comparison experiments into the three sections below:

\begin{itemize}
\tightlist
\item \protect\hyperlink{feature-selection-in-classification-problems}{Feature Selection in Classification Problems}
\item \protect\hyperlink{feature-ranking-in-classification-problems}{Feature Ranking in Classification Problems}
\item \protect\hyperlink{feature-selection-in-regression-problems}{Feature Selection in Regression Problems}
\end{itemize}

We ran the wrapper comparison experiments using open datasets accessible from the following sources:

\begin{itemize}
\tightlist
\item \href{http://archive.ics.uci.edu/ml}{UCI Machine Learning Repository} (Lichman \protect\hyperlink{ref-UCI}{2013})
\item \href{http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html}{DCC Regression DataSets} (Torgo \protect\hyperlink{ref-DCC}{2017})
\item \href{http://featureselection.asu.edu/index.php}{Scikit-feature feature selection repository at ASU} (J. Li et al. \protect\hyperlink{ref-ASU}{2016})
\end{itemize}

\hypertarget{feature-selection-in-classification-problems}{\section{Feature Selection in Classification Problems}\label{feature-selection-in-classification-problems}}

We selected nine (9) datasets for feature selection (see Table \ref{tabA}). For each dataset, we implemented four classifiers, namely Recursive Partitioning for Classification (R.Part), K-Nearest Neighbours (KNN), Naïve Bayes (NB), and Support Vector Machine (SVM). For each classifier, we considered three main wrapper methods:

\begin{itemize}
\tightlist
\item SPSA;
\item SFS: Sequential Forward Selection; and
\item Full as the baseline benchmark.
\end{itemize}

For some datasets, we also compared the following four additional wrappers:

\begin{itemize}
\tightlist
\item GA: Genetic Algorithm;
\item SBS: Sequential Backward Selection;
\item SFFS: Sequential Floating Forward Selection; and
\item SFBS: Sequential Floating Backward Selection.
\end{itemize}

Besides the mean classification error rate, we also considered the mean runtime of the learning process to assess feature selection performance. Despite the mixed results, SPSA-FS managed to balance accuracy and runtime on average.
In the scenarios where SPSA-FS outperformed in accuracy, it required slightly more than, or approximately the same, runtime as the other wrappers. When SPSA-FS trailed behind the other wrappers in terms of accuracy, it did not cost much runtime. Such empirical results are consistent with the theoretical design of SPSA-FS, where the BB method helps reduce the computational cost while sacrificing a minimal amount of accuracy.

\begin{table}
\centering
\begin{tabular}{ | l | l | l | l | l |}
\hline
dataset & $p$ & $N$ & Source & Figure \\ \hline
Arrhythmia & 279 & 452 & \href{https://archive.ics.uci.edu/ml/datasets/arrhythmia}{UCI} & Figure \ref{FRfig1} \\
Glass & 9 & 214 & \href{https://archive.ics.uci.edu/ml/datasets/glass+identification}{UCI} & Figure \ref{FRfig2} \\
Heart & 13 & 270 & \href{http://archive.ics.uci.edu/ml/datasets/statlog+(heart)}{UCI} & Figure \ref{FRfig3} \\
Ionosphere & 34 & 351 & \href{https://archive.ics.uci.edu/ml/datasets/ionosphere}{UCI} & Figure \ref{FRfig4} \\
Libras & 90 & 360 & \href{https://archive.ics.uci.edu/ml/datasets/Libras+Movement}{UCI} & Figure \ref{FRfig5} \\
Musk (Version 1) & 166 & 476 & \href{https://archive.ics.uci.edu/ml/machine-learning-databases/musk/ }{UCI} & Figure \ref{FRfig6} \\
Sonar & 60 & 208 & \href{http://archive.ics.uci.edu/ml/datasets/connectionist+bench+(sonar,+mines+vs.+rocks)}{UCI} & Figure \ref{FRfig7} \\
Spam Base & 57 & 4601 & \href{https://archive.ics.uci.edu/ml/datasets/spambase}{UCI} & Figure \ref{FRfig8} \\
Vehicle & 18 & 946 & \href{https://archive.ics.uci.edu/ml/datasets/Statlog+(Vehicle+Silhouettes)}{UCI} & Figure \ref{FRfig9} \\
\hline
\end{tabular}
\caption{Feature Selection Classification datasets. $p$ represents the number of explanatory features, excluding the response variable and identifier attributes; $N$ denotes the number of observations.}
\label{tabA}
\end{table}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig1-1.pdf}
\caption{\label{FRfig1}Feature Selection Performance Result on Arrhythmia}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig2-1.pdf}
\caption{\label{FRfig2}Feature Selection Performance Result on Glass}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig3-1.pdf}
\caption{\label{FRfig3}Feature Selection Performance Result on Heart}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig4-1.pdf}
\caption{\label{FRfig4}Feature Selection Performance Result on Ionosphere}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig5-1.pdf}
\caption{\label{FRfig5}Feature Selection Performance Result on Libras}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig6-1.pdf}
\caption{\label{FRfig6}Feature Selection Performance Result on Musk}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig7-1.pdf}
\caption{\label{FRfig7}Feature Selection Performance Result on Sonar}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig8-1.pdf}
\caption{\label{FRfig8}Feature Selection Performance Result on Spam Base}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/FRfig9-1.pdf}
\caption{\label{FRfig9}Feature Selection Performance Result on Vehicle}
\end{figure}

\hypertarget{feature-ranking-in-classification-problems}{\section{Feature Ranking in Classification Problems}\label{feature-ranking-in-classification-problems}}

Table \ref{tabB}
delineates the eight (8) datasets used for feature ranking. For each dataset, we applied four (4) classifiers, namely Decision Tree (DT), K-Nearest Neighbours (KNN), Naïve Bayes (NB), and Support Vector Machine (SVM). For each classifier, using the mean classification error rate, we compared the following five wrapper methods:

\begin{itemize}
\tightlist
\item SPFS: SPSA as Feature Selection;
\item RFI: Random Forest Importance;
\item Chi.Sq: Chi-Squared;
\item Info.Gain: Information Gain; and
\item Full as the baseline benchmark.
\end{itemize}

As shown in Table \ref{tabB}, each dataset has a large number of features; Orl and AR10p in particular suffer from the curse of high dimensionality, i.e. \(p > N\). To illustrate the difference in feature ranking across wrappers, we capped the number of features used, say \(m\), in each classifier and used each wrapper to return the top \(m\) most important features. For example, consider the Sonar dataset, which consists of 60 features, and \(m=5\). Each wrapper would rank the top 5 important features out of 60, namely those yielding the lowest misclassification rate in classifying mines versus rocks in the Sonar data. For completeness, we ran the experiment on a series of \(m\) starting from 5 up to 40 features in increments of 5, that is, \(m \in \{5, 10, 15, \ldots, 40\}\). We also compared the wrapper performance to the baseline benchmark, which incorporated all features. Unlike \(R^{2}\) in regression problems, additional explanatory features do not necessarily improve the accuracy rate. For example, as depicted in Figure \ref{fig9}, the NB classifier committed more misclassifications on the Sonar data when more than 30 features were used in each wrapper. From Figures \ref{fig9} to \ref{fig16}, we inferred that:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item With some exceptions, the misclassification error rates tended to decrease as the number of features increased, albeit at a diminishing rate.
\item SPSA-FS outperformed the other wrapper methods on most datasets but did not consistently beat the baseline, depending on the choice of classifier.
\end{enumerate}

\begin{table}
\centering
\begin{tabular}{ | l | l | l | l | l |}
\hline
Dataset & $p$ & $N$ & Source & Figure \\ \hline
Sonar & 60 & 208 & \href{http://archive.ics.uci.edu/ml/datasets/connectionist+bench+(sonar,+mines+vs.+rocks)}{UCI} & Figure \ref{fig9} \\
Libras & 90 & 360 & \href{https://archive.ics.uci.edu/ml/datasets/Libras+Movement}{UCI} & Figure \ref{fig10} \\
Musk (Version 1) & 166 & 476 & \href{https://archive.ics.uci.edu/ml/machine-learning-databases/musk/}{UCI} & Figure \ref{fig11} \\
Usps & 256 & 9298 & \href{http://featureselection.asu.edu/datasets.php}{ASU} & Figure \ref{fig12} \\
Isolet & 617 & 1560 & \href{http://featureselection.asu.edu/datasets.php}{ASU} & Figure \ref{fig13} \\
Coil20 & 1024 & 1440 & \href{http://featureselection.asu.edu/datasets.php}{ASU} & Figure \ref{fig14} \\
Orl & 1024 & 400 & \href{http://featureselection.asu.edu/datasets.php}{ASU} & Figure \ref{fig15} \\
AR10p & 2400 & 130 & \href{http://featureselection.asu.edu/datasets.php}{ASU} & Figure \ref{fig16} \\
\hline
\end{tabular}
\caption{Feature Ranking Classification datasets.
$p$ represents the number of explanatory features, excluding the response variable and identifier attributes; $N$ denotes the number of observations.}
\label{tabB}
\end{table}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig9-1.pdf}
\caption{\label{fig9}Wrapper Misclassification Error on Sonar dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig10-1.pdf}
\caption{\label{fig10}Wrapper Misclassification Error on Libras dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig11-1.pdf}
\caption{\label{fig11}Wrapper Misclassification Error on Musk dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig12-1.pdf}
\caption{\label{fig12}Wrapper Misclassification Error on USPS dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig13-1.pdf}
\caption{\label{fig13}Wrapper Misclassification Error on Isolet dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig14-1.pdf}
\caption{\label{fig14}Wrapper Misclassification Error on Coil20 dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig15-1.pdf}
\caption{\label{fig15}Wrapper Misclassification Error on Orl dataset by Types of Classifiers}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig16-1.pdf}
\caption{\label{fig16}Wrapper Misclassification Error on AR10p dataset by Types of Classifiers}
\end{figure}

\hypertarget{feature-selection-in-regression-problems}{\section{Feature Selection in Regression Problems}\label{feature-selection-in-regression-problems}}

The regression experiment involved eight datasets. Table \ref{tab1} describes each dataset and its source. Using a typical linear regression model, we ran the following feature selection methods for each dataset:

\begin{itemize}
\tightlist
\item SPSA-FS
\item Minimum Redundancy Maximum Relevance (mRMR), proposed by C. Ding and Peng (\protect\hyperlink{ref-mrmr}{2003})
\item The RELIEF algorithm, first introduced by Kira and Rendell (\protect\hyperlink{ref-relief}{1992})
\item Linear Correlation
\end{itemize}

\begin{table}
\centering
\begin{tabular}{ | l | p{2.5cm} | p{2.5cm} | l | l |}
\hline
Dataset & $p$ & $N$ & Source & Figure \\ \hline
Ailerons & 39 & 13750 & DCC & Figure \ref{fig1} \\
CPU ACT & 21 & 8192 & DCC & Figure \ref{fig2} \\
Elevator & 17 & 16559 & DCC & Figure \ref{fig3} \\
Boston Housing & 13 & 506 & UCI & Figure \ref{fig4} \\
Pole Telecomm & 47 & 1500 & DCC & Figure \ref{fig5} \\
Pyrim & 26 & 74 & DCC & Figure \ref{fig6} \\
Triazines & 58 & 186 & DCC & Figure \ref{fig7} \\
Wisconsin Breast Cancer & 32 & 194 & UCI & Figure \ref{fig8} \\
\hline
\end{tabular}
\caption{Regression datasets. $p$ represents the number of explanatory features, excluding the response variable and identifier attributes; $N$ denotes the number of observations.}
\label{tab1}
\end{table}

For benchmarking, we calculated the inaccuracy rate, defined as \(1-R^{2}\), with respect to the number of features used in the regression. Known as the coefficient of determination, \(R^{2}\) measures how close the data are to the fitted regression line. Therefore, a higher \(R^{2}\) implies a lower inaccuracy rate.
Note that \(R^{2}\) can only either increase or remain constant as the number of explanatory variables or features \(p\) increases\protect\rmarkdownfootnote{We did not use the adjusted \(R^{2}\) because it penalises a large number of features and hence would defeat our objective of comparing the wrapper algorithms.}. For comparative evaluation, we normalised \(p\) to the percentage of features used, since each dataset has a different number of explanatory features. In all datasets, on average, SPSA-FS outperformed the other wrapper methods even with fewer features. The other wrapper methods only caught up with SPSA-FS once at least 30\% of the explanatory features were used. At 100\%, i.e.~when all explanatory features were used as regressors, all wrapper methods converged to approximately the same inaccuracy rate due to the nature of \(R^{2}\). The exemplary performance of SPSA-FS implies that, for a given number of explanatory features, it identified a better subset of regressors than the other methods.

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig1-1.pdf}
\caption{\label{fig1}Regression On Ailerons}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig2-1.pdf}
\caption{\label{fig2}Regression On CPU ACT}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig3-1.pdf}
\caption{\label{fig3}Regression on Elevators}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig4-1.pdf}
\caption{\label{fig4}Regression on Boston Housing}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig5-1.pdf}
\caption{\label{fig5}Regression on Pole Telecom}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig6-1.pdf}
\caption{\label{fig6}Regression on Pyrim}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig7-1.pdf}
\caption{\label{fig7}Regression on Triazines}
\end{figure}

\begin{figure}
\centering
\includegraphics{ArXiv5_files/figure-latex/fig8-1.pdf}
\caption{\label{fig8}Regression on Wisconsin Breast Cancer}
\end{figure}

\chapter{Summary and Conclusions}\label{section5}

In this study, we propose the SPSA-FS algorithm, which mitigates the slow convergence issue of the BSPSA algorithm in feature selection. By applying the BB method together with gain smoothing and gradient averaging, SPSA-FS achieves significantly lower computational costs with a very minimal loss of accuracy. To support our proposal, we ran experiments comparing SPSA-FS to other wrapper methods on various open datasets. For classification tasks, we evaluated the accuracy of the wrappers using the misclassification error rate. The results were mixed, since SPSA-FS's performance depended on the choice of classifier. However, SPSA-FS managed to strike a balance between the runtime required and accuracy. In the situations where SPSA-FS outperformed, it yielded a significantly higher accuracy rate. Meanwhile, in the scenarios where it underperformed, the performance differences were marginal. For regression tasks, using one minus R-squared as the error measure, SPSA-FS outperformed the other wrappers while using fewer explanatory features. In conclusion, theoretically and empirically, SPSA-FS not only leverages the design of BSPSA, which yields optimal feature selection results, but also gains substantial speed in locating solutions.
\chapter*{References}\label{references}
\addcontentsline{toc}{chapter}{References}
\hypertarget{refs}{}
\hypertarget{ref-vural}{}
Aksakalli, Vural, and Milad Malekipirbazari. 2016. ``Feature Selection via Binary Simultaneous Perturbation Stochastic Approximation.'' \emph{Pattern Recognition Letters} 75 (Supplement C): 41--47. doi:\href{https://doi.org/10.1016/j.patrec.2016.03.002}{10.1016/j.patrec.2016.03.002}.
\hypertarget{ref-ani}{}
Al-Ani, Ahmed. 2005. ``Feature Subset Selection Using Ant Colony Optimization.'' \emph{International Journal of Computational Intelligence} 2 (January): 53--58.
\hypertarget{ref-apolloni}{}
Apolloni, Javier, Guillermo Leguizamón, and Enrique Alba. 2016. ``Two Hybrid Wrapper-Filter Feature Selection Algorithms Applied to High-Dimensional Microarray Experiments.'' \emph{Applied Soft Computing} 38: 922--32.
\hypertarget{ref-bb}{}
Barzilai, J., and J. Borwein. 1988. ``Two-Point Step Size Gradient Methods.'' \emph{IMA Journal of Numerical Analysis} 8: 141--48.
\hypertarget{ref-bennasar}{}
Bennasar, Mohamed, Yulia Hicks, and Rossitza Setchi. 2015. ``Feature Selection Using Joint Mutual Information Maximisation.'' \emph{Expert Systems with Applications} 42: 8520--32.
\hypertarget{ref-cadenas}{}
Cadenas, Jose M., M. Carmen Garrido, and Raquel Martinez. 2013. ``Feature Subset Selection Filter--Wrapper Based on Low Quality Data.'' \emph{Expert Systems with Applications} 40: 6241--52.
\hypertarget{ref-cauchy}{}
Cauchy, M. Augustin. 1847. ``Méthode Générale Pour La Résolution Des Systèmes d'équations Simultanées.'' \emph{Comptes Rendus Hebd. Seances Acad. Sci.} 25: 536--38.
\hypertarget{ref-chen}{}
Chen, Yu-Peng, Ying Li, Gang Wang, Yue-Feng Zheng, Qian Xu, Jia-Hao Fan, and Xue-Ting Cui. 2017. ``A Novel Bacterial Foraging Optimization Algorithm for Feature Selection.'' \emph{Expert Systems with Applications} 83: 1--17.
\hypertarget{ref-dai2}{}
Dai, Y., and L. Liao. 2002. ``R-Linear Convergence of the Barzilai and Borwein Gradient Method.'' \emph{IMA Journal of Numerical Analysis} 22: 1--10.
\hypertarget{ref-dai}{}
Dai, Y., W. Hager, K. Schittkowski, and H. Zhang. 2006. ``The Cyclic Barzilai-Borwein Method for Unconstrained Optimization.'' \emph{IMA Journal of Numerical Analysis} 26: 604--27.
\hypertarget{ref-debuse}{}
Debuse, J. C. W., and V. J. Rayward-Smith. 1997. ``Feature Subset Selection Within a Simulated Annealing Data Mining Algorithm.'' \emph{Journal of Intelligent Information Systems} 9 (January): 57--81.
\hypertarget{ref-mrmr}{}
Ding, C., and H. Peng. 2003. ``Minimum Redundancy Feature Selection from Microarray Gene Expression Data.'' In \emph{Computational Systems Bioinformatics: Proceedings of the 2003 IEEE Bioinformatics Conference (CSB2003)}, 523--28. doi:\href{https://doi.org/10.1109/CSB.2003.1227396}{10.1109/CSB.2003.1227396}.
\hypertarget{ref-ghaemi}{}
Ghaemi, Manizheh, and Mohammed-Reza Feizi-Derakhshi. 2016. ``Feature Selection Using Forest Optimization Algorithm.'' \emph{Pattern Recognition} 60: 121--29.
\hypertarget{ref-guvenir1997supervised}{}
Guvenir, H. A., B. Acar, G. Demiroz, and A. Cekin. 1997. ``A Supervised Machine Learning Algorithm for Arrhythmia Analysis.'' In \emph{Computers in Cardiology}, 433--36.
\hypertarget{ref-guyon2}{}
Guyon, I., and Andre Elisseeff. 2003. ``An Introduction to Variable and Feature Selection.'' \emph{Journal of Machine Learning Research} 3: 1157--82.
\hypertarget{ref-guyon}{}
Guyon, I., J. Weston, S. Barnhill, and V. Vapnik. 2002.
``Gene Selection for Cancer Classification Using Support Vector Machines.'' \emph{Machine Learning} 46: 389--422.
\hypertarget{ref-hsu}{}
Hsu, Hui-Huang, Cheng-Wei Hsieh, and Ming-Da Lu. 2011. ``Hybrid Feature Selection by Combining Filters and Wrappers.'' \emph{Expert Systems with Applications} 38: 8144--50.
\hypertarget{ref-relief}{}
Kira, Kenji, and Larry A. Rendell. 1992. ``The Feature Selection Problem: Traditional Methods and a New Algorithm.'' In \emph{AAAI-92 Proceedings}.
\hypertarget{ref-kohavi}{}
Kohavi, R., and G. H. John. 1997. ``Wrappers for Feature Subset Selection.'' \emph{Artificial Intelligence} 97 (1-2): 273--324.
\hypertarget{ref-rf}{}
Breiman, Leo. 2001. ``Random Forests.'' \emph{Machine Learning} 45: 5--32.
\hypertarget{ref-ASU}{}
Li, Jundong, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert Trevino, Jiliang Tang, and Huan Liu. 2016. ``Feature Selection: A Data Perspective.'' \emph{arXiv:1601.07996}.
\hypertarget{ref-UCI}{}
Lichman, M. 2013. ``UCI Machine Learning Repository.'' University of California, Irvine, School of Information and Computer Sciences. \url{http://archive.ics.uci.edu/ml}.
\hypertarget{ref-lu}{}
Lu, Huijuan, Junying Chen, Ke Yan, Qun Jin, Yu Xue, and Zhigang Gao. 2017. ``A Hybrid Feature Selection Algorithm for Gene Expression Data Classification.'' \emph{Neurocomputing} 256: 56--62.
\hypertarget{ref-mafarja}{}
Mafarja, Majdi M., and Seyedali Mirjalili. 2017. ``Hybrid Whale Optimization Algorithm with Simulated Annealing for Feature Selection.'' \emph{Neurocomputing} 260: 302--12.
\hypertarget{ref-molina}{}
Molina, B., and M. Raydan. 1996. ``Preconditioned Barzilai-Borwein Method for the Numerical Solution of Partial Differential Equations.'' \emph{Numerical Algorithms} 13: 45--60.
\hypertarget{ref-no}{}
Nocedal, Jorge, and Stephen J. Wright. 2006. \emph{Numerical Optimization}. 2nd ed. New York: Springer.
\hypertarget{ref-oluleye}{}
Oluleye, B., L. Armstrong, and D. Diepeveen. 2014. ``A Genetic Algorithm-Based Feature Selection.'' \emph{International Journal of Electronics Communication and Computer Engineering} 5 (April): 2278--4209.
\hypertarget{ref-olafsson}{}
Ólafsson, Sigurdur, and Jaekyung Yang. 2005. ``Intelligent Partitioning for Feature Selection.'' \emph{INFORMS Journal on Computing} 17 (3): 339--55.
\hypertarget{ref-aydin}{}
Pashaei, Elnaz, and Nizamettin Aydin. 2017. ``Binary Black Hole Algorithm for Feature Selection and Classification on Biological Data.'' \emph{Applied Soft Computing} 56: 94--106.
\hypertarget{ref-peng}{}
Peng, H., F. Long, and C. Ding. 2005. ``Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy.'' \emph{IEEE Transactions on Pattern Analysis and Machine Intelligence} 27 (8): 1226--38.
\hypertarget{ref-pudil}{}
Pudil, P., J. Novovicová, and J. Kittler. 1994. ``Floating Search Methods in Feature Selection.'' \emph{Pattern Recognition Letters} 15 (October): 1119--25.
\hypertarget{ref-R}{}
R Core Team. 2017. \emph{R: A Language and Environment for Statistical Computing}. Vienna, Austria: R Foundation for Statistical Computing. \url{https://www.R-project.org/}.
\hypertarget{ref-raydan}{}
Raydan, M. 1993. ``On the Barzilai and Borwein Choice of Steplength for the Gradient Method.'' \emph{IMA Journal of Numerical Analysis} 13: 321--26.
\hypertarget{ref-raydan2}{}
Raydan, M., and B. Svaiter. 2002. ``Relaxed Steepest Descent and Cauchy-Barzilai-Borwein Method.'' \emph{Computational Optimization and Applications} 21: 155--67.
\hypertarget{ref-raymer}{}
Raymer, Michael L., William F. Punch, Erik D. Goodman, Leslie Kuhn, and Anil K. Jain. 2000. ``Dimensionality Reduction Using Genetic Algorithms.'' \emph{IEEE Transactions on Evolutionary Computation} 4 (February): 164--71.
\hypertarget{ref-sayed}{}
Sayed, Safinaz AbdEl-Fattah, Emad Nabil, and Amr Badr. 2016. ``A Binary Clonal Flower Pollination Algorithm for Feature Selection.'' \emph{Pattern Recognition Letters} 77: 21--27.
\hypertarget{ref-senawi}{}
Senawi, Azlyna, Hua-Liang Wei, and Stephan A. Billings. 2017. ``A New Maximum Relevance-Minimum Multicollinearity (MRmMC) Method for Feature Selection and Ranking.'' \emph{Pattern Recognition} 67: 47--61.
\hypertarget{ref-sikonia}{}
Sikonia, M. R., and I. Kononenko. 2003. ``Theoretical and Empirical Analysis of ReliefF and RReliefF.'' \emph{Machine Learning} 53: 23--69.
\hypertarget{ref-spall}{}
Spall, James C. 1992. ``Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation.'' \emph{IEEE Transactions on Automatic Control} 37 (3): 332--41.
\hypertarget{ref-spall2}{}
Spall, James C. 2003. \emph{Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control}. John Wiley.
\hypertarget{ref-wang}{}
Spall, James C., and Qi Wang. 2011. ``Discrete Simultaneous Perturbation Stochastic Approximation on Loss Function with Noisy Measurements.'' In \emph{Proceedings of the American Control Conference}, 4520--25.
\hypertarget{ref-tahir}{}
Tahir, M. A., A. Bouridane, and F. Kurugollu. 2007. ``Simultaneous Feature Selection and Feature Weighting Using Hybrid Tabu Search/K-Nearest Neighbor Classifier.'' \emph{Pattern Recognition Letters} 28 (April): 438--46.
\hypertarget{ref-tan}{}
Tan, Conghui, Shiqian Ma, Yu-Hong Dai, and Yuqiu Qian. 2016. ``Barzilai-Borwein Step Size for Stochastic Gradient Descent.'' In \emph{Advances in Neural Information Processing Systems}, Barcelona.
\hypertarget{ref-lasso}{}
Tibshirani, Robert. 1996. ``Regression Shrinkage and Selection via the Lasso.'' \emph{Journal of the Royal Statistical Society, Series B} 58: 267--88.
\hypertarget{ref-DCC}{}
Torgo, L. 2017. ``DCC Regression Datasets.'' Universidade do Porto, Portugal, Department of Computer Science. \url{http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html}.
\hypertarget{ref-tsai}{}
Tsai, Chih-Fong, William Eberle, and Chi-Yuan Chu. 2013. ``Genetic Algorithms in Feature and Instance Selection.'' \emph{Knowledge-Based Systems} 39: 240--47.
\hypertarget{ref-wan}{}
Wan, Youchuan, Mingwei Wang, Zhiwei Ye, and Xudong Lai. 2016. ``A Feature Selection Method Based on Modified Binary Coded Ant Colony Optimization Algorithm.'' \emph{Applied Soft Computing} 49: 248--58.
\hypertarget{ref-wang2}{}
Wang, Lipo, Yaoli Wang, and Qing Chang. 2016. ``Feature Selection Methods for Big Data Bioinformatics: A Survey from the Search Perspective.'' \emph{Methods} 111 (December): 21--31.
\hypertarget{ref-wang3}{}
Wang, X., J. Yang, X. Teng, W. Xia, and R. Jensen. 2007. ``Feature Selection Based on Rough Sets and Particle Swarm Optimization.'' \emph{Pattern Recognition Letters} 28 (April): 459--71.
\hypertarget{ref-hadley}{}
Wickham, Hadley. 2016. \emph{Ggplot2: Elegant Graphics for Data Analysis}. Springer-Verlag New York. \url{http://ggplot2.org}.
\hypertarget{ref-yihui}{}
Xie, Yihui. 2015. \emph{Dynamic Documents with R and Knitr}. 2nd ed. Boca Raton, Florida: Chapman and Hall/CRC. \url{https://yihui.name/knitr/}.
\hypertarget{ref-iso}{}
Zheng, Zhonglong, Chenmao Xie, and Jiong Jia. 2010. ``ISO-Container Projection for Feature Extraction.'' IEEE.

\end{document}
{ "timestamp": "2018-04-17T02:15:25", "yymm": "1804", "arxiv_id": "1804.05589", "language": "en", "url": "https://arxiv.org/abs/1804.05589" }
\section{Introduction}
\label{sec:intro}
While the photon structure and its properties in the high energy regime are investigated mainly using data from man-made accelerators, one should remain aware of the scientific potential of astrophysical studies: gamma-ray astronomy and ultra-high energy cosmic ray (UHECR) research. Within the former field we deal with photons at energies unavailable in terrestrial instruments, which adds complementarity to the accelerator photon investigations, despite the incomparably low flux of gamma rays reaching the Earth. Considering photon energies even larger than those in gamma-ray astronomy, one enters the realm of UHECR, which concerns particles of energies exceeding 10$^{18}$~eV, with a few extreme events clearly above 10$^{20}$~eV. The existence of such extremely energetic particles has remained a puzzle for decades. Interestingly, the two main classes of scenarios describing the production of UHECR, ``bottom-up'' models based on acceleration of nuclei and the ``top-down'' class postulating the stable existence and decay or annihilation of super-massive particles of energies reaching even 10$^{23}$~eV, both predict that photons should contribute to the UHECR flux \cite{Bhattacharjee:1998qc}. A clear distinction between the two classes is based on the scale of this contribution: in ``bottom-up'' models one would expect a very small fraction of photons in the UHECR flux, while in ``top-down'' scenarios the photon contribution to the observed flux is expected to exceed even 50\% at 10$^{20}$~eV. The research performed over the last decade by the largest cosmic-ray instruments does not indicate the existence of UHE photons; thus stringent upper limits are placed which, under some basic assumptions, might allow severe constraints on the ``top-down'' class as a whole (see e.g. \cite{auger-diffuse-photon-2017}). The point that we undertake in this paper is based on a trivial note concerning the mentioned ``basic assumptions'': we do not know the physics at UHE, relying on extrapolations over many orders of magnitude from the accelerator energy region. Being aware of the fundamental theoretical uncertainties involved in the interpretation of the UHECR data allows one to consider a variety of logical and observational consequences of taking significantly different theoretical assumptions. In this paper we highlight one such consequence: since UHE photons are expected to exist and the available evidence does not confirm their existence, we propose considering scenarios in which UHE photons exist but have a negligibly small chance of reaching Earth due to interactions during their propagation towards us. Such scenarios prompt a purely technical challenge: can one see the products of these interactions, ensembles of cosmic rays? Actually this question needs an answer also within the state-of-the-art set of assumptions: if UHE photons exist, they should interact with the matter and fields during their propagation through the Universe, which would lead to the initiation of extremely large cascades composed mainly of photons. And, continuing the logic within this paradigm, we also ask whether under some circumstances the possible sizes of such cascades might compensate for the very small flux implied by the stringent upper limits on UHE photons.
In other words, we ask whether the scenarios involving UHECR photons can be tested more efficiently with the focus put on the possible detection of photon cascades rather than single particles, complementarily to the current state-of-the-art research. The photon cascade approach has been initiated only recently by the CREDO Collaboration~\cite{credo-web,credo-general-icrc2017}, and the scope of the addressed issues defines a wide physics program with long-term perspectives rather than a short-term project. The basic research channel proposed by CREDO is the experimental verification of the astrophysical models in which particle cascades are initiated, with the emphasis on photon ensembles. Such a verification would be possible only if there is a chance to observe at least partly the products of primary particle (e.g. UHE photon) interactions, and this chance should be determined for the scenarios to be verified before proceeding with the experimental effort. Complementarily, we also propose another type of investigation, which we call ``fishing for unexpected physics''. This approach is oriented towards hunting for clearly non-statistical excesses above the diffuse and random cosmic-ray background, or arrival time correlations of air showers and single muons (or other secondary cosmic rays) in distant detectors, independently of the expectations from theoretical models. In this paper we highlight the CREDO science case, instrumentation and analysis strategies related to these two research channels: testing scenarios and fishing for the unexpected.

\section{N$_{\rm{ATM}}$>1: mysterious air shower observations and generalized cosmic-ray research}
\label{sec:ngt1}

It seems not to be very well known within the cosmic-ray community, especially among the younger colleagues, that there exist published reports on multi-cosmic-ray events looking like footprints of ensembles of primary cosmic rays correlated in time~\cite{smith-sps-b-83,fegan-sps-d-83}. The reports discuss a) a burst of air showers at an estimated mean energy of $3\times10^{15}$~eV lasting 5~minutes~\cite{smith-sps-b-83}, and b) an unusual simultaneous increase in the cosmic-ray shower rate at two recording stations separated by 250~km~\cite{fegan-sps-d-83}. The two observations were made by independent experiments in 1981 and 1975, respectively, and were the only events of their kind seen during the lifetimes of both detecting systems. A few other hints of such possibly correlated cosmic-ray phenomena were seen by some small cosmic-ray experiments dotted around the world, such as a Swiss experiment that deployed four detector systems in Basel, Bern, Geneva and Le Locle, with a total enclosed area of around 5000 km$^2$ \cite{cern-06-global-cosmic}. As proposed in Ref.~\cite{cern-06-global-cosmic}, a globally coordinated cosmic-ray detection and analysis effort seems warranted to verify whether the peculiar air shower observations carry any physical essence or are just artifacts. The proposal concerned building small and cheap detectors, planned to be installed in high schools at different locations around the globe and then operated and maintained by the high school pupils and staff. The science case addressed the cascading processes initiated by nuclei, mainly the photodisintegration of high-energy cosmic-ray nuclei passing through the vicinity of the Sun, first proposed by N. M. Gerasimova and G. Zatsepin back in the 1950s.
The challenge was undertaken in several research centers across the world, where scientists in cooperation with their educational partners established small-size experiments and approached a global coordination. Insufficient funding, together with a deficit of enthusiasm among the participants that grew with the continuing lack of exciting observations, gave no scientifically meaningful outcome and led to reducing or closing the activity in most of the involved high schools. Now the idea of global cosmic-ray research is being revived in the enriched incarnation of CREDO, with the following novelties:

\begin{enumerate}
\item The enriched CREDO science case includes photon cascades, which addresses the foundations of physics at the highest energies, allowing constraints on e.g. Lorentz invariance violation (LIV) \cite{Galaverni:2007tq}, QED nonlinearities \cite{maccione08}, space-time structure \cite{maccione-liv-spacetime-foam-2010} or the ``top-down'' UHECR scenarios \cite{Bhattacharjee:1998qc}, similarly as in the UHE photon search. Furthermore, the generalization of the global approach to allow for photon cascades changes the detection strategy. Photon cascades might contain even millions of particles, compared to a few, or at best several, in the case of nuclear cascading like the Gerasimova-Zatsepin effect. This implies the necessity of implementing novel algorithms and triggers, and at the same time gives promising perspectives for unique, global observable signatures enabling event-by-event identification of cosmic-ray ensembles.
\item CREDO points to the necessity of involving as many detectors as possible, regardless of the technical diversity or complexity of the whole network, focusing in the first stage only on the timing of single events and particles, looking for excesses in time windows of different scales. This approach addresses a very wide variety of instruments: satellites, stratospheric balloons, cosmic-ray arrays, fluorescence telescopes, radio air shower detectors, gamma-ray telescopes (see Ref.~\cite{credo-gamma-rays-icrc2017} for the first multi-primary gamma ray study in the CREDO context), neutrino observatories, accelerators, educational arrays, university laboratories, high school detectors, popular pocket-size detectors and finally smartphones equipped with a detection app and educational toys with the simplest detectors. Such a global network can serve as a universal scenario tester: a subnetwork of detectors meeting the optimum requirements of a specific scenario can be used, and more detectors can be built ``on demand'' if scientifically justified by the expectations of the tested scenario. At the same time the network can be used as a whole to fish for unexpected physics, i.e. unusual rate excesses or arrival time orders, as highlighted in the Introduction.
\item As the acquisition of data from ``everywhere'', recorded with ``everything'', would generate an enormous stream of information, a sensible analysis and interpretation would automatically require enormous manpower, including the effort of non-professional but enthusiastic scientific partners. Therefore CREDO takes public engagement as a key scientific tool, putting the emphasis on the clarity of the key objectives and the easy, intuitive usage of the relevant tools. In parallel we will offer paths for both education and science careers for all the participants.
\item CREDO puts a particular emphasis on the exploration of the alerting potential of the global cosmic-ray network, addressing not only the astrophysical strategies but also multidisciplinary research involving climate changes or seismic studies \cite{credo-general-icrc2017}.
\end{enumerate}

With the novel approach to a global cosmic-ray effort proposed by CREDO, it becomes evident that a) widening the scientific perspectives, b) including as much of the available data as possible, and c) involving as many of the potentially interested and enthusiastic colleagues as possible, increase the chances for scientific discoveries of fundamental importance. Therefore CREDO postulates a fully open project, with free access to data and open source tools, where both financial and in-kind contributions are welcome. It all offers an unprecedented chance for multidisciplinary research and education based on cosmic-ray data, which are available everywhere and at negligibly small cost. Following the Introduction and the above considerations, one defines the CREDO mission in the simplest way by admitting more than one cosmic-ray particle (including photons) entering the atmosphere simultaneously, where the term ``simultaneously'' denotes a temporal correlation and the specification ``more than one'' is equivalent to ``ensemble'' or to the mathematical expression N$_{\rm{ATM}}$>1 (see Fig.~\ref{fig:ngt1}).

\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\linewidth]{cosmic-rays-n-gt-1.pdf}
\caption{Generalization of cosmic-ray research by admitting ensembles of particles as the observation target.}
\label{fig:ngt1}
\end{center}
\end{figure}

\section{Ensembles of photons: scenarios plus fishing}
\label{sec:ensembles}

The ``N$_{\rm{ATM}}$>1'' definition brings us to the already mentioned detection channels: A) testing theoretical scenarios and B) hunting for unexpected physics manifestations. Let us illustrate channel A) with two examples: one exotic and one standard. The exotic example is based on noting that there are variants of LIV with critically different predictions concerning the UHE photon fluxes, depending significantly on the assumed alteration of the dispersion relation at the highest energies. E.g. taking the dispersion relation in the shape defined in Ref.~\cite{Klinkhamer:2008ky}:
\begin{equation}
\label{Eq-dispersion}
E_{\gamma}(\vec{k})=\sqrt{\frac{(1-\kappa)}{(1+\kappa)}}|\vec{k}|
\end{equation}
one understands that the sign of the parameter $\kappa$ changes the UHE photon flux expectations dramatically: for $|\kappa|\ll 1$ one has $E_{\gamma}\approx(1-\kappa)|\vec{k}|$, i.e. a subluminal photon for positive $\kappa$ and a superluminal one for negative $\kappa$. If $\kappa$ is positive, the pair production by a primary UHE photon is suppressed, which should lead to increased UHE photon fluxes observed at Earth, in comparison to the implications obtained using the non-altered dispersion relation. In this scenario the non-observation result allows constraining $\kappa$ and therefore also LIV. On the other hand, if $\kappa$ is negative, the lifetime of a UHE photon would be extremely short, even of the order of 1 second, which on astrophysical scales is equivalent to an immediate decay \cite{chadha83-phot-decay,kostelecky2002-phot-decay,jacobson-2005-liv-rev}. We note that if the latter scenario is real, the non-observation of UHE photons at Earth and the subsequent upper limits would be a trivial, inconclusive result. However, even then one still has at hand one as-yet-unchecked research option: approaching an observation of the products of UHE photon decay, cosmogenic electromagnetic cascades.
Although it is widely assumed that such cascades get completely dissipated before reaching Earth, thus contributing to the diffuse photon flux, there are no precise calculations of the horizon (the distance within which a cascade can reach Earth at least in part, i.e. as an ensemble of a minimum of two particles) under different theoretical assumptions. Such calculations within the Standard Model of particles are possible with the currently available tools \cite{crpropa2016}, and the first steps in this direction have already been made, as described in Ref.~\cite{auger-targeted-photon-2017}. In addition, when one takes into consideration physics beyond the Standard Model, either of particles or of cosmology, more scenarios allowing the observation of cascade-like signatures at the Earth appear verifiable (see e.g. Ref.~\cite{jacobson-2005-liv-rev} for a review of concepts relating to the potential observation of quantum gravity manifestations). In this context it becomes apparent that a complete study and search for UHE photons should include both an effort towards the identification of single UHE particles and a search for the products of their decay: ensembles of photons correlated in time, most likely dispersed significantly in space, maybe also in time, with energies spanning even a very wide spectrum. The existence of a logically obvious and experimentally available, although not yet probed, UHE photon search direction can be illustrated by considering two extreme cases: the obvious detection of a photon ensemble and its obvious extinction. If the photons in a cosmic-ray ensemble which reaches Earth travel very closely to each other, both in space and time, they induce a set of extensive air showers (EAS) which effectively behaves as one big EAS, being a superposition of the smaller ones, detectable with state-of-the-art techniques, e.g. with a giant array of particle counters or with fluorescence telescopes. On the other hand, if the ensemble components are distant from one another on average by more than the size of the Earth, then obviously no conclusion about the cascade-like nature of the phenomenon is possible: we see at best one particle, which contributes to the diffuse and random cosmic-ray particle background. What lies between these two ``extremes'', ensembles of particles (photons) distant from one another on average by less than the size of the Earth, remains to be studied and, possibly, observed. An example of a non-exotic scenario within channel~A is a cascade of photons initiated by a UHE photon primary passing through the vicinity of the Sun and interacting with its magnetic field (see Fig.~\ref{fig:sus-sps}).

\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{sun-sps.pdf}
\caption{The particle distribution on Earth of an ensemble of photons originating from an interaction of a UHE photon of energy 10$^{19}$~eV with the magnetic field of the Sun.}
\label{fig:sus-sps}
\end{center}
\end{figure}

This phenomenon, known in the literature as the preshower effect~\cite{erber66,presh-mcbreen81}, is expected within standard quantum electrodynamics and can be simulated with the available open source tools~\cite{cpc1,cpc2}, which are also being used as a standard in studies involving UHE photon-induced EAS~\cite{corsika}. The expected particle distribution at the top of the atmosphere is very much elongated (even 10000~km!)
in the West-East direction and super-thin (meters) along the North-South line, promising a unique observable signature built of the temporal sequence of arrival times of the secondary cosmic rays on the ground, and a very characteristic pattern of the triggering detectors~\cite{credo-general-icrc2017}. In the preshower effect, once the primary UHE photon converts into an electron-positron pair, the electrons begin to radiate magnetic bremsstrahlung photons. The further the electrons travel, the lower their energies and the larger their deflection with respect to the primary direction. This is reflected in the photon distribution on the ground: the photons near the core, corresponding to the primary direction, possess high energies, as they were emitted right after the electron-positron pair creation, when the electrons still had energies comparable to the primary and had not yet been deflected significantly in the magnetic field of the Sun. The further from the core, the lower the photon energies. In the example shown in Fig.~\ref{fig:sus-sps} the primary photon energy is 10$^{19}$~eV and the spectrum of photons at the top of the Earth's atmosphere extends from below GeV to above EeV (not the whole spectrum is shown). A feature such as that shown in Fig.~\ref{fig:sus-sps} could be observed with a global cosmic-ray network, or with a single large cosmic-ray observatory, or with a dedicated experiment tuned to the particle densities expected on the ground. Testing this scenario, which we call Sun-SPS (SPS for Super Pre-Shower), is one of the first scientific tasks of the CREDO Collaboration. It is worthwhile to mention that the largest observatories are tuned to record EAS with energies typical for the very vicinity of the Sun-SPS core, landing at distances not further than a few tens of km, while the whole Sun-SPS footprint might be even 3 orders of magnitude longer. This points to the advantage of the global and diversified approach to the available cosmic-ray data implemented in CREDO, at least as far as testing the Sun-SPS scenario is concerned. Special attention in CREDO is given to channel B: fishing for unexpected physics. The idea of the ``unexpected physics'' trigger, based on arrival time correlations and order in distant detecting stations, is sketched in Fig.~\ref{fig:mtrigger}.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\linewidth]{mtrigger.pdf}
\caption{A standard (cluster in space) vs ensemble (cluster in time) trigger in a ground array of a cosmic-ray observatory.}
\label{fig:mtrigger}
\end{center}
\end{figure}

Complementarily to the standard search for neighboring detectors triggered simultaneously by an EAS (clusters in space), one might also look for distant detectors triggered within some predefined time window by an ensemble of cosmic rays (clusters in time). In addition, one might expect some order in the arrival times of the particles or events contributing to the cluster. The presence of such a feature would increase the statistical significance of the observation.

\section{Observation: public engagement as a scientific tool}

As already explained, the scientific success of the CREDO mission is strictly determined by the scale the project can achieve: the total collecting area, the geographical distribution of the detecting sites, and the availability of manpower. The optimum can be reached by combining the available professional resources with wide public engagement.
Apart from the social reasons for which the public should be kept informed of, and even involved in, professional scientific research, it is obvious that public engagement in an exciting scientific project must induce a growth of professional scientific resources, bringing profits to the whole science community and to society as a whole. The key condition for this scientific growth is to show opportunities and paths of individual development and education within the project. In CREDO, public engagement is going to be driven by three simple tools that will help to reach both the social and the scientific objectives of the project. Firstly, massive participation will be achieved with an open source mobile application which turns a smartphone into a particle detector. Such applications already exist \cite{deco, crayfis}, although they are not yet open, thus not enabling the flexibility required for a society-driven software engine. For this reason CREDO opens its own app, to be freely distributed among science enthusiasts across the world with the encouragement to contribute to its development \cite{credo-detector}. This of course does not exclude contributions from the users of the other applications to the common worldwide database. Another potentially available channel to involve even the youngest generations of science enthusiasts is related to educational toys capable of detecting secondary cosmic rays and networked worldwide to help the CREDO mission. The detection of a particle and the link to a community dedicated to reaching common and ambitious scientific goals should stimulate passion, enthusiasm and a desire to get involved more deeply in the project, i.e. to get educated. Using either the smartphone particle detection app or an educational toy will enable passive participation in the CREDO project by collecting data. The next level of involvement will be activity within the CREDO community environment. The pilot component of this environment is the CREDO citizen science platform Dark Universe Welcome (DUW)~\cite{duw}, installed on the Zooniverse engine \cite{zooniverse}. With the easily understandable analysis format of DUW (see Fig.~\ref{fig:map}) one will be able to analyze ``private'' particles in the global context, search for ``strange'' detection patterns and help to train the ``scientific fishing'' algorithms (a minimal sketch of the underlying test is given below).

\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\linewidth]{map.pdf}
\caption{A simple visualization of the ``fishing for unexpected'' strategy. The average arrival time within a certain temporal and spatial interval should be statistically consistent with the mean of the time interval if the received signal is composed of uncorrelated particles. A significant departure from the mean might be a footprint of an ensemble of correlated cosmic rays.}
\label{fig:map}
\end{center}
\end{figure}

Other social facilities of the CREDO community environment, like e.g. individual and group rankings, will increase the pleasure of doing science and further stimulate the motivation to get involved more deeply. The educational and scientific career paths supplementing the popular devices and software will in turn strengthen the stream of creativity and ``fresh blood'' powering the community of science professionals, which ultimately should be reflected in an increased ability of society as a whole to develop by making scientific discoveries.
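Returning to the departure-from-the-mean test illustrated in Fig.~\ref{fig:map}: the sketch below is a minimal Python illustration under the stated uniformity assumption, not CREDO production code. If uncorrelated arrivals within a window \([t_0, t_1]\) are uniformly distributed, the sample mean of \(n\) arrival times has expectation \((t_0+t_1)/2\) and standard error \((t_1-t_0)/\sqrt{12n}\).

\begin{verbatim}
import numpy as np

def departure_z_score(arrival_times, t0, t1):
    """Z-score of the mean arrival time against the uniform
    (uncorrelated-background) expectation; a large |z| may flag
    a time-correlated ensemble candidate for further inspection."""
    t = np.asarray(arrival_times, dtype=float)
    n = t.size
    expected = 0.5 * (t0 + t1)              # mean of Uniform(t0, t1)
    stderr = (t1 - t0) / np.sqrt(12.0 * n)  # std. error of the sample mean
    return (t.mean() - expected) / stderr
\end{verbatim}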
The third and scientifically the most exciting tool of public engagement will be an automated procedure to monitor the cosmic-ray data globally, providing easily classifiable monitoring images with the largest discovery potential. The prototype of such a monitoring machine, called CREDO Monitor, fed by some of the publicly available cosmic-ray data, has been launched recently and is internally available to the CREDO members \cite{credo-monitor}. The available data are migrated periodically from the active acquisition sites to the data storage and computing center maintained at ACC Cyfronet AGH-UST~\cite{cyfronet}; after basic processing (scanning for time-clustering), classifiable global detector patterns (maps as in Fig.~\ref{fig:map}) are generated and stored on a web server, ready for inspection by the human eye. The receivers of CREDO Monitor will be able to tune the view of the most interesting discovery proposals selected by classifying machines, initiate a collective human-based classification according to predefined crowdsourcing requirements, and finally open a professional analysis with a variety of algorithms that would actually specify the statistical significance of the proposed discovery patterns. The above three pillars of public engagement in CREDO should attract a large number of participants and increase the chances for a scientific success of the project. Importantly, all contributions to data acquisition and analysis, whether from scientists or from ``just'' science enthusiasts, would give the right to claim co-authorship of scientific publications and a share in any accompanying awards. Moreover, it is planned that the contributions will be easily registered and evaluated, leading to an estimate of each share in the project. Such an evaluation system, including e.g. the already mentioned user rankings, would offer the potential to activate an additional motivation of the participants: positive competition. \section{Summary} \label{sec:summary} We consider cosmic-ray cascades composed of photons correlated in time as a yet unchecked channel of information about the Universe and about physics at the highest energies known. If such cascades exist they might have a wide spatial distribution, which might make them observable only with a worldwide network of detectors and keep them out of the reach of even the largest cosmic-ray observatories in their state-of-the-art configurations. We introduce the Cosmic-Ray Extremely Distributed Observatory, an infrastructure and physics program tuned to cosmic-ray cascades, with potential impact on ultra-high energy astrophysics, the physics of fundamental particle interactions and cosmology, also offering multidimensional interdisciplinary opportunities. We implement the CREDO strategy by applying a simple yet novel approach to cosmic-ray data taking: a global and massive one. Within the CREDO strategy, based on the collective and global approach to the available and future cosmic-ray data, the chances for detecting and studying the astrophysical cascades by definition exceed the capabilities of even the largest observatories and detectors working independently of each other. Everybody, from theorists to non-experts, both institutions and private persons, is invited and welcome to contribute. \section*{Acknowledgements} This research has been supported in part by PLGrid Infrastructure.
We warmly thank the staff at ACC Cyfronet AGH-UST for their always helpful supercomputing support. The Dark Universe Welcome citizen science experiment was developed with the help of the ASTERICS Horizon 2020 project. ASTERICS is a project supported by the European Commission Framework Programme Horizon 2020 Research and Innovation action under grant agreement no. 653477. PH thanks Andrew Taylor, Marcus Niechciol, Daniel Kuempel, and David d'Enterria for inspiring discussions.
\section{Introduction} Our Universe overwhelms us with its richness of structure and complexity. In order to grasp its governing processes, one has to focus on one single aspect. Unfortunately most observations are sensitive to a large variety of phenomena. Their accurate separation into distinct components is critical, as it will influence any further analysis. In this paper we deal with the problem of separating point sources from diffuse emission. We develop the \texttt{\textbf{starblade}} (\textbf{st}ar and \textbf{a}rtifact \textbf{r}emoval with a \textbf{b}ayesian variationa\textbf{l} \textbf{a}lgorithm from \textbf{d}iffuse \textbf{e}mission) method to separate these two classes of structures occurring in astronomical imaging. Point-like sources can be extremely bright as well as extremely faint, and therefore inhabit a huge dynamic range. By definition they are too small to be spatially resolved and are rather independent of their apparent surroundings. Diffuse, extended emission can be spatially resolved. Large structures are almost impossible to observe without being affected by superimposed point sources. In other contexts weak point-like structures embedded in a diffuse background are of interest. Here it might be important not to be blinded by the background emission. Another component present in real observations are artifacts originating from the measurement process itself. Those artifacts can exhibit point-like characteristics, such as cosmic-ray hits on the detector or edges. They are often correlated only along one image direction and relatively unrelated to the distant cosmos. Conceptually the two components, point sources and diffuse emission, are independent; therefore an independent component analysis (ICA) \citep{ICAAA} should be the method of choice to separate them. ICA separates stochastically independent components by their different appearance. However, it is problematic for classical ICA algorithms to have fewer data channels than components. Considering only one individual image, the separation is an ill-posed problem. In principle any flux could be explained by either only point sources or only diffuse emission, but both scenarios are neither plausible nor useful. In order to separate the components one needs to add additional information, judging a possible separation by some criteria. There will not be a unique solution to this problem, but a large number of plausible separations. An answer to the question of the separation can therefore only be of probabilistic nature. The plausibility of one configuration derives from prior probabilities, which encode some knowledge about the components. For this we have to mathematically formulate the concepts of diffuse emission and point-like sources and then confront them with the data in order to obtain a plausible separation. In this paper we present a method to separate those two components from a single image. We use physically motivated models for the description of the point-like and diffuse components and derive a posterior estimate of the separation using Bayes' theorem. As the posterior is not accessible analytically, we perform a variational approximation to the posterior quantities, which is capable of capturing uncertainty. The resulting algorithm is a non-parametric, hand-crafted ICA method specifically tailored to separate diffuse from point-like sources. It can also provide an estimate of the separation uncertainty at every position.
We will use the formalism of information field theory (IFT) \citep{IFT}, which allows us to easily generalize the method to additional spatial dimensions or resolutions. This paper deals with the pure diffuse-point source separation problem. Complications due to imperfections of the data originating from noise or point spread functions are ignored. There are several reasons for such an approach. First, there are data sets that are indeed of such high fidelity that the assumption of vanishing noise is essentially fulfilled. Second, for moderate-fidelity data a detailed noise modeling might be too expensive, given the scientific focus at hand. Third, the method is useful to detect and remove point-like artifacts from images, as e.g. generated by cosmic-ray hits on CCDs of space-based telescopes. Additionally, for the imaging of low-fidelity data, the separation of point-like and diffuse flux given a perfectly assumed sky brightness is a useful internal step of a denoising and imaging algorithm, as we explain in Appendix \ref{ap:lager_picture}. We will explore the algorithm's capability to generalize to such imperfect situations by confronting it with real data in one of our examples. The traditional approach to dealing with distracting point sources is to mask them out. A point source is identified by some criterion and its area is removed from the image. This approach has two disadvantages. First, the masking might affect further analysis if it is not carefully considered and could therefore corrupt results. Second, it is hard to identify and properly mask weak point sources. In order to identify them it is vital to consider the surrounding area and its correlation structure. A popular method to extract point sources from images is the SExtractor software \citep{sextractor}. It removes the background, identifies sources, classifies them, extracts characteristic features and builds catalogs. Another widely used software is DAOPHOT \citep{stetson1987daophot}, which specializes in crowded fields. In both methods the background removal is done by a heuristic scheme, which for many applications performs excellently, especially if the background can be approximated as constant and sources are sparse. A Bayesian version, which also provides uncertainties on those quantities, is the Background-Source separation method \citep{guglielmetti2009method}. Another method comparable to the one presented here is that of \citet{popowicz2015method}, which relies on local neighborhoods and a morphological distance transform, but is not derived from probabilistic principles. The problem of separating point sources and diffuse emission can also be regarded as recovering a diffuse component which is corrupted by point sources. The recent developments in deep learning have led to a large variety of architectures capable of learning such tasks from huge amounts of data. One such architecture is the denoising auto-encoder (DAE) \citep{DAE}. It is trained on pairs of corrupted images and their ground truth. Here the corrupted images are typically generated artificially to mimic some kind of degradation, such as Gaussian or salt-and-pepper noise \citep{xie2012image}. In analogy to this we will compare our method with a denoising auto-encoder trained on pairs of diffuse emission and a point-source corrupted version. Let us briefly outline the structure of this paper. We will start in Sec.
\ref{sec:datamodel} by introducing the underlying description of the data, followed by a discussion of point-like and diffuse emission in Sec. \ref{sec:pointlike} and Sec. \ref{sec:diffuse}, respectively. The full mathematical structure of the problem is derived in Sec. \ref{sec:full_picture}. Solving the problem requires some further numerical considerations, which are outlined in Sec. \ref{sec:numerical}. The variational approach we use to infer an approximate posterior separation is described in Sec. \ref{sec:variational}, followed by a brief summary of the algorithmic steps of the \texttt{starblade} algorithm in Sec. \ref{sec:algorithm}. We validate our algorithm by applying it to synthetically generated data, and demonstrate its application to real data, an image of the $\mathrm{M}100$ galaxy obtained by the Hubble Space Telescope, in Sec. \ref{sec:examples}. In both cases we compare its performance with the background estimation step of the SExtractor algorithm and a denoising auto-encoder (DAE). We conclude in Sec. \ref{sec:conclusion}. How the method presented here can be used in larger inference frameworks is outlined in Appendix \ref{ap:lager_picture}. In Appendix \ref{ap:DAE} we describe in detail the implementation, architecture and training of the DAE. \section{The data model} \label{sec:datamodel} The data we are considering consist of a superposition of two components: on the one side spatially correlated, positive diffuse flux, on the other side spatially uncorrelated, also positive, point-like flux. Negative flux values are unphysical and we will exclude them by enforcing the positivity of the components. To this end we express them in terms of their logarithmic brightness. \begin{align} \label{eq:data} d = e^{s} + e^{u} \end{align} The logarithmic diffuse emission is expressed in $s$, the logarithmic point-like flux in $u$. The quantities $s$ and $u$ are fields, meaning they are functions of the location $x$. The exponential function in Eq. \ref{eq:data} and other functions are applied point-wise in IFT, meaning $(e^{s})_x = e^{s_x}$. We approach the separation problem from the probabilistic perspective and use the data equation Eq. \ref{eq:data} to derive the likelihood of the data, given the point sources and diffuse emission. As the data is not exposed to any randomness for given pairs of $s$ and $u$ in the noiseless limit, this likelihood is expressed by a delta distribution. \begin{align} \label{eq:delta} \mathcal{P}(d \vert s, u) = \delta(d - e^{s} - e^{u}) \end{align} We can combine this likelihood with a prior that models what we mean by point-like and diffuse emission, allowing their separation. The separation is done by applying Bayes' theorem, \begin{align} \label{eq:bayes} \mathcal{P}(s,u\vert d) = \frac{\mathcal{P}(d\vert s, u) \mathcal{P}(s) \mathcal{P}(u)}{\mathcal{P}(d)} \end{align} and asking for the most plausible a posteriori separation of $d$ into $e^s$ and $e^u$. Note that we assumed point and diffuse sources to be independent of each other, which implements the fundamental assumption of an ICA: \begin{align} \mathcal{P}(s,u) = \mathcal{P}(s) \mathcal{P}(u) \text{.} \end{align} We now need expressions for the prior distributions $\mathcal{P}(u)$ and $\mathcal{P}(s)$, defining the characteristics of point-like and diffuse emission, respectively.
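To make the data model concrete, the following minimal \texttt{numpy} sketch generates toy data according to Eq. \ref{eq:data}. It is not part of the paper's \texttt{NIFTy} implementation: the grid size and Fourier conventions are our own illustrative assumptions, while the power-law spectrum and the inverse-gamma parameters anticipate the priors introduced below and the synthetic example of Sec. \ref{sec:mock_example}.

\begin{verbatim}
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(42)
n = 256  # illustrative image side length

# Diffuse component: Gaussian random field s with an assumed
# isotropic power-law spectrum, built via a Fourier-diagonal covariance.
k1 = np.fft.fftfreq(n, d=1.0 / n)
k = np.sqrt(k1[:, None] ** 2 + k1[None, :] ** 2)
power = 1.0 / (1.0 + k) ** 4
white = np.fft.fft2(rng.standard_normal((n, n)))
s = np.real(np.fft.ifft2(np.sqrt(power) * white))

# Point-like component: e^u pixel-wise inverse-gamma distributed,
# with shape alpha - 1 and scale q (cf. the point-like prior below).
alpha, q = 1.5, 1e-3
e_u = invgamma.rvs(a=alpha - 1.0, scale=q, size=(n, n), random_state=rng)

d = np.exp(s) + e_u  # the data model: d = e^s + e^u
\end{verbatim}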
\section{Point-like emission} \label{sec:pointlike} The defining features of point sources are their spatial independence and their strong diversity in brightness. The independence is expressed by their joint probability distribution factorizing into independent probabilities for each position. \begin{align} \mathcal{P}(u) = \prod_x \mathcal{P}(u_x) \end{align} The brightness distribution of the individual point sources can often be argued to follow a power law, as we expect the number of sources to scale with the observed volume and the brightness to decrease with distance. In a Euclidean universe with uniformly distributed point sources the exponent of this distribution is expected to be $\alpha = 1.5$. A detailed discussion of the choice of this parameter can be found in \citet{D3PO} and also in \citet{guglielmetti2009method}. In practical applications this value might be too restrictive, as the universe is not Euclidean and the sources exhibit an evolution with cosmic time, which translates into a distance dependence. Thus other values for $\alpha$ might be chosen. The choice of this parameter will influence the separation, as it defines the sensitivity of the method towards assigning more flux either to the diffuse emission or to the point sources. It is important to note that the impact of $\alpha$ is in general not scale independent. An increase or decrease in resolution splits or merges pixels and the point sources associated with them. This splitting or merging of point sources in general changes the effective brightness distribution. Only for $\alpha = 1.5$ does a change in resolution have no effect. Any other value of $\alpha$ expresses a power-law brightness distribution exactly only at the chosen resolution. Changing the resolution without readjusting $\alpha$ actually means choosing a different brightness distribution. For the brightest sources this subtlety does not make a big difference. A discussion of this matter can also be found in \citet{D3PO}. In order to ensure the normalization of the prior distribution for any choice of $\alpha$, and for numerical reasons, we introduce a low-brightness cut-off. It suppresses vanishing brightness values, stabilizing the algorithm. A physical motivation for this cut-off is the finite extent of our host galaxy, the Milky Way, and the finite extent of the look-back light cone in the universe. Beyond a certain distance we do not expect a large number of point sources. This leads to the choice of an inverse-gamma distribution for the point sources, which reads: \begin{align} \mathcal{P}(u) = \mathcal{I}(e^u, \alpha, q) = \frac{q^{\alpha-1}}{\Gamma(\alpha-1)} e^{-(\alpha-1)^\dagger u } e^{-q^\dagger e^{-u}} \text{.} \end{align} The $\dagger$ expresses the complex conjugated, transposed vector or field. We will set $q$ to small values in order not to influence the separation in a significant way.
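In terms of the linear brightness $x = e^u$, this pixel-wise prior is an ordinary inverse-gamma density with shape $\alpha - 1$ and scale $q$, so standard tools apply directly; a short check with \texttt{scipy} (parameter values are illustrative):

\begin{verbatim}
import numpy as np
from scipy.stats import invgamma

alpha, q = 1.5, 1e-3  # shape and cut-off parameters as in the text

# Pixel-wise prior on the linear brightness x = e^u:
# P(x) = q^(alpha-1) / Gamma(alpha-1) * x^(-alpha) * exp(-q / x),
# an inverse-gamma distribution with shape alpha - 1 and scale q.
prior = invgamma(a=alpha - 1.0, scale=q)

x = np.logspace(-4, 2, 7)
print(prior.pdf(x))  # power-law tail ~ x^(-alpha) well above the cut-off q
\end{verbatim}

The cut-off $q$ exponentially suppresses brightnesses below $q$, while the tail reproduces the power-law behavior discussed above.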
\section{Diffuse emission} \label{sec:diffuse} For the diffuse emission we propose a non-parametric log-normal model, which assumes the logarithmic flux to be Gaussian distributed. The spatial correlations are expressed in the correlation structure of this Gaussian distribution. \begin{align} \mathcal{P}(s) = \mathcal{G}(s,S) \equiv \frac{1}{\vert 2\pi S\vert^{\frac{1}{2}}} e^{-\frac{1}{2}s^\dagger S^{-1} s} \end{align} The correlation structure $S$ is a priori unknown. Assuming prior homogeneity and isotropy, it is represented by a diagonal operator in Fourier space, according to the Wiener-Khintchin theorem \citep{Wiener,Khintchin}, and is described by a one-dimensional power spectrum. With these assumptions we can express the correlation structure $S$ compactly as \begin{align} S = \mathbb{F}^\dagger \widehat{\left( \mathbb{P} e^\tau\right)} \mathbb{F} \text{.} \end{align} We express the power spectrum in terms of its logarithm $\tau$ to ensure positivity. The isotropy operator $\mathbb{P}$ distributes this one-dimensional power spectrum into the full harmonic space. The $\widehat{\cdot}$ indicates the raising of a field to a diagonal operator, and finally the Fourier transformations $\mathbb{F}$ implement the homogeneity assumption. The inference of the logarithmic power spectrum $\tau$ will be part of our overall procedure. This corresponds to the critical filter, which is derived in detail in \citet{ensslinfrommert} and \citet{smoothpower}. Overall we describe the diffuse emission by a log-normal model with a priori unknown correlation structure. This model has been applied in an astrophysical context to describe diffuse structures in various situations \citep{D3PO,RESOLVE, QPO, NCF}. \section{The full picture} \label{sec:full_picture} Now we have all prior distributions needed to calculate the posterior for the diffuse and point-like flux, as described in Eq. \ref{eq:bayes}. We can get rid of one quantity by marginalizing out the delta distribution from the likelihood contribution Eq. \ref{eq:delta}. We choose to perform the marginalization over $s$. \begin{align} \mathcal{P}(u\vert d) &= \int \mathcal{D}s \frac{\mathcal{P}(d\vert s,u)\mathcal{P}(s)\mathcal{P}(u)}{\mathcal{P}(d)}\\ &= \frac{\mathcal{P}(u)}{\mathcal{P}(d)} \int \mathcal{D}s \: \delta(d-e^s -e^u) \: \mathcal{G}(s,S) \\ &= \frac{\mathcal{P}(u)}{\mathcal{P}(d)} \mathcal{G}(\mathrm{ln}(d-e^u),S) \frac{1}{\prod _x\vert d_x-e^{u_x} \vert} \end{align} All terms not containing any dependence on $s$ can be pulled out of the integral. Performing the integral replaces $s$ in its Gaussian prior with $\mathrm{ln}(d-e^u)$ to fulfill the constraint. In addition we get the factor $\prod_x\vert d_x-e^{u_x}\vert^{-1}$ originating from the change of variables required to perform the integral. The resulting expression only depends on the logarithmic point-like flux $u$. For mathematical convenience we investigate \begin{align} \label{eq:hamiltonian} \mathcal{H}(u\vert d) \equiv & - \mathrm{ln} \: \mathcal{P}(u\vert d) \\ =& \: \mathcal{H}_0 + \frac{1}{2}\mathrm{ln}(d -e^u)^\dagger S^{-1} \mathrm{ln}(d -e^u) \nonumber \\ &+ (\alpha - 1)^\dagger u + q^\dagger e^{-u} + 1^\dagger \mathrm{ln}(d-e^u) \text{.} \end{align} The expression above fully describes the problem. It corresponds to the negative log-posterior or, in the language of IFT, the information Hamiltonian. \section{Numerical considerations} \label{sec:numerical} Our inference will be based on the minimization of some target functional with respect to some parameters. In the current formulation of the setup we have numerically problematic expressions of the form $\mathrm{ln}(d-e^u)$, which can be temporarily ill-defined during the inference calculations due to negative values within the logarithm. We can overcome this limitation by introducing a separation field $a$, which in each pixel ranges within $[0,1]$, attributing a fraction $a$ of the image value $d$ to the point sources, $e^u \equiv ad$, and the fraction $e^s =(1-a)d$ to the diffuse emission.
In order to do so we introduce the additional constraint \begin{align} \mathcal{P}(u\vert ad) = \delta(u - \mathrm{ln}(ad)) \text{,} \end{align} which allows us to reformulate the problem Hamiltonian in terms of $a$ via marginalization over $u$. \begin{align} \mathcal{H}(a\vert d) =& \: \mathcal{H}_0 + \frac{1}{2}\mathrm{ln}((1-a)d)^\dagger S^{-1} \mathrm{ln}((1-a)d) \nonumber \\ &+ (\alpha - 1)^\dagger \mathrm{ln}(ad) + q^\dagger \frac{1}{ad} - 1^\dagger \mathrm{ln}((1-a)d) + 1^\dagger \mathrm{ln}(a) \end{align} The last term originates from the functional determinant of the substitution. To ensure that the separation field ranges between zero and one, we parametrize it with a sigmoid function applied to some underlying field $b$. A function fulfilling a sigmoid shape, ranging from $0$ to $1$, is \begin{align} \label{eq:sigmoid} a = \frac{1}{2} (\mathrm{tanh}(b)+1) \text{.} \end{align} Finally, this internal separation field $b$ will be the quantity we try to infer in order to separate point sources from diffuse emission. Again, we can introduce it into the model formulation via an additional probability distribution $\mathcal{P}(a\vert b)$ on $a$, which, when marginalized out, replaces every $a$ with the expression above. Another functional determinant is added by this substitution. The sigmoid function given in Eq. \ref{eq:sigmoid} approaches its respective boundary of $0$ or $1$ exponentially for large absolute values of $b$. Therefore, beyond some point, increasing values of $\vert b\vert $ do not change the separation in any significant way. If only one component is present at a location, there is no resistance for the algorithm against pushing the value of $b$ to arbitrarily high values. This can cause numerical instabilities, as it represents unconstrained degrees of freedom within the problem. To counteract this behavior we introduce an additional weak Gaussian prior on $b$, centered at zero. Values of $ \vert b \vert > 10$ already correspond to a dynamical range of the ratio between the two components of roughly $1 : 10^{9} $. We want to keep the values of $b$ within a range that can explain any separation between point-source and diffuse flux, while regularizing against unnecessary drift. For this we add a small, quadratic prior energy for $b$. The full description of the problem is then expressed in the Hamiltonian \begin{align} \label{eq:full_hamiltonian} \mathcal{H}(b\vert d) =& \: \mathcal{H}_0 + \frac{1}{2}\mathrm{ln}((1-a)d)^\dagger S^{-1} \mathrm{ln}((1-a)d) \nonumber \\ &+ (\alpha - 1)^\dagger \mathrm{ln}(ad) + q^\dagger \frac{1}{ad} + \frac{1}{2 \sigma^2} b^\dagger b \nonumber \\ &- 1^\dagger \mathrm{ln}((1-a)d) + 1^\dagger \mathrm{ln}(a) - 1^\dagger \mathrm{ln}(1-\mathrm{tanh}^2(b)) \text{.} \end{align} Again the last term originates from the functional determinant of the final substitution. The free parameters of this model are the correlation structure $S$, the cutoff of the brightness distribution $q$ and its scaling behavior $\alpha$, and the prior standard deviation $\sigma$ of the $b$ field. The latter is chosen large, for example $\sigma = 3$, so that it usually has only a small effect, mainly restricting the values of $b$ to between roughly $-10$ and $10$ and thereby providing almost the full range of the separation field $a$ between $0$ and $1$. We propose to set the cutoff parameter to a low value to minimize its impact on the inference, say $q = 10^{-10}$ for a flux scale in the vicinity of unity. In case one has reason to assume that the number of faint sources is suppressed, it can be adjusted accordingly.
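As a concrete reading of Eq. \ref{eq:full_hamiltonian}, the following \texttt{numpy} sketch evaluates $\mathcal{H}(b\vert d)$ up to the constant $\mathcal{H}_0$ for a given image and a fixed power spectrum; $S^{-1}$ is applied via the Fourier-diagonal form of Sec. \ref{sec:diffuse}. Function names and the normalization convention of the transform are our own illustrative choices, not the paper's \texttt{NIFTy} code.

\begin{verbatim}
import numpy as np

def hamiltonian(b, d, inv_power, alpha=1.5, q=1e-10, sigma=3.0):
    """Information Hamiltonian H(b|d), up to the constant H_0.

    b, d      : 2-d arrays (internal separation field, data image)
    inv_power : 2-d array holding 1 / (P e^tau) on the Fourier grid,
                so that S^{-1} is diagonal in harmonic space.
    """
    a = 0.5 * (np.tanh(b) + 1.0)  # sigmoid separation field, in (0, 1)
    s = np.log((1.0 - a) * d)     # logarithmic diffuse flux, e^s = (1 - a) d
    u = np.log(a * d)             # logarithmic point-like flux, e^u = a d

    # Diffuse prior term (1/2) s^dagger S^{-1} s, with a unitary Fourier
    # convention absorbed into the normalization below.
    s_k = np.fft.fft2(s)
    diffuse = 0.5 * np.real(np.vdot(s_k, inv_power * s_k)) / s.size

    point = np.sum((alpha - 1.0) * u + q * np.exp(-u))  # inverse-gamma prior
    reg = 0.5 / sigma**2 * np.sum(b * b)                # weak Gaussian prior on b
    jacobian = (-np.sum(s) + np.sum(np.log(a))
                - np.sum(np.log1p(-np.tanh(b) ** 2)))   # functional determinants
    return diffuse + point + reg + jacobian
\end{verbatim}

Since $0 < a < 1$ holds strictly, the logarithms are always well defined, which is precisely the point of the reparametrization.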
The only parameter we cannot fix generally is the value of the scaling parameter $\alpha$, which influences the outcome of the separation. The larger its value, the more strongly point sources are suppressed and the more flux is attributed to the diffuse emission, and vice versa. This effect is most pronounced in regions of superimposed fluxes. It determines how significantly point sources have to stick out in order not to be considered part of the diffuse flux. This parameter has to be set by the user, depending on the questions asked of the data. A lower limit for $\alpha$ is the value $1$, which corresponds to an uninformative prior on the scale; it has to be larger than one for the prior distribution to be normalizable. A small $\alpha$ corresponds to high point-source flux. Choosing such a value makes it easy for the algorithm to explain the flux with the point-like component, suppressing the small scales of the diffuse component. Choosing a large $\alpha$, all flux will end up in the diffuse component, as any point-source component is strongly suppressed. For values of $\alpha$ between these extremes, separations are achieved that balance diffuse emission and point sources according to this parameter. If no specific reason is given to choose $\alpha$ otherwise, we recommend the resolution-independent choice of $\alpha=1.5$, which is also favored in a homogeneous Euclidean universe. \section{Variational inference} \label{sec:variational} We do not have access to the posterior distribution, as its normalization is not tractable, so we rely on a variational scheme to obtain posterior estimates of the separation. This approach is more robust against the choice of inappropriate hyper-prior parameters, compared to the popular maximum a posteriori (MAP) estimate. We will demonstrate this in one of our examples. The inference of the variational parameters is done by minimizing the Kullback-Leibler divergence \citep{KLdivergence} between the true posterior $\mathcal{P}(b\vert d)$ and a simpler, approximate posterior $\widetilde{\mathcal{P}}(b\vert d)$, which is given by \begin{align} \label{eq:KLDivergence} \mathcal{D}_{KL}(\widetilde{\mathcal{P}}(b\vert d) \vert \vert \mathcal{P}(b \vert d)) &= \int \mathcal{D}b \: {\widetilde{\mathcal{P}}}(b\vert d) \:\mathrm{ln}\:\frac{{\widetilde{\mathcal{P}}}(b\vert d)}{ \mathcal{P}(b \vert d)} \\ &= \langle \mathcal{H}(b\vert d)\rangle_{\widetilde{\mathcal{P}}(b\vert d)} - \langle \widetilde{\mathcal{H}}(b\vert d)\rangle_{\widetilde{\mathcal{P}}(b\vert d)} \text{.} \end{align} As the approximate distribution we use a Gaussian, which has a number of convenient properties and already captures the crucial feature of an uncertainty. The approximation therefore has the form $\widetilde{\mathcal{P}}(b\vert d) = \mathcal{G}(b-\bar{b}, B)$, and it remains to determine the values of $\bar{b}$ and $B$ by minimizing Eq. \ref{eq:KLDivergence}. We can calculate the gradient with respect to $\bar{b}$ using the identity \begin{align} \frac{\delta \mathcal{D}_{KL}}{\delta \bar{b}} =\left \langle \frac{\delta H(b\vert d)}{\delta b}\right\rangle_{\mathcal{G}(b-\bar{b}, B)} \end{align} and we can solve for $B$ by setting the gradient of the KL divergence with respect to it to zero and solving the resulting equation.
As we chose a Gaussian approximation this becomes \begin{align} B^{-1} = \frac{\delta^2 \mathcal{D}_{KL}}{\delta \bar{b} \delta \bar{b}^\dagger} \equiv \left \langle \frac{\delta^2 H(b\vert d)}{\delta b \delta b^\dagger}\right\rangle_{\mathcal{G}(b-\bar{b}, B)} \text{.} \end{align} This covariance equals the curvature of the KL divergence with respect to its mean, and we recycle it within our minimization to obtain a Newton scheme. In order to approximate the expectation values we draw a set of independent samples from our approximate distribution and replace the integral over the distribution with a simple sum. More detailed discussions of approximations of this kind can be found in \citet{NICAAC} and \citet{NCF}. Note that we avoid explicitly representing the covariance at any time. We can extract any desired quantity from it by solving a system of linear equations using numerical schemes such as the conjugate gradient method \citep{conjugate}. This is necessary as its size scales quadratically with the number of image pixels. In order to infer the unknown correlation structure we refer to the critical filter described in \citet{ensslinfrommert}, which assumes a priori homogeneity and isotropy to formulate the correlation structure of the diffuse component as a power spectrum in the harmonic domain. The previously mentioned samples can be used here as well to include the required uncertainty corrections. \section{The starblade algorithm} \label{sec:algorithm} The \texttt{starblade} algorithm minimizes the variational KL divergence between the true posterior distribution and an approximate Gaussian distribution, and additionally estimates the prior correlation structure of the diffuse component. It implements the following steps (a schematic sketch of the resulting loop is given after the list): \begin{enumerate}[label=\arabic*)] \item Initialize the logarithmic power spectrum $\tau$ and the internal separation field $b$. \item Draw a set of samples from the approximate Gaussian distribution at the current position, as described in \citet{NICAAC} or \citet{NCF}. \item Use these samples to obtain a statistical estimate of the KL divergence, its gradient and its curvature according to Eq. \ref{eq:full_hamiltonian}. \item Minimize the estimated KL divergence to obtain an improved internal separation field, preferably with a second-order Newton scheme. \item Update the logarithmic power spectrum with the critical filter according to \citet{smoothpower}. \item Iterate this procedure with updated parameters, starting from the second step, until the desired convergence is achieved. \item After convergence, a set of approximate posterior samples can be drawn to further investigate the result. \end{enumerate}
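The following schematic Python rendering of these steps is meant purely as orientation; the helpers \texttt{initial\_log\_power\_spectrum}, \texttt{draw\_samples}, \texttt{kl\_gradient}, \texttt{kl\_curvature}, \texttt{newton\_step} and \texttt{update\_power\_spectrum} are hypothetical placeholders for the corresponding \texttt{NIFTy}-based routines, not actual API of the implementation.

\begin{verbatim}
import numpy as np

def starblade(d, n_iterations=20, n_samples=5):
    """Schematic main loop of the starblade inference, steps 1)-7)."""
    # 1) Initialize power spectrum and internal separation field.
    b_bar = np.zeros_like(d)
    tau = initial_log_power_spectrum(d)

    for _ in range(n_iterations):
        # 2) Samples from the approximate Gaussian at the current position.
        samples = draw_samples(b_bar, tau, n_samples)

        # 3) Sample-averaged estimates of the KL gradient and curvature.
        grad = np.mean([kl_gradient(b, d, tau) for b in samples], axis=0)
        curv = kl_curvature(samples, d, tau)

        # 4) Newton step on the estimated KL divergence.
        b_bar = newton_step(b_bar, grad, curv)

        # 5) Critical-filter update of the power spectrum, using the
        #    samples for the uncertainty corrections.
        tau = update_power_spectrum(samples, d)

    # 6)-7) After convergence, posterior samples for further analysis.
    return b_bar, tau, draw_samples(b_bar, tau, n_samples)
\end{verbatim}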
\section{Examples} \label{sec:examples} In order to illustrate the behavior of the \texttt{starblade} method we show two examples. The first example uses synthetically generated data: a log-normal diffuse component with artificially added point sources of varying magnitude. We apply the algorithm three times to these data with different choices of $\alpha$ to compare its impact on the separation. Using synthetic data, we have access to the ground truth, which allows us to evaluate the algorithm's fidelity and to compare it to other methods. We use the same test data to apply two configurations of the background estimation of the SExtractor method \citep{sextractor}, and additionally we train a denoising convolutional auto-encoder on exactly this model. Additionally we infer the MAP solution for an inappropriate choice of $\alpha$ to demonstrate the robustness of the variational approach compared to MAP. In the second example we separate an observation of the galaxy $\mathrm{M}100$ by the Hubble Space Telescope. These data do not fully fulfill the initial assumption of a noise-free image. Nevertheless we are able to obtain reasonable results. Here we also apply the other two methods and compare the results. Unfortunately we do not have access to the ground truth in this real-data application, so instead we check the result for its compatibility with theoretically motivated assumptions on independence and on the signatures of correlated and point-like emission. We also discuss how the methods relate to each other. We implemented the algorithm in \texttt{Python}, using the numerical information field theory package \texttt{NIFTy} \citep{Nifty,Nifty3}. \subsection{Synthetic data} \label{sec:mock_example} In the first example we generate data according to the underlying model and investigate the algorithm's behavior. In this scenario we have access to the ground truth, which allows us to quantify the performance relative to other methods. We compare the results of \texttt{starblade} to the background estimation step of the SExtractor method \citep{sextractor}, as well as to the performance of a denoising auto-encoder (DAE) \citep{DAE} trained on the identical model. For this comparison we generate the logarithmic diffuse component from a Gaussian process with the correlation structure \begin{align} p(k) = \frac{1}{(1+k)^{4}} \text{.} \end{align} The argument $k$ corresponds to the harmonic mode. The spectrum follows a power law with exponent $4$, which corresponds to smooth fields with small spatial curvature. The point sources are drawn from the inverse-gamma distribution with shape $\alpha = 1.5$ and scale $q =10^{-3}$. Both components are added together to generate our mock data, which can be seen in Fig. \ref{fig:2d_data}. In order to enhance the perception of the point sources as well as of the diffuse background, we look at the components edge-on: we collapse the image along one direction and look at the brightness orthogonal to the collapsed direction. To obtain the visual effect of depth we increase the transparency linearly towards more distant locations; faint features therefore belong to the most distant locations along the collapsed direction, while saturated lines are close by. Our data, as well as the true diffuse and point-like components, are shown in Fig. \ref{fig:1d_data} in this representation. \begin{figure} \includegraphics[scale=0.6]{2d_data} \caption{Data generated according to the proposed model, on a logarithmic scale.} \label{fig:2d_data} \end{figure} \begin{figure} \includegraphics[scale=0.45]{1d_data} \caption{Synthetic data (top), together with its diffuse component (middle) and point-like component (bottom) on a logarithmic scale, collapsed along one spatial dimension.} \label{fig:1d_data} \end{figure} We apply three different configurations of the \texttt{starblade} algorithm to the data, differing in the choice of $\alpha$. Its value in the first case is $\alpha=1.0$, which corresponds to an uninformative shape parameter. This prior distribution strongly favors bright sources, so it is easy for the algorithm to explain features with point sources.
In the result we therefore expect diffuse contributions within the point-like component and an overly smoothed diffuse component with underestimated power on small scales. In order to justify the variational approach we also solve this configuration for its MAP solution and compare the results. In the second scenario we pick the correct value $\alpha=1.5$ and therefore expect an excellent separation between the two components. In the last configuration we choose a value of $\alpha = 3.0$, which strongly suppresses point sources, so the balance should lean towards more flux in the diffuse component, which will pick up point-source contributions. In this case we expect more power on the small scales of the diffuse flux and a lack of faint point sources. It will still be easy for the algorithm to identify bright point sources, as they are absolutely incompatible with the diffuse flux. To compare the performance of our algorithm with other methods we first choose the background estimation step of SExtractor \citep{sextractor}. SExtractor is a tool to extract sources from images and turn them into catalogs. In order to achieve this it performs a number of consecutive steps, one being the subtraction of the image background, which corresponds to the diffuse emission present in the image. This is done via $\kappa$-$\sigma$-clipping, which is performed iteratively on patches of the image. Within each patch a constant local background is determined, to which a median filter is applied to obtain a smooth background estimate for the whole image. Crucial to the outcome of this procedure is the choice of the patch size. The smaller it is, the more structure it can pick up. This behavior might not always be desired, as sources could be absorbed into the background. Conversely, if the estimate is too restrictive for a varying background, some of its features are identified as sources. The background window size by default is $64\times 64$ pixels. This is our first SExtractor scenario. In order to tune it towards this problem, we also reduce it to $8\times 8$ pixels. This is significantly smaller than the recommended range of $32$ to $128$ \citep{sextractor}. Finally we train a denoising convolutional auto-encoder (DAE) on exactly this model. This kind of neural network specializes in removing noise or artifacts from images. It is trained on artificially corrupted images and their ground truth; in our case it receives the data and has to recover the diffuse component. Note that through its training with mock data from the correct model, the DAE was informed about the correct point-source brightness distribution as well as about the correlation structure of the diffuse component. A detailed description of the network architecture and training is provided in Appendix \ref{ap:DAE}. The results of the component separations for all these methods and configurations can be seen in Fig. \ref{fig:1d_diffuse} and Fig. \ref{fig:1d_points}, which show the resulting diffuse component and the point-like component, respectively. For the choice of $\alpha = 1.0$ in the \texttt{starblade} algorithm we obtain a diffuse component with correct large-scale features, but it slightly lacks smaller scales and is slightly smoother than the original component. All these small-scale features can be found in the point sources, where a denser forest of small-scale features is visible. The brighter point sources are recovered correctly. In Fig.
\ref{fig:1d_power} we see the results for the simultaneously reconstructed power spectrum, which characterizes the correlation structure of the underlying Gaussian process. In the first case, for $\alpha=1.0$, small scales are also slightly more strongly suppressed, while the larger scales are recovered correctly. Comparing this result to the MAP solution, strong deviations become apparent. The recovered diffuse emission is in this case visually smoother and the point sources pick up a significant amount of small-scale diffuse flux. The reconstructed power spectrum of the MAP solution drops off strongly towards the small scales as well. The minimization of the KL divergence therefore provides a result that is more robust against an inappropriately chosen hyper-parameter, compared to the minimization of the Hamiltonian. The case with the correct $\alpha=1.5$ shows an excellent separation. Comparing the result with the true components, we do not find much difference: neither do obvious point sources remain in the diffuse component, nor the other way around. The recovered power spectrum of the diffuse component is spot on the correct one as well. This result verifies the correctness of our implementation of the \texttt{starblade} algorithm. We should mention here that the MAP solution for the correct $\alpha=1.5$ gives only slightly worse results in this situation. We would expect a stronger difference in situations with more point-source flux, which corresponds to a higher noise level on the diffuse emission. Our algorithm is additionally capable of providing uncertainties on its estimates. In Fig. \ref{fig:uncertainty} the uncertainty of the separation field $a$ is shown, which translates into an uncertainty on either component. It shows large- and small-scale features. The uncertainty is high in regions where both components appear strongly mixed. This can be seen on the large scales, which follow the diffuse component. In regions where this component is weak, the uncertainty drops and the algorithm is confident of its separation. The final \texttt{starblade} scenario with $\alpha = 3.0$ also shows a reasonable separation. As expected, the diffuse component exhibits more small-scale features compared to lower $\alpha$, and the point sources appear thinned out at the faint end. This is also reflected in the reconstructed power spectrum: on small scales it exhibits higher power compared to the true underlying signal, as the missed faint point sources are absorbed into the diffuse component. Applying the SExtractor background estimation with a patch size of $64\times64$ pixels does not provide a reasonable component separation, as a significant portion of the diffuse emission remains within the separated point sources. The default settings of SExtractor are not reasonably applicable in this situation, as the $64\times 64$ pixel patch size corresponds to only four patches over the test image, so one cannot expect a detailed separation. The choice of $8\times 8$ patches performs significantly better. It is capable of resolving at least the large-scale features within the diffuse emission, but still attributes smaller-scale correlated features to the point-like emission. Finally, the last method we compare to is the specially trained DAE. It is worth noting that during the training the correct correlation structure was used, whereas the \texttt{starblade} method was agnostic of it. The auto-encoder was therefore equipped with an advantage concerning the a priori knowledge of the problem.
This method performs excellently as well. The results, at least by eye, are comparable to the ones obtained by our method. To further investigate the differences between the methods we plot the results for the diffuse components pixel-wise versus the true underlying component, as shown in Fig. \ref{fig:scatter}. For this plot we sampled a subset of random locations; for a perfect separation we expect a diagonal line. Here we only used the best-performing version of each method, namely \texttt{starblade} with $\alpha = 1.5$, SExtractor with $8\times 8$ patches, and the auto-encoder. Additionally we also plot the MAP solution with $\alpha=1.0$ to show the sensitivity of MAP to a suboptimal hyper-parameter. We see that SExtractor scatters the most and tends to completely cut low- and high-flux diffuse emission. The DAE performs a lot better, with significantly lower scatter, and is also capable of identifying high-flux diffuse areas. The \texttt{starblade} method exhibits an even lower variance; it forms almost a straight line. The MAP result is systematically shifted towards lower flux in the diffuse component, which reflects the higher assumed point-source flux expressed by the lower $\alpha$. To quantify all these differences we calculate the root mean squared (RMS) error of the deviations between result and truth on this logarithmic scale. The RMS values for all methods are displayed in Table \ref{tab:example_table}. \begin{table} \centering \caption{The RMS error of the logarithmic classification for all methods and configurations.} \label{tab:example_table} \begin{tabular}{|l|l|} \hline Method & RMS Error\\ \hline MAP $\alpha=1.0$ & $0.15$\\ \texttt{starblade} $\alpha=1.0$ & $0.035$\\ \texttt{starblade} $\alpha=1.5$ & $0.026$\\ \texttt{starblade} $\alpha=3.0$ & $0.056$\\ SExtractor $64\times64$ & $1.4$\\ SExtractor $8\times8$ & $0.35$\\ DAE & $0.049$\\ \hline \end{tabular} \end{table} The RMS error is highest for the two SExtractor configurations, which show an error one to two orders of magnitude higher compared to the other methods. The sub-optimal choices of $\alpha$ in the \texttt{starblade} algorithm perform similarly to the DAE, where $\alpha=3.0$ is slightly worse and $\alpha=1.0$ slightly better. The lowest error is achieved by the \texttt{starblade} algorithm with the optimal choice $\alpha = 1.5$; it achieves half the error of the specially trained network. In this task our method is superior to all other tested methods. Other network architectures might achieve a better result, but increasing the accuracy by another factor of two would probably require serious effort. Overall, each of the presented methods has its own advantages and disadvantages. The SExtractor background estimation is extremely fast and robust, but lacks precision. It might be sufficient for a large number of applications, but if higher accuracy is required one might want to use another background estimation. The DAE performs reasonably well and is easy to implement and set up; it performs within the same order of magnitude as \texttt{starblade}. Training the network requires some time, but after that the separation is done quickly. The reasoning of the network behind its conclusion is, however, nebulous. More sophisticated architectures might have an increased performance, but they do not originate from first principles and can only be obtained via experimentation. In contrast to that, \texttt{starblade} is derived from probability-theoretic considerations. The model assumptions are physically motivated. Compared to the previously mentioned methods we can also provide an estimate of the uncertainty of the separation. For this method no training phase is required, but the separation procedure itself requires a higher computational effort. In some cases one might be able to take the shortcut of the MAP solution, which can be calculated significantly faster, but this requires a careful selection of the $\alpha$ parameter.
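For reference, the error metric quoted in Table \ref{tab:example_table} reduces to the following short function; the array names are hypothetical and chosen for illustration only.

\begin{verbatim}
import numpy as np

def log_rms_error(estimate, truth):
    """RMS error of the logarithmic diffuse flux between two images."""
    return np.sqrt(np.mean((np.log(estimate) - np.log(truth)) ** 2))
\end{verbatim}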
\begin{figure} \includegraphics[scale=0.45]{1d_diffuse} \caption{Results for the recovered diffuse emission for \texttt{starblade} with $\alpha = 1.0$, $1.5$ and $3.0$, respectively, on a logarithmic scale, as well as for the two configurations of SExtractor, the DAE and the MAP solution for $\alpha=1.0$.} \label{fig:1d_diffuse} \end{figure} \begin{figure} \includegraphics[scale=0.45]{1d_points} \caption{The recovered point-like flux for the set of algorithms on a logarithmic scale.} \label{fig:1d_points} \end{figure} \begin{figure} \includegraphics[scale=0.6]{1d_power} \caption{The recovered power spectra of the logarithmic diffuse component for the three different choices of $\alpha$ on a double logarithmic scale, together with the power spectrum used to create the diffuse component and the power of the logarithmic data.} \label{fig:1d_power} \end{figure} \begin{figure} \includegraphics[scale=0.6]{scatter} \caption{The true diffuse component plotted against the separated one for \texttt{starblade} with $\alpha=1.5$, the DAE, MAP with $\alpha=1.0$ and SExtractor using $8\times 8$ pixel patches.} \label{fig:scatter} \end{figure} \begin{figure} \includegraphics[scale=0.6]{uncertainty} \caption{Estimated uncertainty, in terms of the standard deviation of the separation, for $\alpha=1.5$.} \label{fig:uncertainty} \end{figure} \subsection{$\mathrm{M}100$ observed by the Hubble Space Telescope} \label{sec:hubble} So far we only considered synthetic data, for which the ground truth is known and the different methods could be compared directly to the truth. This is no longer possible for real-data applications. Here we can only describe the differences between the methods and judge the performance subjectively. In the case of real data the idealized model assumptions no longer hold, and we will investigate how well the method generalizes. To do this we apply all previously used methods to separate an image of the galaxy $\mathrm{M}100$ obtained by the Wide Field Planetary Camera 2 (WFPC2) mounted on the Hubble Space Telescope \citep{M100}. The image is subject to noise, convolved with a point spread function, affected by cosmic-ray hits, and exhibits regions with high noise levels at the edge of the field of view. The logarithmic data is shown in Fig. \ref{fig:M100_data}. Because of the point spread function, bright sources are smeared out over a larger area, which reduces the brightness within individual pixels; hence the canonical choice $\alpha = 1.5$ is too restrictive for these smeared-out point sources, which then tend to be absorbed into the diffuse component. To counteract this behavior we reduce its value to $\alpha = 1.1$. Everything not compatible with diffuse flux will be part of the point-like component; we therefore wish to separate the diffuse component efficiently from foreground stars, cosmic-ray hits and noise artifacts. The recovered diffuse component can be seen in Fig. \ref{fig:M100_diffuse}. Almost all flux outside the disk of $\mathrm{M}100$ itself was identified as point-like emission and removed from the diffuse component.
The brightest points inside the disk were removed as well. What remains is the diffuse emission from the galaxy. The recovered point sources are shown in Fig. \ref{fig:M100_point}. This image was convolved with a small Gaussian kernel to enhance the visibility of the point sources. Here we clearly obtain the brightest sources, some of them superimposed on the diffuse structure. Additionally, most measurement artifacts, such as the rectangular edge of the field of view, are captured by the point-source component, as they are incompatible with diffuse emission. The estimated flux uncertainty can be seen in Fig. \ref{fig:M100_uncertainty}, which shows the expected standard deviation associated with every location. It clearly follows the diffuse component. This is reasonable, as we do expect higher uncertainties at locations where both components are superimposed. The recovered power spectrum of the logarithmic diffuse emission can be seen in Fig. \ref{fig:M100_power}, together with the power of the logarithmic data. The data are dominated on large scales by the diffuse emission and on small scales by the point sources. The correlation structure of the diffuse emission obtains all the power on large and intermediate scales and continues to drop with a constant slope towards small scales, in contrast to the data. We can also detect cosmic-ray hits, in the form of point sources along a consecutive line, in this component. One such example can be seen in Fig. \ref{fig:M100_zoom}, which shows a zoomed-in section of the image at the edge of the disk of $\mathrm{M}100$. Here the recovered point sources are not convolved artificially. Even though a point spread function of the instrument is present, so that point sources do not inhabit only individual pixels of the image, the difference between the detected diffuse emission and the smeared-out high intensities from point sources is sufficient to separate both components, at least for the brightest sources. Overall we obtained a good estimate of the diffuse emission of $\mathrm{M}100$, removing point-like contributions originating either from point sources or from systematics. The application of the background estimation of SExtractor provides a less reasonable result. The largest structures are correctly identified as diffuse emission, as can be seen in Fig. \ref{fig:SEhubblediff}. Significant amounts of smaller structures, still clearly diffuse emission, remain within the point-source component. This we already observed in the mock example. The point-like component can be seen in Fig. \ref{fig:SEhubblepoint}. The disk is split into several individual patches; such artifacts might be hard to deal with in further analysis. Applying the SExtractor background separation also introduces areas of negative flux in both components, which in turn artificially creates flux in the respective other component. For illustration purposes this negative flux was clipped. An advantage of this approach is that certainly no point-like flux remains within the diffuse component. Applying the identical DAE trained on the previous model provides a relatively reasonable result, which can be seen in Fig. \ref{fig:DAEhubblediffuse} and Fig. \ref{fig:DAEhubblepoint}. The network therefore somehow abstracted the notion of point sources to a degree that makes it applicable outside its training set. This observation is not trivial, as neural networks are not at all guaranteed to show this behavior. It performs well in subtracting the diffuse component from the point sources.
It separates a larger amount of flux from the disk compared to \texttt{starblade}, flux which might still belong to the galaxy, as it follows the galaxy's morphology. The diffuse component, however, still contains a significant amount of point-like emission. One cannot train a network directly on such data sets, as we do not have access to the ground truth of the separation, and in any case one has to rely on generating a training set according to some model. Other network architectures might also lead to a better performance, but this requires a large amount of experimentation. To compare the different results we can look at the power spectra of the point-like components of the different methods. This time we calculate the power of the components on a linear scale. For randomly scattered point sources we would expect a flat power spectrum with equal power on all scales. Deviations from that indicate an incomplete separation; especially high power on large scales corresponds to a remaining large-scale background. The power spectra can be seen in Fig. \ref{fig:pointpower}. The data itself exhibits power on large scales, which drops off and then flattens out. The large scales are dominated by the spatial extension of the galaxy, while the flattening can be attributed to the point sources. The increase at the smallest scales reveals the point spread function of the instrument itself. Subtracting the background obtained by SExtractor removes roughly one order of magnitude in power on the largest scales, but everything below some large threshold is captured in this component. Overall the power spectrum has a steep slope, which corresponds to spatially correlated structure in this point-like component. Compared to this, the other two methods have a significantly shallower slope, and therefore less large-scale structure. At the largest scales these components exhibit roughly two orders of magnitude less power, and the spectrum stays below the power of the data even towards smaller scales. One can also look at the power spectra of the recovered diffuse components, where falling power spectra indicate correlated structures. These are shown in Fig. \ref{fig:diffusepower}. The power spectrum of the background estimation of SExtractor has large power on large scales, but is shifted down systematically compared to the data. Towards smaller scales it drops off rapidly, which does not allow for smaller-scale correlated features. Compared to that, the other two methods explain the large scales almost exclusively with the diffuse component. Once their power diverges from the data, the spectrum exhibits a series of bumps, which should correspond to some characteristic scales within the structure of the galaxy. In the data alone these structures are hidden by the power of the point sources. At the smallest scales the DAE drops off slightly more steeply and then levels off flat. This leveling off is a sign of remaining point sources of some smaller brightness, which we can also observe in the reconstructed images. The \texttt{starblade} algorithm does not show any leveling off, which indicates the absence of any point sources above the smallest scales of the diffuse component. At the very end of the spectrum it deviates from the drop. This coincides with the increase in power of the data due to the point spread function, and we attribute it to this instrumental effect. Besides this, our algorithm seems to generalize well in this real-data application.
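The azimuthally averaged power spectra used for these diagnostics can be obtained with a short, self-contained sketch; the radial binning below is an illustrative convention of ours, not the paper's exact prescription.

\begin{verbatim}
import numpy as np

def radial_power_spectrum(image):
    """Azimuthally averaged power spectrum of a square 2-d image."""
    n = image.shape[0]
    amplitude2 = np.abs(np.fft.fft2(image)) ** 2 / image.size

    kx = np.fft.fftfreq(n) * n                 # integer harmonic modes
    k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    edges = np.arange(0.5, n // 2, 1.0)        # illustrative radial bins
    which = np.digitize(k.ravel(), edges)

    return (np.bincount(which, weights=amplitude2.ravel())
            / np.bincount(which))              # mean power per |k| bin
\end{verbatim}

A flat output for a point-like component and a falling one for a diffuse component then reproduce the qualitative behavior discussed above.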
To investigate further, we can look at a number of correlation metrics between the different components and methods. Initially we assumed the components to be independent of each other. One way to test this is to calculate the correlation between the results of the separation. A vanishing correlation does not imply independence, but the reverse holds, so the more correlation we observe, the less independence in the separation we can assume. We measure the correlation by the cosine similarity, which is given by \begin{align} \mathrm{cos}(\theta) = \frac{a^\dagger b}{\vert a\vert\vert b \vert} \text{.} \end{align} Here $a$ and $b$ are the components and $\theta$ is the angle between them. Uncorrelated components are orthogonal to each other, and therefore their cosine similarity vanishes. Highly correlated components have a large overlap and therefore a small angle between them, leading to a similarity close to unity. The cosine similarities between the separated components for the different methods can be found in Tab. \ref{tab:correlation_table}. The largest similarity is found for SExtractor. This is not too surprising, as both of its components exhibit a significant portion of large-scale structure. One order of magnitude less similarity is found for the DAE, and a bit below that we find the \texttt{starblade} algorithm. For the latter we can also state an uncertainty, calculated from one hundred samples of the approximate posterior, which amounts to roughly ten percent in $\mathrm{cos}\,\theta$. This result mimics to some extent the outcome of the RMS test in the mock example. Overall \texttt{starblade} produces the most uncorrelated components, followed by the DAE, while the highest correlation is found for SExtractor. \begin{table} \centering \caption{The cosine similarity between the diffuse and point-like components for all methods.} \label{tab:correlation_table} \begin{tabular}{|l|l|} \hline Method & cosine similarity\\ \hline \texttt{starblade} & $0.0059 \pm 0.0005$\\ SExtractor & $0.094$\\ DAE & $0.0082$\\ \hline \end{tabular} \end{table} Another interesting question is how similar the \texttt{starblade} results are to those of the other methods. SExtractor performs reasonably well in most cases, so we do not want to diverge too strongly from its results, at least for the point-like component. The similarities between the different methods and with the data can be seen in Tab. \ref{tab:methods_correlation_table}. The first entry shows the cosine similarity between the point-like component and the data. Here the most dominant contributions originate from the brightest sources, hence the high score. A larger similarity can be observed to the SExtractor point sources, which tells us that, at least for the bright sources, the methods behave highly similarly. The difference should be due to deviations in the fainter sources and the remaining large-scale structures in SExtractor. The results for the DAE are very close to \texttt{starblade}, as we already observed in the power spectra. For the diffuse components the picture is slightly different. The similarity to the data is low, as all bright sources are missing. The similarity to SExtractor is significantly higher, as it correctly picks up the largest scales, which are responsible for the highest contribution to the similarity, but the components are still quite dissimilar. Compared to the DAE the similarity is very high, but significantly lower than for the point-like component. This again reflects our observations from the power spectra concerning the smaller and smallest scales, which diverge to some extent. Our estimated error for the point-source similarities is one order of magnitude smaller compared to the diffuse component. This should be due to the robustness of the separation of the brightest sources, which impact the similarity the most.
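The similarity entries in Tabs. \ref{tab:correlation_table} and \ref{tab:methods_correlation_table} follow directly from the definition above; a minimal \texttt{numpy} rendering (the image arguments are hypothetical) reads:

\begin{verbatim}
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity cos(theta) = a^dagger b / (|a| |b|)."""
    a, b = np.ravel(a), np.ravel(b)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
\end{verbatim}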
Another interesting question is how similar the \texttt{starblade} results are to those of the other methods. SExtractor performs reasonably well in most cases, so we do not want to diverge too strongly from its results, at least for the point-like component. The similarities between the different methods and with the data can be seen in Tab. \ref{tab:methods_correlation_table}. The first entry shows the cosine similarity between the point-like component and the data. Here the dominant contributions originate from the brightest sources, hence the high score. A larger similarity can be observed to the SExtractor point sources, which tells us that, at least for the bright sources, the methods behave highly similarly. The difference should be due to the deviations in the fainter sources and the remaining large-scale structures in SExtractor. The results for the DAE are very close to \texttt{starblade}, which we already observed in the power spectra. For the diffuse components, the picture is slightly different. The similarity to the data is low, as all bright sources are missing. The similarity to SExtractor is significantly higher, as it correctly picks up the largest scales, which are responsible for the highest contribution to the similarity, but the components are still quite dissimilar. Compared to the DAE, the similarity is very high, but significantly lower than for the point-like component. This again reflects our observations from the power spectra concerning the smaller and smallest scales, which diverge to some extent. Our estimated error for the point-source similarities is one order of magnitude smaller than for the diffuse component. This should be due to the robustness of the separation of the brightest sources, which impact the similarity the most. \begin{table} \centering \caption{The cosine similarity of the diffuse and point-like component between \texttt{starblade} and other methods and data.} \label{tab:methods_correlation_table} \begin{tabular}{|l|l|} \hline component of \texttt{starblade} with & cosine similarity\\ \hline point sources and data & $0.9518 \pm 0.0003$\\ point sources and SExtractor & $0.9775 \pm 0.0003$\\ point sources and DAE & $0.9983 \pm 0.0002$\\ diffuse and data & $0.3124 \pm 0.0007$\\ diffuse and SExtractor & $0.790 \pm 0.002$\\ diffuse and DAE & $0.983 \pm 0.002$\\ \hline \end{tabular} \end{table} As previously mentioned, we do not have access to any kind of ground truth in this real-data case, so the judgment has to be subjective. First of all, we obtain satisfying results with the \texttt{starblade} algorithm on real data as well. We separate point sources and diffuse emission even in the presence of point-spread functions and noise. Any artifacts which are introduced by the measurement process and are incompatible with diffuse emission are, as expected, attributed to point-like emission. Aiming at a reasonable separation of point-like and diffuse emission, SExtractor does not provide a useful result; we should note, however, that the background estimation of SExtractor was not designed for this particular purpose. The DAE generalizes to some extent, especially in removing point-like emission, but lacks precision in removing point sources from diffuse emission. Overall we would judge that \texttt{starblade} provides the most accurate separation in this real-data application as well, at least according to all applied metrics. \begin{figure} \includegraphics[scale=0.6]{hubble_data} \caption{The data of the $\mathrm{M}100$ galaxy on logarithmic scale.} \label{fig:M100_data} \end{figure} \begin{figure} \includegraphics[scale=0.6]{hubble_diffuse} \caption{The separated diffuse component on logarithmic scale.} \label{fig:M100_diffuse} \end{figure} \begin{figure} \includegraphics[scale=0.6]{hubble_point} \caption{The separated point-like component on logarithmic scale. The linear flux image has been convolved with a Gaussian beam to enhance the visibility of the separated point sources.} \label{fig:M100_point} \end{figure} \begin{figure} \includegraphics[scale=0.6]{hubble_uncertainty} \caption{The flux uncertainty in terms of a one sigma interval.} \label{fig:M100_uncertainty} \end{figure} \begin{figure} \includegraphics[scale=0.6]{hubble_log_power} \caption{The recovered power spectrum of the logarithmic diffuse component with the power spectrum of the logarithmic data.} \label{fig:M100_power} \end{figure} \begin{figure} \includegraphics[scale=0.6]{hubble_zoom} \caption{A zoomed-in section of the data and the separated components on logarithmic scale. Here the point sources are not convolved.} \label{fig:M100_zoom} \end{figure}
\begin{figure} \includegraphics[scale=0.6]{SExtractor_hubble_diffuse} \caption{Diffuse component obtained by SExtractor with the $64\times 64$ pixel window.} \label{fig:SEhubblediff} \end{figure} \begin{figure} \includegraphics[scale=0.6]{SExtractor_hubble_point_like} \caption{Convolved point-like component obtained by SExtractor with $\mathrm{BACKSIZE}=64\times 64$.} \label{fig:SEhubblepoint} \end{figure} \begin{figure} \includegraphics[scale=0.6]{DAE_hubble_diffuse} \caption{Diffuse component separated by the DAE.} \label{fig:DAEhubblediffuse} \end{figure} \begin{figure} \includegraphics[scale=0.6]{DAE_hubble_point_like} \caption{Convolved point-like component separated by the DAE.} \label{fig:DAEhubblepoint} \end{figure} \begin{figure} \includegraphics[scale=0.6]{point_power} \caption{The power spectrum of the linear maps of the point-like components on double logarithmic scale.} \label{fig:pointpower} \end{figure} \begin{figure} \includegraphics[scale=0.6]{diffuse_power} \caption{The power spectrum of the linear maps of the diffuse components on double logarithmic scale.} \label{fig:diffusepower} \end{figure} \section{Conclusion} \label{sec:conclusion} We derived the \texttt{starblade} algorithm, which is capable of separating point-like from diffuse emission. It enforces positivity of all components, and the correlation structure of the diffuse component is inferred as well. The only free parameters correspond to assumptions about the underlying point-source distribution, for which physically motivated choices are available. As we perform a variational approximation to the true posterior, one has access to uncertainties on the separation itself, as well as on all derived quantities. We validate the implementation of the algorithm on an example with data generated according to the model, where it performs better in terms of the logarithmic root mean squared error than the background estimation of SExtractor and than a denoising auto-encoder trained on the same model. It also exhibits more robustness in the choice of hyper-parameters than the MAP solution. Applying the algorithm to a data set of the $\mathrm{M}100$ galaxy obtained by HST provides satisfying results. The components are clearly separated visually, and this impression is confirmed by the individual power spectra of the separated components. Of all applied methods, \texttt{starblade} provides the most uncorrelated components. A comparison between the results shows a high similarity to the point-source result of SExtractor. The results of the \texttt{starblade} algorithm can be used in further analysis to build catalogs or to study extended, correlated structures. Through the samples, the uncertainty of the separation can be fully propagated to the science result at the end by performing all calculations on the samples, averaging the sample results, and evaluating their variance. In this way the full uncertainty is taken into account, including large-scale effects from the diffuse component. This method can also be used as an internal step within a larger inference framework which solves the full reconstruction problem with all instrumental effects; by providing a good estimate of the separation of the components, it can speed up the computations. Details on this are outlined in Appendix \ref{ap:lager_picture}.
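The sample-based error propagation described in this conclusion can be summarized in a short sketch. This is our own illustration with hypothetical names; any derived science quantity can be plugged in.
\begin{verbatim}
import numpy as np

def propagate(samples, derived_quantity):
    # Evaluate a derived quantity on every posterior sample of a
    # component, then report the sample mean and variance.
    values = np.array([derived_quantity(s) for s in samples])
    return values.mean(axis=0), values.var(axis=0)

# e.g. total flux of the diffuse component and its uncertainty:
# mean_flux, var_flux = propagate(diffuse_samples, lambda m: m.sum())
\end{verbatim}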
We believe that the \texttt{starblade} algorithm can be used in a large variety of applications, and we provide an open-source implementation of it at \url{https://gitlab.mpcdf.mpg.de/ift/starblade}. \section*{Acknowledgments} We acknowledge Philipp Arras, Fabrizia Guglielmetti, Reimar Leike, and Martin Reinecke for fruitful discussions and comments on the manuscript. \bibliographystyle{mnras}
{ "timestamp": "2018-08-07T02:19:42", "yymm": "1804", "arxiv_id": "1804.05591", "language": "en", "url": "https://arxiv.org/abs/1804.05591" }
\section{Introduction} In \cite{Weil2}, Weil gave the relation between values of Eisenstein series and theta integrals, which is now called the Siegel-Weil formula. It plays a very important role in number theory and arithmetic geometry. In this paper, we mainly study the arithmetic and geometry of quaternion algebras. By the Siegel-Weil formula, we give explicit formulae for the degrees of Hecke correspondences and for average representation numbers over a genus. We identify these numbers with Fourier coefficients of Eisenstein series, which can be written as infinite products of local Whittaker functions. There are mainly two ways to study representation numbers of positive definite quadratic forms, namely the Siegel-Weil formula and the circle method. By the Siegel-Weil formula, we give exact formulae for the representation numbers of sums of three and four squares. Hardy \cite{Ha1, Ha2} studied the representation number via the singular series, which is an infinite product. We find that these two methods agree locally, i.e., the local factors of the singular series are equal to the local Whittaker functions. The Siegel-Weil formula holds for all quaternion algebras over $\mathbb{Q}$ except for the space $M_{2}(\mathbb{Q})$. For this space, we prove the Siegel-Weil formula except for the constant term, and call the result the weak Siegel-Weil formula (Theorem \ref{weakformula}). Let us recall the classical Siegel-Weil formula of orthogonal type. Let $(V, Q)$ be a quadratic space over $\mathbb{Q}$ of even dimension $m$. For the reductive dual pair $G=\operatorname{Sp}_{n}$ and $H=O(V)$, one has the Weil representation $\omega$ of the group $G(\mathbb{A}) \times H(\mathbb{A})$ acting on $S(V^{n}(\mathbb{A}))$. For convenience, we assume $n=1$, hence $G=\operatorname{SL}_{2}$. For an algebraic group $W$ over $\mathbb{Q}$, set $[W]=W(\mathbb{Q}) \backslash W(\mathbb{A})$. Then the theta kernel (\cite{Weil}) \begin{equation} \theta(g, h, \varphi)=\sum_{x\in V(\mathbb{Q})} \omega(g)\varphi(h^{-1}x) \end{equation} is an automorphic form on $[G]\times [H]$, where $ \varphi \in S(V(\mathbb{A}))$, $g \in G(\mathbb{A})$, $h\in H(\mathbb{A})$. So the theta integral \begin{equation} I(g, \varphi)=\int_{[H]} \theta(g, h, \varphi)dh \end{equation} is an automorphic form on $[G]$ provided the integral is absolutely convergent. There is another way to construct automorphic forms from $\varphi \in S(V(\mathbb{A}))$. For $s \in \mathbb{C}$, let $I(s, \chi_V)= \operatorname{Ind}_{P}^{G}( | | ^{s}\chi_V)$ be the induced representation of $G(\mathbb{A})$, consisting of smooth functions $\Phi(g, s)$ on $G(\mathbb{A})$. The Eisenstein series is defined by \begin{equation} E(g, s, \Phi)= \sum_{\gamma\in P \setminus G} \Phi(\gamma g, s), \end{equation} where $P$ is the parabolic subgroup of $G$. There is an $\operatorname{SL}_2(\mathbb{A})$-intertwining map (with $s_0= \frac{m}{2} -1$) \begin{equation} \lambda=\lambda_{V} : S(V(\mathbb{A})) \rightarrow I(s_{0}, \chi_{V}), \quad \lambda(\varphi)(g)= \omega(g)\varphi(0). \end{equation} It is defined locally, and we write $\lambda=\otimes_p \lambda_p$. We often drop the index $p$ of $\lambda_p$ if there is no confusion. Since there exists a section $\Phi \in I(s, \chi_{V})$ such that $\lambda(\varphi)=\Phi(g, s_0)$, one writes \begin{equation} E(g, s, \varphi)= E(g, s, \Phi). \end{equation}
The Siegel-Weil formula was extended by Kudla and Rallis (\cite{siegelweil}, \cite{KR2}); it asserts that the two automorphic forms $I(g, \varphi)$ and $E(g, s_0, \varphi)$ coincide: \begin{theorem} (Siegel-Weil formula) \label{theo:Siegel-Weil} Assume $V$ is anisotropic or $m -r>2$, where $r$ is the Witt index. Then for every $\varphi \in S(V(\mathbb{A}))$, the Eisenstein series $E(g, s, \varphi)$ is holomorphic at $s_{0}$, and \begin{center} $E(g, s_{0}, \varphi)=\kappa I(g, \varphi)$, \end{center} where $\kappa=2$ when $m\leq2$ and $\kappa=1$ otherwise. \end{theorem} For spaces which do not satisfy the above convergence condition, one can study the regularized theta integral. Kudla and Rallis studied the regularized Siegel-Weil formula \cite{regularized}, and many cases have been proved by Gan, Qiu and Takeda in \cite{GQT}. In this paper, we study the quadratic space $V=(M_{2}(\mathbb{Q}), Q)$ with quadratic form $Q=\det$. The main idea in proving the Siegel-Weil formula is to compare the Fourier coefficients of the Eisenstein series and of the theta integral. The next theorem shows that they are equal except for the constant term. \begin{theorem}{\bf Weak Siegel-Weil formula}\label{weakformula} For any $\eta \in \mathbb{Q}^{*}$ and $\varphi \in S(V(\mathbb{A}))$, $E_\eta(g, s, \varphi)$ is holomorphic at $s=s_0$ and $I_{\eta}(g, \varphi)$ is absolutely convergent. Moreover, one has \begin{equation} E_\eta(g, s_{0}, \varphi)=I_{\eta}(g, \varphi), \end{equation} where $E_\eta(g, s_{0}, \varphi)$ is the $\eta$-th Fourier coefficient of the Eisenstein series, and $I_{\eta}(g, \varphi)$ is the $\eta$-th Fourier coefficient of $I(g, \varphi)$. \end{theorem} \begin{remark} (1)\quad The Siegel-Weil formula holds if and only if $$E_{\eta}(g, s_{0}, \varphi)=I_{\eta}(g, \varphi)$$ for all $\eta \in \mathbb{Q}$. So the above theorem is the Siegel-Weil formula except for the constant term, and we call it the weak Siegel-Weil formula. (2)\quad Kudla's work \cite{KuIntegral} shows that the Fourier coefficients of the theta integral always have geometric interpretations, i.e., as degrees of cycles on Shimura varieties. By the above theorem, we can compute these numbers via Fourier coefficients of Eisenstein series, which can be written as infinite products. The same holds for any other indefinite quaternion algebra, since the Siegel-Weil formula is available there. \end{remark} We drop rank-one elements in $V(\mathbb{Q})$ and define \begin{equation} \widetilde{\theta}(g , h , \varphi)=\sum_{x\in V(\mathbb{Q}), \operatorname{rank} (x)\neq 1} \omega(g, h)\varphi(x). \end{equation} Then the corresponding integral is given by \begin{equation}\widetilde{I}(g, \varphi)= \int_{[H]}\widetilde{\theta}(g, h, \varphi) dh. \end{equation} When $\eta \neq 0$, it is easy to see that \begin{equation} \widetilde{I}_{\eta}(g, \varphi)=I_{\eta}(g, \varphi), \end{equation} where $\widetilde{I}_{\eta}(g, \varphi)$ is the $\eta$-th Fourier coefficient of $\widetilde{I}(g, \varphi)$. \begin{theorem} \label{result} When $\varphi_{\infty}=\varphi_{\infty}^{sp}$ as defined in equation (\ref{equsplit}), $\tilde{I}(g, \varphi)$ is absolutely convergent. Moreover, if $\Phi_{1}(g, s_0)=0$, we have $$\tilde{I}(g, \varphi)=E(g, s_{0}, \varphi).$$ Here \begin{equation} \Phi_{1}(g, s)=\int_{\mathbb{A}}\Phi(wn(b)g, s)db, \end{equation} with $\Phi(g, s_0)=\lambda(\varphi)(g)$. \end{theorem} The above result can be extended to the case when $\varphi_\infty$ is a polynomial times a Gaussian.
We assume that $D>0$ is a square-free integer, and let $B=B(D)$ be the quaternion algebra which is ramified at a finite prime $p$ if and only if $p|D$. The reduced norm, denoted by $\det$ in this paper, gives a canonical quadratic form $Q$ on $B$ and makes it a quadratic space. When $D=1$, $B(D)=M_{2}(\mathbb{Q})$. In this paper, we denote any other quaternion algebra over $\mathbb{Q}$ by $V^{\prime}$; such a space is anisotropic. For a positive integer $N$ which is prime to $D$, let $\mathcal O_D(N)$ be an Eichler order in $B$ of conductor $N$. We can view $L=(\mathcal O_D(N), \det)$ as an even integral lattice in $(B, \det)$. When $B$ is definite, it is a very interesting question to compute the representation number (for a positive integer $m$) $$ r_L(m)=|\{ x \in \mathcal O_D(N):\, \det x = m\}|. $$ In general, it is very hard to compute, so we consider its average over the genus, denoted by \begin{equation} r_{D, N}(m) =r_{\operatorname{gen}(L)}(m) =\bigg(\sum_{L_1 \in \operatorname{gen}(L)} \frac{1}{|\operatorname{Aut}(L_1)|}\bigg)^{-1} \sum_{L_1 \in \operatorname{gen}(L)} \frac{r_{L_1}(m)}{|\operatorname{Aut}(L_1)|}. \end{equation} It depends only on $D$ and $N$, and is independent of the choice of the Eichler order $\mathcal O_D(N)$. From Siegel's formula \cite{siegelformula}, it can be written as an infinite product; it can also be viewed as a Fourier coefficient of an Eisenstein series, which is the motivation for the Siegel-Weil formula. When $B$ is an indefinite quaternion algebra, let $\Gamma_0^D(N) =\mathcal O_D(N)^1$ be the group of (reduced) norm $1$ elements in $\mathcal O_D(N)$ and let $X_0^D(N) = \Gamma_0^D(N) \backslash \mathfrak{H}$ be the associated Shimura curve. For a positive integer $m$, let $T_{D, N}(m)$ be the Hecke correspondence on $X_0^D(N)$ defined in Section \ref{sect:Preli}. Then we define the normalized degree by \begin{equation}\label{nordegree} r_{D, N}(m)= -\frac{2}{ \operatorname{vol}(X_0^D(N), \Omega_0)}\deg T_{D, N}(m), \end{equation} where $$ \operatorname{vol}(X_0^D(N), \Omega_0) = \int_{X_0^D(N)} \Omega_0 $$ is the volume of $X_0^D(N)$ with respect to $\Omega_0 =\frac{1}{2\pi} y^{-2} dx \wedge dy$. Both kinds of numbers $r_{D, N}(m)$ defined above are Fourier coefficients of the theta integral; see Section \ref{application} for details. We compute these numbers via Fourier coefficients of Eisenstein series in the following result. \begin{theorem}\label{rDN} Let the notation be as above and let $k$ be the number of prime factors of $D$. Then one has \begin{eqnarray} r_{D,N}(m) &=&(-1)^{k+1}24m\prod_{p \nmid ND}\frac{p-p^{-\operatorname{ord}_pm}}{p-1}\nonumber\\ &&\times \prod_{p \mid N}\frac{2p-p^{-(\operatorname{ord}_pm-1)}-p^{-\operatorname{ord}_pm}}{p^{2}-1}\prod_{p \mid D}\frac{1}{(p-1)p^{\operatorname{ord}_pm}}.\nonumber \end{eqnarray} \end{theorem} \begin{remark} The case $D=1$ follows from the weak Siegel-Weil formula (Theorem \ref{weakformula}). \end{remark}
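For concreteness, the closed formula of Theorem \ref{rDN} is easy to evaluate exactly. The following Python sketch is our own illustration (the function names are hypothetical); it assumes, as in the theorem, that $D$ is square-free and $\gcd(D, N)=1$.
\begin{verbatim}
from fractions import Fraction
from sympy import factorint

def r_DN(m, D, N):
    # Evaluate the product formula of Theorem rDN as an exact rational.
    vm = factorint(m)               # ord_p(m) for the primes p | m
    k = len(factorint(D))           # number of prime factors of D
    val = Fraction((-1) ** (k + 1) * 24 * m)
    for p, e in vm.items():         # primes p | m with p not dividing ND
        if D % p != 0 and N % p != 0:
            val *= (p - Fraction(p) ** (-e)) / (p - 1)
    for p in factorint(N):          # primes p | N
        e = vm.get(p, 0)
        val *= (2 * p - Fraction(p) ** (1 - e)
                - Fraction(p) ** (-e)) / (p * p - 1)
    for p in factorint(D):          # primes p | D
        e = vm.get(p, 0)
        val *= Fraction(1, (p - 1) * p ** e)
    return val

# Consistency check: for D = N = 1 the normalized degree of T(1) is
# -2 deg T(1) / vol(X_0(1)) = -2 * 2 / (1/6) = -24.
assert r_DN(1, 1, 1) == -24
\end{verbatim}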
As an application of Theorem \ref{rDN}, we obtain the following result. \begin{corollary} Assume $D$ is a square-free positive integer with an even number of prime factors. Then one has \begin{eqnarray}\label{degree} &&\deg T_{D, N}(m)\nonumber\\&=&2mND\prod_{p \nmid ND}\frac{p-p^{-\operatorname{ord}_pm}}{p-1}\prod_{p \mid N}\frac{2-p^{-\operatorname{ord}_pm}-p^{-\operatorname{ord}_pm-1}}{p-1}\prod_{p \mid D}\frac{1}{p^{\operatorname{ord}_pm+1}}.\nonumber \end{eqnarray} \end{corollary} When $D=1$, set $$M(N, m)=\{\kzxz{a}{b}{c}{d} \in M_{2}(\mathbb{Z}): ad-bc=m, c\equiv 0 \pmod N\}.$$ It can be written as a disjoint union $$ M(N, m)=\bigsqcup_{i=1}^K \Gamma_{0}(N)\alpha_i.$$ The Hecke operator $T(m)$ is the map from $\operatorname{Div}(X_0(N))$ to itself given by $$T(m)([\tau])=\sum_{i=1}^{K} [\alpha_i \tau],$$ where $\tau\in \H^{\ast}=\H \cup \mathbb{Q}\cup\{\infty\}$ and $[\tau]$ is the corresponding point of $X_0(N)$. Then one has \begin{equation} K=\frac{1}{2}\deg T_{1, N}(m)=mN\prod_{p \nmid N}\frac{p-p^{-\operatorname{ord}_pm}}{p-1}\prod_{p \mid N}\frac{2-p^{-\operatorname{ord}_pm}-p^{-\operatorname{ord}_pm-1}}{p-1}. \end{equation} Moreover, when $N=1$, we recover the well-known result \begin{equation}K=m\prod_{p }\frac{p-p^{-\operatorname{ord}_pm}}{p-1}=\sum_{d \mid m}d.\end{equation} When $D\neq 1$, one obtains a similar interpretation for the Shimura curve $X_0^D(N)$.
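The last identity is easy to test numerically. The following Python sketch (our own illustration, with hypothetical names) evaluates the closed formula for $K$ and checks it against the divisor sum for $N=1$:
\begin{verbatim}
from fractions import Fraction
from sympy import factorint, divisor_sigma

def hecke_K(m, N):
    # K = (1/2) deg T_{1,N}(m) via the product formula above
    vm = factorint(m)
    val = Fraction(m * N)
    for p, e in vm.items():
        if N % p != 0:
            val *= (p - Fraction(p) ** (-e)) / (p - 1)
    for p in factorint(N):
        e = vm.get(p, 0)
        val *= (2 - Fraction(p) ** (-e)
                - Fraction(p) ** (-e - 1)) / (p - 1)
    return val

# For N = 1 the formula collapses to the divisor sum sigma_1(m).
assert all(hecke_K(m, 1) == int(divisor_sigma(m)) for m in range(1, 500))
\end{verbatim}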
By Theorem \ref{rDN}, one reproves the main results of \cite{DuYang} as follows. \begin{theorem}\cite[Theorem 1.1, 1.2, 1.3 and 1.4]{DuYang} For primes $p, q$, let $Dpq$ be a square-free positive integer, and let $N$ be a positive integer prime to $Dpq$. For every positive integer $m$, we have \begin{equation} -\frac{2}{q-1} r_{Dp, N}(m) + \frac{q+1}{q-1} r_{Dp, Nq}(m) =-\frac{2}{p-1} r_{Dq, N}(m) + \frac{p+1}{p-1} r_{Dq, Np}(m) \end{equation} and \begin{equation} r_{Dp, N}(m)= -\frac{2}{p-1} r_{D, N}(m) + \frac{p+1}{p-1} r_{D, Np}(m). \end{equation} \end{theorem} \begin{remark} In \cite{DuYang}, the second equation is proved under the assumption $D>1$. We extend it to the case $D=1$ here by the weak Siegel-Weil formula (Theorem \ref{weakformula}). \end{remark} For a positive integer $m$, let $r_k(m)$ denote the number of representations of $m$ as a sum of $k$ squares. Recall that $\sum_{m\geq 0}r_{k}(m)q^m$ is the $k$-th power of the theta series, i.e., $\sum_{m\geq 0}r_{k}(m)q^m=(\theta(\tau))^k$ with $\theta(\tau)=\sum_{n\in\mathbb{Z}}q^{n^2}=1+2q+2q^4+\cdots$, where $q=e^{2\pi i\tau}$ and $\tau \in \H$. By the circle method, Hardy \cite{Ha1} and Ramanujan \cite{Ra1} proved that \begin{equation} r_s(m)=\rho_{s}(m)+O(m^{\frac{s}{4}}), ~s\geq 5, \end{equation} where \begin{equation} \rho_{s}(m)=\frac{\pi^{\frac{s}{2}}}{\Gamma(\frac{s}{2})}m^{\frac{s}{2}-1} \mathfrak{G}_{s}(m) \end{equation} is called the singular series. Here \begin{equation} \mathfrak{G}_{s}(m)=\sum_{k=1}^{\infty} A_k(m), \end{equation} and $$A_k(m)= \sum_{h=1, (h,k)=1}^{k}\bigg(\frac{1}{k}\sum_{j=1}^ke^{2\pi i hj^2/k}\bigg)^{s}e^{-2\pi imh/k}.$$ When $s=5, 6, 7, 8$, Hardy \cite{Ha1, Ha2} gave the exact formula \begin{equation} r_{s}(m)=\rho_{s}(m). \end{equation} He also claimed that it is false when $s=2$ and $s>8$. Bateman \cite{Ba} proved that this conclusion also holds when $s= 3, 4$. Comparing the factors of the singular series with local Whittaker functions, we give another proof in the last section. By the Siegel-Weil formula, we reprove the following result. \begin{theorem}(Three and Four Squares Theorem)\label{threefoursquareth} Let $m>0$, and assume $-4m=dc^2$, where $d$ is the discriminant of $\mathbb{Q}(\sqrt{-m})$. One has \begin{equation} r_4(m)=8\sum_{d \mid m, 4 \nmid d} d, \end{equation} and \begin{equation}\label{threefinalintr} r_3(m)=\frac{24 h(d) }{w}(1-\chi_d(2))\sum_{l \mid c, (l, 2)=1}l \prod_{p \mid l}(1-\chi_{d}(p)p^{-1}), \end{equation} where $p$ runs over the prime factors of $l$. Here $h(d)$ is the class number, $\chi_{d}$ is the character associated to the quadratic field $\mathbb{Q}(\sqrt{-m})$, and $w$ is the number of roots of unity. \end{theorem} For $m >0$ such that $-m$ is a discriminant, define the Hurwitz class number $H(m)$ to be the number of classes of positive definite quadratic forms of discriminant $-m$; it is given by \begin{equation} H(m)=\frac{2h(d)}{w}\sum_{l \mid f}l \prod_{p \mid l}(1-\chi_{d}(p)p^{-1}), \end{equation} where $-m=df^2$. When $-m$ is not a discriminant, $H(m)=0$. This formula is similar to equation (\ref{threefinalintr}), and from it one can obtain Hirzebruch and Zagier's result \cite{HZ}; see Corollary \ref{HZcor}. Since $A_k(m)$ is multiplicative in $k$, one can write $$\mathfrak{G}_{s}(m)=\sum_{k=1}^{\infty} A_k(m)=\prod_{p}S_p(m),$$ where $S_p(m)=\sum_{r=0}^{\infty}A_{p^r}(m)$. In order to prove Theorem \ref{threefoursquareth}, we compute the local Whittaker functions. Comparing these functions with $S_p(m)$, one has the following result. \begin{theorem}\label{localequality} Let $B(2)$ be the quaternion algebra over $\mathbb{Q}$ with discriminant $2$ and $B^0(2)$ the trace-zero subspace of $B(2)$.\\ 1) Let $\mathcal{L}=\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$ be the lattice in the quadratic space $V=(B^0(2), Q)$. One has \begin{equation}S_{p}(m)=W_{ p}(\frac{1}{2}, m). \end{equation} 2) Let $L=\mathbb{Z}+\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$ be the lattice in $V=(B(2), Q)$. One has \begin{equation}S_{p}(m)=W_{ p}(1, m). \end{equation} Here the lattices $\mathcal{L}$ and $L$ are given in Section \ref{foursquare}, and the normalized local Whittaker functions $W_{ p}(\frac{1}{2}, m)$ and $W_{ p}(1, m)$ are defined by equation (\ref{norwhitt}). \end{theorem} \begin{remark} This implies that the local factors of $\mathfrak{G}_{s}(m)$ are equal to local Whittaker functions. We expect that this can be extended at least to the cases $s=5, 6, 7, 8$. \end{remark} Recall that $r_s(m)$ can be written as a product of local Whittaker functions. As an application of the above theorem, we reprove the following result in Section \ref{sechardy}. \begin{theorem} \cite{Ba}\label{thba} \begin{equation} r_{s}(m)=\rho_{s}(m), ~s=3, 4. \end{equation} \end{theorem} \begin{remark} We find that the circle method and the Siegel-Weil formula agree in this question. From Hardy's and Bateman's work, the above equality is true when $2<s <9$. For $s=5, 6, 7, 8$, we leave it to the reader to check via the Siegel-Weil method. \end{remark}
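As a quick numerical sanity check of the four-squares formula in Theorem \ref{threefoursquareth}, one can compare it against a brute-force lattice-point count; the following Python sketch is our own illustration.
\begin{verbatim}
from itertools import product
from sympy import divisors

def r4_count(m):
    # brute-force count of (x1, ..., x4) in Z^4 with sum of squares m
    B = int(m ** 0.5) + 1
    return sum(1 for x in product(range(-B, B + 1), repeat=4)
               if sum(t * t for t in x) == m)

def r4_formula(m):
    # r_4(m) = 8 * (sum of divisors d of m with 4 not dividing d)
    return 8 * sum(d for d in divisors(m) if d % 4 != 0)

assert all(r4_count(m) == r4_formula(m) for m in range(1, 40))
\end{verbatim}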
This paper is organized as follows. In Section \ref{sect:Preli}, we introduce the Weil representation and recall Kudla's matching pairs. In Section \ref{splitcase}, we prove the convergence of the theta integral $\tilde{I}(g, \varphi)$ (resp. $\tilde{I}_{\eta}(g, \varphi)$) in Theorem \ref{integral conv} (resp. Proposition \ref{fourierintegral}). Combining Proposition \ref{fourierintegral} with Theorem \ref{mainresult}, we prove the weak Siegel-Weil formula (Theorem \ref{weakformula}) in Section \ref{sec:mainresult}. In Section \ref{application}, we identify the numbers $r_{D, N}$ with Fourier coefficients of Eisenstein series and give the exact formula in Theorem \ref{rDN}. Finally, we give the exact formulae for the representation numbers of sums of four and three squares (Theorem \ref{threefoursquareth}) in Section \ref{foursquare}. Comparing the local Whittaker functions with the local factors of the singular series, we prove Theorems \ref{localequality} and \ref{thba} in the last section. \section{Preliminaries}\label{sect:Preli} Let \begin{center} $( , ) :V\times V \rightarrow \mathbb{Q}$ \end{center} be a nondegenerate symmetric bilinear form on $V$; then $(V, Q)$ is called a quadratic space, where $Q(x)=\frac{1}{2}(x, x)$. Set $G=\operatorname{SL}_2$, $H=O(V)$, and let $\psi: \mathbb{A}/\mathbb{Q} \rightarrow \mathbb{C}^{\times}$ be the canonical unramified additive character such that $\psi_{\infty}(x)=e^{2\pi ix}$. The local component $\psi_{p}$ of $\psi$ at a nonarchimedean place $p$ is unramified in the sense that it is trivial on $\mathbb{Z}_{p}$ but nontrivial on $\frac{1}{p} \mathbb{Z}_{p}$. Let $$ \chi_V(x) = (x, (-1)^{\frac{m(m-1)}{2}} \det V)_\mathbb{A} $$ be the associated quadratic character, where $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$ is the adele ring of $\mathbb{Q}$ and $ (, )_{\mathbb{A}}$ is the Hilbert symbol of $\mathbb{Q}$. There is a Weil representation $\omega=\omega_{\psi, V}$ of $O(V)(\mathbb{A})\times \operatorname{SL}_{2}(\mathbb{A})$ acting on $S(V(\mathbb{A}))$. We can view it locally: for each prime $p$, denote by $\omega_p = \omega_{\psi, V_p}$ the local representation of $O(V)(\mathbb{Q}_p)\times \operatorname{SL}_{2}(\mathbb{Q}_p)$ acting on $S(V_p)$, where $V_{p}=V\otimes_{\mathbb{Q}}\mathbb{Q}_{p}$. Concretely, the orthogonal group $ O(V)(\mathbb{A})$ acts on $ S(V(\mathbb{A}))$ linearly, \begin{center} $\omega (h)\varphi (x)=\varphi(h^{-1}x)$. \end{center} The $\operatorname{SL}_2(\mathbb{A})$-action is determined by (see for example \cite{KuSplit}) \begin{eqnarray}\label{weilrep} &\omega(n(b))\varphi(x)=\psi(bQ(x))\varphi(x), \nonumber\\ &\omega(m(a))\varphi(x)=\chi_V(a) | a|^{\frac{m}{2}} \varphi(ax), \\ &\omega(w)\varphi(x)= \gamma(V) \widehat{\varphi}(x) = \gamma(V)\int_{V(\mathbb{A})}\varphi(y)\psi((x, y))dy ,\nonumber \end{eqnarray} where for $a \in \mathbb{A}^\times$, $b \in \mathbb{A}$, \begin{center} $n(b)= \left( \begin{array}{cc} 1 & b \\ & 1 \\ \end{array} \right), \quad m(a)= \left( \begin{array}{cc} a & \\ & a^{-1} \\ \end{array} \right), \quad w=\left( \begin{array}{cc} & 1 \\ -1& \\ \end{array} \right),$ \end{center} $dy$ is the Haar measure on $V(\mathbb{A})$ self-dual with respect to $\psi ((x, y))$, and $\gamma(V)=\prod_{p} \gamma(V_p)=1$. Here $\gamma(V_p)$ is an $8$-th root of unity associated to the local Weil representation at $p$ (the local Weil index). Let $P=NM$ be the standard Borel subgroup of $\operatorname{SL}_2$, where $N$ and $M$ are the subgroups of elements $n(b)$ and $m(a)$, respectively. \subsection{Introduction to quaternions}\label{quaternion} In this paper, we only consider quaternion algebras over $\mathbb{Q}$. Let $B$ be a quaternion $\mathbb{Q}$-algebra; then \begin{center} $B=\mathbb{Q}+\mathbb{Q} i+\mathbb{Q} j+\mathbb{Q} ij$, \end{center} where $i^{2}=a, j^{2}=b, i j=-j i$ and $a, b \in \mathbb{Q}^{\times }$. We write $B=\{\frac{a, b }{\mathbb{Q}}\}$ and set $k=ij$. The map \begin{center} $\iota: x=x_{1}+x_{2}i +x_{3}j +x_{4}k \rightarrow \bar{x}=x_{1}-x_{2}i -x_{3}j -x_{4}k$ \end{center} is called the main involution. Set the reduced trace $\operatorname{tr}(x)=x+\bar{x}$ and the reduced norm $\det (x)=x\bar{x}$. For example, we have $M_{2}(\mathbb{Q})=\{\frac{1, 1 }{\mathbb{Q}}\}$ and the Hamilton division ring $\H=\{\frac{-1, -1}{\mathbb{R}}\}$.
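As a concrete illustration of this arithmetic, the following Python sketch (our own, with hypothetical names) implements multiplication in $\{\frac{a, b}{\mathbb{Q}}\}$ with $k=ij$ and checks that the reduced norm is multiplicative:
\begin{verbatim}
def qmul(x, y, a, b):
    # product in {a,b/Q}: x = x1 + x2*i + x3*j + x4*k, using
    # i^2 = a, j^2 = b, ij = -ji = k, k^2 = -ab
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    return (x1*y1 + a*x2*y2 + b*x3*y3 - a*b*x4*y4,
            x1*y2 + x2*y1 - b*x3*y4 + b*x4*y3,
            x1*y3 + x3*y1 + a*x2*y4 - a*x4*y2,
            x1*y4 + x4*y1 + x2*y3 - x3*y2)

def conj(x):                      # main involution x -> xbar
    return (x[0], -x[1], -x[2], -x[3])

def det(x, a, b):                 # reduced norm x * xbar
    return qmul(x, conj(x), a, b)[0]

# Hamilton quaternions: a = b = -1
x, y = (1, 2, 3, 4), (5, -1, 0, 2)
assert qmul(x, conj(x), -1, -1)[1:] == (0, 0, 0)
assert det(qmul(x, y, -1, -1), -1, -1) == det(x, -1, -1) * det(y, -1, -1)
\end{verbatim}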
For $M_{2}(\mathbb{Q})$, we take $\iota$ to be the involution sending \begin{center} $\left( \begin{array}{cc} x_{1} & x_{2} \\ x_{3}& x_{4} \\ \end{array} \right)$ to $\left( \begin{array}{cc} x_{4} & -x_{2} \\ -x_{3}& x_{1} \\ \end{array} \right)$. \end{center} The reduced norm and reduced trace are then the usual determinant and trace of the matrix. Let $D>0$ be a square-free integer, and let $B=B(D)$ be the unique quaternion algebra of discriminant $D$ over $\mathbb{Q}$, i.e., $B$ is ramified at a finite prime $p$ if and only if $p|D$. We denote the quadratic space by $V^{\prime}=(B, \det)$. For a positive integer $N$ prime to $D$, let $\mathcal O_D(N)$ be an Eichler order in $B$ of conductor $N$ such that \begin{enumerate} \item When $p\nmid N$, $\mathcal O_D(N)_p:=\mathcal O_D(N) \otimes_\mathbb{Z} \mathbb{Z}_p$ is the maximal order of $B_p=B\otimes_{\mathbb{Q}}\mathbb{Q}_p$. \item When $p\mid N$, there is an identification $B_p \cong M_{2}(\mathbb{Q}_p)$ under which $$ \mathcal O_D(N)_p=\left\{ \kzxz{a}{b}{c}{d} \in M_{2}(\mathbb{Z}_p):\, c \equiv 0 \pmod p \right\}. $$ \end{enumerate} We can view $L=(\mathcal O_D(N), \det)$ as an even integral lattice in $V^{\prime}$. The quaternion algebra $B$ is definite if and only if $D$ has an odd number of prime factors. In this case, we consider the average representation number over $\operatorname{gen}(L)$, which is defined by \begin{equation} r_{D, N}(m) =\bigg(\sum_{L_1 \in \operatorname{gen}(L)} \frac{1}{|\operatorname{Aut}(L_1)|}\bigg)^{-1} \sum_{L_1 \in \operatorname{gen}(L)} \frac{r_{L_1}(m)}{|\operatorname{Aut}(L_1)|}. \end{equation} Here $\operatorname{gen}(L)$ is the set of equivalence classes of lattices in the same genus as $L$; for details see Section \ref{application}. When $B(D)$ is indefinite, i.e., $D$ has an even number of prime factors, the representation number no longer makes sense. In this case, $V^\prime$ is of signature $(2, 2)$. We fix an embedding \begin{center} $i: B \hookrightarrow B \otimes \mathbb{R} \cong M_{2}(\mathbb{R}) $, \end{center} such that $B^\times $ is invariant under the automorphism $x \mapsto x^*= {}^tx^{-1}$ of $\operatorname{GL}_2(\mathbb{R})$. Let $\Gamma_0^D(N) =\mathcal O_D(N)^1$ be the group of (reduced) norm $1$ elements in $\mathcal O_D(N)$, identified with $i(\Gamma_0^D(N))$. Let $X_0^D(N) = \Gamma_0^D(N) \backslash \mathfrak{H}$ be the associated Shimura curve. For a positive integer $m$, let $T_{D, N}(m)$ be the Hecke correspondence on $X_0^D(N)$ defined by \begin{equation}\label{eq3} \begin{split} T_{D, N}(m)= &\{([z_1], [z_2]) \in X_0^D(N) \times X_0^D(N):\\ & z_1 = i(x)z_2 \hbox{ for some } x \in \mathcal O_D(N), \, \det x =m\}. \end{split} \end{equation} Define \begin{equation}\deg T_{D, N}(m) =\deg (T_{D, N}(m) \rightarrow X_0^D(N))\end{equation} under the projection $([z_1], [z_2]) \mapsto [z_1]$. Let $\Omega_0 =\frac{1}{2\pi} y^{-2} dx \wedge dy$ be the normalized differential on $X_0^D(N)$, and let $$ \operatorname{vol}(X_0^D(N), \Omega_0) = \int_{X_0^D(N)} \Omega_0 $$ be the volume of $X_0^D(N)$ with respect to $\Omega_0$. As in equation (\ref{nordegree}), one defines the normalized degree by \begin{equation} r_{D, N}(m)= -\frac{2}{ \operatorname{vol}(X_0^D(N), \Omega_0)}\deg T_{D, N}(m).\nonumber \end{equation} \subsection{Kudla's matching}\label{kudlamatch} Let $V^{(1)}, V^{(2)}$ be two quadratic spaces of the same dimension and with the same quadratic character $\chi$.
By the following diagram \begin{equation} \setlength{\unitlength}{1mm} \begin{picture}(60, 20) \linethickness{1pt} \put(0,18){$S(V^{(1)}(\mathbb{A}))$} \put(0,0){$S(V^{(2)}(\mathbb{A}))$} \put(18,18){ \vector(3,-1){25}} \put(18,0){ \vector(3,1){25}} \thicklines \put(45,8){$I(s_{0},\chi)$} \put(25,16){$\lambda_{V^{(1)}}$} \put(25,6){$\lambda_{V^{(2)}}$} \end{picture} \end{equation} one sees that the images of $\lambda_{V^{(1)}}$ and $\lambda_{V^{(2)}}$ lie in the same space $I(s_{0},\chi)$. Recall the following definition from \cite{KuIntegral}. \begin{definition} For a prime $p \le \infty$, $\varphi_p^{(i)} \in S(V_p^{(i)})$, $i=1, 2$, are said to be matching if $$ \lambda_{V_p^{(1)}} (\varphi_p^{(1)}) = \lambda_{V_p^{(2)}}(\varphi_p^{(2)}). $$ $\varphi^{(i)}=\otimes_{p\leq \infty} \varphi_p^{(i)} \in S(V^{(i)}(\mathbb{A}))$, $i=1, 2$, are said to be matching if they match at each prime $p$. \end{definition} For such a matching pair $(\varphi^{(1)}, \varphi^{(2)})$, one has the identity \begin{equation} \label{eq:matching} I(g, \varphi^{(1)})=I(g, \varphi^{(2)}), \end{equation} which implies that the Fourier coefficients of the two theta integrals are equal. Recall that $V=(M_{2}(\mathbb{Q}), \det)$, and we denote $L_0^{sp}=M_{2}(\mathbb{Z}_p)$, $$ L_1^{sp}= \left\{ \kzxz{a}{b}{c}{d} \in M_{2}(\mathbb{Z}_p):\, c \equiv 0 \pmod p \right\} $$ and $$ L_2^{sp}= \left\{ \kzxz{a}{b}{c}{d} \in M_{2}(\mathbb{Z}_p):\, c \equiv 0 \pmod {p^2} \right\}. $$ For the quadratic space $V^{\prime}=(B(D), \det)$, set $$\varphi^{\prime}=\begin{cases} \operatorname{char}(\widehat{\mathcal{O}_D(N)}) \otimes\varphi^{ra}_{\infty} \in S(V^{\prime}(\mathbb{A})) & \hbox{if $V^{\prime}$ is definite},\\ \operatorname{char}(\widehat{\mathcal{O}_D(N)}) \otimes\varphi^{sp}_{\infty} \in S(V^{\prime}(\mathbb{A})) & \hbox{if $V^{\prime}$ is indefinite}. \end{cases}$$ One can prove that $(\varphi^{ra}_{\infty}, \varphi^{sp}_{\infty})$ is a matching pair; this will be shown in Section \ref{application}. By \cite[Proposition 3.1]{DuYang}, we have the following result: \begin{proposition} \label{globalmatching} Assume $\varphi_{D}^{N}=\otimes_{p\leq \infty}\varphi_{p} \in S(V(\mathbb{A}))$ satisfies the following conditions: (1) \quad When $p =\infty$, $\varphi_\infty=\varphi_\infty^{sp}$; (2) \quad When $p \nmid DN \infty$, $\varphi_p= \varphi_0^{sp}$; (3) \quad When $ p\mid N$, $\varphi_p=\varphi_1^{sp}$; (4) \quad When $ p\mid D$, $\varphi_p=\frac{-2}{p-1}\varphi_0^{sp}+\frac{p+1}{p-1}\varphi_1^{sp}$, as constructed in \cite{DuYang}. Then $(\varphi^{\prime}, \varphi_{D}^{N})$ is a matching pair. Here $$ \varphi_i^{sp} = \operatorname{char}(L_i^{sp}), \quad i=0, 1, 2. $$ \end{proposition} \section{Theta integral for $M_{2}(\mathbb{Q})$}\label{splitcase} For a quadratic space $(V, Q)$ with $\dim V= m$, let $G=\operatorname{SL}_2$, $H=O(V)$. The theta integral is absolutely convergent precisely when $V$ is anisotropic or $\dim(V)-r> 2$, where $r$ is the Witt index of $V$, i.e., the dimension of a maximal isotropic $\mathbb{Q}$-subspace of $V$. The Eisenstein series may fail to be holomorphic at $s_{0}$, so the Siegel-Weil formula is not always available. Kudla and Rallis \cite{regularized} proved the regularized Siegel-Weil formula for some of these spaces. \subsection{Theta integral} From now on, we denote the space $(M_{2}(\mathbb{Q}), \det)$ by $V$. There is no Siegel-Weil formula for this space, since there are too many singular (rank-one) elements.
We define \begin{equation} \widetilde{\theta}(g , h , \varphi)=\sum_{x\in V(\mathbb{Q}), \operatorname{rank} (x)\neq 1} \omega(g) \varphi(h^{-1}x), \end{equation} and \begin{equation}\widetilde{I}(g, \varphi)= \int_{[H]}\widetilde{\theta}(g, h, \varphi)dh. \end{equation} Here $dh$ (half of the Tamagawa measure) is the invariant measure on $[H]$ with $\operatorname{vol}([H], dh)=1$. Notice that $\tilde{\theta}(g , h , \varphi)$ is invariant under the parabolic subgroup $P(\mathbb{Q})$. In general, the theta integral \begin{equation} I(g, \varphi)=\int_{[H]} \theta(g, h, \varphi)dh \end{equation} is not convergent, as the following example shows. \begin{example} Let $e=\kzxz{1}{}{}{1}$, $\varphi_{f}=\operatorname{char}(\widehat{M_{2}(\mathbb{Z})})$, and $\varphi_{\infty}=e^{-2\pi Q(x)}$, where $\widehat{M_{2}(\mathbb{Z})}=M_{2}(\mathbb{Z})\otimes_{\mathbb{Z}} \widehat{\mathbb{Z}}$. Then \begin{equation} I(e, \varphi)=\int_{[H]} \theta(e, h, \varphi)dh\nonumber \end{equation} is not convergent, since there are infinitely many elements with zero determinant. \end{example} \begin{theorem}\label{integral conv} Assume that $\varphi_{\infty}= \varphi^{sp}_\infty$. Then the integral $$\tilde{I}(g,\varphi)=\int_{[O(V)]}\tilde{\theta}(g,h,\varphi) dh$$ is absolutely convergent for each $g \in \operatorname{SL}_2(\mathbb{A})$. \end{theorem} \begin{proof} It suffices to prove that the integral $$\int_{[SO(V)]}\tilde{\theta}(g,h,\varphi) dh$$ is convergent. It is easy to see that $$\mathrm{SO}(V)=\{(h_1,h_2)\in \operatorname{GL}_2\times \operatorname{GL}_2: \det(h_1)=\det(h_2)\}/ \mathbb{Q}^\times,$$ which acts on $V$ via $$ (h_1, h_2)x = h_1 x h_2^{-1}. $$ Viewing $\mathrm{SL}_2$ as a subgroup of $\mathrm{SO}(V)$ by $h\mapsto (h,1)$, we get an exact sequence $$ 1\longrightarrow \mathrm{SL}_2 \longrightarrow \mathrm{SO}(V) \longrightarrow \mathrm{PGL}_2 \longrightarrow 1. $$ Write $$J(g,\varphi)=\int_{[\operatorname{SL}_{2}]}\widetilde{\theta}(g,h,\varphi) dh.$$ We first verify that $J(g,\omega(h')\varphi)=J(g,\varphi)$ for any $h'\in \mathrm{SO}(V)(\mathbb A)$. The reason is as follows. We write $$ \tilde{\theta}(g,h,\varphi)= \theta_0(g,h,\varphi)+ \theta_2(g,h,\varphi), $$ where $$ \theta_i(g,h,\varphi)= \sum_{x\in V, \operatorname{rank}(x)=i}\omega(g, h)\varphi(x). $$ It is easy to see that $\theta_i(g,h,\varphi)$ is invariant under $\mathrm{SL}_2$. So it suffices to verify $J_i(g,\omega(h')\varphi)=J_i(g,\varphi)$ for $$ J_i(g,\varphi)=\int_{[\operatorname{SL}_{2}]}\theta_i(g,h,\varphi) dh. $$ The case $i=0$ is trivial, since $$\theta_0(g,h,\varphi)= \omega(g)\varphi(0). $$ Now we treat the case $i=2$. It is easy to see that $$ \theta_2(g,h,\varphi)= \sum_{\gamma\in \operatorname{SL}_2(\mathbb{Q})}\sum_{\eta\in \mathbb{Q}^\times} \omega(g,\gamma h)\varphi(x_\eta). $$ Here $x_\eta\in V$ is any element of norm $\eta$. Thus \begin{eqnarray*} J_2(g,\varphi)&=&\sum_{\eta\in \mathbb{Q}^\times} \int_{\mathrm{SL}_2(\mathbb A)} \omega(g,h)\varphi(x_\eta) dh. \end{eqnarray*} It suffices to check that \begin{eqnarray*} \int_{\mathrm{SL}_2(\mathbb A)} \omega(g)\varphi(h^{-1} h_1^{-1}x_\eta h_2) dh = \int_{\mathrm{SL}_2(\mathbb A)} \omega(g)\varphi(h^{-1}x_\eta) dh \end{eqnarray*} for any $h_1,h_2\in \operatorname{GL}_2(\mathbb A)$ with $\det(h_1)=\det(h_2)$. Denote $y= h_1^{-1}x_\eta h_2 x_\eta^{-1}$, which lies in $\mathrm{SL}_2(\mathbb A)$.
Then the left-hand side is equal to \begin{align} \int_{\mathrm{SL}_2(\mathbb A)} \omega(g)\varphi(h^{-1} yx_\eta) dh &= \int_{\mathrm{SL}_2(\mathbb A)} \omega(g)\varphi( (y^{-1}h)^{-1}x_\eta) dh\\ \nonumber &=\int_{\mathrm{SL}_2(\mathbb A)} \omega(g)\varphi(h^{-1}x_\eta) dh. \end{align} Now we have \begin{eqnarray*} \int_{[SO(V)]}\tilde{\theta}(g,h,\varphi) dh &=& \int_{\mathrm{SL}_2(\mathbb A)\mathrm{SO}(V)(\mathbb{Q})\backslash \mathrm{SO}(V)(\mathbb A)} \int_{[\operatorname{SL}_{2}]}\tilde {\theta}(g,h_1h',\varphi) dh_1 dh'\\ &=& \int_{[PGL_{2}]} J(g,\omega(h')\varphi) dh'\\ &=& \int_{[PGL_{2}]} J(g,\varphi) dh'\\ &=& \mathrm{vol}([PGL_{2}])\ J(g,\varphi). \end{eqnarray*} If we use the Tamagawa measures on $\mathrm{SO}(V)(\mathbb A)$ and $\mathrm{SL}_2(\mathbb A)$, then the quotient measure gives $$\mathrm{vol}([PGL_{2}])=2.$$ It therefore suffices to prove that $J(g,\varphi)$ is absolutely convergent. There is an open compact subgroup $K$ of $\mathrm{SL}_2(\mathbb A_f)$ acting trivially on $\varphi$. It follows that $$\tilde{\theta}(g,h K,\varphi)=\tilde{\theta}(g,h,\varphi).$$ Let $K_\infty=\operatorname{SO}(2)(\mathbb{R})$ and $\varphi'=\omega(g)\varphi$, which is still a Schwartz function on $V(\mathbb A)$ whose infinite part $|\varphi_\infty'|$ is bounded by a polynomial times a Gaussian. It is known that $\varphi_{\infty}(x k_\theta)=\varphi_{\infty}(x)$ for $k_\theta \in K_{\infty}$. The proof thus reduces to showing the absolute convergence of $$J'(\varphi)=\int_{\mathrm{SL}_2(\mathbb{Q})\backslash \mathrm{SL}_2(\mathbb A)/KK_\infty}\tilde{\theta}(h,\varphi') dh.$$ Then $$ J'(\varphi)=\int_{\Gamma\backslash \mathbb H}\tilde{ \theta}(h,\varphi') dh. $$ Here $\mathbb H$ is the upper half plane, and $\Gamma= \mathrm{SL}_2(\mathbb{Q})\cap K$ is a subgroup of $\mathrm{SL}_2(\mathbb Z)$ of finite index. Let $$\Omega=\{x+yi\in \mathbb H: -1/2<x\leq 1/2, \ |x+yi|>1\}$$ be the standard fundamental domain of $\mathrm{SL}_2(\mathbb Z)\backslash \mathbb H$. It suffices to prove that $$ J''(\varphi)=\int_{\Omega} \tilde{\theta}(h,\varphi') dh =\int_{\Omega}\tilde{ \theta}\bigg( \kzxz{\sqrt{y}}{0}{0}{\frac{1}{\sqrt{y}}} \kzxz{1}{x}{}{1} ,\varphi' \bigg) \frac{dxdy}{y^2} $$ is absolutely convergent; the decay of $\varphi'_\infty$ controls the integrand as $y\to \infty$. Note that $h$ has only an infinite part, so that, essentially (up to a finite linear combination), $$ \tilde{\theta}(h,\varphi') =\sum_{l\in L, \operatorname{rank}(l)\neq 1} \omega(h) \varphi'_\infty(l), $$ where $L$ is some lattice in $V$. Then it is easy to check that \begin{equation}\label{equtheta} \bigg|\tilde{\theta}\bigg( \kzxz{\sqrt{y}}{0}{0}{\frac{1}{\sqrt{y}}} \kzxz{1}{x}{}{1} ,\varphi'\bigg) \bigg| \leq C \end{equation} for some constant $C$. For convenience, we just assume that $x=0$ and $L=M_{2 }(\mathbb{Z})$. Since $\varphi'_\infty$ is a Schwartz function, for any $c>0$ there exists $M>0$ such that $$l_1^2l_2^2l_3^4l_4^4 | \varphi'_\infty(l) | <c$$ whenever $| l_1| +| l_2| +| l_3| +| l_4| >M$, for any $l=\kzxz{l_1}{l_2}{l_3}{l_4} \in L$.
Thus if $$| \frac{l_1}{\sqrt{y}}| +| \frac{l_2}{\sqrt{y}}| +| \sqrt{y} l_3| +| \sqrt{y} l_4| >M,$$ one has $$y^2l_1^2l_2^2l_3^4l_4^4| \varphi'_\infty\bigg(\kzxz{\sqrt{y}}{0}{0}{\frac{1}{\sqrt{y}}}^{-1}l \bigg)|<c.$$ Then it is easy to see that \begin{eqnarray} &&\bigg|\sum_{l\in L, \operatorname{rank}(l)\neq 1, l_1l_2l_3l_4\neq 0} \varphi'_\infty\bigg(\kzxz{\sqrt{y}}{0}{0}{\frac{1}{\sqrt{y}}}^{-1}l \bigg)\bigg|\nonumber\\ &<& \frac{16c}{y^2}\sum_{l_1 \in \mathbb{N}} \frac{1}{l_1^2}\sum_{l_2 \in \mathbb{N}} \frac{1}{l_2^2}\sum_{l_3 \in \mathbb{N}} \frac{1}{l_3^4}\sum_{l_4 \in \mathbb{N}} \frac{1}{l_4^4} + \phi= \frac{16c \zeta(2)^2 \zeta(4)^2}{y^2} +\phi,\nonumber \end{eqnarray} where $\phi$ is the sum over the subset $$B=\{l=\kzxz{l_1}{l_2}{l_3}{l_4}\in L, \operatorname{rank}(l)\neq 1 :| \frac{l_1}{\sqrt{y}}| +| \frac{l_2}{\sqrt{y}}| +| \sqrt{y} l_3| +| \sqrt{y} l_4| \leq M\},$$ which is a finite set. When $y >M^2$, $B$ is empty. So $$ \bigg|\sum_{l\in L, \operatorname{rank}(l)\neq 1, l_1l_2l_3l_4\neq 0} \varphi'_\infty\bigg(\kzxz{\sqrt{y}}{0}{0}{\frac{1}{\sqrt{y}}}^{-1}l \bigg)\bigg| $$ is bounded by some constant. When $ l_1l_2l_3l_4=0$, one can argue in the same way. This yields equation (\ref{equtheta}), and the result follows. \end{proof} \begin{remark} When $\varphi_{\infty}$ is a polynomial times a Gaussian, the above theorem remains true. We leave the details to the reader. \end{remark} \subsection{Fourier coefficients of the theta integral} The $\eta$-th Fourier coefficient of $I(g, \varphi)$ is given by \begin{eqnarray}I_{\eta}(g, \varphi)&=&\int_{\mathbb{Q}\setminus \mathbb{A}}I(n(b)g, \varphi)\psi(-b\eta)db\nonumber \\ &=&\int_{[H]}\theta_{\eta}(g, h, \varphi)dh \end{eqnarray} with $$\theta_{\eta}(g, h, \varphi)=\sum_{x\in V(\mathbb{Q})[\eta]}\omega(g)\varphi(h^{-1}x),$$ where $$V(\mathbb{Q})[\eta] =\{ x\in V(\mathbb{Q}):\, Q(x)=\eta\}.$$ For $\eta\neq 0$ and any choice of $\varphi$, \begin{eqnarray} \tilde{I}_{\eta}(g, \varphi)=I_{\eta}(g, \varphi),\nonumber \end{eqnarray} where $\tilde{I}_{\eta}(g, \varphi)$ is the $\eta$-th Fourier coefficient of $\tilde{I}(g, \varphi)$. By Theorem \ref{integral conv}, $\tilde{I}_{\eta}(g, \varphi)$ is absolutely convergent when $ \varphi_{\infty}= \varphi^{sp}_\infty$. We extend this to all $\varphi \in S(V(\mathbb{A}))$ in the following result. \begin{proposition}\label{fourierintegral} For any $\eta \in \mathbb{Q}^{\ast}$ and $\varphi=\otimes_{p} \varphi_{p} \in S(V(\mathbb{A}))$, the integral $I_{\eta}(g, \varphi)$ is absolutely convergent. \end{proposition} \begin{proof} For convenience, we prove that $I_{\eta}(e, \varphi)$ is absolutely convergent; the general case follows upon replacing $\varphi$ by $\omega(g) \varphi$. Up to a finite linear combination, we may suppose that $\operatorname{supp}(\varphi_{f}) \subseteq \widehat{L}$, where $L$ is a lattice in the space $V(\mathbb{Q})$; we may take $\varphi_{f}= \operatorname{char}(\widehat{M_{2}(\mathbb{Z})})$. Notice that the set $V(\mathbb{Q})[\eta]$ for $\eta\neq 0$ is a single orbit, so there is a bijection \begin{center} $SO(V)(\mathbb{Q})_{x_{\eta}}\setminus SO(V)(\mathbb{Q}) \longleftrightarrow V(\mathbb{Q})[\eta],$ \end{center} \begin{center} $h \rightarrow hx_{\eta}$, \end{center} where $x_{\eta}$ is any element in $V(\mathbb{Q})$ with $\det x_{\eta}=\eta $.
\begin{eqnarray} I_{\eta}(e, \varphi)&=&\frac{1}{2}\int_{[SO(V)]} \sum_{x\in V(\mathbb{Q})[\eta]} \varphi(h^{-1}x)dh \nonumber \\ &=&\frac{1}{2}\int_{SO(V)(\mathbb{Q})_{x_{\eta}}\setminus SO(V)(\mathbb{A})} \varphi(h^{-1}x_{\eta})dh \nonumber\\ &=& \frac{1}{2}\operatorname{vol}(SO(V)_{x_{\eta}})\int_{SO(V)(\mathbb{A})_{x_{\eta}}\setminus SO(V)(\mathbb{A})} \varphi(h^{-1}x_{\eta})dh \nonumber\\ &=& \prod_{p\leq \infty}\int_{SO(V)(\mathbb{Q}_{p})_{x_{\eta}}\setminus SO(V)(\mathbb{Q}_{p})} \varphi_{p}(h^{-1}x_{\eta})dh_{p}. \end{eqnarray} Recall that \begin{equation} SO(V)=\{(h_{1}, h_{2})\mid h_{1}, h_{2} \in \operatorname{GL}_{2}(\mathbb{Q}), \det(h_{1})=\det(h_{2})\}/\mathbb{Q}^{\times}. \end{equation} For $h=(h_{1}, h_{2})\in SO(V)$, the action is given by $h\cdot x=h_{1}xh_{2}^{-1}$. For any $\eta \in \mathbb{Q }\setminus \mathbb{Z}$, it is easy to see that $$\prod_{p\leq \infty}\int_{SO(V)(\mathbb{Q}_{p})_{x_{\eta}}\setminus SO(V)(\mathbb{Q}_{p})} \varphi_{p}(h_{p}^{-1}x_{\eta})dh_{p}= 0.$$ Let $\eta \in \mathbb{Z}$, $\eta\neq 0$, and take $x_{\eta}=\kzxz{\eta}{}{}{1}$. Then \begin{center} $SO(V)_{x_\eta}=\Big \{(h_{1}, h_{2}) |h_{1}=\kzxz{a}{b}{c}{d}, h_{2}=\kzxz {a}{ \frac{1}{\eta}b} {\eta c} {d}, h_{1}, h_{2} \in \operatorname{GL}_{2}\Big \}/\mathbb{Q}^{\times}.$ \end{center} Hence $SO(V)_{x_\eta}\setminus SO(V) \cong 1\times \operatorname{SL}_{2}$, and the integral \begin{eqnarray}\label{coeff comp} I_{\eta, p}(e, \varphi_p)&=&\int_{SO(V)(\mathbb{Q}_{p})_{x_{\eta}}\setminus SO(V)(\mathbb{Q}_{p})} \varphi_{p}(h^{-1}x_{\eta})dh_{p}\nonumber\\ &= &\int_{\operatorname{SL}_{2}(\mathbb{Q}_{p})} \varphi_{p}(x_{\eta}h_{p})dh_{p}. \end{eqnarray} When $p<\infty$, $I_{\eta, p}(e, \varphi_p)=\operatorname{vol}(A_{\eta})$, where \begin{center} $A_{\eta}=\bigg\{\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \Big| a, b \in \frac{1}{\eta}\mathbb{Z}_{p}, c, d \in \mathbb{Z}_{p}\bigg\}$, \end{center} which is a compact subset of $\operatorname{SL}_{2}(\mathbb{Q}_{p})$ with $\operatorname{vol}(A_{\eta})\leq |\frac{1}{\eta}|_{p}^{2}$. Now assume $\eta >0$; then we have the estimate \begin{equation}\label{finite part} \prod_{p<\infty}I_{\eta, p}(e, \varphi_p) \leq\eta^{2}. \end{equation} When $p=\infty$, \begin{eqnarray} I_{\eta, \infty}(e, \varphi_{\infty})&=&\int_{SO(V)(\mathbb{R})_{x_{\eta}}\setminus SO(V)(\mathbb{R})} \varphi_{\infty}(h^{-1}x_{\eta})dh_{\infty}\nonumber\\ &= &\int_{\operatorname{SL}_{2}(\mathbb{R})} \varphi_{\infty}(x_{\eta}h_{\infty})dh_{\infty}.\nonumber \end{eqnarray} One has the Iwasawa decomposition of the group $\operatorname{SL}_{2}(\mathbb{R})$, \begin{equation} \operatorname{SL}_{2}(\mathbb{R})=N(\mathbb{R})M(\mathbb{R})\operatorname{SO}(2)(\mathbb{R}).\nonumber \end{equation} By \cite[p.~194, Lemma 4]{Weil}, there exists $\phi_{\infty} \in S(V(\mathbb{R}))$ such that $|\varphi_{\infty}(x k_\theta)| \leq \phi_{\infty}(x)$ for $k_\theta \in \operatorname{SO}(2)(\mathbb{R})$.
So the integral satisfies \begin{eqnarray} | I_{\eta, \infty}(e, \varphi_{\infty})| &\leq&\int_{0}^{2\pi}\int_{\mathbb{R}\times \mathbb{R}_{+}^{*}} |\varphi_{\infty}(x_{\eta}n(x)m(y^{\frac{1}{2}})k_{\theta})| \frac{dxdy}{y^{2}}d\theta \nonumber\\ &\leq & 2\pi \int_{\mathbb{R}\times \mathbb{R}_{+}^{*}}\phi_{\infty}\Bigg(\left( \begin{array}{cc} \eta y^{\frac{1}{2}} & \eta xy^{\frac{-1}{2}}\\ 0 & y^{\frac{-1}{2}} \\ \end{array} \right)\Bigg)\frac{dxdy}{y^{2}}\nonumber \\ &=& 2\pi \int_{\mathbb{R}\times \mathbb{R}_{+}^{*}}\phi_{\infty}\Bigg(\left( \begin{array}{cc} \eta y^{\frac{-1}{2}} & \eta xy^{\frac{1}{2}}\\ 0 & y^{\frac{1}{2}} \\ \end{array} \right)\Bigg)dxdy. \end{eqnarray} Since $\phi_{\infty}$ is a Schwartz function, for any $c>0$ there exists a constant $M>0$ such that \begin{center} $ \phi_{\infty}\bigg(\left( \begin{array}{cc} \eta y^{\frac{-1}{2}} & \eta xy^{\frac{1}{2}}\\ 0 & y^{\frac{1}{2}} \\ \end{array} \right)\bigg) < \frac{c}{(1+(\eta y^{\frac{-1}{2}})^{2}(\eta xy^{\frac{1}{2}})^{2})(1+(y^{\frac{1}{2}})^{4})}=\frac{c}{(1+y^{2})(1+\eta ^{4}x^{2})}$ \end{center} when $| \eta y^{\frac{-1}{2}}| + | \eta xy^{\frac{1}{2}}| +| y^{\frac{1}{2}}| > M.$ Then \begin{align} \begin{split} | I_{\eta, \infty}(e, \varphi_{\infty})| &\leq 2\pi \int_{\mathbb{R}\times \mathbb{R}_{+}^{*}} \frac{c}{(1+y^{2})(1+\eta^{4}x^{2})} dxdy \\ &+ 2\pi \int_{B}\phi_{\infty}\Bigg(\left( \begin{array}{cc} \eta y^{\frac{-1}{2}} & \eta xy^{\frac{1}{2}}\\ 0 & y^{\frac{1}{2}} \\ \end{array} \right)\Bigg)dxdy.\nonumber \end{split} \end{align} Here $$B=\{ (x, y) \in\mathbb{R}\times \mathbb{R}_{+}^{*} : | \eta y^{\frac{-1}{2}}| + | \eta xy^{\frac{1}{2}}| +| y^{\frac{1}{2}}| \leq M \},$$ which is contained in a compact set. Thus the integral $I_{\eta, \infty}$ is convergent, and hence the integral $I_{\eta}(e, \varphi)$ is absolutely convergent by equation (\ref{finite part}). \end{proof} \section{Weak Siegel-Weil formula}\label{sec:mainresult} In this section, we prove Theorems \ref{weakformula} and \ref{result}. \subsection{Eisenstein series} For a finite place $p$, we let $K_{p}=\operatorname{SL}_2(\mathbb{Z}_{p})$, a maximal compact subgroup of $G_{p}=G(\mathbb{Q}_{p})$. At the infinite place, we set $K_{\infty}= \operatorname{SO}(2)(\mathbb{R})$. We let $K=\prod_{p}K_{p}$ be the maximal compact subgroup of $G(\mathbb{A})$, and we have $G(\mathbb{A})=P(\mathbb{A})K$, where $P=N M$ is the maximal parabolic subgroup of $G$ (the Siegel parabolic). The induced representation $I(s, \chi_V)= \operatorname{Ind}_{P}^{G}( | | ^{s}\chi_V)$ of $G(\mathbb{A})$ consists of smooth functions $\Phi(g, s)$ on $G(\mathbb{A})$ such that \begin{equation} \label{eq:1.1} \Phi(nm(a)g, s) = \chi_{V}(a)|a|^{s+1}\Phi(g, s). \end{equation} The Eisenstein series is defined by \begin{equation} E(g, s, \Phi)= \sum_{\gamma\in P \setminus G} \Phi(\gamma g, s), \end{equation} and is absolutely convergent for $\Re(s)> 1$. There is a $G(\mathbb{A})$-intertwining map \begin{equation} \lambda=\lambda_{V} : S(V(\mathbb{A})) \rightarrow I(s_{0}, \chi_{V}), \quad \lambda(\varphi)(g)= \omega(g)\varphi(0). \end{equation} There exists a section $\Phi \in I(s, \chi_{V})$ such that $\lambda(\varphi)=\Phi(g, s_0)$, and one can write \begin{equation} E(g, s, \varphi)= E(g, s, \Phi). \end{equation}
Now we consider the case $V=(M_{2}(\mathbb{Q}), \det)$. By \cite[Lemma 1.3]{regularized} and \cite[Theorem 4.12]{regularized}, we know that $E(g, s, \Phi)$ has at most a simple pole at $s_{0}$. So one has the Laurent expansion \begin{equation}\label{Laurent} E(g, s, \Phi)=\frac{A_{-1}(g, \Phi)}{s-s_{0}}+A_{0}(g, \Phi)+O(s-s_{0}). \end{equation} Assume that $\Phi(s)=\bigotimes_{p}\Phi_{p}(s)$ is a factorizable standard section of $I(s, \chi)$. For $\eta \neq 0$ and $\Re(s)>1$, the $\eta$-th Fourier coefficient of $E(g, s, \Phi)$ is \begin{eqnarray} E_{\eta}(g, s, \Phi)&=&\int_{\mathbb{A}/\mathbb{Q}}E(n(b)g, s, \Phi)\psi(-b\eta)db\nonumber\\ &=&\prod_{p}W_{\eta, p}(g, s, \Phi_{p}), \end{eqnarray} where \begin{eqnarray}\label{whittakerfun} W_{\eta, p}(g, s, \Phi_{p})=\int_{\mathbb{Q}_{p}}\Phi_{p}(w n(b)g, s)\psi_{p}(-b\eta)db. \end{eqnarray} The integral $W_{\eta, p}(g, s, \Phi_{p})$ extends to an entire function of $s$ (\cite{Karel}, \cite{Wallach}). For any fixed $s$, it defines an element of the one-dimensional space $$\operatorname{Hom}_{G_{p}}(I_{p}(\chi_V,s), \operatorname{Ind}_{N_{p}}^{G_{p}}(\psi_{\eta})).$$ From \cite[Section 2]{regularized}, one has: \begin{lemma}\label{eiscoeff} For every $\eta \in \mathbb{Q}^{\ast}$, $E_{\eta}(g, s, \Phi)$ is holomorphic at $s_{0}$. \end{lemma} \subsection{Weak Siegel-Weil formula}
If $G$ is an algebraic group over a number field $F$, then $G_{\mathbb{A}_{F}}$ is a locally compact topological group. If $\omega$ is a gauge form on $G$ and $(\lambda_{\mathcal{P}})$ \cite[Chapter 2]{Weil1} is a set of convergence factors for $G$, where $\mathcal{P}$ runs over the places of $F$, then the Tamagawa measure $\Omega=(\omega, (\lambda_{\mathcal{P}}))$ is a left-invariant measure on $G_{\mathbb{A}_{F}}$. This measure is independent of the choice of $\omega$, and is called the Tamagawa measure derived from the convergence factors $(\lambda_{\mathcal{P}})$. If the convergence factors are $(1)$, the measure $\Omega=(\omega, (1))$ is called the Tamagawa measure of $G$, and the number \begin{equation}\tau(G)=\int_{G_{\mathbb{A}_{F}}/ G_{F}}(\omega, (1)) \end{equation} is the Tamagawa number of $G$. Now we consider the fixed space $V=M_{2}(\mathbb{Q})$. We choose the Tamagawa measure $dh^{\prime}$ on $O(V)(\mathbb{A})$, which is $2$ times $dh$. The gauge form $\omega= dx_{1}\wedge dx_{2}\wedge dx_{3}\wedge dx_{4}$ on $V$ determines a measure $\omega _{p}$ on $V_{p}$, which is self-dual for the pairing $[x, y]= \psi_{p}((x, y))$, where $( , )$ is the bilinear form associated to $Q$. On the other hand, the gauge form $\alpha= d\eta$ on $\mathbb{Q}$ determines a measure $ \alpha_{p}= d_{p}\eta$, which is self-dual with respect to the pairing $[b, \eta]= \psi_{p}(b\eta)$. We can split $\omega = \omega_{\eta} \wedge \alpha$ \cite[Section 2.5]{Hida}, where $\omega_{\eta}$ is the gauge form on $V[\eta]$. Since \begin{center} $O(V)_{x_{\eta}}\backslash O(V) \cong V[\eta]$, \end{center} $\omega_{\eta}$ is also the gauge form on $O(V)_{x_{\eta}}\backslash O(V)$. Then $dh^{\prime}$ induces the Tamagawa measure $\omega_{\eta}$ on $ O(V)_{x_{\eta}}(\mathbb{A})\backslash O(V)(\mathbb{A})$ for $x_{\eta}\in V$ with $Q(x_{\eta})= \eta$ (by the uniqueness of the Tamagawa measure).
We now compute the Fourier coefficient of the theta integral \begin{eqnarray}\label{thetacoeff} I_{\eta}(g, \varphi)&=& \int_{[H]}\theta_{\eta}(g, h, \varphi)dh\nonumber\\ &= &\int_{O(V)(\mathbb{Q})_{x_{\eta}}\setminus O(V)(\mathbb{A})} \omega(g)\varphi(h^{-1}x_{\eta})dh \nonumber\\ & =& \frac{1}{2} \tau(O(V)_{x_{\eta}}) \int_{ O(V)_{x_{\eta}}(\mathbb{A})\backslash O(V)(\mathbb{A}) }\omega(g) \varphi (h^{-1}x_{\eta}) \omega_{\eta} \nonumber\\ & =&\prod_{p} O_{\eta, p}(\omega(g_{p}) \varphi_{p}), \end{eqnarray} where $\tau(O(V)_{x_{\eta}})$ is the Tamagawa number of $O(V)_{x_{\eta}}$; Weil proved that $\tau(O(V)_{x_{\eta}}) = 2$. In the last step, we have assumed that $\varphi$ is factorizable and written \begin{equation} O_{\eta, p}(\varphi_{p})= \int_{O(V)_{x_{\eta}}(\mathbb{Q}_{p})\setminus O(V)(\mathbb{Q}_{p})} \varphi_{p}(h^{-1} x_{\eta}) \omega_{\eta, p},\nonumber \end{equation} $p \leq \infty,$ for the local orbital integral. When $\varphi$ is factorizable, the associated $\Phi(s)=\bigotimes_{p} \Phi_{p}(s)$ is also factorizable, where $\Phi_{p}(s) \in I_{p}(s, \chi)$. For $\eta \in \mathbb{Q}^{\times}$ and $\Re(s)> 1$, the $\eta$-th Fourier coefficient of $E(g, s, \varphi)$ is \begin{equation} E_{\eta}(g, s, \varphi)= \int_{\mathbb{Q}\setminus \mathbb{A}} E(n(b)g, s, \varphi) \psi_{-\eta}(b)db = \prod_{p} W_{\eta, p}(g_p, s, \varphi_{p})\nonumber \end{equation} with \begin{equation} W_{\eta, p}(g_p, s, \varphi_{p}) = \int_{\mathbb{Q}_{p}} \Phi_{p}( w n(b)g_p, s) \psi_{-\eta}(b)db.\nonumber \end{equation} $O_{\eta, p}(\omega_p(g_p)\varphi_{p})$ and $W_{\eta, p}(g_p, s_0, \varphi_{p})$ are two distributions. By \cite[Proposition 4.2]{Rallis}, \begin{equation} E_{\eta}(g, s_{0}, \varphi)=cI_{\eta}(g, \varphi), \end{equation} where $c$ is a constant. Combining Proposition \ref{fourierintegral} with the following result, we can prove Theorem \ref{weakformula}. \begin{theorem}\label{mainresult} For any $\eta \in \mathbb{Q}^{*}$, we have \begin{equation}E_\eta(g, s_{0}, \varphi)=I_{\eta}(g, \varphi) \end{equation} for all $\varphi \in S(V(\mathbb{A}))$ and $g \in \operatorname{SL}_{2}(\mathbb{A}).$ \end{theorem} \begin{proof} It suffices to prove that the constant $c=1$. We may choose functions $\varphi$ such that $\varphi_{\infty}$ has compact support. Let $\lambda(\varphi_{p})=\Phi_{p}(s_0)$; then \begin{eqnarray}\label{eisenstein coeff} &&E_\eta(e, s_{0}, \varphi)=\prod_{p \leq \infty}W_{\eta, p}(e, s_0, \Phi_{p})\nonumber\\ &&= \prod_{p \leq\infty}\int _{\mathbb{Q}_{p}} \int_{V(\mathbb{Q}_{p})} \psi_{p}(b Q(y)) \varphi_{p}(y) \omega_{ p} \cdot \psi_{p}(-\eta b) d_{p}b \nonumber\\ &&= \prod_{p \leq\infty} \int _{\mathbb{Q}_{p}} \int_{\mathbb{Q}_{p}} \psi_{p}(b u) M_{\varphi_{p}}(u) d_{p}u \cdot \psi_{p}(-\eta b) d_{p}b \nonumber\\ &&=\prod_{p \leq\infty} \int_{\mathbb{Q}_{p}} \widehat{M_{\varphi_{p}}}(b) \psi_{p}(- \eta b) d_{p}b. \end{eqnarray} Here \begin{center} $ M: S(V(\mathbb{Q}_{p}))\rightarrow S(\mathbb{Q}_{p}), \varphi_{p} \mapsto M_{\varphi_{p}}$ \end{center} is the map defined by integration over the fibers with respect to the measure determined by the restriction of the gauge form $\omega_{\eta}$. For the archimedean place, the function $\widehat{M_{\varphi_{\infty}}}$ has compact support. For the finite places, the function $\widehat{M_{\varphi_{p}}}$ lies in the Schwartz space $S(\mathbb{Q}_{p})$.
So one has \begin{eqnarray} \int_{\mathbb{Q}_{p}} \widehat{M_{\varphi_{p}}}(b) \psi_{p}(- \eta b) d_{p}b = M_{\varphi_{p}}(\eta)= O_{\eta, p}(\varphi_{p}).\nonumber \end{eqnarray} In the above equation, we used the same measure given by $ \omega_{\eta}$ on $$O(V)_{x_{\eta}}(\mathbb{Q}_{p}) \setminus O(V)(\mathbb{Q}_{p}) \cong V_{p}[\eta].$$ Combining this with equations (\ref{thetacoeff}) and (\ref{eisenstein coeff}), one obtains \begin{eqnarray} E_{\eta}( e, s_{0}, \varphi) = \prod_{p < \infty} O_{\eta, p}(\varphi_{p}) O_{\eta, \infty}(\varphi_{\infty}) = I_{\eta}(e, \varphi).\nonumber \end{eqnarray} So the constant $c=1$, and the result follows. \end{proof} \subsection{Relations between the theta integral and the Eisenstein series} Recall that \begin{equation} E(g, s, \Phi)=\frac{A_{-1}(g, \Phi)}{s-s_{0}}+A_{0}(g, \Phi)+O(s-s_{0}).\nonumber \end{equation} Define \begin{equation} \Phi_{1}(g, s)=\int_{\mathbb{A}}\Phi(wn(b)g, s)db, \end{equation} and its Laurent expansion is given by \begin{equation} \label{constant residue} \Phi_{1}(g, s)=\frac{A_{-1}(g, \Phi)}{s-s_{0}}+B_{0}(g, \Phi)+O(s-s_{0}). \end{equation} We identify the notation $\varphi$ with $\Phi$ if $\Phi(g, s_0)=\lambda(\varphi)$. \begin{comment} The followings are main results of this subsection, which will be proved later. \begin{theorem}\label{main result} Let the notations be as above, the infinite part $\varphi_{\infty}$ is a polynomial times a Gaussian, then \begin{equation} \tilde{I}(g, \varphi)=A_{0}(g, \varphi)-B_{0}(g, \varphi),\nonumber \end{equation} $g\in P(\mathbb{A})$. Moreover if $A_{-1}(g, \varphi)=B_{0}(g, \varphi)=0$, then $$\tilde{I}(g, \varphi)=E(g, s_{0}, \varphi).$$ \end{theorem} \begin{proposition}\label{constant of residue} $A_{-1}(g, \Phi)$ is a constant function on $P(\mathbb{A})$. \end{proposition} \begin{proof} From the equality (\ref{constant term}), we know that $E_{P}(n(b)g, \varphi, s)=E_{P}(g, \varphi, s)$. So one obtains \begin{equation}\label{n(b)invariant} A_{-1}(n(b)g, \Phi)=A_{-1}(g, \Phi). \end{equation} Since \begin{equation}\label{whittintegral} \Phi_{1}(m(a)g, s)=\mid a\mid^{1-s}\Phi_{1}(g, s), \end{equation} one has \begin{equation}\label{m(a)invariant} A_{-1}(m(a)g, \Phi)=A_{-1}(g, \Phi). \end{equation} From it and equation (\ref{n(b)invariant}), the result follows. \end{proof} The $\eta$-th Fourier coefficient $E_{\eta}(g, s, \varphi)$ of the Eisenstein series $E(g, s, \varphi)$ is holomorphic at $s_{0}$ (Lemma \ref{eiscoeff}), and \begin{equation}E_{\eta}(g, s_{0}, \varphi)=A_{0, \eta}(g, \varphi), \end{equation} where $\eta \in \mathbb{Q}^{\ast}$. From Theorem \ref{mainresult}, we have \begin{equation}\label{Fourier coefficient} \tilde{I}_{\eta}(g, \varphi)=A_{0, \eta}(g, \varphi), \end{equation} for $\eta \in \mathbb{Q}^{\ast}$. \end{comment} Now we consider the constant term, i.e., the coefficient with $\eta=0$. The constant term of the Eisenstein series is \begin{eqnarray}\label{constant term} E_{P}(g, s, \Phi)&=&\int_{\mathbb{A}/\mathbb{Q}}E(n(b)g, s, \Phi) db\\ &=&\Phi(g, s)+\Phi_{1}(g, s).\nonumber \end{eqnarray} \begin{proof}[\bf{Proof of Theorem \ref{result}}] Combining this with Theorem \ref{integral conv}, it suffices to prove $\tilde{I}(g, \varphi)=E(g, s_{0}, \varphi).$ First, we consider the constant terms of both sides \begin{equation} \tilde{I}_{P}(g, \varphi)= \int_{[H]}\theta_{0}(g, h, \varphi) dh=\omega(g)\varphi(0).
\end{equation} Since $\Phi_{1}(g, s_0)=0$, by equation (\ref{constant term}) one obtains $$E_{P}(g, s_0, \varphi)=\Phi(g, s_0)=\omega(g)\varphi(0)=\tilde{I}_{P}(g, \varphi).$$ Combining this with Theorem \ref{mainresult}, we know that all the Fourier coefficients are equal. By the next lemma, one obtains the result. \end{proof} \begin{lemma} Let $f(g)=\tilde{I}(g, \varphi)-E(g, s_{0}, \varphi)$; then $f=0$. \end{lemma} \begin{proof} Notice that $f$ is a function all of whose Fourier coefficients are $0$. Set \begin{center} $F(x)=f\bigg(\left( \begin{array}{cc} 1 & x \\ & 1 \\ \end{array} \right) g\bigg)$, $x\in\mathbb{A}$, \end{center} which is continuous and satisfies $F(x+a)=F(x)$ for any $a\in \mathbb{Q}$. Thus it may be regarded as a function on the compact group $\mathbb{A}/\mathbb{Q}$. It therefore has a Fourier expansion in terms of the characters of $\mathbb{A}/\mathbb{Q}$. Every character has the form $x\mapsto \psi(\eta x)$, where $\eta \in \mathbb{Q}$. Thus we have $$ F(x)=\sum_{\eta \in \mathbb{Q}}F_{\eta}\psi(\eta x),$$ where $F_{\eta}=\int_{\mathbb{A}/\mathbb{Q}}F(y)\psi(-\eta y)dy=f_{\eta}=0$. Then $F(x)=0$, hence $f(g)=F(0)=0$. \end{proof} \begin{comment} \begin{lemma}\label{constantlemma} \begin{equation} \tilde{I}_{P}(g, \varphi)=A_{0, P}(g, \varphi)-B_{0}(g, \varphi), \end{equation} where $B_{0}(g, \varphi)$ is defined in (\ref{constant residue}) and $A_{0, P}=\int_{\mathbb{Q}\setminus \mathbb{A}} A_0(n(b)g, \varphi) db$. \end{lemma} \begin{proof} One has $$\big(E_{P}(g, s, \varphi)- \frac{A_{-1}(g, \varphi)}{s-1}\big)\mid _{s=s_{0}}= A_{0, P}(g, \varphi).$$ On the other hand $E_{P}=\Phi_{0}+\Phi_{1}$, one obtains \begin{eqnarray} \label{constant}\tilde{I}_{P}(g, \varphi)-A_{0, P}(g, \varphi)&= &\tilde{I}_{P}(g, \varphi)-\big(E_{P}(g, s, \varphi)- \frac{A_{-1}(g, \varphi)}{s-1}\big)\mid _{s=s_{0}}\nonumber \\ &=& \tilde{I}_{P}(g, \varphi)-\Phi(g, s_{0})-B_{0}(g, \varphi)\nonumber\\ &=&-B_{0}(g, \varphi) , \end{eqnarray} \end{proof} Note that $B_{0}(g, \varphi)$ is $N(\mathbb{A})$-invariant. \begin{lemma}\label{residue invariant} $B_{0}(g, \Phi)$ is $N(\mathbb{A})$-invariant. \end{lemma} \begin{proof} For the first part, from the definition (\ref{constant residue}), \begin{center} $\Phi_{1}(g, s)=\frac{A_{-1}(g, \Phi)}{s-s_{0}}+B_{0}(g, \Phi)+O(s-s_{0}).$ \end{center} Since $\Phi_{1}(g, s)$ is $N(\mathbb{A})$-invariant, $B_{0}(g, \Phi)$ follows. \end{proof} {\bf Proof of Theorem \ref{main result}}: \begin{proof} We claim that \begin{equation}\label{main equality} \tilde{I}(g, \varphi)=A_{0}(g, \varphi)-B_{0}(g, \varphi),\end{equation} since all the Fourier coefficients of both side are the same which we will prove. For $\eta \in \mathbb{Q}^{\ast}$, since $B_{0}(g, \varphi)$ is $N(\mathbb{A})$-invariant, $$B_{0, \eta}(g, \varphi)=\int_{\mathbb{Q}\setminus \mathbb{A}}B_{0, \eta}(n(a)g, \varphi)\psi(\eta a)da=0.$$ From the equation (\ref{Fourier coefficient}), the $\eta$-th Fourier coefficient $$\tilde{I}_{\eta}(g, \varphi)=A_{0, \eta}(g, \varphi)-B_{0, \eta}(g, \varphi).$$ From the Lemma \ref{constantlemma}, the constant terms of the both sides of (\ref{main equality}) are equal. We will prove in the next lemma that all coefficient are equal implies what we want to show (\ref{main equality}), and other results follow.
\end{proof} \end{comment} If $\varphi=\otimes_p\varphi_p \in S(V(\mathbb{A}))$ and $\varphi^{\prime}=\otimes_p\varphi_p^{\prime} \in S(V^{\prime}(\mathbb{A}))$ is a Kudla matching pair, i.e., $\lambda_V(\varphi)=\lambda_{V^{\prime}}(\varphi^{\prime})$, then $E(g, s, \varphi)=E(g, s, \varphi^{\prime})$. For $\tau=u+iv \in \H$, let $g_{\tau}=\kzxz{1}{u}{}{1}\kzxz{v^{\frac{1}{2}}}{}{}{v^{-\frac{1}{2}}}$; then one has: \begin{proposition}\label{matching theorem} Let the notations be as above. One has $$\tilde{I}(g_{\tau}, \varphi)=E(g_{\tau}, s_0, \varphi)=E(g_{\tau}, s_0, \varphi^{\prime})=I(g_{\tau},\varphi^{\prime} ),$$ where $\varphi_{\infty}=\varphi_{\infty}^{sp}$, and $\varphi_{\infty}^{\prime}$ is taken to be $\varphi_{\infty}^{ra}$ or $\varphi_{\infty}^{sp}$ according as $V^{\prime}$ is definite or indefinite; here $\varphi_{\infty}^{ra}$ and $\varphi_{\infty}^{sp}$ are defined in Section \ref{application}. \end{proposition} \begin{proof} Let $(\varphi, \varphi^{\prime})$ be a matching pair. We know that \begin{equation} E(g, s, \varphi)= E(g, s, \varphi^{\prime}), \end{equation} for all $g \in \operatorname{SL}_{2}(\mathbb{A})$. Comparing the constant term $$E_P(g, s_0, \varphi^{\prime})= \Phi(g, s_0)+ \Phi_{1}(g, s_0)=\omega(g)\varphi^{\prime}(0)+ \Phi_1(g, s_0)$$ with $$I_P(g, \varphi^{\prime})=\omega(g)\varphi^{\prime}(0),$$ one has $ \Phi_1(g, s_0)=0$. From Theorem \ref{result} and the Siegel-Weil formula, we obtain \begin{center} $\tilde{I}(g_{\tau}, \varphi)=E(g_{\tau}, s_{0}, \varphi)= E(g_{\tau}, s_{0}, \varphi^{\prime})= I(g_{\tau}, \varphi^{\prime})$. \end{center} \end{proof} \section{Arithmetic geometry}\label{application} Consider an orthogonal decomposition \begin{equation} \label{eq:spacedecomposition} V = V^+ \oplus V^-, \quad x = x^+ + x^-, \end{equation} with $V^+$ of signature $(2,0)$ and $V^-$ of signature $(0, 2)$. We define \begin{equation} \varphi^{sp}_\infty (x) = (4\pi (x^+, x^+) -1) e^{ -\pi (x^+, x^+) + \pi (x^-, x^-)}. \end{equation} For any $x=\left( \begin{array}{cc} x_{1} & x_{2} \\ x_{3}& x_{4}\\ \end{array} \right) \in V$, let $$x^{+}=\left( \begin{array}{cc} \frac{x_{1}+x_{4}}2 & \frac{x_{2}-x_{3}}2 \\ \frac{x_{3}-x_{2}}2& \frac{x_{1}+x_{4}}2\\ \end{array} \right), \quad x^{-}=\left( \begin{array}{cc} \frac{x_{1}-x_{4}}2 & \frac{x_{2}+x_{3}}2 \\ \frac{x_{3}+x_{2}}2& \frac{x_{4}-x_{1}}2\\ \end{array} \right),$$ and it is easy to check that this standard decomposition is exactly the orthogonal decomposition (\ref{eq:spacedecomposition}). Then one has \begin{equation}\label{equsplit} \varphi^{sp}_\infty (x) =(2\pi ((x_{1}+x_{4})^{2}+(x_{2}-x_{3})^{2})-1)e^{-\pi(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2})}. \end{equation} Let $(V^{\prime}, Q^{\prime})$ be any other quaternion algebra over $\mathbb{Q}$, which is anisotropic. Let \begin{equation}\label{Gaussian} \varphi^{ra}_{\infty}(x) = e^{-2 \pi Q^{\prime}(x)} \end{equation} be the Gaussian of the space $V^{\prime}$. From \cite[Section 3.2]{DuYang} (Kudla used the notation $\tilde\phi(x, z)$), we know that $(\varphi^{sp}_{\infty}, \varphi^{ra}_{\infty})$ is a matching pair such that \begin{equation}\lambda_V (\varphi^{sp}_{\infty})(g)=\lambda_{V^\prime}(\varphi^{ra}_{\infty})(g)=\Phi_\infty^2(g, 1),\end{equation} where $\Phi_\infty^{2}(gk_\theta,s)=e^{2\theta i}\Phi_\infty^{2}(g,s)$ for $k_\theta=\kzxz{\cos\theta}{\sin\theta}{-\sin\theta}{\cos\theta}$ and $\Phi_\infty^{2}(1,s)=1$. \subsection{Definite quaternions and representation numbers} \label{sect:definite} We assume that $D>0$ has an odd number of prime factors, and let $B=B(D)$ be the associated definite quaternion algebra.
In this case, $V^\prime= (B, \det)$ is positive definite. For any $\varphi_f^{\prime} \in S(V^{\prime}(\mathbb{A}_f))$, the theta kernel $$ \theta(\tau, h, \varphi_f^{\prime} \otimes\varphi^{ra}_{\infty}) = v^{-1} \theta(g_\tau, h, \varphi_f^{\prime}\otimes \varphi^{ra}_{\infty}) $$ is a holomorphic modular form of weight $2$ for some congruence subgroup. Here $g_\tau = n(u) m(\sqrt v)$ for $\tau =u + i v \in \mathbb H$. So the integral $$ I(\tau, \varphi_f^{\prime}\otimes\varphi^{ra}_{\infty}): = v^{-1} I(g_\tau, \varphi_f^{\prime}\otimes\varphi^{ra}_{\infty}) $$ is also a modular form of weight $2$. For an even integral lattice $L$ in $V^{\prime}$, we let \begin{equation} \label{eq:new4.1} \theta(\tau, L) = \theta(\tau, \operatorname{char}(\widehat L) \otimes\varphi^{ra}_{\infty}), \quad I(\tau, L)=I(\tau, \operatorname{char}(\widehat L)\otimes \varphi^{ra}_{\infty}), \end{equation} where $\widehat L=L\otimes \widehat{\mathbb{Z}}$. Two lattices $L_1$ and $L_2$ in $V^{\prime}$ are equivalent if there is $h \in O(V^{\prime})(\mathbb{Q})$ such that $hL_1 =L_2$. Two lattices $L_1$ and $L_2$ are in the same genus if they are equivalent locally everywhere, i.e., there is $h \in O(V^{\prime}) (\mathbb{A})$ such that $h L_1= L_2$. Notice that $O(V^{\prime})(\mathbb{A})$ acts on the set of lattices as follows: $h L = (h_f \widehat L)\cap V^{\prime}$, where $h_f$ is the finite part of $h=h_f h_\infty$. Let $\operatorname{gen}(L)$ be the genus of $L$---the set of equivalence classes of lattices in the same genus as $L$. Then one has $$ O(V^{\prime})(\mathbb{Q}) \backslash O(V^{\prime})(\mathbb{A})/K(L) O(V^{\prime})(\mathbb{R}) \cong \operatorname{gen}(L), \quad [h] \mapsto hL, $$ where $K(L)$ is the stabilizer of $\widehat L$ in $O(V^{\prime})(\mathbb{A})$. \begin{lemma} \cite[Proposition 4.1]{DuYang}\label{Duyanglemma} Let $$ r_L(m) =|\{ x \in L:\, Q(x) =m\}|,$$ $$\quad r_{\operatorname{gen}(L)}(m) = \left(\sum_{L' \in \operatorname{gen}(L)} \frac{1}{|O(L')|}\right)^{-1} \sum_{L' \in \operatorname{gen}(L)} \frac{r_{L'}(m)}{|O(L')|}, $$ where $O(L^{\prime})$ is the stabilizer of $L^{\prime}$ in $O(V^{\prime})$. Then one has \begin{align*} I(\tau, L)=& \sum_{m=0}^\infty r_{\operatorname{gen}(L)}(m) q^m, \end{align*} where $q =e(\tau)$. \end{lemma} In this paper, we choose $L=(\mathcal O_D(N), \det)$ as an even integral lattice in $V^{\prime}$; then $r_L(m)=r_{D,N}(m)$. \subsection{Indefinite quaternions and Shimura curves} In this subsection, we assume that $D>0$ has an even number of prime factors, so that $B=B(D)$ is an indefinite quaternion algebra. In this case, $V^\prime= (B, \det)$ is of signature $(2, 2)$. When $D>1$, according to \cite[Theorem 4.23]{KuIntegral}, the theta integral $I(g, \varphi)$ is a generating function of degrees of some divisors with respect to the tautological line bundle over the Shimura curve associated to $V^\prime$. In our case, the divisors can be identified with Hecke correspondences on a Shimura curve. When $D=1$, the case is similar. For a positive integer $m$, let $T_{D, N}(m)$ be the Hecke correspondence on the Shimura curve $X_0^D(N)=\Gamma_0^D(N) \backslash \mathfrak{H}$ which is defined in Subsection \ref{quaternion}. The normalized degree is defined by \begin{equation} r_{D, N}(m)= -\frac{2}{ \operatorname{vol}(X_0^D(N), \Omega_0)}\deg T_{D, N}(m). \end{equation} We have the following result.
\begin{lemma} \label{DuYang2} For $\varphi_f =\operatorname{char}(\widehat{\mathcal O_D(N)})$, when $D>1$ (\cite[Theorem 5.3]{DuYang}) one has $$ I(\tau, \varphi_f \otimes\varphi^{sp}_{\infty} ) = v^{-1} I(g_\tau, \varphi_f\otimes\varphi^{sp}_{\infty} ) = \sum_{m=0}^\infty r_{D, N}(m) q^m, $$ and when $D=1$, $$ \tilde{I}(\tau, \varphi_f \otimes\varphi^{sp}_{\infty} ) = v^{-1} I(g_\tau, \varphi_f\otimes\varphi^{sp}_{\infty} ) = \sum_{m=0}^\infty r_{D, N}(m) q^m, $$ where $r_{D, N}(0) =1$. \end{lemma} \begin{proof} The case $D=1$ follows from the same proof as in \cite[Theorem 5.3]{DuYang}, and we leave it to the reader. \begin{comment} Let $\Omega_0 = \frac{1}{2 \pi} y^{-2} dx\wedge dy$ be the differential on $X_0^D(N)$, and let $\pi_1$ and $\pi_2$ be two natural projections of $X_K =X_0^D(N) \times X_0^D(N)$ onto $X_0^D(N)$. We assume $$ K = \{ (k_1, k_2) \in \hat{\mathcal O}_D(N)^\times \times \hat{\mathcal O}_D(N)^\times:\, \det k_1= \det k_2 \} \subset Gspin(V)(\mathbb{A}_f) $$ which preserves the lattice $L=\mathcal O_D(N)$. Write $$ I(\tau, \varphi_f \varphi_\infty^{sp} ) = \sum_{m=0}^\infty c(m) q^m. $$ By \cite[Section 4.8]{KuIntegral}, one has $c(0) =1$ and for $m >0$ $$ c(m)=(\operatorname{vol}(X_K, \Omega^2))^{-1} \int_{ Z(m, \varphi_f) } \Omega. $$ Clearly, $$ \operatorname{vol}(X_K, \Omega^2) = \frac{1}{2} \frac{1}{4\pi^2} \int_{X_0^D(N) \times X_0^D(N)} \frac{dx_1\wedge dy_1}{y_1^2} \wedge \frac{dx_2\wedge dy_2}{y_2^2} =\frac{1}2 \operatorname{vol}(X_0^D(N), \Omega_0)^2. $$ On the other hand, $\Omega =-\frac{1}2( \pi_1^*(\Omega_0) + \pi_2^*(\Omega_0)$), then \begin{align*} \int_{ Z(m, \varphi_f) } \Omega&=-\frac{1}2 \int_{T_{D, N}(m)}( \pi_1^*(\Omega_0) + \pi_2^*(\Omega_0) \\ &= -\int_{T_{D, N}(m)} \pi_1^*(\Omega_0) \\ &= - \deg T_{D, N}(m) \int_{X_0^D(N)} \Omega_0. \end{align*} So $c(m) = r_{D, N}'(m)$ as claimed. \end{comment} \end{proof} \subsection{Relations to other quaternions} In this subsection, let $V^{\prime}$ be any division quaternion algebra over $\mathbb{Q}$ of discriminant $D>1$. We now prove Theorem \ref{rDN}, which can be used to compute the numbers $r_{D, N}(m)$. \begin{proof}[\bf Proof of Theorem \ref{rDN}] Recall the definition in Section \ref{sect:Preli}, $$\varphi^{\prime}=\begin{cases} \operatorname{char}(\widehat{\mathcal{O}_D(N)}) \otimes\varphi^{ra}_{\infty} \in S(V^{\prime}(\mathbb{A})) & \hbox{if $V^{\prime}$ is definite},\\ \operatorname{char}(\widehat{\mathcal{O}_D(N)}) \otimes\varphi^{sp}_{\infty} \in S(V^{\prime}(\mathbb{A})) & \hbox{if $V^{\prime}$ is indefinite}. \end{cases}$$ By Proposition \ref{globalmatching}, we know that $(\varphi^{\prime},\varphi_{D}^{N} )$ is a matching pair, where $\varphi_{D}^{N}=\otimes_{p\leq \infty}\varphi_{p} \in S(V(\mathbb{A}))$ is constructed in that proposition. From Kudla's matching and the Siegel-Weil formula, one has $$I(g_{\tau}, \varphi^{\prime})=E(g_\tau, s_0, \varphi_{D}^{N}).$$ Comparing the Fourier coefficients of both sides, we have \begin{eqnarray} &&r_{D,N}(m)=q^{-m}v^{-1}E_m(g_\tau, s_0, \varphi_{D}^{N})\\ &=& v^{-1}q^{-m}\prod_{p< \infty}W_{m, p}(e, 1, \Phi_{p})\times W_{m, \infty}(g_{\tau}, 1, \Phi_{\infty}),\nonumber \end{eqnarray} where $\Phi_{p}(g_p, 1)=\lambda_{p}(\varphi_{p})(g_p)$. Here the local Whittaker function $$W_{m, p}(g, 1, \Phi_{p})=\int_{\mathbb{Q}_{p}} \Phi_{p}( wn(b)g, 1) \psi_{p} (-m b)db$$ is defined by equation (\ref{whittakerfun}).
When $p=\infty$, it is known by \cite[Proposition 15.1]{KRYComp} that \begin{equation}\label{infty} W_{m, \infty}(g_{\tau}, 1, \Phi_{\infty})=-4 \pi^{2}m q^m v. \end{equation} When $p$ is finite, $W_{m, p}(e, 1, \Phi_{p})$ is given in Lemma \ref{densityfinite}. Then one has \begin{eqnarray} &&r_{D,N}(m)=-4 \pi^{2}m \prod_{p< \infty}W_{m, p}(e, 1, \Phi_{p})\\ &=&(-1)^{k+1}24m\prod_{p \nmid ND}\frac{p-p^{-\operatorname{ord}_pm}}{p-1}\prod_{p \mid D}\frac{1}{(p-1)p^{\operatorname{ord}_pm}}\prod_{p \mid N}\nonumber\\ &&\times \frac{2p-p^{-(\operatorname{ord}_pm-1)}-p^{-\operatorname{ord}_pm}}{p^{2}-1}.\nonumber \end{eqnarray} \end{proof} \begin{comment} \begin{remark} By Proposition \ref{matching theorem}, one has \begin{eqnarray} r_{D,N}(m)&=&q^{-m}v^{-1}\int_{[H]} \theta_{m}(g_{\tau}, h, \varphi_{D}^{N})dh\nonumber\\ &=& q^{-m}v^{-1}\int_{\operatorname{SL}_{2}(\mathbb{A})} \omega(g_{\tau}) \varphi_{D}^{N}(x_{m}h)dh\nonumber\\ &=&q^{-m}\int_{\operatorname{SL}_{2}(\mathbb{A}_{f})} \varphi_{D}^{N}(x_{m}h_f)dh_f \int_{\operatorname{SL}_{2}(\mathbb{R})}\psi_{\infty}(um)\varphi_{\infty}^{sp}(\sqrt{v}x_mh_\infty)dh_{\infty} \nonumber\\ &=&\int_{\operatorname{SL}_{2}(\mathbb{A}_{f})} \varphi_{D}^{N}(x_{m}h_f)dh_f \int_{\operatorname{SL}_{2}(\mathbb{R})}\phi_{\infty}(\sqrt{v}x_mh_\infty)dh_{\infty} ,\nonumber \end{eqnarray} where $\phi_{\infty}(x)=e^{2\pi Q(x)}\varphi_{\infty}^{sp}(x)$ and $\tau=u+iv$. The second step follows from the equation (\ref{coeff comp}). \end{remark} \end{comment} \begin{lemma}\label{densityfinite} Assume that $\varphi_{D}^{N}=\otimes_{p\leq \infty}\varphi_{p} \in S(V(\mathbb{A}))$ and $\Phi_{p}(g_p, 1)=\lambda_{p}(\varphi_{p})(g_p)$; then \begin{equation} W_{m, p}(e, 1, \Phi_{p}) = \begin{cases} (1-p^{-2})\sum_{i=0}^{\operatorname{ord}_pm}p^{-i} & \hbox{if } p \nmid ND,\\ 2p^{-1}-p^{-\operatorname{ord}_pm-1}-p^{-\operatorname{ord}_pm-2} & \hbox{if } p \mid N, \\ -p^{-\operatorname{ord}_pm-2}(p+1) & \hbox{if } p \mid D.\\ \end{cases}\nonumber \end{equation} \end{lemma} \begin{proof} It suffices to compute the integral $\int_{\mathbb{Q}_{p}} \Phi_{i}^{sp}( w n(b), 1) \psi_{p} (-m b)db$, where $\Phi_{i}^{sp}(g_p, 1)=\lambda_{p}(\varphi_{i}^{sp})(g_p)$, $i=0, 1$. Here $\varphi_{i}^{sp}$ are defined in Proposition \ref{globalmatching}. One has the following decomposition $$\operatorname{SL}_2(\mathbb{Z}_p)=K_0(p) \cup N(\mathbb{Z}_p) w K_0(p),$$ where $K_0(p)=\{\kzxz{a}{b}{c}{d} \mid a,b,d\in \mathbb{Z}_p, c \in p\mathbb{Z}_p\}$. It is easy to check that $\Phi_{i}^{sp}(g_p, 1) \in I(1,\chi)^{K_0(p) }$, which is the $K_0(p)$-invariant subspace in the induced representation $I(1,\chi)$. Here $\chi$ is trivial. So any $\Phi \in I(1,\chi)^{K_0(p) }$ is determined by $\Phi(e, 1)$ and $\Phi(w, 1)$.
It is easy to check that \begin{equation}\Phi_{0}^{sp}(e, 1)=\Phi_{0}^{sp}(w, 1)=\Phi_{1}^{sp}(e, 1)=1,~\Phi_{1}^{sp}(w, 1)=\frac{1}{p}, \end{equation} and $$ \int_{\mathbb{Z}_{p}^{\times}}\psi_{p}(ab)db=\begin{cases} 1-p^{-1} & \hbox{if } \operatorname{ord}_p(a) \geq 0,\\ -\frac{1}{p} & \hbox{if } \operatorname{ord}_p(a) = -1,\\ 0 & \hbox{if } \operatorname{ord}_p(a) \leq -2.\\ \end{cases} $$ It is known that $w n(b)=\kzxz{-b^{-1}}{1}{}{-b} n_{b^{-1}}$ with $n_{b^{-1}}=\kzxz{1}{}{b^{-1}}{1}$, and $$\Phi( w n(b), 1)=\mid b \mid ^{-2}\Phi(n_{b^{-1}} , 1) \quad \text{for any } \Phi \in I(1,\chi).$$ Then one has \begin{eqnarray}\label{densityodd} &&~W_{m, p}(e, 1, \Phi_{0}^{sp})\\ &=& \int_{\mathbb{Z}_{p}} \Phi_{0}^{sp}( w n(b), 1) \psi_{p} (-m b)db+ \sum_{k>0} \int_{p^{-k} \mathbb{Z}_{p}^{\times}}\Phi_{0}^{sp}( w n(b), 1) \psi_{p} (-m b)db\nonumber\\ &=&1+\sum_{k>0}p^{-2k}\int_{p^{-k} \mathbb{Z}_{p}^{\times} }\Phi_{0}^{sp}( n_{b^{-1}}, 1) \psi_{p} (-m b)db\nonumber\\ &=&1+\sum_{k>0}p^{-k}\int_{ \mathbb{Z}_{p}^{\times} } \psi_{p} (-p^{-k}m b)db\nonumber\\ &=&(1-p^{-2})\sum_{i=0}^{\operatorname{ord}_pm}p^{-i}.\nonumber \end{eqnarray} By the same method, one has \begin{eqnarray}\label{densitysplit} &&~W_{m, p}(e, 1, \Phi_{1}^{sp})\\ &=& \int_{\mathbb{Z}_{p}} \Phi_{1}^{sp}( w n(b), 1) \psi_{p} (-m b)db+ \sum_{k>0} \int_{p^{-k} \mathbb{Z}_{p}^{\times}}\Phi_{1}^{sp}( w n(b), 1) \psi_{p} (-m b)db\nonumber\\ &=&p^{-1}+\sum_{k>0}p^{-2k}\int_{p^{-k} \mathbb{Z}_{p}^{\times} }\Phi_{1}^{sp}( n_{b^{-1}}, 1) \psi_{p} (-m b)db\nonumber\\ &=&p^{-1}+\sum_{k>0}p^{-k}\int_{ \mathbb{Z}_{p}^{\times} } \psi_{p} (-p^{-k}m b)db\nonumber\\ &=&2p^{-1}-p^{-(\operatorname{ord}_pm+1)}-p^{-(\operatorname{ord}_pm+2)}.\nonumber \end{eqnarray} Combining this with equation (\ref{densityodd}) and Proposition \ref{globalmatching}, one obtains the result. \end{proof} \begin{proof}[\bf{Proof of Corollary \ref{degree}}] By \cite[(2.7)]{KRYComp} and \cite[Lemma 5.3.2]{Miy}, one has \begin{align} \label{eq:volume} \operatorname{vol}(X_0^D(N), \Omega_0) &:= \int_{X_0^D(N)} \Omega_0 = -2 [\mathcal O_D^1 : \Gamma_0^D(N)] \zeta_D(-1) \\ &=\frac{ DN}6 \prod_{p|N} (1+p^{-1}) \prod_{p|D} (1-p^{-1} ) .\nonumber \end{align} Then from Theorem \ref{rDN}, we get the result. \end{proof} \section{Three and four squares problem}\label{foursquare} In this section, we prove Theorem \ref{threefoursquareth}. We split it into two cases: the sum of four squares and the sum of three squares. \subsection{Four squares sum} \label{secfour} The quaternion algebra associated to the quadratic form $Q=x_1^2+x_2^2+x_3^2+x_4^2$ is $B(2)$, which can be written as $$B(2)=\mathbb{Q} +\mathbb{Q} i + \mathbb{Q} j+\mathbb{Q} k,\quad i^2=j^2=k^2=-1,\ ij=-ji=k.$$ We denote $V^{\prime}=(B(2), Q)$ and consider the order $ \mathcal{O}=\mathbb{Z}+\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$, which is neither the maximal order nor an Eichler order. Now we fix the lattice $L=(\mathcal{O}, Q)$; then one has \begin{equation} r_{4}(m)=r_{L}(m), ~m>0. \end{equation} The class number of $L$ in $\operatorname{gen}(L)$ is $1$, so we can compute $r_L(m)$ via the Eisenstein series. Now let $\varphi_{p}^{\prime}=\operatorname{char}(L_p)$ and $\varphi_{\infty}^{\prime}=\varphi_{\infty}^{ra}$, and denote $\varphi^{\prime}=\otimes_{p\leq \infty}\varphi_{p}^{\prime} \in S(V^{\prime}(\mathbb{A}))$. Here $L_p=L\otimes\mathbb{Z}_p$.
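For the reader who wants a quick numerical sanity check of the divisor-sum formula established in the theorem below, the following short script (illustrative only, and independent of the proof) compares a brute-force lattice-point count for $x_1^2+x_2^2+x_3^2+x_4^2=m$ with $8\sum_{d\mid m,\,4\nmid d}d$ for small $m$: \begin{verbatim}
# Sanity check of the four-square formula r_4(m) = 8 * sum of d | m with 4 not| d.
# Illustrative script only; it is not part of the proof.
import math

def r4_bruteforce(m):
    # Count integer solutions of x1^2 + x2^2 + x3^2 + x4^2 = m.
    bound = math.isqrt(m)
    count = 0
    for x1 in range(-bound, bound + 1):
        for x2 in range(-bound, bound + 1):
            for x3 in range(-bound, bound + 1):
                rest = m - x1 * x1 - x2 * x2 - x3 * x3
                if rest < 0:
                    continue
                r = math.isqrt(rest)
                if r * r == rest:
                    count += 2 if r > 0 else 1  # x4 = +r and -r, or just 0
    return count

def r4_divisor_sum(m):
    # 8 * sum over divisors d of m with 4 not dividing d.
    return 8 * sum(d for d in range(1, m + 1) if m % d == 0 and d % 4 != 0)

for m in range(1, 30):
    assert r4_bruteforce(m) == r4_divisor_sum(m)
print("four-square formula verified for 1 <= m < 30")
\end{verbatim}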
\begin{theorem} With the above notations, one has $$r_{4}(m)=8\sum_{d \mid m, 4\nmid d}d.$$ \end{theorem} \begin{proof} From the Siegel-Weil formula and Lemma \ref{Duyanglemma}, one has \begin{equation}\label{equfoursquare} E(\tau, 1, \varphi^{\prime})=I(\tau, \varphi^{\prime})=\sum_{m\geq 0}r_{4}(m)q^m. \end{equation} It is an Eisenstein series of weight $2$. From the above equation, one has \begin{eqnarray} r_{4}(m)&=&E_m(\tau, 1, \varphi^{\prime})q^{-m}\nonumber\\ &=&v^{-1}q^{-m}\prod_{p< \infty}W_{m, p}(e, 1, \Phi_{p})\times W_{m, \infty}(g_{\tau}, 1, \Phi_{\infty}),\nonumber \end{eqnarray} where $\Phi_p=\lambda(\varphi_{p}^{\prime})$. By equation (\ref{infty}), we have \begin{equation}\label{whitinfinite} W_{m, \infty}(g_{\tau}, 1, \Phi_{\infty})=-4 \pi^{2}m q^m v. \end{equation} When $p$ is odd, one has $L_{p}^{\sharp}=L_{p}$, where the dual lattice is given by $L_{p}^{\sharp}=\{x \in V^{\prime}(\mathbb{Q}_p)\mid (x, L_p)\subseteq \mathbb{Z}_{p}\}$. Then $\Phi_{p}(g, s)$ is spherical, i.e., $\operatorname{SL}_{2}(\mathbb{Z}_p)$-invariant with $\Phi_{p}(e, s)=1$. By equation (\ref{densityodd}), one has \begin{equation}\label{spherical} W_{m, p}(e, 1, \Phi_{p})=(1-p^{-2})\sum_{i=0}^{\operatorname{ord}_pm}p^{-i}=\frac{\sigma_{-1,p}(m)}{\zeta_p(2)}, \end{equation} where $\sigma_{-1,p}(m)=\sum_{i=0}^{\operatorname{ord}_pm}p^{-i}.$ The case $p=2$ is different. Notice that $L_{2}^{\sharp}/L_{2}\cong (\mathbb{Z}/2\mathbb{Z})^4$ with $L_{2}^{\sharp}=\frac{1}{2}L_{2}$. Let $K_0(4)=\{\kzxz{a}{b}{c}{d} \mid a,b,d\in \mathbb{Z}_2, c \in 4\mathbb{Z}_2\}$; then one has the following decomposition \begin{equation}\operatorname{SL}_2(\mathbb{Z}_2)=K_0(4) \cup n_2 K_0(4) \cup N(\mathbb{Z}_2) w K_0(4),~ ~n_2=\kzxz{1}{}{2}{1}. \end{equation} It is easy to see that $\Phi_{2}(g_2, 1) \in I(1,\chi)^{K_0(4) }$, which is the $K_0(4)$-invariant subspace in $I(1,\chi)$. One needs to check that $\varphi_{2}^{\prime}$ is $K_0(4)$-invariant under the Weil representation. We check $\omega(n_4)\varphi_2^{\prime} =\varphi_2^{\prime}$ and leave the others to the reader, where $n_4= w^{-1} n(-4) w=\kzxz{1}{}{4}{1}$. One has $$ \omega(w)\varphi_{2}^{\prime}(x) =\gamma(V^{\prime}_2) \operatorname{vol}(L_2)\operatorname{char}(L_{2}^{\sharp})(x) =-\frac{1}{4}\operatorname{char}(L_{2}^{\sharp})(x). $$ So $$ \omega(n(-4)w)\varphi_{2}^{\prime}(x)=-\frac{1}{4} \psi_2(-4 Q (x)) \operatorname{char}(L_{2}^{\sharp})(x)=-\frac{1}{4}\operatorname{char}(L_{2}^{\sharp})(x), $$ i.e., $$ \omega(n(-4)w)\varphi_{2}^{\prime}= \omega(w)\varphi_{2}^{\prime}. $$ Then we obtain $$ \omega(n_4) \varphi_2^{\prime} = \omega( w^{-1}) \omega(n(-4)w)\varphi_2^{\prime} =\varphi_2^{\prime}. $$ From the Weil representation, one has $$\Phi_{2}(e, 1)=1, \quad \Phi_{2}(n_2, 1)=0, \quad \Phi_{2}(w, 1)=-\frac{1}{4}.$$ By the same method as in equation (\ref{densityodd}), we know \begin{eqnarray}\label{densitytwo} &&W_{m, 2}(e, 1, \Phi_{2})\\ &=& \int_{\mathbb{Z}_{2}} \Phi_{2}( w n(b), 1) \psi_{2} (-m b)db+ \sum_{k>0} \int_{2^{-k} U }\Phi_{2}( w n(b), 1) \psi_{2} (-m b)db\nonumber\\ &=&-\frac{1}{4}+\sum_{k>0}2^{-2k}\int_{2^{-k} U }\Phi_{2}( n_{b^{-1}}, 1) \psi_{2} (-m b)db,\nonumber \end{eqnarray} where $U=1+2\mathbb{Z}_2$ and $ n_{b^{-1}}=\kzxz{1}{}{b^{-1}}{1}$. Now we split it into two cases: $m$ is odd or even.
{\bf When $m$ is odd.} From equation (\ref{densitytwo}), one has \begin{eqnarray} && W_{m, 2}(e, 1, \Phi_{2})\nonumber\\ &=& -\frac{1}{4}+2^{-2}\int_{2^{-1} U }\Phi_{2}( n_{b^{-1}}, 1) \psi_{2} (-m b)db\nonumber\\ &&+\sum_{k>1} 2^{-2k}\int_{2^{-k} U }\Phi_{2}( n_{b^{-1}}, 1) \psi_{2} (-m b)db\nonumber\\ &=&-\frac{1}{4}. \end{eqnarray} Notice that $$ \int_{U}\psi_{2}(ab)db=\begin{cases} \frac{1}{2} & \hbox{if } \operatorname{ord}_2(a) \geq 0,\\ -\frac{1}{2} & \hbox{if } \operatorname{ord}_2(a) = -1,\\ 0 & \hbox{if } \operatorname{ord}_2(a) \leq -2.\\ \end{cases} $$ Combining this with equations (\ref{infty}) and (\ref{spherical}), we obtain \begin{eqnarray}\label{oddcase} r_{4}(m)=8\sigma_{1}(m), \end{eqnarray} where $\sigma_{1}(m)=\sum_{d \mid m}d$. {\bf When $m$ is even.} Assuming $\operatorname{ord}_{2}(m)=r$, we have \begin{eqnarray} W_{m, 2}(e, 1, \Phi_{2}) &=& -\frac{1}{4}+\sum_{k=2}^{r+1}2^{-2k}\int_{2^{-k} U }\Phi_{2}( n_{b^{-1}}, 1) \psi_{2} (-m b)db\nonumber\\ &&+\sum_{k>r+1} 2^{-2k}\int_{2^{-k} U }\Phi_{2}( n_{b^{-1}}, 1) \psi_{2} (-m b)db\nonumber\\ &=&-\frac{1}{4}+2^{-3}+\cdots +2^{-r-1}-2^{-r-2}\nonumber\\ &=&-2^{-r-1}-2^{-r-2}.\nonumber \end{eqnarray} Combining this with equations (\ref{whitinfinite}) and (\ref{spherical}), one obtains \begin{eqnarray} r_{4}(m)&=&(2^{-r-1}+2^{-r-2})4 \pi^{2}m \prod_{ p\neq 2}\frac{\sigma_{-1,p}(m)}{\zeta_p(2)}\nonumber\\ &=& 24 \sigma_{1}(\frac{m}{2^r})=8 \sigma_{1}(2)\sigma_{1}(\frac{m}{2^r})=8 \sigma_{1}(\frac{m}{2^{r-1}})=8 \sum_{d \mid m, 4 \nmid d}d. \end{eqnarray} Thus we finish the proof. \end{proof} {\bf The second proof:} By Kudla's matching, we can compute $r_4(m)$ via the quadratic space $V=(M_2(\mathbb{Q}), Q)$. We choose $\varphi=\otimes_p\varphi_p \in S(V(\mathbb{A}))$ as follows, $$\varphi_p= \begin{cases} \varphi^{sp}_\infty & \hbox{if } p=\infty,\\ -\frac{1}{2}\varphi_0^{sp} -\frac{1}{2}\varphi_1^{sp}+2\varphi_2^{sp} & \hbox{if } p=2,\\ \varphi_{0}^{sp} & \hbox{otherwise}, \end{cases}$$ which satisfies $\lambda_p(\varphi_p)=\lambda_p(\varphi^{\prime}_p)=\Phi_p$. Then one has \begin{equation}\label{Kudlamatch} I(g_{\tau}, \varphi^{\prime})=E(g_{\tau}, 1, \varphi^{\prime})=E(g_{\tau}, 1, \varphi). \end{equation} Comparing the $m$-th Fourier coefficients of the above equation, one has \begin{eqnarray} &&r_{4}(m)=v^{-1}q^{-m}\prod_{p<\infty}W_{m, p}(e, 1, \varphi_{p}) W_{m, \infty}(g_{\tau}, 1, \varphi_{\infty}). \end{eqnarray} When $p=2$, $\Phi_{p}=-\frac{1}{2}\Phi_{0}^{sp} -\frac{1}{2}\Phi_{1}^{sp}+2\Phi_{2}^{sp}$, where $\Phi_{i}^{sp}=\lambda_p(\varphi_i^{sp})$ and $\varphi_i^{sp}$ is defined in Section \ref{sect:Preli}. When $i=0, 1$, the integrals $W_{m, p}(g_p, 1, \Phi_{i}^{sp})$ are given in Lemma \ref{densityfinite}, and we leave $W_{m, p}(g_p, 1, \Phi_{2}^{sp})$ to the reader. When $p\neq 2$, the local integral can be computed similarly to the above theorem. Then one obtains the expected result. \subsection{Three squares sum} \label{secthree} In this subsection, we consider the space $B^0(2)=\{ x \in B(2); \operatorname{tr}(x)=0\}$ and denote the quadratic space $V=(B^0(2), Q)$. Fix the lattice $\mathcal{L}=\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$. The genus of this lattice contains only one class, thus one has $$r_{3}(m)=r_{\mathcal{L}}(m).$$ Define \begin{equation} b_p=\begin{cases} \frac{1-p^{-1}\chi_{d}(p)+p^{-t_p-1}\chi_{d}(p)-p^{-t_p-1}}{1-p^{-1}} &\hbox{if $p$ is odd},\\ \frac{(1-\chi_{d}(p))(p^{-t_p}-p^{-t_p-2})}{1-p^{-1}}& \hbox{if } p=2.
\end{cases} \end{equation} Here the notations $d$, $c$, $\chi_{d}$ and $t_{p}$ are introduced below. Let $\widetilde{\operatorname{SL}_2}$ be the metaplectic double cover of $\operatorname{SL}_2$, and we can write it as $\widetilde{\operatorname{SL}_2}=\operatorname{SL}_2\times \{\pm 1\} $. The multiplication is given by $$[g_1, \varepsilon_1][g_2, \varepsilon_2]=[g_1g_2, \varepsilon_1\varepsilon_2c(g_1, g_2)],$$ where $c(g_1, g_2)$ is the cocycle given in \cite{KRYannal}. We frequently abuse the notation and write $n(b)$, $m(a)$, $w$ for the elements $[n(b), 1]$, $[m(a), 1]$ and $[w, 1]$. Set $\varphi=\varphi_{f} \otimes\varphi_{\infty} \in S(V(\mathbb{A}))$, where $\varphi_{f}=\operatorname{char}(\widehat{\mathcal{L}})$ and $\varphi_{\infty}=e^{-2\pi Q(x)}$. For $m>0$, let $-4m=dc^2$, where $d$ is a fundamental discriminant. Let $\chi_d$ be the corresponding Dirichlet character of the field $\mathbb{Q}(\sqrt{d})$ and \begin{equation} w=\begin{cases} 2 & \hbox{if } d<-4,\\ 4 & \hbox{if } d=-4,\\ 6 & \hbox{if } d=-3. \end{cases} \end{equation} \begin{theorem}\label{threeth} With the above notations, one has \begin{equation}\label{threesum} r_3(m)=\frac{24 h(d) }{w}(1-\chi_d(2))\sum_{l \mid c, (l, 2)=1}l \prod_{q \mid l}(1-\chi_{d}(q)q^{-1}), \end{equation} where $h(d)$ is the class number of $\mathbb{Q}(\sqrt{d})$. \end{theorem} \begin{proof} By the Siegel-Weil formula and Lemma \ref{Duyanglemma}, we have \begin{equation}\label{equthreesquare} E(\tau, \frac{1}{2}, \varphi)=I(\tau, \varphi)=\sum_{m\geq 0}r_{3}(m)q^m. \end{equation} Comparing the Fourier coefficients of both sides, one has \begin{eqnarray}\label{threeprod} &&r_{3}(m)=E_m(\tau, \frac{1}{2}, \varphi)q^{-m}\nonumber\\ &=&v^{-\frac{3}{4}}q^{-m} \prod_{p< \infty}W_{m, p}(e, \frac{1}{2}, \Phi_{p})\times W_{m, \infty}(g_{\tau}, \frac{1}{2}, \Phi_{\infty}), \nonumber \end{eqnarray} where $\Phi_p=\lambda(\varphi_{p})$. We identify the notation $g_{\tau}$ with $[g_{\tau}, 1] \in \widetilde{\operatorname{SL}_2}(\mathbb{R})$. Following the methods given in \cite{KY}, we compute the local Whittaker functions as follows. It is known that \begin{equation}\label{threeinfinite} W_{m, \infty}(g_{\tau}, \frac{1}{2}, \Phi_{\infty})=-4 \pi\sqrt{2m}\zeta_8v^{\frac{3}{4}} q^m. \end{equation} Denote $t_p=\operatorname{ord}_p(c)$. When $p$ is odd, one has \begin{eqnarray}\label{threeodd} &&W_{m, p}(e, \frac{1}{2}, \Phi_{p})\\ &=&\begin{cases} 1+p^{-1}-p^{-t_p-1}+p^{-t_p-1}\chi_{d}(p) &\hbox{if } p \nmid d,\\ 1+p^{-1}-p^{-t_p-1}-p^{-t_p-2} & \hbox{if } p \mid d, \end{cases}\nonumber\\ &=&\frac{L_p(1, \chi_{d})}{\zeta_p(2)}b_p,\nonumber \end{eqnarray} where \begin{eqnarray} b_p&=&\frac{1-p^{-1}\chi_{d}(p)+p^{-t_p-1}\chi_{d}(p)-p^{-t_p-1}}{1-p^{-1}}\nonumber\\ &=&p^{-t_p}\sum_{l \mid p^{t_p}}l \prod_{q \mid l}(1-\chi_{d}(q)q^{-1}). \end{eqnarray} When $p=2$, \begin{equation}\label{threetwo} W_{m, 2}(e, \frac{1}{2}, \Phi_{2})=-\frac{\zeta_8^{-1}}{2\sqrt{2}}L_{2}(1, \chi_{d})b_{2},\end{equation} where \begin{eqnarray} b_{2}=\frac{(1-\chi_{d}(2))(2^{-t_2}-2^{-t_2-2})}{1-2^{-1}}=\frac{3}{2}2^{-t_2}(1-\chi_d(2)). \end{eqnarray} So we obtain \begin{equation} \prod_{p}b_p=\frac{3}{2}c^{-1}(1-\chi_d(2))\sum_{l \mid c, (l, 2)=1}l \prod_{q \mid l}(1-\chi_{d}(q)q^{-1}), \end{equation} where $q$ runs over the prime factors of $l$. By this equation and equations (\ref{threeinfinite}), (\ref{threeodd}) and (\ref{threetwo}), one has \begin{eqnarray}\label{threefinal} r_3(m)&=& 2 \pi\sqrt{m}L_{2}(1, \chi_{d})\prod_{p~{\rm odd}}\frac{L_p(1, \chi_{d})}{\zeta_p(2)} \prod_{p }b_p \nonumber\\ &=&\frac{24 h(d) }{w}(1-\chi_d(2))\sum_{l \mid c, (l, 2)=1}l \prod_{q \mid l}(1-\chi_{d}(q)q^{-1}).
\end{eqnarray} \end{proof} \begin{remark} From equation (\ref{threefinal}), it is easy to see that $r_{3}(4m)=r_{3}(m)$. If $m\equiv 7~({\rm mod}~8)$, then $d \equiv 1~({\rm mod}~8)$. So one knows that $\mathbb{Q}(\sqrt{d})$ is split at $2$, thus $b_2=0$. We recover the well-known result that $r_3(m)=0$ when $m=4^{a}(8k+7)$, for $a, k\geq 0$. \end{remark} For a discriminant $m >0$, define the Hurwitz class number $H(m)$ to be the number of classes of not necessarily primitive positive definite binary quadratic forms of discriminant $-m$, except that those classes which have a representative which is a multiple of the form $x^2+y^2$ should be counted with weight $1/2$ and those which have a representative which is a multiple of the form $x^2 + xy + y^2$ should be counted with weight $1/3$. The number $H(m)$ is given by \begin{equation} H(m)=\frac{2h(d)}{w}\sum_{l \mid f}l \prod_{p \mid l}(1-\chi_{d}(p)p^{-1}), \end{equation} where $-m=df^2$. When $-m$ is not a discriminant, $H(m)=0$. Comparing this equation with equation (\ref{threefinal}), one obtains the following result easily. \begin{corollary}[HZ, Section 2.1]\label{HZcor} Let $m>0$; one has \begin{equation} r_3(m)= \begin{cases} 12 H(4m) &\hbox{if } m\equiv 1 \text{ or } 2~({\rm mod}~4),\\ 24 H(m) &\hbox{if } m\equiv 3~({\rm mod}~8),\\ 0 &\hbox{if } m\equiv 7~({\rm mod}~8),\\ r_3(\frac{m}{4}) &\hbox{if } m\equiv 0~({\rm mod}~4). \end{cases} \end{equation} \end{corollary} Since $\sum_{m\geq 0}r_{3}(m)q^m$ is an Eisenstein series, one may ask what happens for $H(\tau)=\sum_{m\geq 0}H(m)q^m$, where $H(0)=-\frac{1}{12}$ and $q=e^{2\pi i \tau}$. Zagier discovered the well-known Eisenstein series in \cite[Theorem 2, Chapter 2]{HZ}: \begin{equation} E(\tau)= H(\tau)+\frac{1}{16 \pi \sqrt{v}}\sum_{m=-\infty}^{\infty}\beta(4 \pi m^2 v)q^{-m^2}, \end{equation} where $v=\Im(\tau)$ and $$\beta(t)=\int_{1}^{\infty}u^{-\frac{3}{2}}e^{-tu}du,~t\geq0.$$ $E(\tau)$ is a nonholomorphic Eisenstein series of weight $3/2$, and $H(\tau)$ is its holomorphic part. \section{Hardy's singular series}\label{sechardy} Hardy defined the singular series by \begin{equation} \rho_{s}(m)=\frac{\pi^{\frac{s}{2}}}{\Gamma(\frac{s}{2})}m^{\frac{s}{2}-1} \mathfrak{G}_{s}(m), \end{equation} where \begin{equation} \mathfrak{G}_{s}(m)=\sum_{k=1}^{\infty} A_k(m), \end{equation} and \begin{equation} A_k(m)=\sum_{h=1, (h,k)=1}^{k}(\frac{1}{k}\sum_{j=1}^ke^{2\pi i hj^2/k})^{s}e^{-2\pi imh/k}, \quad A_1(m)=1. \end{equation} It is known by Hardy \cite{Ha2} that \begin{equation} A_{k_1k_2}(m)=A_{k_1}(m)A_{k_2}(m), \quad (k_1, k_2)=1. \end{equation} Thus it suffices to study $A_k$ for $k$ a prime power. We write \begin{equation} \mathfrak{G}_{s}(m)=\sum_{k=1}^{\infty} A_k(m)=\prod_p S_p(m), \end{equation} where $$S_{p}(m)=\sum_{r=0}^{\infty}A_{p^r}(m).$$ Now we fix $s=3$ and denote $r_p=\operatorname{ord}_p(m)$. By Hardy's formula, which can also be found in Dickson \cite{Di}, one has the following result, where $p$ denotes an odd prime.\\ If $r$ is an odd positive integer, \begin{eqnarray} &&A_{2^r}(m)= \begin{cases} 0 & \hbox{if } r>r_2+3,\\ 2^{-\frac{r-1}{2}} \cos\frac{2^{-r+3}m -3}{4}\pi & \hbox{if } r\leq r_2+3, 2^{-r+3}m \equiv 3~ \rm (mod ~4),\\ 0 & \hbox{if } r\leq r_2+3, 2^{-r+3}m \not\equiv 3~\rm (mod ~4), \end{cases}\nonumber\\ &&A_{p^r}(m)= \begin{cases} 0 & \hbox{if } r>r_p +1,\\ -p^{-\frac{r+1}{2}} (\frac{-p^{-r+1}m}{p})& \hbox{if } r= r_p +1,\\ 0 & \hbox{if } r< r_p +1.
\end{cases}\nonumber \end{eqnarray} If $r$ is an even positive integer, \begin{eqnarray} &&A_{2^r}(m)= \begin{cases} 0 & \hbox{if } r>r_2+2,\\ 2^{-\frac{r-1}{2}} \cos\frac{2^{-r+3}m -3}{4}\pi & \hbox{if } r\leq r_2+2,\\ \end{cases}\nonumber\\ &&A_{p^r}(m)= \begin{cases} 0 & \hbox{if } r>r_p +1,\\ -p^{-\frac{r}{2}-1} & \hbox{if } r= r_p +1,\\ (p-1)p^{-\frac{r}{2}-1} & \hbox{if } r< r_p +1. \end{cases}\nonumber \end{eqnarray} Now we recall the normalized local Whittaker functions. For details see \cite[Section 4]{KY}. Let $(V, Q)$ be a quadratic space of dimension $n$, and let $L$ be a lattice in $V$. For any $\varphi_p \in S(V_{p})$, the Whittaker function is $$W_{m, p}(e, s_{0}, \varphi_{p})=\gamma(V_p)\int_{\mathbb{Q}_p}\int_{V_p}\psi_{p}(bQ(x))\varphi_{p}(x)d_{V}x\psi_{p}(-mb)db,$$ where $d_{V}x$ is the self-dual measure with respect to $\psi_p((x, y))$ and $\gamma(V_p)$ is the Weil index. Fix a $\mathbb{Z}_p$-basis $\{e_i \}$ of $L_p$; then $L_p\cong \mathbb{Z}_p^n$. Let $dx=\prod dx_i$ be the standard measure on $V$; then one has \begin{equation} d_Vx=[L^{\sharp}_p : L_p]^{-\frac{1}{2}}dx= \mid \det G \mid_{p} ^{\frac{1}{2}}dx, \end{equation} where $G$ is the Gram matrix $((e_i, e_j))$ for $L_p$. Kudla and Yang defined \begin{equation}\label{norwhitt} W_{p}( s_{0}, m)=\int_{\mathbb{Q}_p}\int_{L_p}\psi_{p}(bQ(x))dx\psi_{p}(-mb)db. \end{equation} Then one has $$W_{p}( s_{0}, m)=\frac{W_{m, p}(e, s_{0}, \operatorname{char} (L_p))}{\gamma(V_p)\mid \det G \mid_{p} ^{\frac{1}{2}}}.$$ \begin{lemma}\label{lemnorm} Let $V=(B^0(2), Q)$ and the lattice $\mathcal{L}=\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$ be as defined in Section \ref{foursquare}. One has \begin{equation} \gamma(V_p)\mid \det G \mid_{p} ^{\frac{1}{2}}= \begin{cases} 1 & \hbox{if $p$ is odd},\\ -\zeta_{8}^{-1}\frac{1}{2\sqrt{2}} & \hbox{if } p=2. \end{cases} \end{equation} \end{lemma} \begin{proof} When $p$ is odd, $\gamma(V_p)=\mid \det G \mid_{p} ^{\frac{1}{2}}=1$. When $p=2$, $\gamma(V_p)=-\zeta_{8}^{-1}$ and $\mid \det G \mid_{p} ^{\frac{1}{2}}=\frac{1}{2\sqrt{2}}$. Then one obtains the result. \end{proof} For convenience, we restate Theorem \ref{localequality}. \begin{theorem} 1) Let $\mathcal{L}=\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$ be the lattice in the quadratic space $V=(B^0(2), Q)$; one has \begin{equation}S_{p}(m)=W_{ p}(\frac{1}{2}, m). \end{equation} 2) Let $L=\mathbb{Z}+\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} k$ be the lattice in $V=(B(2), Q)$; one has \begin{equation}S_{p}(m)=W_{ p}(1, m), \end{equation} where $W_{ p}(\frac{1}{2}, m)$ and $W_{ p}(1, m)$ are the normalized local Whittaker functions defined by equation (\ref{norwhitt}). \end{theorem} \begin{proof} We only prove case 1). Assume that $-4m=dc^2$ with fundamental discriminant $d$, and let $\chi_{d}$ be the associated Dirichlet character. Set $r_p=\operatorname{ord}_p(m)$ and $t_p=\operatorname{ord}_p(c)$. When $p$ is odd, \begin{eqnarray} &&S_{p}(m)= \sum_{r=0}^{r_p+1}A_{p^r}(m)\\ &=&1+\sum_{\substack{r~{\rm even}\\ 2\le r\le r_p}}(p-1)p^{-\frac{r}{2}-1}+A_{p^{r_p+1}}(m)\nonumber \\ &=&\begin{cases} 1+p^{-1}-p^{-t_p-1}+p^{-t_p-1}\chi_{d}(p) &\hbox{if $r_p$ is even},\\ 1+p^{-1}-p^{-t_p-1}-p^{-t_p-2} & \hbox{if $r_p$ is odd}. \end{cases}\nonumber \end{eqnarray} Notice that $r_p=2t_p$ (resp. $r_p=2t_p+1$) when $r_p$ is even (resp. odd).
By equation (\ref{threeodd}) and Lemma \ref{lemnorm}, one obtains $$S_{p}(m)=W_{m, p}(e, \frac{1}{2}, \operatorname{char}(\mathcal{L}_{p}))=W_{p}( \frac{1}{2}, m).$$ When $p=2$, \begin{eqnarray} &&S_{2}(m)= \sum_{r=0}^{r_2+3}A_{2^r}(m)\\ &=&1+\sum_{\substack{r~{\rm even}\\ 2\le r\le r_2+2}}2^{-\frac{r-1}{2}} \cos\frac{2^{-r+3}m -3}{4}\pi+\sum_{\substack{r~{\rm odd}\\ 1\le r\le r_2+3}}A_{2^{r}}(m)\nonumber \\ &=&\begin{cases} 0 &\hbox{if } m \equiv 7~({\rm mod}~8),\\ 1 &\hbox{if } m \equiv 3~({\rm mod}~8),\\ \frac{3}{2} &\hbox{if } m \equiv 1, 2, 5, 6~({\rm mod}~8),\\ 2 S_{2}(\frac{m}{4}) &\hbox{if } 4 \mid m.\\ \end{cases}\nonumber \end{eqnarray} One can check that \begin{equation} S_{2}(m)=\frac{3}{2^{t_2+1}}\frac{1-\chi_{d}(2)}{1-\frac{\chi_{d}(2)}{2}}. \end{equation} By equation (\ref{threetwo}), we know that $$S_{2}(m)=\frac{W_{m, 2}(e, \frac{1}{2}, \operatorname{char} (\mathcal{L}_2))}{-\zeta_{8}^{-1}\frac{1}{2\sqrt{2}}}=W_{2}( \frac{1}{2}, m).$$ Thus we finish the proof of 1) and leave case 2) to the reader. \end{proof} The following result is a direct application of the above theorem. \begin{theorem} \cite{Ba} Let the notations be as above; one has \begin{equation} r_{s}(m)=\rho_{s}(m), ~s=3, 4. \end{equation} \end{theorem} \begin{proof} We just prove the case $s=3$; the case $s=4$ is similar. From Theorem \ref{threeth}, one knows that \begin{eqnarray}\label{hardyequ} &&r_{3}(m)\nonumber\\ &=&v^{-\frac{3}{4}}q^{-m}\prod_{p< \infty}W_{m, p}(e, \frac{1}{2}, \Phi_{p})\times W_{m, \infty}(g_{\tau}, \frac{1}{2}, \Phi_{\infty})\nonumber\\ &=&v^{-\frac{3}{4}}q^{-m}\prod_{p< \infty} \gamma(V_p)\mid \det G \mid_{p} ^{\frac{1}{2}} W_{ p}(\frac{1}{2}, m)\times W_{m, \infty}(g_{\tau}, \frac{1}{2}, \Phi_{\infty})\nonumber\\ &=&-\zeta_8^{-1}\frac{1}{2\sqrt{2}}v^{-\frac{3}{4}}q^{-m}\prod_{p}S_{p}(m) W_{m, \infty}(g_{\tau}, \frac{1}{2}, \Phi_{\infty}). \nonumber \end{eqnarray} By equation (\ref{threeinfinite}), we obtain the expected result $$r_{3}(m)=2\sqrt{m}\pi\prod_{p}S_{p}(m)=\rho_{3}(m).$$ \end{proof} \begin{comment} \begin{example} Let $D=q$ be a prime and $N=1$. Then the order $\mathcal{O}_D(N)= \mathcal{O}_q(1)$. Let $\varphi_{f}= \bigotimes_{p}\varphi_{p}$, where $$ \varphi_p =\begin{cases} \operatorname{char}(M_{2\times 2}(\mathbb{Z}_{p})) &\hbox{if } p\neq q, \\ \frac{-2}{p-1} \operatorname{char}(M_{2\times 2}(\mathbb{Z}_{p})) + \frac{p+1}{p-1} \operatorname{char}(M_{2\times 2}^{1}(\mathbb{Z}_{p})) &\hbox{if } p=q, \end{cases} $$ where \begin{equation} M_{2\times 2}^{1}(\mathbb{Z}_{p}) =\bigg\{ \bigg(\begin{array}{cc} a & b \\ c & d \end{array} \bigg)\in M_{2\times 2}(\mathbb{Z}_{p})\bigg| c \equiv 0 \mod p\bigg\}.\end{equation} From \cite[Proposition 3.1]{DuYang}, $\varphi_{f}$ constructed above and $\varphi^{'}_{f}= \operatorname{char}(\widehat{\mathcal{O}_q(1)}) $ match.
Here $\varphi_{f}=\frac{-2}{q-1} \operatorname{char}(\widehat{M_{2\times 2}(\mathbb{Z})}) + \frac{p+1}{p-1} \operatorname{char}(\widehat{M_{2\times 2}^{1}(\mathbb{Z})(q)})$, $$M_{2\times 2}^{1}(\mathbb{Z})(q)=\bigg\{ \bigg(\begin{array}{cc} a & b \\ c & b \end{array}\bigg) \in M_{2\times 2}(\mathbb{Z})\bigg| c\equiv 0 \mod q \bigg\}.$$ So $$r_{q,1}(n)= \frac{-2}{q-1} v^{-1}\int_{\operatorname{SL}_{2}} \omega(g_{\tau})\varphi_{1}(x_{n}h)dh+ \frac{q+1}{q-1}v^{-1}\int_{\operatorname{SL}_{2}} \omega(g_{\tau})\varphi_{2}(x_{n}h)dh,$$ where $$\varphi_{1,f}=\operatorname{char}(\widehat{M_{2\times 2}(\mathbb{Z})}), \varphi_{1,f}= \operatorname{char}(\widehat{M_{2\times 2}^{1}(\mathbb{Z})(q)}),$$ and $\varphi_{1,\infty}=\varphi_{2,\infty}=\varphi_\infty.$ \end{example} \end{comment} \section{Acknowledgment} Thanks to ..., we write it later.
{ "timestamp": "2018-05-18T02:04:12", "yymm": "1804", "arxiv_id": "1804.05247", "language": "en", "url": "https://arxiv.org/abs/1804.05247" }
\section{Introduction} The cosmic rays with energy around the Greisen-Zatsepin-Kuzmin (GZK) cutoff ($\sim 50~{\rm EeV}$) would be strongly suppressed due to the interactions with the cosmic microwave background \cite{Greisen:1966jv, Zatsepin:1966jv}, i.e., $p+\gamma_{\rm CMB} \rightarrow p~({\rm or}~n)+{\rm \bf n} \pi$, $p+\gamma_{\rm CMB} \rightarrow \Delta^{+}(1232) \rightarrow p+ \pi^{0}~({\rm or}~n+\pi^{+})$, where ${\rm \bf n}$ is the total number of the produced $\pi$'s. The ultrahigh energy (UHE) neutrinos at the scale of EeV could be copiously produced by the subsequent decays of the secondary charged pions and neutrons \cite{Beresinsky:1969qj}. The EeV neutrinos remain to be detected, and the Antarctic Impulsive Transient Antenna (ANITA) experiment \cite{Gorham:2008dv} is dedicated to the detection of these cosmogenic neutrinos. In 2016, the ANITA experiment reported an unusual, steeply upward-pointing cosmic-ray event 3985267 with shower energy around $0.6~{\rm EeV}$ during the ANITA-I flight \cite{Gorham:2016zah}. This shower event has the characteristics of the decay of a $\tau$-lepton emerging from the surface of the ice with a zenith angle around $63^{\circ}$ \footnote{The reported emergence angle of the event 3985267 is $\sim 27^{\circ}$ below the horizontal, so the corresponding zenith angle is $\sim 63^{\circ}$. One should be aware that the ANITA horizon is around $6^{\circ}$ below its horizontal because of its altitude.}, and the $\tau$-lepton should be interpreted as the product of a parent $\nu_{\tau}$ via charged-current (CC) interactions with the Earth matter. However, such a hypothesis is strongly disfavored because the Earth CC attenuation coefficient is $\sim 4\times 10^{-6}$ for neutrinos coming from such a steep angle \cite{Gorham:2016zah}. The associated event number around $E_{\nu} \sim 1~{\rm EeV}$ is negligible after adopting the IceCube bound on the diffuse EeV neutrinos \cite{Aartsen:2017mau}, approximately $E^2_{\nu} {\mathrm{d}\Phi_{\nu}}/{\mathrm{d} E_{\nu}} \lesssim 2\times 10^{-8}~{\rm GeV\cdot cm^{-2}s^{-1}sr^{-1}}$. In addition, there should be more Earth-skimming events than steep events. The situation became worse after the ANITA detector observed a second such air-shower event 15717147 with energy around $0.56~{\rm EeV}$ at a steeper zenith angle $\sim 55^{\circ}$ during the ANITA-III flight \cite{Gorham:2018ydl}. Possible explanations for the anomalous events, including a large transient point-source flux \cite{Gorham:2018ydl}, the transition radiation of Earth-skimming neutrinos \cite{Motloch:2016yic}, a sterile neutrino origin \cite{Cherry:2018rxj}, and the decay of quasi-stable dark matter in the Earth's core \cite{Anchordoqui:2018ucj}, have been investigated in the literature. After the report of the first anomalous event, Ref. \cite{Cherry:2018rxj} proposed that sterile neutrinos could be the origin of such an event. Sterile neutrinos are well motivated by several particle-physics issues and experimental anomalies.
Heavy sterile neutrinos can explain the mass generation of light neutrinos through the seesaw mechanism \cite{Minkowski:1977sc,Yanagida:1979as,Glashow:1979nm,GellMann:1980vs}, a sterile neutrino in the $\rm keV$ mass range is a good candidate for warm dark matter \cite{Adhikari:2016bei}, and the anomalies of short-baseline experiments like LSND and MiniBooNE, the Gallium source experiments, as well as the reactor neutrino experiments, hint at the existence of $\rm eV$-scale sterile neutrinos \cite{Aguilar:2001ty,AguilarArevalo:2008rc,Giunti:2010zu,Mention:2011rk,Abdurashitov:2009tn,Kaether:2010ag}. To explain the ANITA anomalous events, we need a strong sterile neutrino flux. The sources of the flux could be superheavy dark matter decays \cite{Aartsen:2018mxl,Ema:2013nda,Esmaili:2013gha,Feldstein:2013kka,Ko:2015nma,Kachelriess:2008bk}, topological defects \cite{Kachelriess:2008bk}, or some exotic interaction \cite{Cherry:2014xra,Cherry:2016jol,Ahlgren:2013wba,Huang:2017egl,Farzan:2018gtr,Jeong:2018yts,Berryman:2018jxt} that converts active neutrinos into sterile neutrinos during their propagation. When the sterile flux goes through the Earth, the neutrinos experience a suppressed cross section due to the small active-sterile mixing. In such a way, the neutrinos can make their way to the thin interaction region below the ANITA detector, finally producing the $\tau$-lepton by the CC interaction with the ice, water or rock inside the interaction region. However, according to the analysis of Ref. \cite{Cherry:2018rxj}, the sterile origin is in mild tension with the steep emergence angle, e.g., only $10\%$ of the events are expected to emerge with a zenith angle smaller than $63^{\circ}$ for an active-sterile mixing angle $\theta =0.1$. Obviously, the second event 15717147 reported later \cite{Gorham:2018ydl}, with zenith angle $55^{\circ}$, sharpens the tension. In our work, the sterile neutrino origin is reexamined, and we mainly have two treatments different from Ref. \cite{Cherry:2018rxj}, which are addressed as follows: \begin{itemize} \item The neutrinos will lose coherence when strongly interacting with the ambient matter. After the sterile neutrino mass-eigenstate flux enters the Earth, the matter will frequently measure the neutrino states, such that the surviving flux collapses into the sterile state $\nu_{\rm s}$. \item Because a positive detection is made only when the payload of ANITA is covered by the induced impulse cones with angle around $1^{\circ}$ \cite{Gorham:2018ydl}, only a very small fraction of the plane flux from each direction can be detected. Thus the effective area $A_{\rm eff}{(\Omega)}$ should be much smaller than the expectation of Ref. \cite{Cherry:2018rxj}. \end{itemize} This work is organized as follows. In section 2, we investigate the evolution of the sterile neutrinos propagating in matter with the decoherence effect included, and then the angular dependence is studied. In section 3, we give our predictions of the ANITA events for different sterile neutrino parameters based on several assumptions and approximations of the experimental setup. In section 4, we conclude. \section{Propagation of Sterile Neutrinos} Regardless of the EeV sterile neutrino sources, the sterile neutrinos will lose their coherence after travelling a long galactic distance.
They will form the mass eigenstate fluxes $\nu_4$ and $\nu_1$ with fractions of $\cos^2{\theta}$ and $\sin^2{\theta}$, respectively, where $\theta$ is the active-sterile mixing angle, $\nu_1$ collectively represents the three active neutrino mass eigenstates, and $\nu_4$ can carry a mass at the keV scale. When the $\nu_4$ flux propagates into the Earth, the Earth matter will frequently interact with, or equivalently measure, the neutrino's flavor. The $\nu_4$-state is a superposition of the active and sterile components, i.e. $\nu_4 = \sin{\theta}~\nu_{\rm a}+\cos{\theta}~\nu_{\rm s}$, and only the active component $\nu_{\rm a}$ \footnote{Since the overlaps of $\nu_{e}$-$\nu_{4}$ and $\nu_{\mu}$-$\nu_{4}$ are strongly constrained, the remaining available active component would be $\nu_{\tau}$; actually, $\nu_{\tau}$ is what we are interested in. See \cite{Blennow:2018hto} and the references therein for recent constraints on the sterile neutrino mixing.} is able to collide with the ambient matter through the CC or neutral-current (NC) interactions. The matter will serve as a quantum discriminator which can resolve the mixing, making the $\nu_4$-state collapse into either the $\nu_{\rm a}$-state or the $\nu_{\rm s}$-state; we refer the reader to \cite{Harris:1980zi,Stodolsky:1986dx,Raffelt:1992uj} for more details. To properly take the decoherence effect into account, we adopt the following evolution equation \begin{eqnarray} \label{eq:Evolution} i\frac{\mathrm{d}}{\mathrm{d} t} \left(\begin{matrix} c_{\tau} \\ c_{\rm s} \end{matrix}\right) = \frac{1}{2E_{\nu}}\left[ U\left(\begin{matrix} m^2_1 & 0 \\ 0 & m^2_4 \end{matrix}\right) U^{\dagger} + \left(\begin{matrix} A_{\rm NC} & 0 \\ 0 & 0 \end{matrix}\right) - i \left(\begin{matrix} E_{\nu}/L_{\rm atten} & 0 \\ 0 & 0 \end{matrix}\right)\right]\left(\begin{matrix} c_{\tau} \\ c_{\rm s} \end{matrix}\right) \end{eqnarray} for $\nu_4(t) = c_{\tau} \nu_{\tau} + c_{\rm s} \nu_{\rm s}$, where $U$ is the $2\times 2$ active-sterile mixing matrix with the mixing angle $\theta$, $m_1$ is the averaged active neutrino mass, which is negligible compared with $m_4$ at the keV scale, $A_{\rm NC} = -G_{\rm F} (1- Y_e) n_{\rm N} / \sqrt{2}$ is due to the NC matter effect with $G_{\rm F}$ being the Fermi constant, $Y_e$ the fraction of electrons and $n_{\rm N}$ the nucleon number density of the matter, and $L_{\rm atten}$ is the local attenuation length of the neutrino. $L_{\rm atten}$ depends on the nucleon density and the neutrino energy through $L_{\rm atten} = [\sigma(E_{\nu}) n_{\rm N}]^{-1}$. The number density profile of the Earth can be found in the PREM model \cite{Dziewonski:1981xy}. The NC and CC cross sections are taken from \cite{Gandhi:1995tf}, and we note that both the CC and NC interactions contribute to the attenuation effect. We have neglected the regeneration effects for simplicity. The $\tau$-lepton produced by the CC interaction can decay back to $\nu_{\tau}$. For the NC interaction, the produced neutrino carries on average $80\%$ of the initial energy, but is not removed from the flux. Thus our simulation will be more conservative than the realistic case. The initial conditions for the evolution read $c_{\tau} (0) = \sin{\theta}$, $c_{\rm s} (0) = \cos{\theta}$ before the $\nu_4$ flux enters the Earth. Before turning to the numerical demonstration of the evolution, we first make some analytical observations.
\begin{figure}[t] \centering \includegraphics[scale=0.75]{fig1.pdf} \caption{The evolution of the EeV neutrino survival probability with respect to the travelling distance, corresponding to the ANITA event 15717147. The active-sterile mixing angle is chosen as $\theta=0.1$, and the mass of $\nu_4$ could be $2~{\rm keV}$, $0.5~{\rm keV}$, or $0.2~{\rm keV}$. The dashed curve shows the evolution of the standard $\nu_{\tau}$ flux. The solid blue curve stands for the survival probability of the sterile component, while the solid red curves stand for that of the $\nu_{\tau}$ component in the context of active-sterile mixing. } \label{fig:Evolution} \end{figure} If we ignore the oscillation terms, i.e. the first two terms on the right-hand side of Eq.~({\ref{eq:Evolution}}), the evolution is trivial. The active and sterile components will evolve independently, such that the active component is quickly absorbed by the Earth, leaving only the unobservable sterile component, and there will be no signal in the detector, as in the standard case. However, the sterile and active neutrinos are actually mixed, and can oscillate from one to the other if the propagation length covers the oscillation length of $L_{\rm osc} \equiv 4\pi E_{\nu} / \Delta m^2_{41} \approx 2476~{\rm km}~[E_{\nu}/ (1~{\rm EeV})]~[(1~{\rm keV})^2/\Delta m^2_{41}]$. For the ANITA events 3985267 and 15717147, with emerging zenith angles of $63^{\circ}$ and $55^{\circ}$, the corresponding chord lengths are $5785~{\rm km}$ and $7309~{\rm km}$ respectively, assuming a spherical Earth. Therefore, it is quite evident that the $\nu_4$ mass should be around the ${\rm keV}$ scale or even larger to convert the $\nu_{\rm s}$ flux into the $\nu_{\tau}$ flux when traversing the Earth. In such a way, the $\nu_{\tau}$ flux can be regenerated and survive the attenuation of the Earth. For the ${\rm keV}$ sterile neutrinos, the Mikheyev-Smirnov-Wolfenstein (MSW) resonance condition inside the Earth can be fulfilled. However, the associated total flavor conversion at the resonance point, familiar from the discussion of the solar neutrino problem, is not available here. The matter term in the Earth mantle for the EeV neutrino reads $A_{\rm NC} \approx 0.1~{\rm keV}^2$. To satisfy the resonance condition $A_{\rm NC}= \Delta m_{41}^2 \cos{\theta}$ for the antineutrinos with mixing angle $\theta$, $m_4$ should be around $0.3~{\rm keV}~[\sqrt{\cos\theta}]^{-1}$. The effective mass-squared difference when the resonance is achieved reads $\Delta\tilde{m}^2_{41}= \Delta m^2_{41} \sin{2\theta} \approx 0.2\sin{\theta}~{\rm keV}^{2}$, which corresponds to an oscillation length of $[12370/ \sin{\theta}]~{\rm km}\sim R/\sin{\theta}$, with $R = 12742~{\rm km} $ being the diameter of the Earth. Note that the density of the Earth changes rather rapidly, by about $20\%$ across the $2000~{\rm km}$ of the mantle. It is therefore very unlikely for those neutrinos to stay near the resonance while developing the oscillation phase. Our numerical calculation has included the matter effect without making any approximations. In Fig.~1, we show the evolution of the EeV neutrino fluxes with respect to the travelling distance in the Earth. For the zenith angle of $\theta_z = 55^{\circ}$, the corresponding chord length through the Earth matter is around $7309~{\rm km}$. The dashed curve demonstrates the attenuation effect for the standard active neutrinos. Ref.~\cite{Gorham:2016zah} has given the Earth attenuation factor as $4\times 10^{-6}$ for the event 3985267. 
Our numerical results for the attenuation factor are $1.2\times 10^{-9}$ for the event 3985267, and $1.4 \times 10^{-13}$ for the event 15717147, with both CC and NC interactions taken into account \footnote{As has been mentioned before, Ref. \cite{Gorham:2016zah} only considered the CC interaction in its estimation of the attenuation length. We have included the NC interaction to be conservative. Note that the actual attenuation factor should be larger once the regeneration effect is included.}. The solid blue curve represents the survival probability of the sterile neutrino component $|c_{\rm s}|^2$, which stays close to one during the propagation. The solid red curves show the evolution of the $\nu_{\tau}$ component in $\nu_4(t)$ for the masses of $2~{\rm keV}$, $0.5~{\rm keV}$, and $0.2~{\rm keV}$, respectively. As expected, the survival probability of $\nu_{\tau}$ drops with decreasing $\nu_4$ mass because of the increasing oscillation length. For $m_{\rm 4}=2~{\rm keV}$, the survival probability of $\nu_{\tau}$ fluctuates around 0.01 due to the continuous regeneration from the $\nu_{\rm s}$ flux, as if the Earth were transparent. Both ANITA events have energies around $0.6~{\rm EeV}$, i.e., $E_{3985267}=0.6\pm 0.4~{\rm EeV}$ and $E_{15717147}=0.56^{+0.3}_{-0.2}~{\rm EeV}$. \begin{figure}[t] \centering \includegraphics[scale=0.75]{fig2.pdf} \caption{The $\tau$-lepton transforming efficiency $\epsilon$ with respect to the emergence zenith angle. The sterile neutrino mass is fixed as $m_4=2~{\rm keV}$. The red, purple and blue curves stand for the cases with mixing angles of $\theta=0.1$, $\theta=0.2$ and $\theta=0.3$, respectively. The dashed curve in the bottom-right corner is the efficiency of the pure active neutrino flux $\nu_{\tau}$. The two vertical lines correspond to the emergence angles of the two ANITA events $15717147$ and $3985267$. Note that $\theta_z \gtrsim 80^{\circ}$ is nearly above the ANITA horizon. The knee around $\theta_z = 30^{\circ}$ is due to the large density jump between the Earth outer core and mantle. The energy loss of $\tau$-leptons is simulated with the ASW model \cite{Armesto:2004ud,Alvarez-Muniz:2017mpk}.} \label{fig:ea} \end{figure} To simplify our calculation, we assume that the initial $\nu_4$ flux is almost monochromatic, with energy around $1~{\rm EeV}$. After these neutrinos enter the Earth, they can propagate almost freely to the other side, as in the $m_{\rm 4}=2~{\rm keV}$ case of Fig. 1. With a $\nu_{\tau}$ residue in the interaction region, extending to a depth of around tens of kilometers below the ANITA balloon, the flux is eventually transformed into an observable $\tau$-lepton flux by the CC interaction. We define the efficiency for the initial $\nu_4$ particles to be transformed into $\tau$-leptons as \begin{eqnarray} \label{eq:efficiency} \epsilon(\Omega) = \frac{ \mathrm{d} \Phi_{\tau}(E_{\rm min}, E_{\rm max})/ \mathrm{d} \Omega}{\mathrm{d} \Phi_{\nu_{4}}(E_{0}) / \mathrm{d} \Omega}, \end{eqnarray} where $E_0=1~{\rm EeV}$ is the initial neutrino energy, $\Phi_{\nu_4}$ stands for the isotropic flux of neutrinos, and $\Phi_{\tau}(E_{\rm min}, E_{\rm max})$ is the produced $\tau$-lepton flux in the energy range of $[E_{\rm min}, E_{\rm max}]$ when the $\tau$-leptons arrive at the Antarctic surface. We set $E_{\rm min} = 0.2~{\rm EeV}$ and $E_{\rm max} = 1~{\rm EeV}$ in our calculation, so that the two observed ANITA events are covered by $[E_{\rm min}, E_{\rm max}]$. 
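Before turning to the angular dependence, a back-of-the-envelope estimate already reproduces the efficiency scale found below: $\epsilon$ is roughly the quasi-steady $\nu_{\tau}$ fraction ($\sim\sin^2\theta$, cf. Fig.~1) times the thin-layer CC conversion probability in the final tens of kilometers. The cross section, target density and layer depth in the sketch are assumed, illustrative numbers.

\begin{verbatim}
# Order-of-magnitude sketch of the efficiency epsilon for m_4 = 2 keV,
# theta = 0.1. Assumed inputs: EeV-scale CC cross section, ice-like
# target, ~10 km interaction layer. Not a substitute for the full
# simulation (tau energy loss and decay geometry are ignored).
import numpy as np

theta     = 0.1
P_tau     = np.sin(theta) ** 2     # quasi-steady nu_tau fraction (~0.01)
sigma_cc  = 1.0e-32                # assumed CC cross section at ~1 EeV, cm^2
n_nucleon = 5.5e23                 # nucleons per cm^3 in ice (~0.92 g/cm^3)
depth     = 10.0e5                 # ~10 km interaction layer, in cm

P_cc = depth * sigma_cc * n_nucleon        # thin-layer CC probability
print(f"epsilon ~ {P_tau * P_cc:.1e}")     # ~5e-5, i.e. the 1e-4 scale
\end{verbatim}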
The transforming efficiency $\epsilon(\Omega)$ measures the fraction of neutrinos converted into observable $\tau$-leptons on their way to ANITA; note that $\epsilon(\Omega)$ is direction-dependent. In Fig. 2, we show the angular dependence of $\epsilon(\Omega)$ for $m_{\rm 4} = 2~{\rm keV}$. The dashed curve in the bottom-right corner shows the efficiency in the standard scenario: the $\tau$-leptons induced by the standard isotropic active neutrino flux are concentrated at large zenith angles, so Earth-skimming showers should dominate the events, as expected. The red curve, with an active-sterile mixing angle of $\theta=0.1$, is almost uniform over the entire zenith angle range. The efficiency is around $10^{-4}$, which means that, no matter which direction the neutrino flux comes from, roughly one observable $\tau$-lepton emerges for every $10^4$ $\nu_4$ neutrinos entering the Earth. The Earth-skimming events do not have much advantage over the steep events in this case. However, as the active-sterile mixing angle increases, the sterile neutrino becomes less sterile owing to the larger mixing with the active neutrino, and the efficiency tends to converge to the standard case. \section{ANITA Events Estimation} Due to the small Cherenkov ring angle of the EeV $\tau$-decay shower, only a very small fraction of the $\tau$-lepton flux obtained in the last section can be captured by the antennas of ANITA. \begin{figure}[t] \centering \includegraphics[scale=0.75]{fig3.pdf} \caption{The event number of ANITA for three months of exposure. For each mixing angle $\theta$, the corresponding IceCube bound on the $\nu_4$ flux is saturated. The black solid curve is the event number for the total sky. The red (blue) dashed curve stands for the shower events emitted from $> 20^{\circ}~(< 20^{\circ})$ below the horizontal. $\theta_z = 84^{\circ}$ corresponds to the zenith angle of the ANITA horizon.} \label{fig:events} \end{figure} The angle of the induced impulse cone should be around $1^{\circ}$ \cite{Gorham:2018ydl}, so we expect the geometric area of ANITA to be about $A_{\rm gm} \approx 2\pi (D \times 1^{\circ})^2$, with $D$ being the distance from the ANITA payload to the initial point of the shower. For event $15717147$ with $\theta_z = 55^{\circ}$, we obtain a geometric area of $\sim 7.5~{\rm km}^2$, larger than the estimate of the ANITA group, $\sim 4~{\rm km}^2$ \cite{Gorham:2018ydl}. A realistic geometric-area estimate requires dedicated Monte Carlo simulation; to proceed, we simply assume a constant geometric area of $4~{\rm km}^2$ for all emergence angles. Simulations show that $\tau$-lepton decay showers from larger zenith angles would have smaller impulse power \cite{Romero-Wolf:2017}, so in the realistic case the steep showers seem more likely to be detected by ANITA than the Earth-skimming showers. The flux of the EeV $\nu_4$ is bounded by the IceCube observation as ${\mathrm{d}\Phi_{\nu_4}}/{\mathrm{d} \Omega} \lesssim 2\times 10^{-15}~[0.1/\sin{\theta}]^2~{\rm cm^{-2}s^{-1}sr^{-1}}$; note that the flux limit is relaxed by the mixing angle suppression. 
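The geometric-area arithmetic above amounts to a few lines; in the sketch below, the payload altitude and the adopted efficiency are assumed, illustrative values.

\begin{verbatim}
# Sketch of the geometric-area estimate A_gm ~ 2*pi*(D x 1 deg)^2 for
# event 15717147, and of the resulting effective area. The payload
# altitude (~35 km) and the efficiency (~1e-4, Sec. 2) are assumptions.
import numpy as np

h_km   = 35.0                          # assumed ANITA payload altitude, km
zenith = np.deg2rad(55.0)              # emergence zenith angle, event 15717147
D      = h_km / np.cos(zenith)         # payload -> shower start distance, km
A_gm   = 2.0 * np.pi * (D * np.deg2rad(1.0)) ** 2
print(f"A_gm ~ {A_gm:.1f} km^2")       # ~7 km^2, cf. ~7.5 km^2 in the text

eps   = 1.0e-4                         # transforming efficiency from Sec. 2
A_eff = eps * 4.0e10                   # adopt the fixed 4 km^2 = 4e10 cm^2
print(f"A_eff ~ {A_eff:.0e} cm^2")     # ~4e6 cm^2, i.e. the ~1e7 scale
\end{verbatim}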
The final event number is obtained as \begin{eqnarray} \label{eq:event} \mathscr{E}_{\rm ANITA} = \int \mathrm{d}\Omega \cdot \frac{\mathrm{d}\Phi_{\nu_4}}{\mathrm{d} \Omega}\times \epsilon(\Omega) \times A^{\rm ANITA}_{\rm gm}(\Omega) \times T_{\rm ANITA}\approx 0.9, \end{eqnarray} where $A^{\rm ANITA}_{\rm gm}(\Omega)$ is simply fixed to $4~{\rm km}^2$ as mentioned before, $T_{\rm ANITA}$ is the three months of exposure of ANITA, $\epsilon(\Omega)$ is obtained with $m_{\rm 4} = 2~{\rm keV}$ and $\theta = 0.1$ as in the last section, and the flux takes the saturated value of the IceCube bound, ${\mathrm{d}\Phi_{\nu_4}}/{\mathrm{d} \Omega} = 2\times 10^{-15}~[0.1/\sin{\theta}]^2~{\rm cm^{-2}s^{-1}sr^{-1}}$. One can identify the effective area as $A_{\rm eff} (\Omega)= \epsilon(\Omega)A^{\rm ANITA}_{\rm gm}(\Omega) \approx 10^7~{\rm cm}^2$, much smaller than the estimate $A_{\rm eff} \approx 10^{11}~{\rm cm}^2$ of \cite{Cherry:2018rxj}. An event number of 0.9 is clearly consistent with the ANITA observation. Let us check the situation for other experiments. IceCube has a geometric area of around $1~{\rm km}^2$ for through-going track events; we can therefore estimate the event number of IceCube for six years of observation as $\mathscr{E}_{\rm IC} = 6$, using $A^{\rm IC}_{\rm gm}(\Omega) \approx 1~{\rm km}^2$ and $T_{\rm IC} \approx 6~{\rm years}$. As has been pointed out in \cite{Anchordoqui:2018ucj,Kistler:2016ask}, IceCube might already have observed one such $\tau$-track event, with energy $\gtrsim 0.1~{\rm EeV}$ and an emergence angle of $11.5^{\circ}$ below the horizon. The deposited energy of this track is $(2.6\pm 0.3)~{\rm PeV}$ \cite{Aartsen:2016xlq}, implying a $\mu$-lepton track with energy $\gtrsim 10~{\rm PeV}$ or a $\tau$-lepton track with energy $\gtrsim 0.1~{\rm EeV}$. Whether or not this event is an $\rm EeV$ $\tau$-lepton track, the IceCube observation is in considerable tension with ANITA under the sterile hypothesis. The non-observation of EeV neutrino events at AUGER should not be a problem, since its viewing angle covers only a few degrees below the horizon for the Earth-skimming events \cite{Aab:2015kma}, while the produced $\tau$-lepton flux can be almost uniformly distributed, as in Fig. 2. In Fig. 3, we plot the ANITA event number for three months of exposure as a function of the active-sterile mixing angle. The mass of $\nu_4$ is chosen to be $2~{\rm keV}$, and the results do not differ much for $m_4 > 2~{\rm keV}$. The solid black curve is the total event number. The red dashed curve shows the events within the zenith angle range of $[0^{\circ},70^{\circ}]$, which is larger than the blue one, i.e., the events within the zenith angle range of $[70^{\circ},84^{\circ}]$, for $\theta \lesssim 0.1$. Some comments on the figure are in order: \begin{itemize} \item Note that the event number curves in Fig. 3 can be extrapolated towards very small mixing angles, but the associated sterile neutrino flux is then required to be stronger. The flux should stay around the IceCube upper limit ${\mathrm{d}\Phi_{\nu_4}}/{\mathrm{d} \Omega} \approx 2\times 10^{-15}~[0.1/\sin{\theta}]^2~{\rm cm^{-2}s^{-1}sr^{-1}}$. \item The observations of ANITA and IceCube cannot be fitted well at the same time by tuning the flux and the parameters of the sterile neutrino. 
Independently of the properties of the sterile neutrino, the $\tau$-lepton decay events of ANITA and the through-going $\tau$-lepton track events of IceCube always obey the following relation: \begin{eqnarray} \label{eq:AI} \frac{\mathscr{E}_{\rm IC}}{\mathscr{E}_{\rm ANITA} } \approx \frac{A^{\rm IC}_{\rm gm}\times T_{\rm IC}}{A^{\rm ANITA}_{\rm gm} \times T_{\rm ANITA}} \approx 6. \end{eqnarray} The tension implied by ANITA observing two events while IceCube observes one or no events can reach the $3{\rm \sigma}$ level. Note that this estimate can also be applied to the scenario in \cite{Anchordoqui:2018ucj}, where quasi-stable dark matter decaying inside the Earth is the origin of the ANITA anomalous events. \end{itemize} A more reliable result, relying on dedicated Monte Carlo simulations of the ANITA experiment, is beyond the scope of the present work. \section{Conclusion} In this work, we have reinvestigated the possibility of a sterile neutrino origin for the ANITA anomalous events. We find that the quantum decoherence effect is very important in accounting for the propagation behavior of the EeV neutrino flux. The $\nu_{\tau}$ flux can be regenerated through oscillation of the $\nu_{\rm s}$-state during propagation inside the Earth. For sterile neutrinos with $m_{4} \gtrsim 1~{\rm keV}$, the Earth is almost transparent to the $\nu_{\tau}$ component in the $\nu_4$ flux. In this way, the neutrinos can reach the interaction volume below the ANITA payload almost losslessly, even at very steep angles. We have estimated the ANITA and IceCube event numbers, and find that on average the ANITA experiment would observe one event during its three months of exposure, while IceCube should detect six events in its six years of data taking. To resolve the ANITA anomaly itself, the flux of $\nu_4$ should be around the IceCube upper limit ${\mathrm{d}\Phi_{\nu_4}}/{\mathrm{d} \Omega} \approx 2\times 10^{-15}~[0.1/\sin{\theta}]^2~{\rm cm^{-2}s^{-1}sr^{-1}}$. We have scanned the whole sterile neutrino parameter space, and find that the requirements on the sterile neutrino parameters are $m_{4} \gtrsim 1~{\rm keV}$ and $\theta \lesssim 0.1$. If dark matter decays are the source of the sterile neutrinos, the mixing angle should not be too small, because it might be too difficult for the dark matter decays to produce a neutrino flux strong enough to saturate the IceCube bound. In our framework, the predicted EeV $\tau$-lepton track event number of IceCube is always, on average, six times the $\tau$-lepton decay shower event number of ANITA, i.e., ${\mathscr{E}_{\rm IC}}/{\mathscr{E}_{\rm ANITA} } \approx 6$. This result is independent of the sterile neutrino parameters, as well as of whether the regeneration effect is included. Even though there is an ${\cal O}(0.1~{\rm EeV})$ track candidate in the IceCube observation, these two experiments together are in strong tension with the sterile neutrino explanation. However, dedicated ANITA simulations are needed to draw a more solid conclusion. \section*{Acknowledgements} I am indebted to S. Zhou for suggesting this work and for many valuable discussions and suggestions. I am also grateful to N. Nath and Q.R. Liu for useful discussions. This work is supported by the National Natural Science Foundation of China under grant No. 11775232.
{ "timestamp": "2018-04-17T02:10:38", "yymm": "1804", "arxiv_id": "1804.05362", "language": "en", "url": "https://arxiv.org/abs/1804.05362" }
\section{Introduction} To define a summoning task \cite{kent2013no,kent2012quantum}, we consider two parties, Alice and Bob, who each have networks of collaborating agents occupying non-overlapping secure sites throughout space-time. At some point $P$, Bob's local agent gives Alice's local agent a state $\ket{\psi}$. The physical form of $\ket{\psi}$ and the dimension of its Hilbert space $H$ are pre-agreed; Bob knows a classical description of $\ket{\psi}$, but from Alice's perspective it is a random state drawn from the uniform distribution on $H$. At further pre-agreed points (which are often taken to all be in the causal future of $P$, though this is not necessary), Bob's agents send classical communications in pre-agreed form, satisfying pre-agreed constraints, to Alice's local agents, which collectively determine a set of one or more valid return points. Alice may manipulate and propagate the state as she wishes, but must return it to Bob at one of the valid return points. We say a given summoning task is {\it possible} if there is some algorithm that allows Alice to ensure that the state is returned to a valid return point for any valid set of communications received from Bob. The ``no-summoning theorem'' \cite{kent2013no} states that summoning tasks in Minkowski space are not always possible. We write $Q \succ P$ if the space-time point $Q$ is in the causal future of the point $P$, and $Q \nsucc P$ otherwise; we write $Q \succeq P$ if either $Q \succ P$ or $Q=P$, and $Q \nsucceq P$ otherwise. Now, for example, consider a task in which Bob may request at one of two ``call'' points $c_i \succ P$ that the state be returned at a corresponding return point $r_i \succ c_i$, where $r_2 \nsucceq c_1$ and $r_1 \nsucceq c_2$. An algorithm that guarantees that Alice will return the state at $r_1$ if it is called at $c_1$ must work independently of whether a call is also made at $c_2$, since no information can propagate from $c_2$ to $r_1$; similarly with $1$ and $2$ exchanged. If calls were made at both $c_1$ and $c_2$, such an algorithm would thus generate two copies of $\ket{\psi}$ at the spacelike separated points $r_1$ and $r_2$, violating the no-cloning theorem. This distinguishes relativistic quantum theory from both relativistic classical mechanics and non-relativistic quantum mechanics, in which summoning tasks are always possible provided that any valid return point is in the (causal) future of the start point $P$. Further evidence for seeing summoning tasks as characterising fundamental features of relativistic quantum theory was given by Hayden and May \cite{hayden2016summoning}, who considered tasks in which a request is made at precisely one of a pre-agreed set of call points $\{c_1 , \ldots , c_n \}$; a request at $c_i$ requires the state to be produced at the corresponding return point $r_i \succ c_i$. They showed that, if the start point $P$ is in the causal past of all the call points, then the task is possible if and only if no two causal diamonds $D_i = \{ x : r_i \succeq x \succeq c_i \}$ are spacelike separated. That is, the task is possible unless the no-cloning and no-superluminal-signalling principles directly imply its impossibility. Wu et al. have presented a more efficient code for this task \cite{wu2018efficient}. Another natural type of summoning task allows any number of calls to be made at call points, requiring that the state be produced at any one of the corresponding return points. 
Perhaps counter-intuitively, this can be shown to be a strictly harder version of the task \cite{adlam2016quantum}. It is possible if and only if the causal diamonds can be ordered in sequence so that the return point of any diamond in the sequence is in the causal future of all call points of earlier diamonds in the sequence. Again, the necessity of this condition follows (with a few extra steps) from the no-superluminal-signalling and no-cloning theorems \cite{adlam2016quantum}. The constraints on summoning have cryptographic applications, since they can effectively force Alice to make choices before revealing them to Bob. Perhaps the simplest and most striking of these is a novel type of unconditionally secure relativistic quantum bit commitment protocol, in which Alice sends the unknown state at light speed in one of two directions, depending on her committed bit \cite{kent2011unconditionally}. The fidelity bounds on approximate quantum cloning imply \cite{kent2011unconditionally} the sum-binding security condition \begin{equation} p_0 + p_1 \leq 1 + {{2} \over {d+1}} \, , \end{equation} where $d = \dim (H)$ is the dimension of the Hilbert space of the unknown state and $p_b$ is the probability of Alice successfully unveiling bit value $b$. Summoning is also a natural primitive in distributed quantum computation, in which algorithms may effectively summon a quantum state produced by a subroutine to some computation node that depends on other computed or incoming data. From a fundamental perspective, the (im)possibility of various summoning tasks may be seen either as results about relativistic quantum theory or as candidate axioms for a reformulation of that theory. They also give a way of exploring and characterising the space of theories generalising relativistic quantum theory. From a cryptographic perspective, we would like to understand precisely which assumptions are necessary for the security of summoning-based protocols. These motivations are particularly strong given the relationship between no-summoning theorems and no-signalling, since we know that quantum key distribution and other protocols can be proven secure based on no-signalling principles alone. In what follows, we characterise that relationship more precisely, and discuss in particular the sense in which summoning-based bit commitment protocols are secure against potentially post-quantum but non-signalling participants. These are participants who may have access to technology that relies on some unknown theory beyond quantum theory. They may thus be able to carry out operations that quantum theory suggests are impossible. However, their technology must not allow them to violate a no-signalling principle. Exactly what this implies depends on which no-signalling principle is invoked. We turn next to discussing the relevant possibilities. \section{No-signalling principles and no-cloning} \subsection{No-signalling principles} The relativistic no-superluminal-signalling principle states that no classical or quantum information can be transmitted faster than light speed. We can frame this operationally by considering a general physical system that includes agents at locations $P_1 , \ldots , P_n$. Suppose that the agent at each $P_i$ may freely choose inputs labelled by $A_i$ and receive outputs $a_i$, which may depend probabilistically on their own inputs and on those of the other agents. 
Let $I = \{ i_1 , \ldots , i_b \}$ and $J = \{ j_1 , \ldots , j_c \}$ be sets of labels of points such that $ P_{i_k} \nsucceq P_{j_l}$ for all $k \in \{ 1 , \ldots , b \}$ and $l \in \{ 1, \ldots , c \}$. Then we have \begin{eqnarray} \lefteqn{P( a_{i_1} \ldots a_{i_b} | A_{i_1} \ldots A_{i_b} ) =} \\ && P ( a_{i_1} \ldots a_{i_b} | A_{i_1} \ldots A_{i_b} A_{j_1} \ldots A_{j_c} ) \, . \nonumber \end{eqnarray} In other words, outputs are independent of spacelike or future inputs. The quantum no-signalling principle for an $n$-partite system composed of non-interacting subsystems states that measurement outcomes on any subset of subsystems are independent of measurement choices on the others. If we label the measurement choices on subsystem $i$ by $A_i$, and the outcomes for this choice by $a_i$, then we have \begin{equation}\label{nosignalling} P( a_{i_1} \ldots a_{i_m} | A_{i_1} \ldots A_{i_m} ) = P( a_{i_1} \ldots a_{i_m} | A_{1} \ldots A_{n} ) \, . \end{equation} That is, so long as the subsystems are non-interacting, the outputs for any subset are independent of the inputs for the complementary subset, regardless of their respective locations in space-time. The no-signalling principle for a generalized non-signalling theory extends this to any notional device with localized pairs of inputs (generalizing measurement choices) and outputs (generalizing outcomes). As in the quantum case, this is supposed to hold true regardless of whether the sites of the localized input/output ports are spacelike separated. Generalized non-signalling theories may include, for example, the hypothetical bipartite Popescu-Rohrlich boxes \cite{popescu1994quantum}, which maximally violate the CHSH inequality while still precluding signalling between agents at each site. \subsection{The no-cloning theorem} The standard derivation of the no-cloning theorem \cite{wootters1982single,dieks1982communication} assumes a hypothetical quantum cloning device. A quantum cloning device $D$ should take two input states, a general quantum state $\ket{\psi}$ and a reference state $\ket{0}$, independent of $\ket{\psi}$. Since $D$ follows the laws of quantum theory, it must act linearly. Now we have \begin{equation} D \ket{\psi} \ket{0} = \ket{\psi} \ket{\psi} \, , \qquad D \ket{\psi'} \ket{0} = \ket{\psi'} \ket{\psi'} \, , \end{equation} for a faithful cloning device, for any states $\ket{\psi}$ and $\ket{\psi'}$. Suppose that $\braket{\psi'}{\psi} = 0$ and that $\ket{\phi} = a \ket{\psi} + b \ket{\psi'} $ is normalised, with $a, b \neq 0$. We also have \begin{equation} D \ket{\phi} \ket{0} = \ket{\phi} \ket{\phi} \, , \end{equation} which contradicts linearity. To derive the no-cloning theorem without appealing to linearity, we need to consider quantum theory as embedded within a more general theory that does not necessarily respect linearity. We can then consistently consider a hypothetical post-quantum cloning device $D$ which accepts quantum states $\ket{\psi}$ and $\ket{0}$ as inputs, and produces two copies of $\ket{\psi}$ as outputs: \begin{equation} D \ket{\psi} \ket{0} = \ket{\psi} \ket{\psi} \, . \end{equation} We will suppose that the cloning device functions in this way independently of the history of the input state. We will also suppose that it does not violate any other standard physical principles: in particular, if it is applied at $Q$ then it does not act retrocausally to influence the outcomes of measurements at earlier points $P \prec Q$. 
We can now extend the cloning device to a bipartite device comprising a maximally entangled quantum state, with a standard quantum measurement device at one end, and the cloning device followed by a standard quantum measurement device at the other end. This extended device accepts classical inputs (measurement choices) and produces classical outputs (measurement outcomes) at both ends. If we now further assume that the joint output probabilities for this extended device, for any set of inputs, are independent of the locations of its components, then we can derive a contradiction with the relativistic no-superluminal-signalling principle. First suppose that the two ends are timelike separated, with the cloning device end at point $Q$ and the other end at point $P \prec Q$. A complete projective measurement at $P$ then produces a pure state at $Q$ in any standard version of quantum theory. The cloning device then clones this pure state. Different measurement choices at $P$ produce different ensembles of pure states at $Q$. These ensembles correspond to the same mixed state before cloning, but to distinguishable mixtures after cloning. The measurement device at $Q$ can distinguish these mixtures. Now if we take the first end to be at a point $P'$ spacelike separated from $Q$, by hypothesis the output probabilities remain unchanged. This allows measurement choices at $P'$ to be distinguished by measurements at $Q$, and so gives superluminal signalling \cite{gisin1998quantum}. It is important to note that the assumption of location-independence is not logically necessary, nor does it follow from the relativistic no-superluminal-signalling principle alone. Assuming that quantum states collapse in some well-defined and localized way as a result of measurements, one can consistently extend relativistic quantum theory to include hypothetical devices that read out a classical description of the local reduced density matrix at any given point, i.e. the local quantum state that is obtained by taking into account (only) collapses within the past light cone \cite{kent2005nonlinearity}. This means that measurement events at $P$, which we take to induce collapses, are taken into account by the readout device at $Q$ if and only if $P \prec Q$. Given such a readout device, one can certainly clone pure quantum states. The device behaves differently, when applied to a subsystem of an entangled system, depending on whether the second subsystem is measured inside or outside the past light cone of the point at which the device is applied. It thus does not satisfy the assumptions of the previous paragraph. The discussion above also shows that quantum theory augmented by cloning or readout devices is not a generalized non-signalling theory. For consider again a maximally entangled bipartite quantum system with one subsystem at space-time point $P$ and the other at a spacelike separated point $P'$. Suppose that the Hamiltonian is zero, and that the subsystem at $P'$ will propagate undisturbed to point $Q \succ P$. Suppose that a measurement device may carry out any complete projective measurement at $P$, and that at $Q$ there is a cloning device followed by another measurement device on the joint (original and cloned) system. As above, different measurement choices at $P$ produce different ensembles of pure states at $Q$, which correspond to the same mixed state before cloning, but to distinguishable mixtures after cloning. The measurement device at $Q$ can distinguish these mixtures. 
The output (measurement outcome) probabilities at $Q$ thus depend on the inputs (measurement choices) at $P$, contradicting Eqn. (\ref{nosignalling}). Assuming that nature is described by a generalized non-signalling theory thus gives another reason for excluding cloning or readout devices, without assuming that their behaviour is location-independent. In summary, neither the no-cloning theorem nor cryptographic security proofs based on it can be derived purely from consistency with special relativity. They require further assumptions about the behaviour of post-quantum devices available to participants or adversaries. Although this was noted when cryptography based on the no-signalling principle was first introduced \cite{barrett2005no}, it perhaps deserves re-emphasis. On the positive side, given these further assumptions, one can prove not only the no-cloning theorem, but also quantitative bounds on the optimal fidelities attainable by approximate cloning devices for qubits \cite{gisin1998quantum} and qudits \cite{navez2003cloning}. In particular, one can show \cite{navez2003cloning} that any approximate universal cloning device that produces output states $\rho_0$ and $\rho_1$ given a pure input qudit state $\ket{\psi}$ satisfies the fidelity sum bound \begin{equation}\label{acf} \braopket{\psi}{\rho_0}{\psi} + \braopket{\psi}{\rho_1}{\psi} \leq 1 + {{2} \over {d+1}} \, . \end{equation} It is worth stressing that (with the given assumptions) this bound applies for any approximate cloning strategy, with any entangled states allowed as input. \section{Summoning-based bit commitments and no-signalling} We recall now the essential idea of the flying qudit bit commitment protocol presented in Ref. \cite{kent2011unconditionally}, in its idealized form. We suppose that space-time is Minkowski and that both parties, the committer (Alice) and the recipient (Bob), have arbitrarily efficient technology, limited only by physical principles. In particular, we assume they both can carry out error-free quantum operations instantaneously and can send classical and quantum information at light speed without errors. They agree in advance on some space-time point $P$, to which they have independent secure access, where the commitment will commence. We suppose too that Bob can keep a state secure from Alice somewhere in the past of $P$ and arrange to transfer it to her at $P$. Alice's operations on the state can then be kept secure from Bob unless and until she chooses to return information to Bob at some point(s) in the future of $P$. We also suppose that Alice can send any relevant states at light speed in prescribed directions along secure quantum channels, either by ordinary physical transmission or by teleportation. They also agree on a fixed inertial reference frame, and two opposite spatial directions within that frame. For simplicity we neglect the $y$ and $z$ coordinates and take the speed of light $c=1$. Let $P = (0,0)$ be the origin in the coordinates $(x,t)$ and the opposite two spatial directions be defined by the vectors $v_0 = (-1 , 0 )$ and $v_1 = (1,0 )$. Before the commitment begins, Bob generates a random pure qudit $\ket{\psi} \in {\cal C}^d$. This is chosen from the uniform distribution, and encoded in some pre-agreed physical system. Again idealizing, we assume the dimensions of this system are negligible, and treat it as pointlike. Bob keeps his qudit secure until the point $P$, where he gives it to Alice. 
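To fix this geometry concretely before describing the commitment phase, the following sketch (with illustrative sample points) encodes the causal order of points in $1+1$-dimensional Minkowski space, checks that unveiling points on the two light rays are spacelike separated, so that the approximate-cloning bound applies, and tabulates the sum-binding bound $1 + 2/(d+1)$, which tightens as the qudit dimension grows.

\begin{verbatim}
# Sketch of the flying-qudit geometry (coordinates (x, t), c = 1) and of
# the sum-binding bound. The sample unveiling points are illustrative.

def succeq(p, q):
    """True iff q is in the causal future of p (q >= p), 1+1 Minkowski."""
    return q[1] - p[1] >= abs(q[0] - p[0])

def spacelike(p, q):
    return not succeq(p, q) and not succeq(q, p)

P  = (0.0, 0.0)           # commitment start point
t  = 1.0                  # an illustrative unveiling time
Q0 = (-t, t)              # unveiling point for bit 0, on L_0
Q1 = ( t, t)              # unveiling point for bit 1, on L_1

assert succeq(P, Q0) and succeq(P, Q1)   # both reachable at light speed
assert spacelike(Q0, Q1)                 # no signalling between Q0 and Q1

# Sum-binding bound p_0 + p_1 <= 1 + 2/(d+1): tighter for larger qudits.
for d in (2, 4, 16, 256):
    print(f"d = {d:4d}:  p_0 + p_1 <= {1 + 2 / (d + 1):.4f}")
\end{verbatim}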
To commit to the bit $i \in \{0,1 \}$, Alice sends the state $\ket{\psi}$ along a secure channel at light speed in the direction $v_i$. That is, to commit to $0$, she sends the qudit along the line $L_0 = \{ (-t, t) , t > 0 \}$; to commit to $1$, she sends it along the line $L_1 = \{ (t, t) , t > 0 \}$. For simplicity, we suppose here that Alice directly transmits the state along a secure channel. This allows Alice the possibility of unveiling her commitment at any point along the transmitted light ray. To unveil the committed bit $0$, Alice returns $\ket{\psi}$ to Bob at some point $Q_0$ on $L_0$; to unveil the committed bit $1$, Alice returns $\ket{\psi}$ to Bob at some point $Q_1$ on $L_1$. Bob then tests that the returned qudit is $\ket{\psi}$ by carrying out the projective measurement defined by $P_{\psi} = \ket{\psi} \bra{\psi}$ and its complement $(I - P_{\psi})$. If he gets the outcome corresponding to $P_{\psi}$, he accepts the commitment as honestly unveiled; if not, he has detected Alice cheating. Now, given any strategy of Alice's at $P$, there is an optimal state $\rho_0$ she can return to Bob at $Q_0$ to maximize the chance of passing his test there, i.e. to maximize the fidelity $\braopket{\psi}{\rho_0}{\psi}$. There is similarly an optimal state $\rho_1$ that she can return at $Q_1$, maximizing $\braopket{\psi}{\rho_1}{\psi}$. The relativistic no-superluminal-signalling principle implies that her ability to return $\rho_0$ at $Q_0$ cannot depend on whether she chooses to return $\rho_1$ at $Q_1$, or vice versa. Hence she may return both (although this violates the protocol). The bound (\ref{acf}) on the approximate cloning fidelities implies that \begin{equation} \braopket{\psi}{\rho_0}{\psi} + \braopket{\psi}{\rho_1}{\psi} \leq 1 + {{2} \over {d+1} }\, . \end{equation} Since the probability of Alice successfully unveiling the bit value $b$ by this strategy is \begin{equation} p_b = \braopket{\psi}{\rho_b}{\psi} \, , \end{equation} this gives the sum-binding security condition for the bit commitment protocol \begin{equation} p_0 + p_1 \leq 1 + {{2} \over {d+1} }\, . \end{equation} Recall that the bound (\ref{acf}) follows from the relativistic no-superluminal-signalling condition together with the location-independence assumption for a device based on a hypothetical post-quantum cloning device applied to one subsystem of a bipartite entangled state. Alternatively, it follows from assuming that any post-quantum devices operate within a generalized non-signalling theory. The bit commitment security thus also follows from either of these assumptions. \subsection{Security against post-quantum no-superluminal-signalling adversaries?} It is a strong assumption that any post-quantum theory should be a generalized non-signalling theory satisfying Eqn. (\ref{nosignalling}). So it is natural to ask whether cryptographic security can be maintained with the weaker assumption that other participants or adversaries are able to carry out quantum operations and may also be equipped with post-quantum devices, but do not have the power to signal superluminally. It is instructive to understand the limitations of this scenario for protocols between mistrustful parties capable of quantum operations, such as the bit commitment protocol just discussed. The relevant participant here is Alice, who begins with a quantum state at $P$ and may send components along the lightlike lines $PQ_0$ and $PQ_1$. 
Without loss of generality we assume these are the only components: she could also send components in other directions, but relativistic no-superluminal-signalling means that they cannot then influence her states at $Q_0$ or $Q_1$. At any points $X_0$ and $X_1$ on the lightlike lines, before Alice has applied any post-quantum devices, the approximate cloning fidelity bound again implies that the fidelities of the respective components $\rho_{X_0}$ and $\rho_{X_1}$ satisfy \begin{equation} \braopket{\psi}{\rho_{X_0}}{\psi} + \braopket{\psi}{\rho_{X_1}}{\psi} \leq 1 + {{2} \over {d+1} }\, . \end{equation} Now, if Alice possesses a classical no-superluminal-signalling device, such as a Popescu-Rohrlich box, with input and output ports at $X_0$ and $X_1$, and her agents at these sites input classical information uncorrelated with their quantum states, she does not alter the fidelities $\braopket{\psi}{\rho_{X_i}}{\psi}$. Any subsequent operation may reduce the fidelities, but cannot increase them. More generally, no operation involving the quantum states and devices with purely classical inputs and outputs can push the fidelity sum above the bound (\ref{acf}). To see this, note that any such operation could be paralleled by local operations within quantum theory if the two states were held at the same point, since hypothetical classical devices with separated pairs of input and output ports are replicable by ordinary probabilistic classical devices when the ports are all at the same site. We need also to consider the possibility that Alice has no-superluminal signalling devices with quantum inputs and outputs. At first sight these may seem unthreatening. For example, while a device that sends the quantum input from $X_0$ to the output at $X_1$ and vice versa would certainly make the protocol insecure -- Alice could freely swap commitments to $0$ and $1$ -- such a device would be signalling. However, suppose that Alice's agents each have local state readout devices, which give Alice's agent at $X_0$ a classical description of the density matrix $\rho_{X_0}$ and Alice's agent at $X_1$ a classical description of the density matrix $\rho_{X_1}$. Suppose also that Alice has carried out an approximate universal cloning at $P$, creating mixed states $\rho_{X_0}$ and $\rho_{X_1}$ of the form \begin{equation} \rho_{X_i} = p_i \ket{\psi} \bra{\psi} + (1 - p_i ) \frac{I}{d} \, , \end{equation} where $0 < p_i < 1 $. This is possible provided that the corresponding fidelities $\braopket{\psi}{\rho_{X_i}}{\psi} = p_i + (1-p_i)/d$ satisfy the bound (\ref{acf}). From these, by applying their readout devices, each agent can infer $\ket{\psi}$ locally. Alice's outputs at $X_i$ have no dependence on the inputs at $X_{\bar{i}}$. Nonetheless, this hypothetical process would violate the security of the commitment to the maximum extent possible, since it would give $p_0 + p_1 =2$. To ensure post-quantum security, our post-quantum theory thus needs assumptions -- like those spelled out earlier -- that directly preclude state readout devices and other violations of no-cloning bounds. \section{Discussion} Classical and quantum relativistic bit commitment protocols have attracted much interest lately, both because of their theoretical interest and because advances in theory \cite{chailloux2017relativistic} and practical implementation \cite{lunghi2013experimental, liu2014experimental, verbanis201624} suggest that relativistic cryptography may be in widespread use in the foreseeable future. 
Much work on these topics is framed in models in which two (or more) provers communicate with one (or more) verifiers, with the provers being unable to communicate with one another during the protocol. Indeed, one-round classical relativistic bit commitment protocols give a natural physical setting in which two (or more) separated provers communicate with adjacent verifiers, with the communications timed so that the provers cannot communicate between the commitment and opening phases. The verifiers are also typically unable to communicate, but this is less significant given the form of the protocols, and the verifiers are sometimes considered as a single entity when the protocol is not explicitly relativistic. Within the prover-verifier model, it has been shown that no single-round two-prover classical bit commitment protocol can be secure against post-quantum provers who are equipped with generalized no-signalling devices \cite{fehr2015multi}. It is interesting to compare this result with the signalling-based security proof for the protocol discussed above. First, of course, the flying qudit protocol involves quantum rather than classical communication between ``provers'' (Alice's agents) and ``verifiers'' (Bob's agents). Second, as presented, the flying qudit protocol involves three agents for each party. However, a similar secure bit commitment protocol can be defined using just two agents apiece. For example, Alice's agent at $P$ could retain the qudit, while remaining stationary in the given frame, to commit to $0$, and send it to Alice's agent at $Q_1$ (as before) to commit to $1$. They may unveil by returning the qudit at, respectively, $(0,t)$ or $(t,t)$. In this variant, the commitment is not secure at the point where the qudit is received, but it becomes secure in the causal future of $(t/2, t/2)$. Third, the original flying qudit protocol illustrates a possibility in relativistic quantum cryptography that is not motivated (and so not normally considered) in standard multi-prover bit commitment protocols. This is that, while there are three provers, communication between them in some directions is possible (and required) during the protocol. Alice's agent at $P$ must be able to send the quantum state to either of the agents at $Q_0$ or $Q_1$; indeed, a general quantum strategy requires her to send quantum information to both. Fourth, the security proof of the flying qudit protocol can be extended to generalized no-signalling theories. However, the protocol is not secure if the committer may have post-quantum devices that respect the no-superluminal-signalling principle but are otherwise unrestricted. Security proofs require stronger assumptions, such as that the committer is restricted to devices allowed by a generalized non-signalling theory. The same issue arises when considering the post-quantum security of quantum key distribution protocols \cite{barrett2005no}, which are secure if a post-quantum eavesdropper is restricted by a generalized no-signalling theory but not if she is only restricted by the no-superluminal-signalling principle. One distinction is that quantum key distribution is a protocol between mutually trusting parties, Alice and Bob, whereas bit commitment protocols involve two mistrustful parties. It is true that quantum key distribution still involves mistrust, in that Alice and Bob mistrust the eavesdropper, Eve. 
However, if one makes the standard cryptographic assumption that Alice's and Bob's laboratories are secure, so that information about operations within them cannot propagate to Eve, one can justify a stronger no-signalling principle \cite{barrett2005no}. Of course, the strength of this justification may be questioned, given that one is postulating unknown physics that could imply a form of light speed signalling that cannot be blocked. But in any case, the justification is not available when one considers protocols between two mistrustful parties, such as bit commitment, and wants to exclude the possibility that one party (in our case Alice) can exploit post-quantum operations within her own laboratories (which may be connected, forming a single extended laboratory). Our discussion assumed a background Minkowski space-time, but generalizes to other space-times with standard causal structure, where the causal relation $\prec$ is a partial ordering. Neither standard quantum theory nor the usual form of the no-superluminal-signalling principle holds in space-times with closed time-like curves, where two distinct points $P$ and $Q$ may obey both $P \prec Q$ and $Q \prec P$. Formulating consistent theories in this context requires further assumptions (see for example Ref. \cite{bennett2009can} for one analysis). The same is true of superpositions of space-times with indefinite causal order \cite{oreshkov2012quantum}. We leave investigation of these cases for future work. \vskip 10pt {\bf Acknowledgments} \qquad This work was partially supported by UK Quantum Communications Hub grant no. EP/M013472/1 and by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. I thank Claude Cr\'epeau and Serge Fehr for stimulating discussions and the Bellairs Research Institute for hospitality. \bibliographystyle{unsrtnat}
{ "timestamp": "2019-05-27T02:16:42", "yymm": "1804", "arxiv_id": "1804.05246", "language": "en", "url": "https://arxiv.org/abs/1804.05246" }
\section{I Introduction} The quasiparticle approach is part of the effort to understand the physics of the quark-hadron transition, characterized by a dramatic change in the number of degrees of freedom, where nonperturbative effects are dominant. The model provides a reasonable, albeit phenomenological, description of the thermodynamic properties of the quark-gluon plasma (QGP), which deviate significantly from those of an ideal gas of non-interacting quarks and gluons. The success of the picture thus strengthens the case for the quasiparticle ansatz. It may further open up new possibilities for the development of effective theories, from a more fundamental viewpoint, of the underlying physics of the QGP, which is nonperturbative in nature. Indeed, as indicated by lattice quantum chromodynamics (QCD) calculations, the QGP pressure and energy density deviate by about 15-20\% from the Stefan-Boltzmann limit even at temperatures $T > 3 T_c$~\cite{latt-review-01}. On the other hand, the square of the speed of sound, $c_s^2$, extracted from lattice QCD, is smaller than that of an ideal gas of massless particles. In particular, it is found that as the temperature decreases and the system approaches the transition region, $c_{s}^{2}$ reaches a minimum and then increases again, in accordance with the hadronic resonance gas (HRG) description of the system~\cite{qgp-review-03}. Since these thermodynamic properties may lead to observable consequences through their impact on the hydrodynamically expanding phase of relativistic heavy ion collisions, they are essential features in the study of strongly interacting QGP matter. Although lattice QCD is an exact, albeit numerical, technique for obtaining the equation of state (EoS), it is still challenging to study finite density QCD in the large baryon density and low temperature regions. Besides lattice QCD, there are other attempts to investigate the thermal properties of the QGP, such as dimensional reduction~\cite{qcd-phase-DR-01,qcd-phase-DR-02,qcd-phase-DR-03}, the hard thermal loop (HTL) resummation scheme~\cite{qcd-HTL-01,qcd-HTL-02,qcd-HTL-03,qcd-HTL-04,qcd-HTL-05,qcd-HTL-06,qcd-HTL-07}, the Polyakov-loop model~\cite{qcd-phase-PL-01,qcd-phase-PL-02}, as well as approaches in terms of hadronic degrees of freedom~\cite{qcd-phase-Sigma-01,qcd-phase-Sigma-02,qcd-phase-Sigma-03}. The subtlety shared by these different approaches is how to appropriately tackle the nonperturbative regime of QCD which, in particular as the temperature decreases and approaches $T_c$, still cannot be accurately described to date. Inspired by its counterparts in other fields of physics, the quasiparticle ansatz assumes that the strongly interacting matter consists of non-interacting quanta which carry the same quantum numbers as quarks and gluons. The strong interactions between the elementary degrees of freedom are incorporated through a medium dependent quasiparticle mass. The quasiparticle approach was first introduced by Peshier $et~al.$~\cite{eos-quasiparticle-13} for the description of the gluon plasma, where a temperature dependent particle mass was proposed. However, it was subsequently pointed out by Gorenstein and Yang~\cite{eos-quasiparticle-gorenstein-01} that thermodynamic quantities evaluated by using the ensemble average may not agree with those obtained by thermodynamic relations. 
The issue can be resolved by reformulating the thermodynamics of the quasiparticle model through the requirement of an exact cancellation between the additional contributions from the temperature dependent particle mass and those from the bag constant. The latter is assumed to be temperature dependent and determined by the condition of thermodynamic consistency. Thereafter, thermodynamic consistency was further explored by many other authors~\cite{eos-latt-11,eos-quasiparticle-14,eos-quasiparticle-07,eos-quasiparticle-03,eos-quasiparticle-04,eos-quasiparticle-15,eos-latt-17}. By appropriately addressing the question of gauge invariance, the effective mass of a particle can be defined either by the pole of the effective propagator or through the Debye screening mass extracted from the excitations at small momentum. Calculations using the HTL approximation show that the gluon screening mass extracted from the dispersion relation for transverse gluons~\cite{qcd-HTL-08,qcd-HTL-09} is in accordance with the Debye mass obtained in the limit of small momentum~\cite{qcd-HTL-01,qcd-HTL-02,qcd-HTL-10}. Therefore, in practice, the specific forms of the quasiparticle mass are taken as functions of temperature, chemical potential, and the running coupling constant, usually inspired by the HTL results. Furthermore, the running coupling can be replaced by an effective coupling, $G^2(T,\mu)$, which in turn is determined by a flow equation~\cite{eos-latt-11,eos-quasiparticle-16,eos-latt-12,eos-latt-16}. The latter is a partial differential equation, and its boundary condition can be chosen as the effective coupling at $\mu=0$, adjusted to the lattice QCD data. It is shown that the thermodynamic properties obtained from lattice calculations, especially those for nonvanishing chemical potential, are described remarkably well. In order to guarantee thermodynamic consistency, the following relation is to be satisfied: \begin{eqnarray} \left.\frac{\partial \ln Q_G}{\partial m}\right|_{T,\mu} = 0 , \label{gol} \end{eqnarray} where $Q_G$ is the grand partition function. In the literature, it is subsequently required~\cite{eos-quasiparticle-gorenstein-01,eos-latt-11} that \begin{eqnarray} \frac{dB}{dm} = \left.\frac{\partial p(T,\mu,m)}{\partial m}\right|_{T,\mu} . \label{go0a} \end{eqnarray} Here, the bag constant $B$ is understood to be a function of the particle mass $m$ only, and its temperature (and chemical potential) dependence is inherited implicitly from that of the quasiparticle mass $m=m(T,\mu)$. It is straightforward to show that Eq.(\ref{go0a}) indeed implies Eq.(\ref{gol}). However, if $B$ explicitly depends on temperature, there will be an extra contribution to the thermodynamic quantities which is not accounted for by Eq.(\ref{go0a}). By examining the r.h.s. of Eq.(\ref{go0a}), one finds that it is an explicit function of $T$, $\mu$ and $m$. Therefore, the requirement that the r.h.s. of Eq.(\ref{go0a}) be a function of temperature (and chemical potential) only through the quasiparticle mass furnishes a more stringent condition. In this work, we show that the above consideration leads to an integro-differential equation, which is equivalent to the flow equation introduced in Ref.~\cite{eos-latt-11} under certain circumstances. Moreover, we show that there are also other possibilities which accommodate the requirement of thermodynamic consistency. The present work is organized as follows. 
In the next section, we review the question of thermodynamic consistency in the quasiparticle model. An integro-differential equation for the quasiparticle mass is derived. Two special solutions are discussed; both are expressed in terms of a partial differential equation and can be solved by the method of characteristics. We show that the first case is precisely what was derived and investigated by Peshier {\it et al}. The second solution, on the other hand, is an intrinsically different one, where the particle mass is found to be a function of momentum. The numerical results are presented in section III. By using the lattice QCD data at $\mu=0$ as the boundary condition, we show that both solutions can reasonably reproduce the recent lattice QCD results. In particular, results concerning finite baryon density are presented. The last section is devoted to discussions and concluding remarks. \section{II Thermodynamic consistency for quasiparticle model with temperature and chemical potential dependent mass} In this section, the thermodynamic consistency of the quasiparticle model is revisited. Our discussion is based on the quasiparticle model proposed by Begun {\it et al}.~\cite{eos-quasiparticle-gorenstein-02}. An interesting aspect of the approach, as pointed out by the authors, is the existence of an additional free parameter. To be specific, it is shown that the pressure, while following its traditional definition in statistical physics, is determined up to an extra free parameter. Let us first write down the expressions for the energy and particle number, formulated as ensemble averages: \begin{eqnarray} \langle E \rangle= \frac{\sum\limits_i E_i \exp (-\alpha N_i-\beta E_i)}{\sum\limits_i \exp (-\alpha N_i-\beta E_i)} , \nonumber\\ \langle N \rangle= \frac{\sum\limits_i N_i \exp (-\alpha N_i-\beta E_i)}{\sum\limits_i \exp (-\alpha N_i-\beta E_i)} ,\label{ensembleavg} \end{eqnarray} where the ensemble average is carried out over all possible microscopic states $i$ of the system, and $N_i$ and $E_i$ are, respectively, the total particle number and total energy of the state in question. The above expressions can be rewritten in terms of the grand partition function, \begin{eqnarray} Q_{G}=\mathrm{Tr}\, \exp[-\alpha \hat{N}-\beta \hat{H}_{\mathrm{eff}}] , \end{eqnarray} where \begin{eqnarray} \hat{H}_{\mathrm{eff}}=\hat{H}_{\mathrm{id}}+E_0+E_1 . \end{eqnarray} Here $\hat{H}_{\mathrm{id}}$ is the Hamiltonian of an ideal gas of quasiparticles, \begin{eqnarray} \hat{H}_{\mathrm{id}}=\sum_j\sum_{\mathbf{k}}\omega(\mathbf{k})a_{\mathbf{k},j}^\dagger a_{\mathbf{k},j} , \label{ehid} \end{eqnarray} where $j$ labels the internal degrees of freedom. $E_0$ is a temperature and chemical potential dependent function associated with the bag constant $B$ proposed by Gorenstein and Yang~\cite{eos-quasiparticle-gorenstein-01}. This term is used to cancel out the effects of the temperature (and chemical potential) dependence of the quasiparticle mass through Eq.(\ref{gol}), to be discussed further below. $E_1$ is the above-mentioned free parameter, which is proportional to the temperature. The $E_1$ term is singled out from $E_0$ owing to its peculiar nature: as shown below, it allows one to further adjust the value of the pressure for any given energy density~\cite{eos-quasiparticle-gorenstein-02}. The quasiparticle ansatz assumes that one may carry out the calculations in momentum space, where the Hamiltonian is diagonal. 
To be more specific, one makes the following substitution for the ideal gas part, \begin{eqnarray} \sum_j\sum_{\mathbf{k}} \rightarrow \frac{gV}{(2\pi)^3}\int d\mathbf{k} , \end{eqnarray} where $g$ is the degeneracy factor. Now, thermodynamic quantities can be expressed in terms of derivatives of the grand partition function. For instance, the energy density reads \begin{eqnarray} \varepsilon=\frac{\langle E\rangle}{V}=-\frac{1}{V}\frac{\partial \ln Q_G}{\partial\beta}=\epsilon_{\mathrm{id}}+\frac{E_0}{V}+\frac{E_1}{V}+\frac{1}{V}\langle \beta\frac{\partial E_1}{\partial\beta}\rangle = \epsilon_{\mathrm{id}} + B , \label{energydensity} \end{eqnarray} where \begin{eqnarray} \epsilon_{\mathrm{id}}=\frac{g}{2\pi^2}\int_0^\infty \frac{k^2dk\omega^*(k,T,\mu)}{\exp[(\omega^*(k,T,\mu)-\mu)/T]\mp 1}+\mathrm{c.t.} \,\,, \end{eqnarray} with the on-shell dispersion relation \begin{eqnarray} \omega^*(k,T,\mu)=\sqrt{m(T,\mu)^2+k^2} . \label{disponshell} \end{eqnarray} Here $B = \lim_{V\rightarrow \infty}\frac{E_0}{V}$ is the bag constant, and the counter term ``$\mathrm{c.t.}$'' indicates the contributions from anti-particles, obtained by the substitution $\mu\rightarrow -\mu$ in the foregoing term. The contribution from the temperature dependence of the quasiparticle mass has already been canceled by that of $E_0$. If the system has vanishing chemical potential, $\mu = 0$, one has $B=B(\mu=0, T)\equiv B(T)$ and $m=m(\mu=0, T)\equiv m(T)$. In general, one can invert the latter function to find $T=T(m)$ and express $B$ as a function of $m$. Thus the above requirement, Eq.(\ref{gol}), regarding $E_0$ implies \begin{eqnarray} \frac{dB}{dm} = -\frac{gm}{2\pi^2}\int_0^\infty \frac{k^2dk}{\omega^*(k,T)}\frac{1}{\exp[(\omega^*(k,T))/T]\mp 1} . \label{go1} \end{eqnarray} At finite baryon density, however, one is dealing with a bivariate function $B=B(\mu, T)$, and the above argument is no longer valid. In general, $B$ may explicitly depend on $T$ besides its dependence through $m$, but one can still write down \begin{eqnarray} \frac{\partial B}{\partial T}= -\frac{g}{2\pi^2}\int_0^\infty \frac{k^2dk}{\omega^*(k,T,\mu)}\frac{1}{\exp[(\omega^*(k,T,\mu)-\mu)/T]\mp 1}m\frac{\partial m}{\partial T} + \mathrm{c.t.}\,\,. \label{go2} \end{eqnarray} Furthermore, since $E_1$ is linear in $1/\beta$, one has $\langle\beta\frac{\partial E_1}{\partial\beta}\rangle=\beta\frac{\partial E_1}{\partial\beta}=-E_1$, and thus the last equality of Eq.(\ref{energydensity}) is justified. Similarly, the pressure is interpreted as a ``generalized force'', which reads \begin{eqnarray} p=\frac{1}{\beta}\frac{\partial \ln Q_G}{\partial V}=\frac{1}{\beta}\frac{\ln Q_G}{V}=p_{\mathrm{id}} - B - \frac{E_1}{V} \label{pressure}, \end{eqnarray} where \begin{eqnarray} p_{\mathrm{id}}&=&\frac{\mp g}{2\pi^2}\int_0^\infty k^2dk\ln \left\{ 1\mp\exp\left[\left(\mu-\omega^*(k,T,\mu)\right)/T\right]\right\}+\mathrm{c.t.} \nonumber\\ &=&\frac{g}{12\pi^2}\int_0^\infty \frac{k^3dk}{\exp[(\omega^*(k,T,\mu)-\mu)/T]\mp 1}\left.\frac{\partial \omega^*(k,T,\mu)}{\partial k}\right|_{T,\mu}+\mathrm{c.t.}\,\,. \label{fpid} \end{eqnarray} We note the presence of the term involving $E_1$ in the resulting expression for the pressure, but not in that for the energy density.
The number density reads \begin{eqnarray} n=\frac{\langle N \rangle}{V}=-\frac{1}{V}\frac{\partial \ln Q_G}{\partial\alpha}=n_{\mathrm{id}} ,\label{fnb} \end{eqnarray} with \begin{eqnarray} n_{\mathrm{id}}=\frac{g}{2\pi^2}\int_0^\infty \frac{k^2dk}{\exp[(\omega^*(k,T,\mu)-\mu)/T]\mp 1}-\mathrm{c.t.} \,\, . \end{eqnarray} Again, the contribution from the chemical potential dependence of the quasiparticle mass in the ideal gas term and that from the $E_0$ term cancel each other out if $B$ satisfies \begin{eqnarray} \frac{\partial B}{\partial\mu}= -\frac{g}{2\pi^2}\int_0^\infty \frac{k^2dk}{\omega^*(k,T,\mu)}\frac{1}{\exp[(\omega^*(k,T,\mu)-\mu)/T]\mp 1}m\frac{\partial m}{\partial \mu} + \mathrm{c.t.}\,\,. \label{go3} \end{eqnarray} We note that the resultant expressions for the thermodynamic quantities, namely, Eq.(\ref{energydensity}), Eq.(\ref{pressure}), and Eq.(\ref{fnb}), are thermodynamically as well as statistically consistent. The reasons are twofold. Firstly, the expressions for the energy and particle density are in accordance with the conventional definition in terms of ensemble averages\footnote{This can be seen by comparing the r.h.s. of Eq.(\ref{energydensity}) and Eq.(\ref{fnb}) against Eq.(\ref{ensembleavg}).}, while they can also be conveniently expressed in the standard form as derivatives of the grand partition function, as emphasized by other authors~\cite{eos-quasiparticle-14,eos-quasiparticle-03}. Moreover, from the viewpoint of statistical physics, those ensemble averages are meaningful only when one can match those quantities, term by term, to the first law of thermodynamics~\cite{book-statistical-mechanics-pathria}. In this context, thermodynamic consistency is guaranteed. Subsequently, any other thermodynamic quantity can then be derived from a thermodynamic potential, which summarizes all the constitutive properties of a body that thermodynamics represents. Now, it is not difficult to see that the second requirement is indeed achieved by evaluating the total derivative of $q=\ln Q_G$. To be specific, one can readily verify that \begin{eqnarray} dq=-\langle N \rangle d\alpha - \langle E \rangle d\beta + \beta p dV . \end{eqnarray} By comparing the above expression with the first law of thermodynamics, namely, \begin{eqnarray} d\langle E\rangle=T dS - p dV + \mu d\langle N\rangle , \end{eqnarray} it is inferred that \begin{eqnarray} &&\beta = \frac{1}{k_{B}T} , \nonumber \\ &&\alpha = -\frac{\mu}{k_{B}T} , \nonumber\\ &&q+\alpha N +\beta E = \frac{S}{k_{B}} .\label{eentropy} \end{eqnarray} Since the first law of thermodynamics holds, it is natural to expect that all thermodynamic quantities defined through the above procedure automatically satisfy the standard thermodynamic relations, such as \begin{eqnarray} \epsilon \equiv \frac{E}{V} = T\left.\frac{\partial p}{\partial T}\right|_{V,\mu} - p +\mu n , \end{eqnarray} which is frequently discussed in the literature. As discussed above, for the case of finite density, $B$ has to satisfy both Eq.(\ref{go2}) and Eq.(\ref{go3}) simultaneously, which is not equivalent to Eq.(\ref{go0a}).
In fact, the symmetry of the second derivatives of $B$, $\partial^2 B/\partial\mu\partial T=\partial^2 B/\partial T\partial\mu$, applied to Eq.(\ref{go2}) and Eq.(\ref{go3}), implies the following integro-differential equation: \begin{equation} \begin{aligned} \llangle m\frac{\partial m}{\partial T} \rrangle_- = \llangle m\frac{\partial m}{\partial \mu}\rrangle_+ , \label{gozero} \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \llangle O \rrangle_- = \int_0^\infty k^2dk\left\{\frac{\exp[(\omega^*-\mu)/T]}{(\exp[(\omega^*-\mu)/T]\mp 1)^2 T}-\mathrm{c.t.}\right\}O(k) , \\ \llangle O \rrangle_+ = \int_0^\infty k^2dk\left\{\frac{\exp[(\omega^*-\mu)/T](\omega^*-\mu)}{(\exp[(\omega^*-\mu)/T]\mp 1)^2 T^2}+\mathrm{c.t.}\right\}O(k) . \label{gozero2} \end{aligned} \end{equation} In the most general case, the particle mass is a function of momentum, $m=m(k,T,\mu)$, and therefore $B$ is actually a functional of $m$ besides being a function of $T$ and $\mu$, so that derivatives in equations such as Eq.(\ref{go0a}) should be understood as functional derivatives. In the present study, we assume for simplicity that for an anti-particle $\bar{X}$, $m_{\bar{X}}(k,T,-\mu)=m_X(k,T,\mu)\equiv m(k,T,\mu)$; again, ``$\mathrm{c.t.}$'' indicates the counter term due to the contributions from anti-particles, obtained from the foregoing term by substituting $\mu\rightarrow -\mu$ and $X\rightarrow \bar{X}$. In what follows we discuss two special solutions of Eq.(\ref{gozero}). \subsection{The momentum-independent solution} Let us first consider the case where the quasiparticle mass is only a function of temperature and chemical potential, $m=m(T,\mu)$. Then both $m$ and its derivatives can be moved out of the integrals with respect to $k$, and therefore Eq.(\ref{gozero}) gives \begin{equation} \begin{aligned} \frac{\partial m}{\partial T}\llangle 1 \rrangle_- = \frac{\partial m}{\partial \mu}\llangle 1 \rrangle_+ \label{go9} , \end{aligned} \end{equation} or, \begin{equation} \begin{aligned} \frac{\partial m}{\partial T}\frac{\partial}{\partial \mu}\left(\left.\frac{\partial p_{\mathrm{id}}}{\partial m}\right|_{T,\mu} \right) =\frac{\partial m}{\partial \mu}\frac{\partial}{\partial T}\left(\left.\frac{\partial p_{\mathrm{id}}}{\partial m}\right|_{T,\mu}\right) , \label{go9b} \end{aligned} \end{equation} when expressed in terms of $p_{\mathrm{id}}$ of Eq.(\ref{fpid}). By adding to both sides of the above equation the Maxwell relation of the ideal gas, \begin{equation} \begin{aligned} \frac{\partial}{\partial\mu}\left(\left.\frac{\partial p_{\mathrm{id}}}{\partial T}\right|_{m,\mu}\right) =\frac{\partial}{\partial T}\left(\left.\frac{\partial p_{\mathrm{id}}}{\partial \mu}\right|_{m,T}\right) , \end{aligned} \end{equation} and taking into account Eqs.(\ref{go2}) and (\ref{go3}), one recovers \begin{equation} \begin{aligned} \frac{\partial s}{\partial \mu}=\frac{\partial^2 p}{\partial T\partial \mu} =\frac{\partial^2 p}{\partial \mu\partial T}=\frac{\partial n}{\partial T} .\label{eq7ref-eos-latt-11} \end{aligned} \end{equation} This is a Maxwell relation, precisely Eq.(7) of Ref.~\cite{eos-latt-11}, which was subsequently used there to determine the flow equation for the running coupling constant. Alternatively, from our viewpoint, Eq.(\ref{go9}) is a condition that determines the particle mass $m(T,\mu)$. It is not difficult to see that Eq.(\ref{go9}) can be formally solved by using the method of characteristics.
As shown in the Appendix, its solution consists of characteristic curves along which $m$ is constant, satisfying \begin{eqnarray} \frac{d \mu}{d T}=-\frac{\llangle 1 \rrangle_+}{\llangle 1 \rrangle_-} . \nonumber \end{eqnarray} One may make use of the lattice data at zero chemical potential as the boundary condition. Then, one may simply obtain $m(T,\mu)$ by carrying out a numerical integration from the $\mu=0$ boundary onto the $T$--$\mu$ plane where $\mu\ne 0$. \subsection{A special momentum-dependent solution} In general, as the solution of Eq.(\ref{gozero}), the quasiparticle mass is a function of $k$, $T$, and $\mu$. For this case, we only discuss a special solution which possesses a rather simple form. It is obtained by assuming that the integrands on both sides are the same. In other words, \begin{equation} \begin{aligned} \left\{\frac{\exp[(\omega^*-\mu)/T]T}{(\exp[(\omega^*-\mu)/T]\mp 1)^2}-\mathrm{c.t.}\right\}\frac{\partial m}{\partial T} =\left\{\frac{\exp[(\omega^*-\mu)/T](\omega^*-\mu)}{(\exp[(\omega^*-\mu)/T]\mp 1)^2}+\mathrm{c.t.}\right\}\frac{\partial m}{\partial \mu} . \label{godown} \end{aligned} \end{equation} Since $\omega^*$ is involved in the above equation, the resultant particle mass is indeed a function of $k$. Then, the above equation can be solved by using the method of characteristics, and its solution consists of characteristic curves for given $\omega^*$. In particular, if the contributions from anti-particles are insignificant, namely, when $\mu/T \gg 1$, Eq.(\ref{godown}) can be further simplified to \begin{eqnarray} \frac{\partial m}{\partial \mu} =\frac{T}{(\omega^*(k,T,\mu)-\mu)}\frac{\partial m}{\partial T} , \label{goup} \end{eqnarray} which possesses the following analytic solution (see also the Appendix) \begin{eqnarray} m=f\left(\frac{T\omega^*}{\omega^*-\mu}\right) , \label{go7b} \end{eqnarray} where $f(T)\equiv m(T,\mu=0)$ is determined by the boundary condition. \section{Numerical results} \begin{figure} \begin{tabular}{cc} \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig1_massg}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig1_massuds}} \end{minipage} \\ \end{tabular} \caption{(Color online) The resultant temperature-dependent quasiparticle masses for gluons, light quarks, and strange quarks at zero chemical potential.} \label{fitmt} \end{figure} Now, we are in a position to present the numerical results and to compare them to the recent lattice data for the $N_f=2+1$ flavor QCD system~\cite{lattice-12,lattice-14,lattice-18,lattice-15,lattice-19}. We first show the calculated thermodynamic quantities for the case of momentum-dependent quasiparticle mass. Here, the free parameters are the effective masses of gluons, light quarks, and strange quarks as functions of temperature at zero chemical potential, as well as a constant related to $E_1$. Once they are determined, one may evaluate all thermodynamic quantities, such as the energy density, pressure, and entropy density, at zero as well as finite baryon density. In addition, we also calculate the trace anomaly, the sound velocity, and the particle number susceptibility defined as \begin{eqnarray} \chi^{ab}_2=\frac{T}{V}\frac{1}{T^2}\left.\frac{\partial^2 \ln Q_G(T,\mu_u,\mu_d,\mu_s)}{\partial\mu_a\partial\mu_b}\right|_{\mu_a=\mu_b=0} .
\end{eqnarray} \begin{figure} \begin{tabular}{cc} \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_pes}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_trace}} \end{minipage} \\ \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_cs2}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_deltatrace}} \end{minipage} \end{tabular} \caption{(Color online) The calculated thermodynamic quantities for both vanishing and finite baryon chemical potential. The thermodynamic quantities obtained by the present model are shown as dotted blue curves. The calculated results truncated in terms of $\frac{\mu}{T}$ up to second order are shown as dotted green curves. They are compared with the lattice QCD calculations of the Wuppertal-Budapest~\cite{lattice-12,lattice-14} and HotQCD~\cite{lattice-18,lattice-15,lattice-19} Collaborations, indicated by filled red circles and grey squares (with error bars where applicable), respectively. (a) and (b): the results for the entropy density, energy density, pressure, and trace anomaly at zero baryon chemical potential. (c): the calculated speed of sound. (d): the trace anomaly for different values of the chemical potential. } \label{fit2latticeA} \end{figure} In particular, the relevant quantities $\chi^{B}_2$ and $\chi^{L}_2$ in the present model~\cite{lattice-12} read: \begin{eqnarray} \chi^{B}_2=\frac{1}{9}\left[\chi^u_2+\chi^d_2+\chi^s_2+2\chi^{us}_{11}+2\chi^{ds}_{11}+2\chi^{ud}_{11}\right]=\frac{1}{9}\left[2\chi^u_2+\chi^s_2\right] , \end{eqnarray} and \begin{eqnarray} \chi^{L}_2=\frac{1}{9}\left[\chi^u_2+\chi^d_2+2\chi^{ud}_{11}\right]=\frac{2}{9}\chi^u_2 . \label{eq33} \end{eqnarray} The x-axis of the plots is chosen to be $T/T_c$, where the value of the transition temperature is taken to be $T_c=0.15$ GeV~\cite{lattice-14,lattice-15,lattice-09,lattice-08}. All these results are then compared to those obtained by the lattice QCD calculations of the Wuppertal-Budapest~\cite{lattice-12,lattice-14} as well as the HotQCD~\cite{lattice-18,lattice-15,lattice-19} Collaborations. The parameters of the present approach are determined as follows. Firstly, the lattice data~\cite{lattice-14} on the particle number susceptibility of light quarks, $\chi_2^L$, are used to determine the quasiparticle mass of light quarks at vanishing chemical potential. Subsequently, the quasiparticle mass of the strange quark as a function of temperature is determined by the susceptibility with respect to the baryon chemical potential, $\chi_2^B$. Then the gluon mass is used to fit the energy density for $n_B=0$. Also, to compare the results between Eq.(\ref{go9}) and Eq.(\ref{godown}), we assume that the quasiparticle mass is momentum independent at $\mu=0$\footnote{This is a simplifying assumption, and it may not be valid in general. A more realistic approach is to accommodate the existing results regarding the momentum dependence of the parton mass.}, so that both equations are solved by using the same boundary condition. Finally, $E_1$ is tuned to further improve the pressure as a function of temperature at zero baryon density. The resultant particle masses at $\mu_q=0$ are shown in Fig.~\ref{fitmt}; the constant of integration for the bag constant is taken to be $B(T_c, \mu=0) =0.12 \times T_c^{4}$, and the value of $E_1$ is found to be $\beta E_1/V=2.305 \times 10^{-4}$ GeV$^3$. The particle mass at finite chemical potential is subsequently evaluated according to Eq.(\ref{godown}).
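To make this evaluation step concrete, the following minimal Python sketch computes the momentum-dependent quasiparticle mass at finite chemical potential from the analytic solution Eq.(\ref{go7b}), iterated together with the on-shell relation Eq.(\ref{disponshell}). The boundary parameterization $f(T)$ below is an illustrative placeholder rather than the lattice-fitted curves of Fig.~\ref{fitmt}, and the damped fixed-point iteration assumes $\omega^*>\mu$ throughout (anti-particles neglected, as in Eq.(\ref{goup})).
\begin{verbatim}
import numpy as np

def f(T):
    # Illustrative placeholder for the boundary mass m(T, mu=0) in GeV;
    # in practice one interpolates the masses fitted to the lattice data.
    return 0.3 + 0.9 * np.exp(-T / 0.2)

def quasiparticle_mass(k, T, mu, tol=1e-10, itmax=200):
    # Solve m = f(T w/(w - mu)) together with w = sqrt(m^2 + k^2)
    # by a damped fixed-point iteration; assumes w > mu throughout.
    w = np.sqrt(f(T)**2 + k**2) + mu      # crude starting guess with w > mu
    m = f(T)
    for _ in range(itmax):
        m = f(T * w / (w - mu))           # Eq. (go7b)
        w_new = np.sqrt(m**2 + k**2)      # on-shell dispersion relation
        if abs(w_new - w) < tol:
            break
        w = 0.5 * (w + w_new)             # damping for robustness
    return m

print(quasiparticle_mass(k=0.5, T=0.20, mu=0.10))   # mass in GeV
\end{verbatim}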
We note that, in principle, it seems more reasonable to adjust the model parameters to the lattice data of $\chi_2^u$ and $\chi_2^s$, instead of $\chi_2^L$ and $\chi_2^B$. This is because the lattice results show that $\chi_2^B$ contains flavor correlations. However, since our quasiparticle model does not take into account the contributions from mixed cumulant terms, such as $\chi_{11}^{ud}$ in Eq.(\ref{eq33}), it is found that in practice the proposed model calibration leads to a better fit to the existing lattice data. \begin{figure} \begin{tabular}{cc} \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_deltap}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_mub2vs4}} \end{minipage} \\ \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_chi_2B}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig2_chi_4B}} \end{minipage} \end{tabular} \caption{(Color online) The calculated thermodynamic quantities for both vanishing and finite baryon chemical potential. The thermodynamic quantities obtained by the present model are shown as dotted blue curves. The calculated results truncated in terms of $\frac{\mu}{T}$ up to second and fourth order are shown as dotted green and dashed purple curves, respectively. They are compared with the lattice QCD calculations of the Wuppertal-Budapest~\cite{lattice-12,lattice-14} and HotQCD~\cite{lattice-18,lattice-15,lattice-19} Collaborations, indicated by filled red circles and grey squares (with error bars where applicable), respectively. (a) and (b): the difference of the pressure for given $\mu_B$ or $\mu_B/T$ as a function of temperature. The calculations have been carried out by using different truncations and the results are compared against the corresponding lattice data. (c) and (d): the second- and fourth-order cumulants of particle number fluctuations, $\chi_2$ and $\chi_4$. } \label{fit2latticeB} \end{figure} The resultant thermodynamic quantities are presented in Fig.~\ref{fit2latticeA} and Fig.~\ref{fit2latticeB}. One observes that, overall, a reasonably good agreement is achieved, especially for the quark number susceptibility, in addition to the energy density, entropy density, and pressure. It is also worth pointing out that in our present approach we did not introduce any renormalization of the degeneracy factor, which is adopted as an additional free parameter in some quasiparticle approaches. The only discrepancies are observed for the quantities associated with the first and second derivatives of the grand partition function in the region where $T<T_c$. For instance, the pressure difference is related to the expansion in terms of $\mu/T$; therefore, the deviation becomes larger at lower temperatures. It is probably related to the peak of $\chi_4$ at $T_c$~\cite{lattice-16}, which has not been appropriately considered in the present study. As explained above, the fit was only carried out with respect to the $\chi_2$ lattice data. Since the lattice QCD results were obtained by a Taylor expansion in terms of $\frac{\mu}{T}$, it is meaningful to also show our results truncated at the corresponding order when comparing to them. This is shown in Fig.~\ref{fit2latticeB} (c) and (d). It is noted that when we evaluate the pressure difference expanded up to the order of $\left(\frac{\mu}{T}\right)^2$, the calculated curve stays closer to the lattice results, as expected. This is shown by the dotted green curves in Fig.~\ref{fit2latticeB} (c).
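For reference, the truncated curves discussed above follow the standard Taylor expansion of the pressure in powers of $\mu_B/T$. Assuming that $\chi_4^B$ is defined analogously to $\chi^{ab}_2$ above, i.e., as the fourth derivative of $p/T^4$ with respect to $\mu_B/T$ at $\mu_B=0$, the second- and fourth-order truncations correspond to retaining the first one and two terms, respectively, of \begin{eqnarray} \frac{p(T,\mu_B)-p(T,0)}{T^4}=\frac{\chi^{B}_2(T)}{2}\left(\frac{\mu_B}{T}\right)^2 +\frac{\chi^{B}_4(T)}{24}\left(\frac{\mu_B}{T}\right)^4 +\mathcal{O}\left(\left(\frac{\mu_B}{T}\right)^6\right) . \nonumber \end{eqnarray}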
But since the present quasiparticle model does not consider any contribution from mixed second-order derivatives such as $\chi^{ud}_{11}$, this is merely understood as a result of appropriate parameterization. For the same reason, the results for the fourth-order cumulant $\chi_4^B$ present more substantial discrepancies. Probably due to a similar reason, some small deviation is also found for the calculated sound velocity as a function of temperature. However, by adjusting the gluon mass in the temperature region $T\sim T_c$, we were able to reproduce the behavior of the sound speed, which increases again as the temperature approaches the region associated with the hadronic resonance gas. From a practical viewpoint, these differences can also be amended by manually connecting the quasiparticle EoS to that of the hadronic resonance gas model. \begin{figure} \begin{tabular}{cc} \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig3_mq_t}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig3_dmudm_t}} \end{minipage} \\ \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig3_mq_k}} \end{minipage} & \begin{minipage}{250pt} \centerline{\includegraphics[width=250pt]{fig3_mq_limit}} \end{minipage} \end{tabular} \caption{(Color online) (a) and (b): the calculated quasiparticle mass of light quarks and its derivative as functions of temperature for different baryon chemical potentials, obtained by solving Eq.(\ref{godown}), in comparison with those obtained by solving Eq.(\ref{go9}); the latter is equivalent to the approach by Peshier {\it et al}.~\cite{eos-quasiparticle-13}. (c): the quasiparticle mass of light quarks as a function of momentum for the solution discussed in this work. (d): the calculated asymptotic behavior of the quasiparticle masses, in comparison with a model~\cite{eos-latt-16} inspired by the gauge-independent hard thermal/dense loop (HTL) calculations.} \label{massfunction} \end{figure} In order to compare the two different solutions discussed in the previous section, we solve Eq.(\ref{go9}) and Eq.(\ref{godown}), respectively, while fitting to the same boundary condition at $n_B=0$ defined by the lattice data. The corresponding results are shown in Fig.~\ref{massfunction}, where the obtained particle masses are presented as functions of temperature. For the momentum-dependent solution, since the mass is also a function of the momentum $k$, the presented results are average values evaluated by using the weight appearing on the r.h.s. of Eq.(\ref{go2}) or (\ref{go3}). Numerically, one finds that the particle masses from the two different schemes are quite close to each other. Though it may seem a somewhat surprising result, we understand that it could be merely due to the fact that both approaches are tuned to reproduce the lattice data, together with the fact that the numerically obtained momentum dependence of the quasiparticle mass is rather weak. The latter is observed in Fig.~\ref{massfunction} (c), which presents the obtained momentum dependence of the quark masses for a given temperature but with different values of the chemical potential. It is observed that the quasiparticle mass decreases slightly but monotonically and converges to a given value as the momentum increases. As the chemical potential increases, the dependence becomes stronger, though the overall dependence is not significant. Last but not least, we show that the results obtained in the present approach are consistent with the established perturbative limit.
This is achieved by carrying out calculations using the quasiparticle model proposed in Ref.~\cite{eos-latt-16} with the following forms for the quasiparticle masses, \begin{eqnarray} m_{a}^{2}=m_{a0}^{2}+ \Pi_{a} , \end{eqnarray} where $a = g, q, s$ and the quasiparticle self-energies adopt the asymptotic forms of the gauge-independent hard thermal/dense loop (HTL) calculations~\cite{qcd-HTL-01,book-thermo-field-theory-Bellac}: \begin{eqnarray} \Pi_{g}&&= \left( \left[ 3+ \frac{N_f}{2} \right] T^2 + \frac{3}{2 \pi^{2}} \sum_f \mu_{f}^{2} \right) \frac{G^2}{6}, \\ \Pi_{q}&&=2 m_{q0} \sqrt{\frac{G^2}{6} \left( T^2 + \frac{\mu_{q}^{2}}{\pi^{2}} \right)} + \frac{G^2}{3} \left( T^2 + \frac{\mu_{q}^{2}}{\pi^{2}} \right),\\ \Pi_{s}&&=2 m_{s0} \sqrt{\frac{G^2}{6} T^2} + \frac{G^2}{3} T^2 . \label{mHTL} \end{eqnarray} For the high temperature region, the coupling is taken to have the form of the perturbative running coupling at two-loop order, \begin{eqnarray} G^2(T,\mu_q=0)=\frac{16 \pi^{2}}{\beta_0 \log \xi^2} \left[ 1- \frac{2 \beta_1}{\beta_0} \frac{\log \left(\log \xi^2 \right)}{\log \xi^2}\right] , \end{eqnarray} with \begin{eqnarray} \beta_0&&=\frac{11 N_c - 2 N_f}{3},\\ \beta_1&&=\frac{34 N_{c}^{2} - 13 N_c N_f - 3 N_f /N_c }{6} , \end{eqnarray} and \begin{eqnarray} \xi=\lambda \frac{T-T_s}{T_c} , \end{eqnarray} which regulates the infrared divergence of the running coupling. As for the parameters, the scale parameter and the temperature shift are chosen to be $\lambda=1.5$ and $T_s=0.15 T_c$, so that the model adequately reproduces the recent lattice data~\cite{lattice-12,lattice-14} in the intermediate temperature region, while the remaining parameters are taken to be the same as those used in the literature~\cite{eos-quasiparticle-feg-1}. The calculated asymptotic behavior of the quasiparticle mass is shown in Fig.~\ref{massfunction} (d). As described above, the quasiparticle masses at vanishing chemical potential are adjusted to reproduce the lattice data at intermediate temperatures. We first interpolate the lattice data, and then make use of the obtained expression to evaluate the particle masses over the whole temperature range. The interpolation is carried out by specifically requiring that the asymptotic behavior in Eq.(\ref{mHTL}) is attained in the limit $T\rightarrow \infty$. It is shown that our present approach is indeed consistent with the established perturbative limit. Owing to Eq.(\ref{mHTL}), at very high temperature and physically relevant finite chemical potential, the limit established above remains unchanged, which is also confirmed by the numerical calculations. \section{Concluding remarks} To summarize, in this work we study the thermodynamic consistency of the quasiparticle model and its implications for the quasiparticle mass. We have found new possible solutions that have not been explored before, and an essential characteristic of these solutions is that the quasiparticle mass is also a function of the momentum. Consequently, thermodynamic quantities are actually {\it functionals} of the particle mass, and in this case, formulations involving derivatives with respect to $m$, such as $dB/dm$ on the l.h.s. of Eq.(\ref{go0a}), cease to be well defined. As discussed in the previous sections, such momentum dependence of the quasiparticle mass is not a free parameterization but is derived from the requirement of thermodynamic consistency. In particular, we investigated one special solution and found that it is consistent with the most recent lattice data.
In fact, the momentum-dependent effective mass is a meaningful concept. For instance, results on the gluon~\cite{qcd-RGZ-01,qcd-RGZ-02,qcd-RGZ-04,qcd-RGZ-05} and quark~\cite{qcd-GZ-02} propagators within the Gribov-Zwanziger framework show that the resultant pole masses are indeed functions of momentum. Also, other non-perturbative approaches such as the Schwinger-Dyson equations indicate that both the gluon~\cite{qcd-DSE-02} and quark~\cite{qcd-DSE-03,qcd-DSE-04} dynamical masses are momentum dependent. In particular, the concept of a momentum-dependent self-energy has been investigated by many authors in the context of the quasiparticle model~\cite{eos-quasiparticle-17,eos-quasiparticle-18,eos-quasiparticle-19,eos-quasiparticle-20}. Besides, we show that the scenario discussed previously by other authors~\cite{eos-latt-11,eos-quasiparticle-16,eos-latt-16,eos-latt-12} can be readily restored if one enforces that the quasiparticle mass is only a function of temperature and chemical potential. From our viewpoint, however, the derived ``flow equation'' for the running coupling~\cite{eos-latt-11} can alternatively be written down as an equation in terms of the quasiparticle mass. We also investigated a special solution where the quasiparticle mass is a function of the momentum, obtained by simply matching the integrands of the integro-differential equation. By numerical calculations, we show that the differences between these schemes are not very significant once the lattice data at zero chemical potential are used as a constraint. Partly inherited from most quasiparticle approaches, the present model does not naturally address the flavor off-diagonal correlations. The latter subsequently leads to deviations from the lattice data in the transition region at fourth order and beyond. Also, as the present model still shows some discrepancy with the lattice data in the region $T<T_c$, it seems natural to smoothly connect the EoS in this region to that of the hadronic resonance gas model. In Ref.~\cite{sph-eos-1}, a critical point is implemented phenomenologically at finite baryon chemical potential. Since the EoS plays an essential role in the hydrodynamic description of relativistic heavy-ion collisions~\cite{sph-review-1,sph-eos-2,sph-eos-3}, one can employ this scheme to study the properties of the system in the presence of the critical point, especially its particular consequences for the hydrodynamic evolution of the system. Hopefully, some observables can be compared to the ongoing RHIC beam energy scan program~\cite{RHIC-star-bes-01,RHIC-star-bes-02,RHIC-star-bes-03,RHIC-star-bes-04}. We plan to carry out a hydrodynamic study of the relevant quantities using the proposed EoS. \section*{Acknowledgments} We are thankful for valuable discussions with Bruno W. Mintz, Arlene C. Aguilar, Tereza Mendes, and Aritra Bandyopadhyay. We gratefully acknowledge the financial support from Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), and Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES). A part of the work was developed under the project INCTFNA Proc. No. 464898/2014-5. This research is also supported by the Center for Scientific Computing (NCC/GridUNESP) of the S\~ao Paulo State University (UNESP).
\section*{Appendix} In this appendix, we show how the solutions of Eq.(\ref{go9}) and Eq.(\ref{goup}) are obtained. As a matter of fact, the procedures to solve the two equations are very similar, while the latter case is slightly more complicated. Therefore, in what follows, we explicitly derive the solution of Eq.(\ref{goup}) and briefly discuss how that of Eq.(\ref{go9}) is obtained. One first rewrites Eq.(\ref{goup}) by defining \begin{eqnarray} w = \omega^*-\mu . \end{eqnarray} Since $m=\sqrt{(w+\mu)^2-k^2}$, considering $k$ merely as a parameter in $m=m(k,T,\mu)$, and $(w+\mu)$ as an intermediate variable, one has \begin{eqnarray} \frac{\partial m}{\partial \mu}&=&\frac{\partial m}{\partial (w+\mu)}\frac{\partial{(w+\mu)}}{\partial \mu} , \nonumber \\ \frac{\partial m}{\partial T}&=&\frac{\partial m}{\partial (w+\mu)}\frac{\partial{(w+\mu)}}{\partial T}=\frac{\partial m}{\partial (w+\mu)}\frac{\partial{w}}{\partial T} . \nonumber \end{eqnarray} Thus Eq.(\ref{goup}) implies \begin{eqnarray} \frac{\partial{(w+\mu)}}{\partial \mu} = \frac{T}{w}\frac{\partial{w}}{\partial T} , \end{eqnarray} or equivalently, \begin{eqnarray} w\frac{\partial{w}}{\partial \mu} - T\frac{\partial{w}}{\partial T}+w=0 , \end{eqnarray} whose solution can be obtained by using the method of characteristics~\cite{book-methods-mathematical-physics-01}. To be specific, the above partial differential equation can be cast into the general form \begin{eqnarray} a(\mu,T,w)\frac{\partial w}{\partial \mu}+b(\mu,T,w)\frac{\partial w}{\partial T}=c(\mu,T,w) ,\label{ffchar} \end{eqnarray} with \begin{eqnarray} a(\mu,T,w)&=&w ,\nonumber \\ b(\mu,T,w)&=&-T ,\nonumber \\ c(\mu,T,w)&=&-w . \end{eqnarray} Its formal solution is the surface, defined by $f(\mu,T,w)=w-w(\mu,T)=0$, which is tangent to the vector field $(a(\mu,T,w),b(\mu,T,w),c(\mu,T,w))$, namely, \begin{eqnarray} \frac{d\mu}{w}=\frac{dT}{-T}=\frac{dw}{-w} . \end{eqnarray} As it contains two independent equations, one may conveniently select \begin{eqnarray} d(\mu+w)&=&0 ,\nonumber \end{eqnarray} and \begin{eqnarray} d\left[\ln\left(\frac{w}{T}\right)\right]&=&0 , \ \ \mathrm{i.e.,}\ \ d\left(\frac{w}{T}\right)=0 .\nonumber \end{eqnarray} This indicates that, for any function $F(u,v)$, the desired solution $w$ satisfies \begin{eqnarray} F\left(\frac{w}{T},(w+\mu)\right)=0 . \end{eqnarray} Now, as discussed in the main text, the solution of the equation is determined by the boundary condition at $\mu=0$, where $m(k,T,\mu=0)\equiv f(T)$. In other words, the form of $F$ is to be determined by the boundary condition. If one defines $F_0(u,v)\equiv F(u,v)\vert_{\mu=0}$, one readily verifies that\footnote{It is in fact one of many equivalent choices, {\it e.g.}, another possibility is $F(u,v)=uf^{-1}(\sqrt{v^2-k^2})-v$.} \begin{eqnarray} F(u,v)=\sqrt{f\left(\frac{v}{u}\right)^2+k^2}-v \end{eqnarray} indeed reproduces the boundary condition while satisfying Eq.(\ref{goup}). Subsequently, the general solution of $\omega^*(k,T,\mu)$ for finite chemical potential is given by \begin{eqnarray} \sqrt{f\left(\frac{T\omega^*}{\omega^*-\mu}\right)^2+k^2}-\omega^*=0 , \end{eqnarray} or \begin{eqnarray} m=f\left(\frac{T\omega^*}{\omega^*-\mu}\right) , \end{eqnarray} which is Eq.(\ref{go7b}). As for Eq.(\ref{go9}), one may immediately see that the equation possesses the same form as Eq.(\ref{ffchar}), with \begin{eqnarray} a(\mu,T,m)&=&\llangle 1 \rrangle_+ ,\nonumber \\ b(\mu,T,m)&=&-\llangle 1 \rrangle_- ,\nonumber \\ c(\mu,T,m)&=&0 .
\end{eqnarray} Therefore, the formal solution reads \begin{eqnarray} \frac{d\mu}{\llangle 1 \rrangle_+}=-\frac{dT}{\llangle 1 \rrangle_-} , \end{eqnarray} along which $m$ remains constant (since $c=0$); this is the characteristic equation presented in the main text. \bibliographystyle{h-physrev}
{ "timestamp": "2019-07-24T02:03:35", "yymm": "1804", "arxiv_id": "1804.05376", "language": "en", "url": "https://arxiv.org/abs/1804.05376" }
\section{Introduction}\label{Intro} Unmanned aerial vehicles (UAVs) will be ubiquitous and will play a vital role in various sectors ranging from medical and agricultural to surveillance and public safety. Providing connectivity to UAVs is crucial for data collection and dissemination in such applications. Unlike current wireless UAV connectivity that relies on short-range communication technologies (e.g., WiFi, Bluetooth), cellular connectivity allows beyond line-of-sight control, low latency, real-time communication, robust security, and ubiquitous coverage. In essence, cellular-connected UAVs will lead to many new application use cases, which we classify into three primary categories: UAV-based delivery systems (UAV-DSs), UAV-based real-time multimedia streaming (UAV-RMS) networks, and UAV-enabled intelligent transportation systems (UAV-ITSs), as shown in Figure~\ref{UAV_applications}. However, to reap the benefits of cellular-connected UAVs for UAV-DS, UAV-RMS, and UAV-ITS use cases, various unique communication and security challenges for each of these applications need to be addressed. For instance, efficient handover and online path planning are more crucial for UAV-DS applications, while cooperative multi-UAV data transmission and secured consensus of UAV swarms are unique to UAV-ITSs. In this scope, artificial intelligence (AI) based solution schemes are regarded as a powerful tool for addressing the aforementioned challenges\footnote{For more information, technical details related to the proposed AI techniques can be found in~\cite{tutorial_ML}}. It is worth noting that such challenges can also be addressed at different levels, such as the PHY layer and 3D coverage enhancement\footnote{Some existing surveys already discuss some of these issues~\cite{sebastien, mozaffari_survey}.}. In this regard, AI-based solution schemes assist in addressing the aforementioned challenges while yielding new improvements in the design of the network. Although many approaches exist for addressing the aforementioned challenges, we focus on machine learning solutions\footnote{The proposed machine learning techniques are mainly divided into two phases, the training phase followed by the testing phase. Therefore, although the training phase requires some heavy computation, it does not have any impact on the behavior of the UAVs during the testing phase, which refers to the actual execution time.} due to their inherent ability to predict future network states, thus allowing the UAVs to adapt to the dynamics of the network in an online manner. In particular, machine learning techniques allow the UAVs to generalize their observations to unseen network states and can scale to large-sized networks, which therefore makes them suitable for UAV applications. Moreover, for such UAV-based applications, energy efficiency and computation capability are key design constraints. Consequently, the main scope of this work is to highlight the advantages that AI brings to cellular-connected UAVs, under various constraints. In this regard, the existing literature studies the changes in the radio environment of cellular-connected UAVs with altitude and analyzes the corresponding implications on mobility performance~\cite{sebastien}. Moreover, in~\cite{mozaffari_survey}, the authors provide an overview of the opportunities and challenges for the use of UAVs in wireless communication applications; however, the primary focus is on their application as base stations (BSs).
The authors in~\cite{ismail} propose a trajectory optimization scheme for cellular-connected UAVs while guaranteeing cellular connectivity. Although the works in~\cite{mozaffari_survey} and~\cite{ismail} discuss cellular-connected UAVs, they do not focus on the specifics of UAV-DS, UAV-RMS, and UAV-ITS applications, nor do they address AI or security challenges. Therefore, despite being interesting, none of the existing works propose and evaluate AI-based solutions for addressing both the wireless and security challenges that arise in the context of cellular-connected UAVs. In essence, the state of the art does not study the potential of AI as a means for addressing the challenges of integrating cellular-connected UAVs across various applications. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=9cm]{figures/UAV_applications} \caption{Cellular-connected UAV applications in UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems.}\label{UAV_applications} \end{center} \vspace{-0.6cm} \end{figure} The main contribution of this paper is to expose the major wireless and security challenges that arise in different UAV-based applications and to suggest artificial neural network (ANN) based solution approaches for addressing such challenges. In particular, we focus on three major use cases for cellular-connected UAVs: UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems. For each one of these use cases, we introduce the main technical challenges, in terms of wireless connectivity and security (as illustrated in Figure~\ref{UAV_challenges}), while outlining new AI-inspired solutions to address those challenges. The introduced AI solutions enable the UAVs to predict future network changes, thus adaptively optimizing their actions in order to efficiently manage their resources while ensuring a safe operation. We also provide preliminary simulation results to showcase the benefits of the introduced solutions for each cellular-connected UAV application use case. Here, we restrict our attention to security at the higher communication layers since physical layer security issues and solutions have been discussed in \cite{security_ref}. The rest of this paper is organized as follows. Section~\ref{section:UAV_DS} presents the communication and wireless challenges in UAV-DSs and proposes AI-based solution schemes for such challenges. Section~\ref{section:UAV_RMS} highlights the main communication and security challenges in UAV-RMS applications and the corresponding proposed ANN-based solutions. Section~\ref{section:UAV_ITS} provides ANN-based solution schemes for the main communication and security challenges in UAV-ITSs. Finally, conclusions are given in Section~\ref{section:conc}. \begin{figure}[t!]
\begin{center} \centering \includegraphics[width=7cm]{figures/Applications} \caption{Examples of wireless and security challenges of cellular-connected UAVs in UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems.}\label{UAV_challenges} \end{center} \vspace{-0.6cm} \end{figure} \section{UAV-Based Delivery Systems}\label{section:UAV_DS} \subsection{Motivation} UAV-based delivery systems have received much attention recently for various applications such as postal and package delivery (e.g., Amazon Prime Air), food delivery, transport of medicines and vaccinations, and drone taxis for the delivery of people~\cite{UAV_delivery}. Compared to conventional delivery methods, UAV-DSs allow a faster delivery process at a reduced cost. They can also provide mission-critical services reaching remote and inaccessible areas. To reap the benefits of UAV-DSs, it is important to provide cellular connectivity to the UAVs for control and signaling data transmission. In essence, providing cellular connectivity to delivery UAVs allows network operators to track their location and guarantee a secure delivery of the transported goods. Therefore, to realize such benefits, it is important to address several wireless and security challenges related to cellular-connected UAV-DSs, ranging from efficient handover and path planning to cyber-physical attacks. \subsection{Wireless Challenges and AI Solutions} \subsubsection{Ultra-Reliable and Low-Latency Communications (URLLC)}\label{URLLC} In UAV-DSs, the UAVs must send critical control information while delivering goods to their destinations. This, in essence, requires a latency of $1$ ms or less and exceedingly stringent reliability, with a target block error rate as low as $10^{-5}$~\cite{URLLC_requirements}, especially in mission-critical scenarios such as medical delivery. Wireless latency encompasses both signaling overhead and data transmission. To achieve low signaling latency, channel estimates can be predicted in advance using AI, thus allowing a proactive allocation of radio resources. This can be realized by incorporating a long short-term memory (LSTM) cell at the UAV level for learning a sequence of future channel states~\cite{ursula_TWC_1}. LSTMs are effective in dealing with long-term dependencies, which makes them suitable for learning sequences of time-dependent vectors. Moreover, in a large network of UAVs, constantly communicating with a remote cloud can introduce substantial communication and signaling delays. To reduce such delays, one can rely on on-device machine learning or edge AI. As opposed to centralized, cloud-based AI schemes, on-device machine learning is based on a distributed machine learning approach, such as \emph{federated learning} (FL), in which the training data describing a particular AI task (e.g., resource management or computing) is stored in a distributed fashion across the UAVs and the optimization problem is solved collectively~\cite{federated_learning}. This in turn enables a large number of UAVs to collaboratively allocate their radio resources in a distributed way, thus reducing wireless congestion and device-to-cloud latency. Finally, it is important to note that transmission latency can be further reduced by improving wireless connectivity, as discussed in Section~\ref{multimedia_wireless}. \subsubsection{Efficient Handover} In UAV-DSs, the UAVs face frequent handovers, including handovers to distant cells, thus resulting in a ping-pong effect.
As opposed to ground UEs, cellular-connected UAVs exhibit LoS links to multiple neighboring BSs simultaneously, which, along with dynamic channel variations, can result in fluctuations in the quality of their wireless transmission. In this context, it is necessary to have complete and sequential information about the channel signal quality at different locations before and after the current location of a particular UAV. As such, bidirectional LSTM cells (bi-LSTMs) are suited for addressing this challenge as they exploit both the previous and the future context by processing the input data (i.e., the channel quality) from two directions with two separate hidden layers. In particular, one LSTM layer processes the input sequence in the forward direction, while the other LSTM layer processes the input in the reverse direction~\cite{bi-LSTM}. Therefore, instead of accounting for the next time step only, this scheme enables each UAV to consider the channel quality at its previous and future sequence locations. This framework can hence be trained to allow the UAVs to update their corresponding cell association vectors while avoiding frequent handovers, based on the previous and future channel signal quality. \subsubsection{Autonomous Path Planning with Connectivity Constraints} A critical factor for UAV-DSs is to maintain reliable cellular connectivity for the UAVs at each time instant along their corresponding paths while also minimizing the total time required to accomplish their delivery mission. In essence, a delivery UAV must maintain a minimum signal-to-interference-plus-noise ratio (SINR) along its path to guarantee a reliable communication link for its control information. This naturally depends on the UAV's location, cell association vector, transmit power level, and the location of the serving ground BS. As such, a key challenge for UAV-DSs is to optimize the UAVs' paths so as to reduce their total delivery time while guaranteeing reliable wireless connectivity and thus an instantaneous SINR threshold value. Although a centralized approach can update the path plan of each UAV, this would require real-time tracking of the UAVs and control signals to be transmitted to the UAVs at all times. Moreover, a centralized approach incurs high round-trip latencies and requires a central entity to acquire full knowledge of the current network state. To overcome these challenges, \emph{online} edge algorithms must be implemented individually by each UAV to plan its future path. In this regard, convolutional neural networks (CNNs) can be combined with a deep reinforcement learning (RL) algorithm based on a recurrent neural network (RNN) (e.g., an echo state network (ESN) or LSTM) at the UAV level, resulting in a CNN-RNN scheme. ESNs exhibit dynamic temporal behavior and are characterized by an adaptive memory that enables them to store the necessary previous state information for predicting the future steps of each UAV. Meanwhile, CNNs are mainly used for image recognition and can thus be used for identifying the UAV's environment by extracting features from input images. For instance, CNNs aid the UAVs in identifying the locations of the ground BSs, ground UEs, and other UAVs in the network. These extracted features are then fed to a deep RNN, which can be trained to learn an optimized sequence of the UAV's future steps that would minimize its delivery time and guarantee reliable cellular connectivity at each time instant, based on the input features. A minimal sketch of such an architecture is given below.
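As a concrete illustration of the CNN-RNN scheme described above, the following minimal PyTorch sketch combines a small convolutional encoder with an LSTM that outputs logits over a discrete set of movement actions. All layer sizes, the input image resolution, and the action set are illustrative assumptions and do not correspond to the exact architecture of any cited work.
\begin{verbatim}
import torch
import torch.nn as nn

class PathPlanner(nn.Module):
    # CNN encoder + LSTM policy head: maps a sequence of environment
    # images to logits over discrete movement actions (illustrative).
    def __init__(self, n_actions=7, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                # spatial feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 32*4*4 = 512 features
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)   # e.g. 6 directions + hover

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.rnn(feats)                     # temporal memory of the path
        return self.policy(out)                      # (B, T, n_actions) logits

planner = PathPlanner()
logits = planner(torch.randn(2, 10, 3, 64, 64))      # 2 UAVs, 10 time steps
next_action = logits[:, -1].argmax(dim=-1)           # greedy next step per UAV
\end{verbatim}
In a deep RL setting, such logits would parameterize the policy that is trained against a reward combining the delivery time and the instantaneous SINR constraint.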
In this regard, in~\cite{ursula_path_planning}, we proposed a deep RL framework based on ESNs (D-ESN) for optimizing the trajectories of multiple cellular-connected UAVs in an online manner while minimizing latency and interference. For simplicity, we consider an input vector describing the locations of the neighboring ground BSs and other UAVs instead of extracting such features from a CNN. To highlight the gain of D-ESN for path planning, we compare the average values of the (a) wireless latency per UAV and (b) rate per ground UE resulting from the proposed path planning scheme and the shortest path scheme, as shown in Fig.~\ref{path_planning}. Clearly, from Fig.~\ref{path_planning}, we can see that exploiting a deep ESN-based path planning scheme under connectivity constraints for cellular-connected UAVs results in more reliable wireless connectivity and in lower latency, as compared to the wireless-unaware shortest path scheme. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=9cm]{figures/scalability.eps} \caption{Performance assessment of the proposed deep ESN-based path planning algorithm in terms of the average (a) wireless latency per UAV and (b) rate per ground UE, as compared to the shortest path approach, for different numbers of UAVs~\cite{ursula_path_planning}.}\label{path_planning} \end{center} \vspace{-0.6cm} \end{figure} \subsection{Security Challenges and AI Solutions} Due to the UAVs' altitude limitations and the LoS communication link with the ground BS, UAV-based delivery systems are vulnerable to \emph{cyber-physical (CP) attacks}, in which an adversary aims at compromising a delivery UAV, taking over its control, and ultimately destroying, delaying, or stealing the transported goods. To thwart such CP attacks, the UAV can create a CP threat map in which potential attack locations are categorized based on both the environmental objects from which the UAV can be physically attacked and the communication network links that are exposed to cyber attacks. Even though prior works assume that a threat map is predetermined~\cite{Sanjab}, it is important to create such a map in an online manner in order to account for real-time changes in the environment and to overcome the memory limitation of the UAVs for storing a large-scale map. To realize this, a CNN can be trained to classify the high-risk locations by taking as input the images of the UAV's surrounding environment along each position of its path. From the operator's perspective, it is also important to detect any potential attack by identifying any abnormal or undesirable behavior in the UAVs' motion. Therefore, given their capability of dealing with time-series data, RNNs can be adopted for capturing the UAV's motion characteristics by feeding them the UAV's dynamics, such as its position, speed, acceleration, and destination location. In this case, the RNN's output will be the UAV's predicted normal motion; thus, using this output, the operator can detect abnormal UAV motion resulting from a CP attack. \section{UAV-Based Real-Time Multimedia Streaming Applications}\label{section:UAV_RMS} \subsection{Motivation} One key use case for cellular-connected UAVs is to provide various real-time multimedia streaming applications such as online video streaming and broadcasting, UAV-enabled virtual reality (VR), online tracking and localization of mobile targets, and surveillance.
In essence, providing cellular connectivity to the UAVs enables online transmission of data and low-latency wireless communication, which are essential factors for multimedia streaming applications. To enable effective delivery of such real-time multimedia using cellular-connected UAVs, several wireless and security challenges need to be addressed, ranging from interference management to authentication. \subsection{Wireless Challenges and AI Solutions}\label{multimedia_wireless} \subsubsection{Interference Management} For UAV-RMS applications, UAVs will mainly transmit data in the \emph{uplink}. Nevertheless, the ability of cellular-connected UAVs to establish LoS connectivity with multiple ground BSs can lead to substantial mutual interference among them as well as to the ground users. To address this challenge, new improvements in the design of future cellular networks, such as advanced receivers, cell coordination, 3D frequency reuse, and 3D beamforming, are needed. For instance, due to their ability to recognize and classify images, CNNs can be implemented on each UAV in order to identify several features of the environment, such as the locations of UAVs, BSs, and ground UEs. Such an approach will enable each UAV to adjust its beamwidth and tilt angle so as to minimize the interference on the ground UEs. Moreover, in streaming scenarios, UAV trajectory optimization is also essential. In particular, physical layer solutions, such as 3D beamforming, can be combined with an interference-aware path planning scheme to guarantee more efficient communication links for both ground and aerial users. Such a path planning scheme (e.g., the one we proposed in~\cite{ursula_path_planning}) allows the UAVs to adapt their movement based on the rate requirements of both aerial UAV-UEs and ground UEs, thus improving the overall network performance. \subsubsection{UAV-enabled Edge Caching} For various real-time multimedia streaming applications, cellular-connected UAVs must generate videos from data files collected using sensors and cameras. For instance, in UAV-enabled VR applications, the UAVs will generate $360^\circ$ videos for each user. However, each UAV can only collect a limited number of data files, which might not be sufficient for generating all the requested videos. Meanwhile, cache-enabled UAVs can store common data files related to popular content or needed for generating videos that users may request in the future, thus reducing the number of data files that UAVs need to collect when a request is made~\cite{mingzhe_caching}. For instance, for UAV-enabled VR applications, cache-enabled UAVs can directly store a $360^\circ$ video and send a rotated version of this stored video according to each user's viewing perspective. Moreover, for game broadcast applications, cache-enabled UAVs can store the environment of the game and would thus only need to track the motions of the players for updating the cached data. Here, CNNs can once again be adopted to allow cache-enabled UAVs to store popular videos or common data files. In particular, CNNs can extract and store the common features of the data files that are requested by different users, or by each user, at different time slots. Furthermore, CNNs can be used to record the features of each UAV's surrounding environment. Consequently, when the UAVs need to collect data in a new environment, they would only need to collect new features that are not already recorded by the CNNs.
In this context, RNNs can also be employed for predicting the users' video requests. In fact, the content requests of users can be correlated over time, and thus RNNs can enable the UAVs to cache in advance the predicted future requests or other popular multimedia files. \begin{figure}[t] \centering \subfigure[]{{\setlength{\belowdisplayskip}{10 pt}} \label{figure2a} \includegraphics[width=6.5cm]{figures/figure3.eps}} \subfigure[]{ \label{figure2b} \includegraphics[width=6cm]{figures/figure9.eps}} \vspace{-0.4cm} \caption{(a) Comparison of the content request probability predictions of the proposed conceptor ESN algorithm with the real data and (b) the average UAV transmit power as a function of the number of users in the network for the proposed conceptor ESN algorithm with and without caching~\cite{mingzhe_caching}.} \vspace{-0.75cm} \end{figure} Based on our work in~\cite{mingzhe_caching}, we introduce an ESN-based algorithm for predicting the users' content request distributions. The input to the proposed framework is the users' context information, such as age, gender, and job, and the output of the ESN-based algorithm is the distribution of the users' content requests. Therefore, based on the users' content request distributions, the UAVs can determine the contents to store in the UAV cache and, thus, transmit the cached contents to the users without the need for backhaul connections. Using real data from \emph{Youku}, Fig.~\ref{figure2a} shows that the ESN-based algorithm can accurately predict the content request distribution of a given user. Fig.~\ref{figure2b} shows the average transmit power per UAV of cache-enabled UAVs as a function of the number of users. Here, we can see that the proposed ESN algorithm for cache-enabled UAVs yields a considerable reduction in transmit power compared to a baseline without caching. \subsubsection{Identification of Aerial and Ground Users} As shown in~\cite{sebastien}, the radio propagation environment experienced by cellular-connected UAVs differs from that experienced by ground users. Consequently, to maximize the total network performance, a network operator must allocate its radio resources differently between airborne and ground users, especially for UAV-RMS applications. To realize this, network operators should be capable of differentiating an airborne user from a ground one, which cannot be achieved by solely relying on self-reporting due to the possibility of a faulty report. Instead, network operators can utilize wireless cellular radio measurements, such as the reference signal received power (RSRP), the received signal strength indicator (RSSI), and the reference signal received quality (RSRQ), for user classification. These features can essentially act as an input to a deep belief network (DBN), which can be trained to classify an airborne user versus a ground one. DBNs are deep architectures that consist of a stack of restricted Boltzmann machines (RBMs) and thus have the benefit that each layer can learn more complex features than the layers before it. Moreover, a pre-training step is performed in DBNs, thus overcoming the vanishing gradient problem. This is then followed by fine-tuning the network weights using the conventional error back-propagation algorithm. A simplified illustration of this classification step is sketched below.
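To illustrate the classification step on the [RSRP, RSSI, RSRQ] feature vector, the following sketch uses a small feed-forward classifier as a lightweight stand-in for the DBN described above, trained on synthetic, illustrative measurements (airborne users are drawn with slightly stronger received power to mimic LoS conditions aloft); a real deployment would instead use labeled network measurements.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-in features [RSRP, RSSI, RSRQ] in dB; airborne users
# (label 1) have slightly higher RSRP/RSSI to mimic LoS conditions aloft.
ground = rng.normal([-95.0, -88.0, -12.0], [6.0, 6.0, 3.0], size=(n, 3))
aerial = rng.normal([-85.0, -78.0, -10.0], [6.0, 6.0, 3.0], size=(n, 3))
X = np.vstack([ground, aerial])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
\end{verbatim}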
\vspace{-0.1cm} \subsection{Security Challenges and AI Solutions} In UAV-RMS applications, an attacker can disrupt the UAV's data transmissions by forging the identities of the transmitting UAVs' and sending disrupted data using their identity. This type of \emph{insider attacks} becomes particularly acute in a large-scale UAV system. In particular, the BS must process the received multimedia files from all the UAVs and allocate computational resources for authenticating the UAVs. However, in large-scale networks, authenticating all the UAVs at once exceeds the BS's computational resources, thus, incurring delay for processing the received files. To avoid this delay, the BS can authenticate only a fraction of the UAVs at each time step. To realize this, the BS could implement a deep RL algorithm based on LSTM in order to learn what signals to authenticate at each time step of its authentication process. In particular, this framework takes as an input a sequence of previous security states of each UAV indicating whether a UAV was previously vulnerable to attacks, and learns a sequence of future authentication decisions for each UAV. LSTMs are suitable for this application since they can learn the interdependence of UAVs' vulnerability at the past time steps, memorize the importance of UAVs to the BS, and map the past sequence of UAV states to a future decision sequence. To analyze the performance of LSTM-based deep RL method for authentication, based on \cite{ferdowsi2018deep}, we consider a network of 1000 UAVs that transmit multimedia streams to a BS. We analyze different scenarios in which different proportions of available UAVs are vulnerable to cyber attacks. Fig. \ref{fig:authentication} assesses the performance of the LSTM-based deep RL framework compared to two baseline authentication scenarios. From Fig. \ref{fig:authentication}, we can see that the proposed algorithm performs the same as the two other baselines in the low range of proportion of vulnerable UAVs. However, as the number of vulnerable UAVs increases the LSTM-based deep RL outperforms the two other baselines and reduces the proportion of compromised UAVs in the network. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=8cm]{figures/attacker.eps} \caption{The proportion of compromised cellular-connected UAVs as a function of the proportion of vulnerable UAVs in a large-scale UAV system authentication~\cite{ferdowsi2018deep}.}\label{fig:authentication} \end{center} \vspace{-0.6cm} \end{figure} \section{UAV-Enabled Intelligent Transportation Systems}\label{section:UAV_ITS} \vspace{-0.2cm} \subsection{Motivation} Integrating UAVs in an intelligent transportation system (ITS) would control road traffic, monitor incidents, and enforce road safety. For instance, UAVs can provide a quick report in case of an accident and can act as flying roadside units, speed cameras, and dynamic traffic signals. Moreover, for vehicular platoons, to reduce wireless network congestion, a cellular-connected UAV can send control and network related information to one of the vehicles only and this vehicle can share the information with other vehicles in the platoon via dedicated short range communication links. UAVs can also track the behavior of a platoon thus detecting any compromised vehicle. Therefore, to reap the benefits of UAV-ITS, several wireless and security challenges need to be addressed ranging from cooperative multi-UAV data transmission and multimodal data integration to secured consensus of UAV swarms. 
\subsection{Wireless Challenges and AI Solutions} \begin{table*}[t!]\footnotesize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#1.6\end{tabular}} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \captionsetup{justification=centering} \caption{Cellular-connected UAV use cases, challenges, and ANN-based solution schemes.}\label{UAV_table} \centering \tabcolsep=0.03cm \begin{tabular}{|c|c|c|c||c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Wireless and Security Challenges}& \multicolumn{3}{|c||}{\textbf{UAV-based Applications}} & \multicolumn{10}{|c|}{\textbf{ANN-based Solutions}}\\ \hline & UAV-DS & UAV-RMS & UAV-ITS & FL & bi-LSTM & CNN-RNN & D-ESN & CNN & ESN & DBN & LSTM & DSC & m-RBM\\ \hline URLLC &\checkmark & & &\checkmark & & & & & & & & &\\ \hline Efficient Handover &\checkmark & & & &\checkmark & & & & & & & &\\ \hline Autonomous Path Planning &\checkmark & & & & &\checkmark &\checkmark & & & & & & \\ \hline Interference Management & &\checkmark & & & & & &\checkmark & & & & & \\ \hline UAV-enabled Edge Caching & &\checkmark & & & & & &\checkmark &\checkmark & & & & \\ \hline Identification of Aerial and Ground Users & &\checkmark & & & & & & & & \checkmark& & & \\ \hline Cooperative Multi-UAV Data Transmission & & &\checkmark & & & & & & & & &\checkmark & \\ \hline Multimodal Sensor Fusion & & &\checkmark & & & & & & & & & &\checkmark \\ \hline Cyber-Physical Attacks &\checkmark & & & & & & &\checkmark & & & & & \\ \hline Authentication of UAVs & &\checkmark & & & & & & & & & \checkmark & & \\ \hline Secured Consensus of UAV Swarms & & &\checkmark &\checkmark & & & & & & & & & \\ \hline \end{tabular} \vspace{-0.24cm} \end{table*} \subsubsection{Cooperative Multi-UAV Data Transmission} In UAV-ITSs, each UAV is generally equipped with multiple sensors such as LiDAR and GPS and would therefore need to send different types of multimedia files and/or big data (e.g., 3D-map representation of the environment) to either other UAVs, vehicles, or the infrastructure, simultaneously. In such scenarios, it would be essential for different UAVs in a given geographical area to coordinate their data transmission. In other words, instead of each UAV transmitting the whole data file, e.g., area map, to its corresponding vehicle, each UAV will transmit a different part of the data file to all of the vehicles in a given geographical area thus resulting in a faster data transmission and a lower power consumption per UAV. In this regard, deep spectral clustering (DSC) learning can be adopted for grouping the UAVs into several clusters for data transmission based on their location, type of sensors they encompass, data files they need to transmit, and the location and number of vehicles in the network. In essence, DSC learns a map that embeds this input data into the eigenspace of their associated graph Laplacian matrix and thus clusters them accordingly. Consequently, DSC endows the UAVs with the capability of transmitting correlated data in a cooperative and distributed manner to the vehicles. This would essentially result in a faster data transmission to the vehicles thus allowing them to make real-time decisions for a safe navigation among the surrounding traffic. DSC can be combined with cooperative game theory, for further analysis of cooperative swarms of UAVs. 
Moreover, the presence of high mobility in ITSs along with cooperative UAV swarms, requires revisiting the interference and resource management schemes of Sections~\ref{section:UAV_DS} and~\ref{section:UAV_RMS} to handle the more dynamic and cooperative ITS environment. \subsubsection{Multimodal Sensor Fusion} In UAV-ITS, UAVs must transmit each one of their sensor readings to other network nodes, thus, resulting in cellular network congestion in case of dense UAV deployment. However, energy consumption and bandwidth allocation are important factors that determine the maximum operation time of the UAVs. As such, to reduce the power and bandwidth allocated for transmitting the sensor readings, a UAV can integrate its heterogeneous sensor readings into one vector thus resulting in less data transmissions over the UAV-vehicle links while also providing a more comprehensive assessment of the environment. Nevertheless, there exists differences between sensors ranging from sampling rates to the data generation model thus making UAV-based ITS sensor integration challenging. In this regard, multimodal RBMs (m-RBMs) are a suitable tool for combining different perspectives captured in signals of multimodal data for a system with multiple sensors~\cite{multimodal}. A m-RBM can be implemented at the UAV level thus identifying nonintuitive features largely from cross-sensor correlations which can yield accurate estimation. From the UAV's perspective, this approach enables each UAV to have a better assessment of its environment. For instance, a system trained simultaneously to detect an accident, high speed vehicle, and an anomalous vehicle does better than three separate systems trained in isolation since the single network can share information among the separate tasks. From the wireless network perspective, multimodal sensor fusion improves the UAV's energy efficiency and results in less data transmissions over the UAV-vehicle links thus reducing wireless congestion and enabling a larger number of UAVs to be served simultaneously. \subsection{Security Challenges and AI Solutions} For UAV-ITS, a swarm of coordinated UAVs has the capability of performing missions compared to single UAVs. Swarming UAVs communicate with each other while in flight to reach a consensus over their defined task, and can respond to changing conditions autonomously. A good analogy would be a dense flock of starlings reacting to a sudden threat like a hawk. Nevertheless, this data sharing scheme among a swarm of UAVs is generally prone to \emph{adversarial machine learning} attacks in which an attacker can join the swarm and alter their shared data, which results in non-harmonious movements as well as collisions. To overcome this challenge, federated learning can be adopted for a swarm of UAVs. In federated learning, each UAV receives the common task that needs to be accomplished by the UAV swarm from the BS and improves its learning model for completing the required tasks based on its collected data only. Then, each UAV summarizes the changes in its learning model and shares this summary with other UAVs in the swarm. This, indeed, will solve the vulnerability of raw data transmission between the UAVs and thus mitigating the risk of the adversarial machine learning. Table~\ref{UAV_table} provides a summary of the wireless and security challenges of cellular-connected UAVs in UAV-DS, UAV-RMS, and UAV-ITS while suggesting ANN-based solution schemes. 
\section{Introduction}\label{Intro} Unmanned aerial vehicles (UAVs) will be ubiquitous and will play a vital role in various sectors ranging from medical and agricultural to surveillance and public safety. Providing connectivity to UAVs is crucial for data collection and dissemination in such applications. Unlike current wireless UAV connectivity that relies on short-range communication technologies (e.g., WiFi, Bluetooth), cellular connectivity allows beyond line-of-sight (LoS) control, low latency, real-time communication, robust security, and ubiquitous coverage. In essence, cellular-connected UAVs will lead to many new application use cases which we classify into three primary categories: UAV-based delivery systems (UAV-DSs), UAV-based real-time multimedia streaming (UAV-RMS) networks, and UAV-enabled intelligent transportation systems (UAV-ITSs), as shown in Figure~\ref{UAV_applications}. However, to reap the benefits of cellular-connected UAVs for UAV-DS, UAV-RMS, and UAV-ITS use cases, various unique communication and security challenges for each of these applications need to be addressed. For instance, efficient handover and online path planning are more crucial for UAV-DS applications, while cooperative multi-UAV data transmission and secured consensus of UAV swarms are unique to UAV-ITSs. In this scope, artificial intelligence (AI) based solution schemes are regarded as a powerful tool for addressing the aforementioned challenges\footnote{Technical details on the proposed AI techniques can be found in~\cite{tutorial_ML}.}. It is worth noting that such challenges can also be addressed at different levels, such as the PHY layer and 3D coverage enhancement\footnote{Some existing surveys already discuss some of these issues~\cite{sebastien, mozaffari_survey}.}. In this regard, AI-based solution schemes assist in addressing the aforementioned challenges while yielding new improvements in the design of the network. Although many approaches exist for addressing the aforementioned challenges, we focus on machine learning solutions\footnote{The proposed machine learning techniques are mainly divided into two phases: a training phase followed by a testing phase. Therefore, although the training phase requires some heavy computation, it does not have any impact on the behavior of the UAVs during the testing phase, which corresponds to the actual execution time.} due to their inherent ability to predict future network states, thus allowing the UAVs to adapt to the dynamics of the network in an online manner. In particular, machine learning techniques allow the UAVs to generalize their observations to unseen network states and can scale to large-sized networks, which makes them suitable for UAV applications. Moreover, for such UAV-based applications, energy efficiency and computation capability are key design constraints.
Consequently, the main scope of this work is to highlight the advantages that AI brings to cellular-connected UAVs under various constraints. In this regard, the existing literature studies the changes in the radio environment of cellular-connected UAVs with altitude and analyzes the corresponding implications for mobility performance~\cite{sebastien}. Moreover, in~\cite{mozaffari_survey}, the authors provide an overview of the opportunities and challenges for the use of UAVs in wireless communication applications; however, the primary focus is on their application as base stations (BSs). The authors in~\cite{ismail} propose a trajectory optimization scheme for cellular-connected UAVs while guaranteeing cellular connectivity. Although the works in~\cite{mozaffari_survey} and~\cite{ismail} discuss cellular-connected UAVs, they do not focus on the specifics of UAV-DS, UAV-RMS, and UAV-ITS applications, nor do they address AI or security challenges. Therefore, while interesting, none of the existing works proposes and evaluates AI-based solutions for addressing both the wireless and the security challenges that arise in the context of cellular-connected UAVs. In essence, the state of the art does not study the potential of AI as a means for addressing the challenges of integrating cellular-connected UAVs across various applications. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=9cm]{figures/UAV_applications} \caption{Cellular-connected UAV applications in UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems.}\label{UAV_applications} \end{center} \vspace{-0.6cm} \end{figure} The main contribution of this paper is to expose the major wireless and security challenges that arise in different UAV-based applications and to suggest artificial neural network (ANN) based solution approaches for addressing such challenges. In particular, we focus on three major use cases for cellular-connected UAVs: UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems. For each one of these use cases, we introduce the main technical challenges, in terms of wireless connectivity and security (as illustrated in Figure~\ref{UAV_challenges}), while outlining new AI-inspired solutions to address those challenges. The introduced AI solutions enable the UAVs to predict future network changes, thus adaptively optimizing their actions in order to manage their resources efficiently while ensuring safe operation. We also provide preliminary simulation results to showcase the benefits of the introduced solutions for each cellular-connected UAV application use case. Here, we restrict our attention to security at higher communication layers since physical layer security issues and solutions have been discussed in~\cite{security_ref}. The rest of this paper is organized as follows. Section~\ref{section:UAV_DS} presents the communication and wireless challenges in UAV-DSs and proposes AI-based solution schemes for such challenges. Section~\ref{section:UAV_RMS} highlights the main communication and security challenges in UAV-RMS applications and the corresponding proposed ANN-based solutions. Section~\ref{section:UAV_ITS} provides ANN-based solution schemes for the main communication and security challenges in UAV-ITSs. Finally, conclusions are given in Section~\ref{section:conc}. \begin{figure}[t!]
\begin{center} \centering \includegraphics[width=7cm]{figures/Applications} \caption{Examples of wireless and security challenges of cellular-connected UAVs in UAV-based delivery systems, UAV-based real-time multimedia streaming networks, and UAV-enabled intelligent transportation systems.}\label{UAV_challenges} \end{center} \vspace{-0.6cm} \end{figure} \section{UAV-Based Delivery Systems}\label{section:UAV_DS} \subsection{Motivation} UAV-based delivery systems have received much attention recently for various applications such as postal and package delivery (e.g., Amazon Prime Air), food delivery, transport of medicines and vaccinations, and drone taxis for the delivery of people~\cite{UAV_delivery}. Compared to conventional delivery methods, UAV-DSs allow a faster delivery process at a reduced cost. They can also provide mission-critical services reaching remote and inaccessible areas. To reap the benefits of UAV-DSs, it is important to provide cellular connectivity to the UAVs for control and signaling data transmission. In essence, providing cellular connectivity to delivery UAVs allows network operators to track their location and guarantee a secure delivery of the transported goods. Therefore, to realize such benefits, it is important to address several wireless and security challenges related to cellular-connected UAV-DSs, ranging from efficient handover and path planning to cyber-physical attacks. \subsection{Wireless Challenges and AI Solutions} \subsubsection{Ultra-Reliable and Low-Latency Communications (URLLC)}\label{URLLC} In UAV-DSs, the UAVs must send critical control information while delivering goods to their destinations. This, in essence, requires a latency of $1$ ms or less and extremely stringent reliability with a target block error rate as low as $10^{-5}$~\cite{URLLC_requirements}, especially in mission-critical scenarios such as medical delivery. Wireless latency encompasses both signaling overhead and data transmission. To achieve low signaling latency, the channel state can be predicted in advance using AI, thus allowing a proactive allocation of radio resources. This can be realized by incorporating a long short-term memory (LSTM) cell at the UAV level for learning a sequence of future channel states~\cite{ursula_TWC_1}. LSTMs are effective in dealing with long-term dependencies, which makes them suitable for learning a sequence of a time-dependent vector. Moreover, in a large network of UAVs, constantly communicating with a remote cloud can introduce substantial communication and signaling delays. To reduce such delays, one can rely on on-device machine learning or edge AI. As opposed to centralized, cloud-based AI schemes, on-device machine learning is based on a distributed machine learning approach, such as \emph{federated learning} (FL), in which the training data describing a particular AI task (e.g., resource management or computing) is stored in a distributed fashion across the UAVs and the optimization problem is solved collectively~\cite{federated_learning}. This in turn enables a large number of UAVs to collaboratively allocate their radio resources in a distributed way, thus reducing wireless congestion and device-to-cloud latency. Finally, it is important to note that transmission latency can be further reduced by improving wireless connectivity, as discussed in Section~\ref{multimedia_wireless}. \subsubsection{Efficient Handover} In UAV-DSs, the UAVs face frequent handovers, including handovers to distant cells, resulting in a ping-pong effect.
As opposed to ground UEs, cellular-connected UAVs exhibit LoS links with multiple neighboring BSs simultaneously which, along with dynamic channel variations, can result in fluctuations in the quality of their wireless transmission. In this context, it is necessary to have complete and sequential information about the channel signal quality at the locations before and after the current location of a particular UAV. As such, bidirectional LSTM cells (bi-LSTMs) are well suited for addressing this challenge as they exploit both the previous and the future context by processing the input data (i.e., channel quality) from two directions with two separate hidden layers. In particular, one LSTM layer processes the input sequence in the forward direction, while the other LSTM layer processes the input in the reverse direction~\cite{bi-LSTM}. Therefore, instead of accounting for the next time step only, this scheme enables each UAV to consider the channel quality at its previous and future sequence locations. This framework can hence be trained to allow the UAVs to update their corresponding cell association vector while avoiding frequent handovers, based on previous and future channel signal quality. \subsubsection{Autonomous Path Planning with Connectivity Constraints} A critical factor for UAV-DSs is to maintain reliable cellular connectivity for the UAVs at each time instant along their corresponding paths while also minimizing the total time required to accomplish their delivery mission. In essence, a delivery UAV must maintain a minimum signal-to-interference-plus-noise ratio (SINR) along its path to guarantee a reliable communication link for its control information. This naturally depends on the UAV's location, cell association vector, transmit power level, and the location of the serving ground BS. As such, a key challenge for UAV-DSs is to optimize the UAVs' paths so as to reduce their total delivery time while guaranteeing reliable wireless connectivity, i.e., an instantaneous SINR above a threshold value. Although a centralized approach can update the path plan of each UAV, this would require real-time tracking of the UAVs and control signals to be transmitted to the UAVs at all times. Moreover, a centralized approach incurs high round-trip latencies and requires a central entity to acquire full knowledge of the current network state. To overcome these challenges, \emph{online} edge algorithms must be implemented individually by each UAV to plan its future path. In this regard, convolutional neural networks (CNNs) can be combined with a deep reinforcement learning (RL) algorithm based on a recurrent neural network (RNN) (e.g., an echo state network (ESN) or an LSTM) at the UAV level, resulting in a CNN-RNN scheme. An ESN exhibits dynamic temporal behavior and is characterized by its adaptive memory that enables it to store the necessary previous state information to predict the future steps of each UAV. Meanwhile, CNNs are mainly used for image recognition and can thus be used for identifying the UAV's environment by extracting features from input images. For instance, CNNs aid the UAVs in identifying the locations of the ground BSs, ground UEs, and other UAVs in the network. These extracted features are then fed to a deep RNN which can be trained to learn an optimized sequence of the UAV's future steps that minimizes its delivery time while guaranteeing reliable cellular connectivity at each time instant.
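To make the ESN component concrete, the following is a minimal Python sketch of a reservoir network with a trainable linear readout. The input vector could, for instance, encode the UAV's position and the locations of neighboring BSs, while the readout scores candidate movement steps; all names, dimensions, and hyperparameters are illustrative assumptions on our part and are not the implementation used in~\cite{ursula_path_planning}.
\begin{verbatim}
import numpy as np

class ESN:
    """Minimal echo state network: fixed random reservoir, trained readout."""
    def __init__(self, n_in, n_res, n_out, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # rescale so the spectral radius is rho (echo state property)
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))
        self.W_out = np.zeros((n_out, n_res))
        self.state = np.zeros(n_res)

    def step(self, u):
        # reservoir update followed by the linear readout
        self.state = np.tanh(self.W_in @ u + self.W @ self.state)
        return self.W_out @ self.state

    def fit_readout(self, states, targets, reg=1e-6):
        # ridge regression on reservoir states collected during training
        S, Y = np.asarray(states), np.asarray(targets)
        self.W_out = Y.T @ S @ np.linalg.inv(S.T @ S + reg * np.eye(S.shape[1]))
\end{verbatim}
The defining design choice of an ESN is that only the readout is trained (here by ridge regression), which keeps on-device learning lightweight, an attractive property for battery-limited UAVs.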
In this regard, in~\cite{ursula_path_planning}, we proposed a deep RL framework based on an ESN (D-ESN) for optimizing the trajectories of multiple cellular-connected UAVs in an online manner while minimizing latency and interference. For simplicity, we consider an input vector describing the locations of the neighboring ground BSs and other UAVs instead of extracting such features from a CNN. To highlight the gain of D-ESN for path planning, we compare the average values of the (a) wireless latency per UAV and (b) rate per ground UE resulting from the proposed path planning scheme and the shortest path scheme, as shown in Fig.~\ref{path_planning}. Clearly, from Fig.~\ref{path_planning}, we can see that exploiting a deep ESN-based path planning scheme under connectivity constraints for cellular-connected UAVs results in a more reliable wireless connectivity and in a lower latency, as compared to the wireless-unaware shortest path scheme. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=9cm]{figures/scalability.eps} \caption{Performance assessment of the proposed deep ESN-based path planning algorithm in terms of average (a) wireless latency per UAV and (b) rate per ground UE as compared to the shortest path approach, for different numbers of UAVs~\cite{ursula_path_planning}.}\label{path_planning} \end{center} \vspace{-0.6cm} \end{figure} \subsection{Security Challenges and AI Solutions} Due to the UAVs' altitude limitations and the LoS communication link with the ground BS, UAV-based delivery systems are vulnerable to \emph{cyber-physical (CP) attacks} in which an adversary aims at compromising a delivery UAV, taking over its control, and ultimately destroying, delaying, or stealing the transported goods. To thwart such CP attacks, the UAV can create a CP threat map in which the adversaries' locations are categorized based on both the environmental objects from which the UAVs can be physically attacked and the parts of the communication network where cyber attacks can be imposed on the communication link. Even though prior works assume that a threat map is predetermined~\cite{Sanjab}, it is important to create such a map in an online manner in order to account for real-time changes in the environment and to overcome the memory limitation of the UAVs for storing a large-scale map. To realize this, a CNN can be trained to classify the high-risk locations by taking as input the images of the UAV's surrounding environment along each position of its path. From the operator's perspective, it is also important to detect any potential attack by identifying any abnormal or undesirable behavior in the UAVs' motion. Therefore, given their capability of dealing with time-series data, RNNs can be adopted for capturing the UAV's motion characteristics by feeding them with the UAV's dynamics such as its position, speed, acceleration, and destination location. In this case, the RNN's output will be the UAV's predicted normal motion, and thus, using this output, the operator can detect abnormal motion resulting from a CP attack. \section{UAV-Based Real-Time Multimedia Streaming Applications}\label{section:UAV_RMS} \subsection{Motivation} One key use case for cellular-connected UAVs is to provide various real-time multimedia streaming applications such as online video streaming and broadcasting, UAV-enabled virtual reality (VR), online tracking and localization of mobile targets, and surveillance.
In essence, providing cellular connectivity to the UAVs enables online transmission of data and low-latency wireless communication, which are essential factors for multimedia streaming applications. To enable effective delivery of such real-time multimedia using cellular-connected UAVs, several wireless and security challenges need to be addressed, ranging from interference management to authentication. \subsection{Wireless Challenges and AI Solutions}\label{multimedia_wireless} \subsubsection{Interference Management} For UAV-RMS applications, UAVs will mainly transmit data in the \emph{uplink}. Nevertheless, the ability of cellular-connected UAVs to establish LoS connectivity with multiple ground BSs can lead to substantial mutual interference among them as well as to the ground users. To address this challenge, new improvements in the design of future cellular networks, such as advanced receivers, cell coordination, 3D frequency reuse, and 3D beamforming, are needed. For instance, due to their ability to recognize and classify images, CNNs can be implemented on each UAV in order to identify several features of the environment such as the locations of UAVs, BSs, and ground UEs. Such an approach will enable each UAV to adjust its beamwidth and tilt angle so as to minimize the interference on the ground UEs. Moreover, in streaming scenarios, UAV trajectory optimization is also essential. In particular, physical layer solutions, such as 3D beamforming, can be combined with an interference-aware path planning scheme to guarantee more efficient communication links for both ground and aerial users. Such a path planning scheme (e.g., the one we proposed in~\cite{ursula_path_planning}) allows the UAVs to adapt their movement based on the rate requirements of both aerial UAV-UEs and ground UEs, thus improving the overall network performance. \subsubsection{UAV-enabled Edge Caching} For various real-time multimedia streaming applications, cellular-connected UAVs must generate videos from data files collected using sensors and cameras. For instance, in UAV-enabled VR applications, the UAVs will generate $360^\circ$ videos for each user. However, each UAV can only collect a limited number of data files, which might not be sufficient for generating all the requested videos. Meanwhile, cache-enabled UAVs can store common data files related to popular content or needed for generating videos that users may request in the future, thus reducing the number of data files that UAVs need to collect when a request is made~\cite{mingzhe_caching}. For instance, for UAV-enabled VR applications, cache-enabled UAVs can directly store a $360^\circ$ video and send a rotated version of this stored video according to each user's viewing perspective. Moreover, for game broadcast applications, cache-enabled UAVs can store the environment of the game and thus would only need to track the motions of the players for updating the cached data. Here, CNNs can once again be adopted for allowing cache-enabled UAVs to store popular videos or common data files. In particular, CNNs can extract and store the common features of the data files that are requested by different users, or by each user, at different time slots. Furthermore, CNNs can be used to record the features of each UAV's surrounding environment. Consequently, when the UAVs need to collect data in a new environment, they would only need to collect new features that are not already recorded by the CNNs.
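As a toy illustration of this idea, the snippet below uses a pretrained CNN as a fixed feature extractor and collects or caches a newly sensed image only when its feature vector is not already well represented in the cache. The model choice, similarity threshold, and helper names are our own illustrative assumptions and are not taken from~\cite{mingzhe_caching}.
\begin{verbatim}
import torch
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # keep the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def embed(img):
    # img: a (3, 224, 224) float tensor captured by the UAV's camera
    return torch.nn.functional.normalize(backbone(img.unsqueeze(0)), dim=1)

def should_cache(img, cached_feats, thresh=0.9):
    """Collect/cache the file only if it is not already well represented."""
    f = embed(img)
    if not cached_feats:
        return True
    sims = torch.cat(cached_feats) @ f.T    # cosine similarities
    return sims.max().item() < thresh
\end{verbatim}
In such a scheme, the cached feature vectors (a few kilobytes each) stand in for raw files, so deciding what is genuinely new in an unseen environment requires no backhaul access.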
In this context, RNNs can also be employed for predicting the users' video requests. In fact, the content requests of users can be correlated over time, and thus, RNNs can enable the UAVs to cache in advance the predicted future requests or other popular multimedia files. \begin{figure}[t] \centering \subfigure[]{ \label{figure2a} \includegraphics[width=6.5cm]{figures/figure3.eps}} \subfigure[]{ \label{figure2b} \includegraphics[width=6cm]{figures/figure9.eps}} \vspace{-0.4cm} \caption{\label{figure2} (a) Comparison of the content request probability predictions for the proposed conceptor ESN algorithm with the real data and (b) the average UAV transmit power as a function of the number of users in the network for the proposed conceptor ESN algorithm with and without caching~\cite{mingzhe_caching}.} \vspace{-0.75cm} \end{figure} Based on our work in~\cite{mingzhe_caching}, we introduce an ESN-based algorithm for predicting the users' content request distributions. The input to the proposed framework is the users' context information such as age, gender, and job, and the output of the ESN-based algorithm is the distribution of the users' content requests. Therefore, based on the users' content request distributions, the UAVs can determine the contents to store in the UAV cache and, thus, transmit the cached contents to the users without the need for backhaul connections. Using real data from \emph{Youku}, Fig.~\ref{figure2a} shows that the ESN-based algorithm can accurately predict the content request distribution of a given user. Fig.~\ref{figure2b} shows the average transmit power per UAV of cache-enabled UAVs as a function of the number of users. In Fig.~\ref{figure2b}, we can see that the proposed ESN algorithm for cache-enabled UAVs yields a considerable reduction in transmit power compared to a baseline without caching. \subsubsection{Identification of Aerial and Ground Users} As shown in~\cite{sebastien}, the radio propagation environment experienced by cellular-connected UAVs differs from that experienced by ground users. Consequently, to maximize the total network performance, a network operator must allocate its radio resources differently between airborne and ground users, especially for UAV-RMS applications. To realize this, network operators should be capable of differentiating an airborne user from a ground one, which cannot be achieved by solely relying on self-reporting due to the possibility of a faulty report. Instead, network operators can utilize wireless cellular radio measurements such as the reference signal received power (RSRP), received signal strength indicator (RSSI), and reference signal received quality (RSRQ) for user classification. These features can essentially act as an input to a deep belief network (DBN), which can be trained to classify an airborne user from a ground one. In essence, DBNs are deep architectures that consist of a stack of restricted Boltzmann machines (RBMs), with the benefit that each layer can learn more complex features than the layers before it. A layer-wise pre-training step is performed in DBNs, thus overcoming the vanishing gradient problem; this is then followed by fine-tuning the network weights using the conventional error back-propagation algorithm.
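To make the classification pipeline concrete, the following is a minimal sketch trained on synthetic radio measurements; we use a plain dense network as a stand-in for a DBN, since with only a handful of input features the greedy RBM pre-training step adds little. The feature layout and labels are invented for illustration.
\begin{verbatim}
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3)).astype("float32")  # [RSRP, RSSI, RSRQ]
# synthetic labels: airborne users see stronger, cleaner reference signals
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(user is airborne)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
\end{verbatim}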
\vspace{-0.1cm} \subsection{Security Challenges and AI Solutions} In UAV-RMS applications, an attacker can disrupt the UAVs' data transmissions by forging the identities of the transmitting UAVs and sending corrupted data under those identities. This type of \emph{insider attack} becomes particularly acute in a large-scale UAV system. In particular, the BS must process the received multimedia files from all the UAVs and allocate computational resources for authenticating the UAVs. However, in large-scale networks, authenticating all the UAVs at once exceeds the BS's computational resources, thus incurring delays in processing the received files. To avoid this delay, the BS can authenticate only a fraction of the UAVs at each time step. To realize this, the BS could implement a deep RL algorithm based on an LSTM in order to learn which signals to authenticate at each time step of its authentication process. In particular, this framework takes as an input a sequence of previous security states of each UAV, indicating whether a UAV was previously vulnerable to attacks, and learns a sequence of future authentication decisions for each UAV. LSTMs are suitable for this application since they can learn the interdependence of the UAVs' vulnerability at past time steps, memorize the importance of UAVs to the BS, and map the past sequence of UAV states to a future decision sequence. To analyze the performance of the LSTM-based deep RL method for authentication, based on~\cite{ferdowsi2018deep}, we consider a network of 1000 UAVs that transmit multimedia streams to a BS. We analyze different scenarios in which different proportions of the available UAVs are vulnerable to cyber attacks. Fig.~\ref{fig:authentication} assesses the performance of the LSTM-based deep RL framework compared to two baseline authentication scenarios. From Fig.~\ref{fig:authentication}, we can see that the proposed algorithm performs on par with the two baselines when the proportion of vulnerable UAVs is low. However, as the number of vulnerable UAVs increases, the LSTM-based deep RL scheme outperforms the two baselines and reduces the proportion of compromised UAVs in the network. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=8cm]{figures/attacker.eps} \caption{The proportion of compromised cellular-connected UAVs as a function of the proportion of vulnerable UAVs in a large-scale UAV system authentication~\cite{ferdowsi2018deep}.}\label{fig:authentication} \end{center} \vspace{-0.6cm} \end{figure} \section{UAV-Enabled Intelligent Transportation Systems}\label{section:UAV_ITS} \vspace{-0.2cm} \subsection{Motivation} Integrating UAVs in an intelligent transportation system (ITS) would help control road traffic, monitor incidents, and enforce road safety. For instance, UAVs can provide a quick report in case of an accident and can act as flying roadside units, speed cameras, and dynamic traffic signals. Moreover, for vehicular platoons, to reduce wireless network congestion, a cellular-connected UAV can send control and network-related information to one of the vehicles only, and this vehicle can share the information with the other vehicles in the platoon via dedicated short-range communication links. UAVs can also track the behavior of a platoon, thus detecting any compromised vehicle. Therefore, to reap the benefits of UAV-ITSs, several wireless and security challenges need to be addressed, ranging from cooperative multi-UAV data transmission and multimodal data integration to secured consensus of UAV swarms.
\subsection{Wireless Challenges and AI Solutions} \begin{table*}[t!]\footnotesize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \captionsetup{justification=centering} \caption{Cellular-connected UAV use cases, challenges, and ANN-based solution schemes.}\label{UAV_table} \centering \tabcolsep=0.03cm \begin{tabular}{|c|c|c|c||c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Wireless and Security Challenges}& \multicolumn{3}{|c||}{\textbf{UAV-based Applications}} & \multicolumn{10}{|c|}{\textbf{ANN-based Solutions}}\\ \hline & UAV-DS & UAV-RMS & UAV-ITS & FL & bi-LSTM & CNN-RNN & D-ESN & CNN & ESN & DBN & LSTM & DSC & m-RBM\\ \hline URLLC &\checkmark & & &\checkmark & & & & & & & & &\\ \hline Efficient Handover &\checkmark & & & &\checkmark & & & & & & & &\\ \hline Autonomous Path Planning &\checkmark & & & & &\checkmark &\checkmark & & & & & & \\ \hline Interference Management & &\checkmark & & & & & &\checkmark & & & & & \\ \hline UAV-enabled Edge Caching & &\checkmark & & & & & &\checkmark &\checkmark & & & & \\ \hline Identification of Aerial and Ground Users & &\checkmark & & & & & & & & \checkmark& & & \\ \hline Cooperative Multi-UAV Data Transmission & & &\checkmark & & & & & & & & &\checkmark & \\ \hline Multimodal Sensor Fusion & & &\checkmark & & & & & & & & & &\checkmark \\ \hline Cyber-Physical Attacks &\checkmark & & & & & & &\checkmark & & & & & \\ \hline Authentication of UAVs & &\checkmark & & & & & & & & & \checkmark & & \\ \hline Secured Consensus of UAV Swarms & & &\checkmark &\checkmark & & & & & & & & & \\ \hline \end{tabular} \vspace{-0.24cm} \end{table*} \subsubsection{Cooperative Multi-UAV Data Transmission} In UAV-ITSs, each UAV is generally equipped with multiple sensors, such as LiDAR and GPS, and would therefore need to send different types of multimedia files and/or big data (e.g., a 3D-map representation of the environment) to other UAVs, vehicles, or the infrastructure, simultaneously. In such scenarios, it is essential for different UAVs in a given geographical area to coordinate their data transmission. In other words, instead of each UAV transmitting the whole data file, e.g., an area map, to its corresponding vehicle, each UAV will transmit a different part of the data file to all of the vehicles in a given geographical area, thus resulting in a faster data transmission and a lower power consumption per UAV. In this regard, deep spectral clustering (DSC) can be adopted for grouping the UAVs into several clusters for data transmission based on their locations, the types of sensors they carry, the data files they need to transmit, and the locations and number of vehicles in the network. In essence, DSC learns a map that embeds this input data into the eigenspace of the associated graph Laplacian matrix and then clusters the UAVs accordingly. Consequently, DSC endows the UAVs with the capability of transmitting correlated data in a cooperative and distributed manner to the vehicles. This would essentially result in a faster data transmission to the vehicles, thus allowing them to make real-time decisions for safe navigation among the surrounding traffic. DSC can be combined with cooperative game theory for further analysis of cooperative swarms of UAVs.
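As a simplified stand-in for deep spectral clustering, the snippet below applies classical spectral clustering to hand-built UAV feature vectors; a DSC pipeline would instead learn the embedding with a deep network. The features, normalization, and cluster count are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
positions = rng.uniform(0, 1000, size=(20, 2))   # UAV (x, y) in meters
sensor_type = rng.integers(0, 3, size=(20, 1))   # 0: LiDAR, 1: camera, 2: GPS
payload_mb = rng.uniform(1, 50, size=(20, 1))    # size of the file to send

# normalize features so no single attribute dominates the affinity graph
X = np.hstack([positions / 1000.0, sensor_type / 2.0, payload_mb / 50.0])
labels = SpectralClustering(n_clusters=4,
                            affinity="nearest_neighbors",
                            n_neighbors=5,
                            random_state=0).fit_predict(X)
print(labels)  # cluster index per UAV; a cluster shares one transmission task
\end{verbatim}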
Moreover, the high mobility in ITSs, along with cooperative UAV swarms, requires revisiting the interference and resource management schemes of Sections~\ref{section:UAV_DS} and~\ref{section:UAV_RMS} to handle the more dynamic and cooperative ITS environment. \subsubsection{Multimodal Sensor Fusion} In UAV-ITSs, UAVs must transmit each one of their sensor readings to other network nodes, thus resulting in cellular network congestion in the case of dense UAV deployments. Meanwhile, energy consumption and bandwidth allocation are important factors that determine the maximum operation time of the UAVs. As such, to reduce the power and bandwidth allocated for transmitting the sensor readings, a UAV can integrate its heterogeneous sensor readings into one vector, thus resulting in fewer data transmissions over the UAV-vehicle links while also providing a more comprehensive assessment of the environment. Nevertheless, there exist differences between sensors, ranging from sampling rates to data generation models, which makes UAV-based ITS sensor integration challenging. In this regard, multimodal RBMs (m-RBMs) are a suitable tool for combining the different perspectives captured in the signals of multimodal data for a system with multiple sensors~\cite{multimodal}. An m-RBM can be implemented at the UAV level, thus identifying nonintuitive features, largely from cross-sensor correlations, which can yield accurate estimation. From the UAV's perspective, this approach enables each UAV to have a better assessment of its environment. For instance, a system trained simultaneously to detect an accident, a high-speed vehicle, and an anomalous vehicle does better than three separate systems trained in isolation, since the single network can share information among the separate tasks. From the wireless network perspective, multimodal sensor fusion improves the UAV's energy efficiency and results in fewer data transmissions over the UAV-vehicle links, thus reducing wireless congestion and enabling a larger number of UAVs to be served simultaneously. \subsection{Security Challenges and AI Solutions} For UAV-ITSs, a swarm of coordinated UAVs has the capability of performing missions that single UAVs cannot. Swarming UAVs communicate with each other while in flight to reach a consensus over their defined task and can respond to changing conditions autonomously; a good analogy is a dense flock of starlings reacting to a sudden threat like a hawk. Nevertheless, this data sharing scheme among a swarm of UAVs is generally prone to \emph{adversarial machine learning} attacks in which an attacker can join the swarm and alter the shared data, resulting in non-harmonious movements as well as collisions. To overcome this challenge, federated learning can be adopted for a swarm of UAVs. In federated learning, each UAV receives from the BS the common task that needs to be accomplished by the UAV swarm and improves its learning model for completing the required task based on its collected data only. Then, each UAV summarizes the changes in its learning model and shares this summary with the other UAVs in the swarm (a minimal sketch of this model-averaging step is given below). This, indeed, avoids the vulnerability of raw data transmission between the UAVs, thus mitigating the risk of adversarial machine learning attacks. Table~\ref{UAV_table} provides a summary of the wireless and security challenges of cellular-connected UAVs in UAV-DS, UAV-RMS, and UAV-ITS applications while suggesting ANN-based solution schemes.
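For concreteness, the snippet below sketches the federated averaging step on a toy regression task: each UAV runs a few local gradient steps on its own data, and only the resulting model weights are shared and averaged. The task, data, and hyperparameters are invented for illustration.
\begin{verbatim}
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    # a few plain gradient-descent steps on the UAV's own data
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])
w_global = np.zeros(3)
for rnd in range(20):                     # communication rounds
    local_ws = []
    for _ in range(5):                    # 5 UAVs in the swarm
        X = rng.normal(size=(50, 3))      # each UAV's private sensor data
        y = X @ w_true + 0.1 * rng.normal(size=50)
        local_ws.append(local_update(w_global.copy(), X, y))
    w_global = np.mean(local_ws, axis=0)  # only model summaries are shared
print(w_global)                           # close to w_true
\end{verbatim}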
\section{Conclusion}\label{section:conc} In this paper, we have summarized the main use cases of cellular-connected UAVs in UAV-DS, UAV-RMS, and UAV-ITS applications. We have highlighted the main wireless and security challenges that arise in such scenarios and introduced various AI-based solutions for addressing these challenges. Preliminary simulation results have shown the benefits of the introduced solutions for each cellular-connected UAV application use case. \bibliographystyle{IEEEtran}
{ "timestamp": "2018-12-03T02:04:34", "yymm": "1804", "arxiv_id": "1804.05348", "language": "en", "url": "https://arxiv.org/abs/1804.05348" }
\section{Introduction} In 1776, Euler (\cite{Euler}) considered certain power series, the so-called double zeta values, and showed several relations among them. More than 200 years later, the {\it multiple zeta value} (MZV for short), which is the more general series $$\zeta(k_1, \dots, k_r) := \sum_{0<m_1<\cdots<m_r}\frac{1}{m_1^{k_1}\cdots m_r^{k_r}}$$ converging for $k_1,\dots,k_r\in\mathbb{N}$ with $k_r>1$, was discussed by Ecalle (\cite{Eca}) in 1981. In the 1990s, these values also came into focus through the work of Hoffman (\cite{Hof}) and Zagier (\cite{Zag}). MZVs admit iterated integral expressions, which enable us to regard them as periods of certain motives (\cite{DG}, \cite{Go} and \cite{Te}) and to calculate the Kontsevich invariant in knot theory (\cite{LM}). MZVs are also related to mathematical physics (\cite{BK1} and \cite{BK2}). MZVs are regarded as special values at positive integer points of the complex analytic function of several variables, the {\it multiple zeta-function} (MZF for short), which is defined by \begin{equation*} \zeta(s_1, \dots, s_r) := \sum_{0<m_1<\cdots<m_r}\frac{1}{m_1^{s_1}\cdots m_r^{s_r}}. \end{equation*} It converges absolutely in the region \begin{equation*} \{(s_1,\dots,s_r)\in\mathbb{C}^r\ |\ \frak{R}(s_{r-k+1}+\cdots+s_r)>k\ (1\leq k\leq r)\}. \end{equation*} In the early 2000s, Zhao (\cite{Zhao}) and Akiyama, Egami and Tanigawa (\cite{AET}) independently showed that the MZF can be meromorphically continued to $\mathbb{C}^r$. In particular, in \cite{AET}, the set of all singularities of the function $\zeta(s_1,\dots,s_r)$ was determined to be \begin{align*} &s_r=1,\nonumber\\ &s_{r-1}+s_r=2,1,0,-2,-4,\dots,\\ &s_{r-k+1}+\cdots+s_r=k-n\quad (3\leq k\leq r,\ n\in\mathbb{N}_0).\nonumber \end{align*} Because almost all integer points with non-positive arguments lie on the above singular locus, the special values of the MZF there are indeterminate in all cases except for $\zeta(-k)$ at $k\in\mathbb{N}_0$, and $\zeta(-k_1,-k_2)$ at $k_1,k_2\in\mathbb{N}_0$ with $k_1+k_2$ odd. Actually, giving a nice definition of ``$\zeta(-k_1,\dots,-k_r)$'' for $k_1,\dots,k_r\in\mathbb{N}_0$ is one of the most fundamental problems in this area. In order to resolve all of the infinitely many singularities of the MZF, the desingularization method was introduced by Furusho, Komori, Matsumoto and Tsumura in \cite{FKMT1}. By applying this method to $\zeta(s_1,\dots,s_r)$, they constructed the {\it desingularized MZF} $\zeta_r^{\rm des}(s_1,\dots,s_r)$, which is entire on the whole space $\mathbb{C}^r$. These functions are represented by finite linear combinations of shifted MZFs (cf. Proposition \ref{prp:1.1}). The {\it desingularized value} \begin{equation*} \zeta_r^{\rm des}(-k_1,\dots,-k_r)\in\mathbb{C} \end{equation*} is defined to be the special value of $\zeta_r^{\rm des}(s_1,\dots,s_r)$ at $(s_1,\dots,s_r)=(-k_1,\dots,-k_r)$ for $k_1,\dots,k_r\in\mathbb{N}_0$ (see Definition \ref{def:1.2.1}). In \cite{FKMT1}, its generating function given by \begin{equation}\label{eqn:0.4} Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r) := \sum_{k_1,\dots,k_r=0}^{\infty}\frac{(-t_1)^{k_1}\cdots(-t_r)^{k_r}}{k_1!\cdots k_r!}\zeta_r^{\rm des}(-k_1,\dots,-k_r) \end{equation} in $\mathbb{C}[[t_1,\dots,t_r]]$ was calculated, and the desingularized values were explicitly described in terms of the Bernoulli numbers (see Proposition \ref{prp:1.1.1}). On the other hand, Connes and Kreimer (\cite{CK}) initiated a Hopf algebraic approach to the renormalization procedure in perturbative quantum field theory.
A fundamental tool in their work is the {\it algebraic Birkhoff decomposition}. By applying this decomposition to a certain Hopf algebra related to MZVs, Guo and Zhang (\cite{GZ}) introduced {\it renormalized values} which satisfy harmonic-type product formulae. Later, Manchon and Paycha (\cite{MP}) and Ebrahimi-Fard, Manchon and Singer (\cite{EMS2}) introduced different renormalized values obeying harmonic-type product formulae, by using different Hopf algebras. Ebrahimi-Fard, Manchon and Singer (\cite{EMS1}) also introduced another type of renormalized values, satisfying a shuffle-type product. We denote their values by \begin{equation*} \zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots,-k_r)\in\mathbb{C} \end{equation*} for $k_1,\dots,k_r\in \mathbb{N}_0$, and their generating function by \begin{equation}\label{eqn:0.5} Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r) := \sum_{k_1,\dots,k_r=0}^{\infty}\frac{(-t_1)^{k_1}\cdots(-t_r)^{k_r}}{k_1!\cdots k_r!}\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots,-k_r) \end{equation} in $\mathbb{C}[[t_1,\dots,t_r]]$. In the paper \cite{Komi}, the author revealed the following relationship between the generating functions (\ref{eqn:0.4}) and (\ref{eqn:0.5}): \begin{equation}\label{eqn:0.8} Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r) = \prod_{i=1}^{r}\frac{1-e^{-t_i-\cdots-t_r}}{t_i+\cdots+t_r}\cdot Z_{\scalebox{0.5}{\rm FKMT}}(-t_1,\dots,-t_r). \end{equation} The following recurrence formula (valid for each $r\in\mathbb{N}$) was essential for the proof of the equation (\ref{eqn:0.8}): \begin{equation}\label{eqn:0.11} Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r) = Z_{\scalebox{0.5}{\rm FKMT}}(t_2,\dots,t_r)\cdot Z_{\scalebox{0.5}{\rm FKMT}}(t_1+\cdots+t_r). \end{equation} By this recurrence formula, we will show our main theorem (Theorem \ref{prop:2.1}), namely that $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$ fulfills the following shuffle-type product formula (\ref{thm:0.1}), which was shown to hold for $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots,-k_r)$ in \cite{EMS1}:\\ \noindent \smallskip {\bf Theorem \ref{prop:2.1}} {\it For $k_1,\dots,k_p,l_1,\dots,l_q\in\mathbb{N}_0$, we have} \begin{align}\label{thm:0.1} &&\zeta_p^{\rm des}(-k_1,\dots,-k_p)\zeta_q^{\rm des}(-l_1,\dots,-l_q)\hspace{6.6cm} \\ &&= \sum_{\substack{i_1 + j_1=l_1\\ \scalebox{0.5}{\rotatebox{90}{$\cdots$}}\\i_q + j_q=l_q}}\prod_{a=1}^q(-1)^{i_a}\binom{l_a}{i_a} \zeta_{p+q}^{\rm des}(-k_1,\dots,-k_{p-1},-k_p-i_1- \cdots -i_q,-j_1,\dots,-j_q). \nonumber \end{align} The above recurrence formula (\ref{eqn:0.11}) also yields \begin{equation}\label{eqn:0.9} \zeta_r^{\rm des}(-k_1,\dots,-k_r)=\sum_{\substack{i+j=k_r \\ i,j\geq0}} \binom{k_r}{i}\zeta_{r-1}^{\rm des}(-k_1,\dots,-k_{r-2},-k_{r-1}-i)\zeta_1^{\rm des}(-j) \end{equation} for $k_1,\dots,k_r\in\mathbb{N}_0$. We will extend the equation (\ref{eqn:0.9}) to the equation (\ref{eqn:4.1}) by replacing $-k_1,\dots,-k_{r-1}\in\mathbb{Z}_{\leq0}$ with $s_1,\dots,s_{r-1}\in\mathbb{C}$ in Proposition \ref{thm:3.1}. The plan of our paper goes as follows. In \S1, we will review the algebraic Birkhoff decomposition and the definition of the renormalized values in \cite{EMS1}. In \S2, we will recall the definition of the desingularized MZFs and the desingularized values introduced by Furusho, Komori, Matsumoto and Tsumura in \cite{FKMT1}. In \S3, we will prove the shuffle-type product formulae of desingularized values at non-positive integer points (Theorem \ref{prop:2.1}).
In \S4, we will show the formula (\ref{eqn:4.2}) in Proposition \ref{thm:3.1}, which generalizes the equation (\ref{thm:0.1}) in the case of $q=1$. \section{Algebraic Birkhoff decomposition and renormalized values} In this section, we assume that $\mathcal{H}$ is a Hopf algebra over $\mathbb{Q}$, $\mathcal{A}:=\mathbb{Q}[[z]][z^{-1}]$ and $\mathcal{L}(\mathcal{H},\mathcal{A}):=\{f:\mathcal{H}\rightarrow\mathcal{A}\ |\ \mbox{$f$ is a $\mathbb{Q}$-linear map}\}$. For maps $f,g\in\mathcal{L}(\mathcal{H},\mathcal{A})$, we define the convolution $f*g\in\mathcal{L}(\mathcal{H},\mathcal{A})$ by \begin{equation*} f*g:=m\circ(f\otimes g)\circ\Delta, \end{equation*} where $m$ is the product of $\mathcal{A}$ and $\Delta$ is the coproduct of $\mathcal{H}$. Then, the subset \begin{equation*} G(\mathcal{H},\mathcal{A}):=\{\ f\in\mathcal{L}(\mathcal{H},\mathcal{A})|\ f(1)=1\} \end{equation*} forms a group with the above convolution product $*$. \begin{thm}[\cite{CK}, \cite{EMS1}: {\bf the algebraic Birkhoff decomposition}]\label{thm:1.1} \ \\For $f\in G(\mathcal{H},\mathcal{A})$, there are unique linear maps $f_+:\mathcal{H}\rightarrow\mathbb{Q}[[z]]$ and $f_-:\mathcal{H}\rightarrow\mathbb{Q}[z^{-1}]$ with $f_-(1)=1\in\mathbb{Q}$ such that \begin{equation*} f=f_-^{-1}*f_+, \end{equation*} where $f_-^{-1}$ is the inverse element of $f_-$ in $G(\mathcal{H},\mathcal{A})$. Moreover, the maps $f_-$ and $f_+$ are algebra homomorphisms if $f$ is an algebra homomorphism. \end{thm} Let $\mathbb{Q}\langle d,y\rangle$ be the $\mathbb{Q}$-vector space generated by the words (including $1$) in $d$ and $y$. We define the $\mathbb{Q}$-algebra $(\mathbb{Q}\langle d,y\rangle,\shuffle_0)$ by the new product $\shuffle_0:\mathbb{Q}\langle d,y\rangle^{\otimes2}\rightarrow\mathbb{Q}\langle d,y\rangle$, which is the $\mathbb{Q}$-linear map recursively defined by \begin{align*} 1\shuffle_0 w & :=w\shuffle_01:=w, \\ yu\shuffle_0v & :=u\shuffle_0yv:=y(u\shuffle_0 v), \\ du\shuffle_0 dv & :=d(u\shuffle_0 dv)-u\shuffle_0 d^2v, \end{align*} for words $u$, $v$ and $w$ in $d$ and $y$. The pair $(\mathbb{Q}\langle d,y\rangle,\shuffle_0)$ forms a non-commutative algebra. We consider the following set $\mathcal{S}$: \begin{equation*} \mathcal{S}:=\langle d^k\{d(u\shuffle_0v)-du\shuffle_0v-u\shuffle_0dv\},\ wd\ |\ \mbox{$u,v,w$: words, $k\in\mathbb{N}_0$}\rangle_{(\mathbb{Q}\langle d,y\rangle,\shuffle_0)}, \end{equation*} that is, the two-sided ideal of $(\mathbb{Q}\langle d,y\rangle,\shuffle_0)$ generated by the above elements. Then, the quotient \begin{equation*} \mathcal{H}_0:=\mathbb{Q}\langle d,y\rangle/\mathcal{S}, \end{equation*} forms a commutative and cocommutative Hopf algebra (its coproduct is not the deconcatenation coproduct; for details, see \cite{EMS1} and \cite{Komi}). We define the $\mathbb{Q}$-linear map $\phi:\mathcal{H}_0\rightarrow\mathcal{A}$ by $\phi(1):=1$ and, for $k_1,\dots,k_r\in\mathbb{N}_0$, \begin{equation*} \phi(d^{k_1}y\cdots d^{k_r}y)(z):=\partial^{k_1}_z(x\partial^{k_2}_z)\cdots (x\partial^{k_r}_z)(x(z)), \end{equation*} where $x:=x(z):=\frac{e^z}{1-e^z}\in\mathbb{Q}[[z]][z^{-1}]$ and $\partial_z$ is the derivative with respect to $z$. \begin{prp}[{\rm \cite[\S4.2]{EMS1}}]\label{prp:2.1} The $\mathbb{Q}$-linear map $\phi:\mathcal{H}_0\rightarrow\mathbb{Q}[[z]][z^{-1}]$ is well defined and is an algebra homomorphism. \end{prp} By applying Theorem \ref{thm:1.1} to this map $\phi$, we obtain the algebra homomorphism $\phi_+:\mathcal{H}_0\rightarrow\mathbb{Q}[[z]]$.
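As a simple depth-one illustration (this computation is ours and is meant only as a sanity check; the word-length grading of $\mathcal{H}_0$ makes the single letter $y$ primitive), we have
\begin{equation*}
\phi(y)(z)=x(z)=\frac{e^z}{1-e^z}=-\frac{1}{z}-\frac{1}{2}-\frac{z}{12}+O(z^2)\in\mathbb{Q}[[z]][z^{-1}].
\end{equation*}
For a primitive element, the algebraic Birkhoff decomposition simply splits off the pole part, so $\phi_+(y)(z)=-\frac{1}{2}-\frac{z}{12}+O(z^2)$ and hence $\lim_{z\rightarrow0}\phi_+(y)(z)=-\frac{1}{2}=\zeta(0)$, which is consistent with the definition and the proposition given below.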
\begin{dfn}[{\rm \cite[\S4.2]{EMS1}}] The renormalized value\footnote{If we follow the notations of \cite{EMS1}, it should be denoted by $\zeta_+(-k_r,\dots,-k_1)$.} $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots, -k_r)$ is defined by \begin{equation*} \zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots, -k_r):=\lim_{z\rightarrow0}\phi_+(d^{k_r}y\cdots d^{k_1}y)(z) \end{equation*} for $k_1,\dots,k_r\in\mathbb{N}_0$. \end{dfn} These renormalized values coincide with the special values of the meromorphic continuation of the MZF at those non-positive integer points which are non-singular, i.e., \begin{prp}[{\rm \cite[Theorem 4.3]{EMS1}}] For $k\in\mathbb{N}_0$, we have \begin{equation*} \zeta_{\scalebox{0.5}{\rm EMS}}(-k)=\zeta(-k), \end{equation*} and for $k_1,k_2\in\mathbb{N}_0$ with $k_1+k_2$ odd, we have \begin{equation*} \zeta_{\scalebox{0.5}{\rm EMS}}(-k_1, -k_2)=\zeta(-k_1, -k_2). \end{equation*} \end{prp} By Theorem \ref{thm:1.1} and Proposition \ref{prp:2.1}, we get the proposition below: \begin{prp}[{\rm \cite[\S4.2]{EMS1}}: {\bf shuffle-type product formula}]\label{prop:2.3.1} \ \\For elements $w$ and $w'$ of $\mathcal{H}_0$, we have $$\phi_+(w\shuffle_0w')=\phi_+(w)\phi_+(w').$$ \end{prp} Here are examples in low depths: \begin{exa} For $a,b,c \in \mathbb{N}_0$, we have \begin{align*} \zeta_{\scalebox{0.5}{\rm EMS}}(-a)\cdot\zeta_{\scalebox{0.5}{\rm EMS}}(-b)&=\sum_{k=0}^a(-1)^k\binom{a}{k}\zeta_{\scalebox{0.5}{\rm EMS}}(-b-k,-a+k),\\ \zeta_{\scalebox{0.5}{\rm EMS}}(-a)\cdot\zeta_{\scalebox{0.5}{\rm EMS}}(-b,-c)&=\sum_{\substack{i_1+j_1=b \\ i_2+j_2=c}}(-1)^{i_1+i_2}\binom{b}{i_1}\binom{c}{i_2}\zeta_{\scalebox{0.5}{\rm EMS}}(-a-i_1-i_2,-j_1,-j_2). \end{align*} \end{exa} In the paper \cite{Komi}, the author showed the explicit formula of $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots, -k_r)$: \begin{prp}[\cite{Komi}] For $r\in\mathbb{N}$, we have \begin{equation*} Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r)=\prod_{i=1}^r\frac{(t_i+\cdots+t_r)-(e^{t_i+\cdots+t_r}-1)}{(t_i+\cdots+t_r)(e^{t_i+\cdots+t_r}-1)}, \end{equation*} where $Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r)$ is the generating function {\rm (\ref{eqn:0.5})} of $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots, -k_r)$. \end{prp} \section{Desingularization of multiple zeta-functions} In this section, we review the desingularized values introduced by Furusho, Komori, Matsumoto and Tsumura in \cite{FKMT1}. In \S2.1, we recall the definition of the desingularized MZF and explain some remarkable properties of this function. In \S2.2, we review the desingularized values and their generating function. \subsection{Desingularized MZFs} In this subsection, we review the definition of the desingularized MZF and two of its properties, namely that the desingularized MZF can be analytically continued to $\mathbb{C}^r$ as an entire function (Proposition \ref{prp:1.2}) and that it can be represented by a finite ``linear'' combination of MZFs (Proposition \ref{prp:1.1}). We consider the generating function\footnote{It is denoted by $\tilde{\mathfrak{H}}_n\left((t_j);(1);c\right)$ in \cite{FKMT1}.} $\tilde{\mathfrak{H}}_r\left(t_1,\dots,t_r;c\right) \in \mathbb{C}[[t_1,\dots,t_r]]$ (cf. \cite[Definition 1.9]{FKMT1}): \begin{align*} \tilde{\mathfrak{H}}_r\left(t_1,\dots,t_r;c\right)&:=\prod_{j=1}^r\left(\frac{1}{\exp{\left(\sum_{k=j}^r t_k\right)}-1}-\frac{c}{\exp{\left(c\sum_{k=j}^r t_k\right)}-1}\right)\\ &=\prod_{j=1}^r\left(\sum_{m=1}^{\infty}(1-c^m)B_m\frac{\left(\sum_{k=j}^r t_k\right)^{m-1}}{m!}\right) \end{align*} for $c\in\mathbb{R}$.
Here $B_m\ (m\geq0)$ denotes the $m$-th Bernoulli number, defined by \begin{equation}\label{eqn:1.1.1} \displaystyle\frac{x}{e^x-1}:=\sum_{m\geq0}\frac{B_m}{m!}x^m. \end{equation} We note that $B_0=1$, $B_1=-\frac{1}{2}$, $B_2=\frac{1}{6}$. \begin{dfn}[{\rm \cite[Definition 3.1]{FKMT1}}] For non-integral complex numbers $s_1,\dots,s_r$, the {\it desingularized MZF} $\zeta_r^{\rm des}(s_1,\dots,s_r)$ is defined by \begin{align} \label{eqn:1.1.2}&\zeta_r^{\rm des}(s_1,\dots,s_r) \\ &:=\lim_{\substack{c\rightarrow1\\c\in\mathbb{R}\setminus\{1\}}}\frac{1}{(1-c)^r}\prod_{k=1}^r\frac{1}{(e^{2\pi is_k}-1)\Gamma(s_k)}\int_{\mathcal{C}^r}\tilde{\mathfrak{H}}_r\left(t_1,\dots,t_r;c\right)\prod_{k=1}^r t_k^{s_k-1}d t_k. \nonumber \end{align} Here $\mathcal{C}$ is the path consisting of the positive real axis (top side), a circle around the origin of radius $\varepsilon$ (sufficiently small), and the positive real axis (bottom side). \end{dfn} One of the remarkable properties of the desingularized MZF is that it is an entire function, i.e., the equation (\ref{eqn:1.1.2}) is well-defined as an analytic function by the following proposition. \begin{prp}[{\rm \cite[Theorem 3.4]{FKMT1}}]\label{prp:1.2} The function $\zeta_r^{\rm des} (s_1,\dots,s_r)$ can be analytically continued to $\mathbb{C}^r$ as an entire function in $(s_1,\dots,s_r)\in \mathbb{C}^r$ by the following integral expression: \begin{align*} \zeta_r^{\rm des}&(s_1,\dots,s_r) =\prod_{k=1}^r\frac{1}{(e^{2\pi is_k}-1)\Gamma(s_k)}\\ &\cdot\int_{\mathcal{C}^r}\prod_{j=1}^r\lim_{\substack{c\rightarrow1\\c\in\mathbb{R}\setminus\{1\}}}\frac{1}{1-c}\left(\frac{1}{\exp{\left(\sum_{k=j}^r t_k\right)}-1}-\frac{c}{\exp{\left(c\sum_{k=j}^r t_k\right)}-1}\right)\prod_{k=1}^r t_k^{s_k-1}d t_k. \end{align*} \end{prp} For indeterminates $u_j$ and $v_j\ (1\leq j\leq r)$, we set \begin{equation}\label{eqn:1.1.3} \mathcal{G}_r(u_1,\dots,u_r; v_1,\dots,v_r):=\prod_{j=1}^r\left\{1-(u_jv_j+\cdots+u_r v_r)(v_j^{-1}-v_{j-1}^{-1})\right\} \end{equation} with the convention $v_0^{-1}:=0$, and we define the set of integers $\{a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$m$}}\}$ by \begin{equation}\label{eqn:1.1.4} \mathcal{G}_r(u_1,\dots,u_r; v_1,\dots,v_r)=\sum_{\substack{\mbox{\boldmath {\footnotesize$l$}}=(l_j)\in\mathbb{N}_0^r\\ \mbox{\boldmath {\footnotesize$m$}}=(m_j)\in\mathbb{Z}^r \\ |\mbox{\boldmath {\footnotesize$m$}}|=0}}a^r_{\mbox{\boldmath {\footnotesize$l$}},\mbox{\boldmath {\footnotesize$m$}}}\prod_{j=1}^ru_j^{l_j}v_j^{m_j}. \end{equation} Here, $|\mbox{\boldmath {\footnotesize$m$}}|:=m_1+\cdots+ m_r$.\\ Another remarkable property of the desingularized MZF is that it is given by a finite ``linear'' combination of shifted MZFs, i.e., \begin{prp}[{\rm \cite[Theorem 3.8]{FKMT1}}]\label{prp:1.1} For $s_1,\dots,s_r \in \mathbb{C}$, we have the following equality between meromorphic functions of the complex variables $(s_1,\ldots,s_r)$: \begin{equation}\label{eqn:1.1.5} \zeta_r^{\rm des}(s_1,\dots,s_r)=\sum_{\substack{\mbox{\boldmath {\footnotesize$l$}}=(l_j)\in\mathbb{N}_0^r\\ \mbox{\boldmath {\footnotesize$m$}}=(m_j)\in\mathbb{Z}^r \\ |\mbox{\boldmath {\footnotesize$m$}}|=0}}a^r_{\mbox{\boldmath {\footnotesize$l$}},\mbox{\boldmath {\footnotesize$m$}}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)\zeta(s_1+m_1,\dots,s_r+m_r). \end{equation} Here, $(s)_{k}$ is the {\it Pochhammer symbol}, that is, $(s)_{0}:=1$ and $(s)_k:=s(s+1)\cdots(s+k-1)$ for $k\in\mathbb{N}$ and $s\in\mathbb{C}$. \end{prp}
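To illustrate (\ref{eqn:1.1.3})--(\ref{eqn:1.1.5}) in the simplest case (a small computation of ours): for $r=1$ we have
\begin{equation*}
\mathcal{G}_1(u_1;v_1)=1-u_1v_1\cdot v_1^{-1}=1-u_1,
\end{equation*}
so the only nonzero coefficients are $a^1_{0,0}=1$ and $a^1_{1,0}=-1$, and the equation (\ref{eqn:1.1.5}) reads
\begin{equation*}
\zeta_1^{\rm des}(s)=(s)_0\,\zeta(s)-(s)_1\,\zeta(s)=(1-s)\zeta(s),
\end{equation*}
a formula which will be used again in the proof of Proposition \ref{thm:3.1}.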
\subsection{Desingularized values} We review the definition of desingularized values and their explicit formula (Proposition \ref{prp:1.1.1}), and then we give a recurrence formula for desingularized values (Corollary \ref{crl:1.1.1}). \begin{dfn}\label{def:1.2.1} For $k_1,\dots,k_r \in \mathbb{N}_0$, the {\it desingularized value} $\zeta_r^{\rm des}(-k_1,\dots,-k_r)\in\mathbb{C}$ is defined to be the special value of the desingularized MZF $\zeta_r^{\rm des}(s_1,\dots,s_r)$ at $(s_1,\dots,s_r)=(-k_1,\dots,-k_r)$. \end{dfn} The generating function $Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r)$ of $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$ in the equation (\ref{eqn:0.4}) is explicitly calculated as follows. \begin{prp}[{\rm \cite[Theorem 3.7]{FKMT1}}]\label{prp:1.1.1} We have \begin{equation*} Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r) = \prod_{i=1}^r\frac{(1-t_i-\cdots-t_r)e^{t_i+\cdots+t_r}-1}{(e^{t_i+\cdots+t_r}-1)^2}. \end{equation*} In terms of $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$ for $k_1,\dots,k_r\in\mathbb{N}_0$, the above equation is reformulated to \begin{equation*} \zeta_r^{\rm des}(-k_1,\dots,-k_r)=(-1)^{k_1+\cdots+k_r}\sum_{\substack{\nu_{1i}+\cdots+\nu_{ii}=k_i\\1\leq i\leq r}}\prod_{i=1}^r\frac{k_i!}{\prod_{j=i}^r\nu_{ij}!}B_{\nu_{ii}+\cdots+\nu_{ir}+1}. \end{equation*} \end{prp} By the above proposition we have the following recurrence formula: \begin{crl}\label{crl:1.1.1} \begin{equation}\label{eqn:1.2.1} Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r) = Z_{\scalebox{0.5}{\rm FKMT}}(t_2,\dots,t_r)\cdot Z_{\scalebox{0.5}{\rm FKMT}}(t_1+\cdots+t_r) \quad(r \in \mathbb{N}). \end{equation} In terms of $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$, the equation {\rm (\ref{eqn:1.2.1})} is reformulated to \begin{equation}\label{eqn:1.2.2} \zeta_r^{\rm des}(-k_1,\dots,-k_r) = \sum_{\substack{i_2 + j_2=k_2\\\scalebox{0.5}{\rotatebox{90}{$\cdots$}}\\i_r + j_r=k_r}}\prod_{a=2}^r\binom{k_a}{i_a}\zeta_{r-1}^{\rm des}(-i_2,\dots,-i_r)\zeta_1^{\rm des}(-k_1-j_2-\dots-j_r) \end{equation} for $k_1,\dots,k_r \in \mathbb{N}_0$. \end{crl} In the paper \cite{Komi}, the author showed that the desingularized values $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$ and the renormalized values $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots,-k_r)$ in \cite{EMS1} are equivalent (the equation (\ref{eqn:0.8})), i.e. \begin{thm} For $r\in\mathbb{N}$, we have \begin{equation*} Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r) = \prod_{i=1}^{r}\frac{1-e^{-t_i-\cdots-t_r}}{t_i+\cdots+t_r}\cdot Z_{\scalebox{0.5}{\rm FKMT}}(-t_1,\dots,-t_r). \end{equation*} \end{thm} The following is an example of this equivalence, where we abbreviate $\zeta_{\scalebox{0.5}{\rm FKMT}}(-k):=\zeta_1^{\rm des}(-k)$: \begin{exa}\label{ex:3.1} For $k \in \mathbb{N}_0$, we have \begin{align*} &\zeta_{\scalebox{0.5}{\rm EMS}}(-k) = \displaystyle\sum_{i+j=k}\binom{k}{i}\frac{(-1)^j}{i+1}\zeta_{\scalebox{0.5}{\rm FKMT}}(-j),\\ &\zeta_{\scalebox{0.5}{\rm FKMT}}(-k) = (-1)^{k}\displaystyle\sum_{i+j=k}\binom{k}{i}B_{i}\zeta_{\scalebox{0.5}{\rm EMS}}(-j). \end{align*} \end{exa} \section{The product formulae at non-positive integer points} In this section, we prove the shuffle-type product formulae of desingularized values at non-positive integer points (Theorem \ref{prop:2.1}). \begin{lmm}\label{lmm:1.1} For $r\in\mathbb{N}$, we have \begin{equation}\label{eqn:1.1} Z_{\scalebox{0.5}{\rm FKMT}}(u_1)\cdots Z_{\scalebox{0.5}{\rm FKMT}}(u_r)=Z_{\scalebox{0.5}{\rm FKMT}}(u_1-u_2,u_2-u_3,\dots,u_{r-1}-u_r,u_r). \end{equation} \end{lmm} \begin{proof} Let $r \in \mathbb{N}$.
Using the equation (\ref{eqn:1.2.1}) repeatedly, we get \begin{equation*} Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r) = \prod_{i=1}^r Z_{\scalebox{0.5}{\rm FKMT}}(t_i+\cdots+t_r). \end{equation*} Replacing $t_i + \cdots +t_r$ by $u_i$ for $i=1,\dots,r$ in this formula, we obtain the equation (\ref{eqn:1.1}). \end{proof} By a straightforward calculation, we obtain the following lemma. \begin{lmm}\label{lmm:1.2} For $r\in\mathbb{N}$, $a_1,\dots,a_r\in \mathbb{C}$ and $f:\mathbb{N}_0\rightarrow\mathbb{C}$, we have \begin{align*} \sum_{k=0}^{\infty}\frac{(a_1+\cdots+a_r)^k}{k!}f(k)&=\sum_{k=0}^{\infty}\frac{f(k)}{k!}\sum_{i_1+\cdots+i_r=k}\frac{k!}{i_1!\cdots i_r!}a_1^{i_1}\cdots a_r^{i_r} \\ &=\sum_{i_1,\dots,i_r=0}^{\infty}\frac{a_1^{i_1}\cdots a_r^{i_r}}{i_1!\cdots i_r!}f(i_1+\cdots+i_r). \end{align*} \end{lmm} Using the above two lemmas, we have the following theorem. \begin{thm}\label{prop:2.1} For $p,q\in\mathbb{N}$ and $k_1,\dots,k_p,l_1,\dots,l_q\in\mathbb{N}_0$, we have \begin{align}\label{eqn:1.2} &&\zeta_p^{\rm des}(-k_1,\dots,-k_p)\zeta_q^{\rm des}(-l_1,\dots,-l_q)\hspace{6.6cm} \\ &&= \sum_{\substack{i_1 + j_1=l_1\\ \scalebox{0.5}{\rotatebox{90}{$\cdots$}}\\i_q + j_q=l_q}}\prod_{a=1}^q(-1)^{i_a}\binom{l_a}{i_a} \zeta_{p+q}^{\rm des}(-k_1,\dots,-k_{p-1},-k_p-i_1- \cdots -i_q,-j_1,\dots,-j_q). \nonumber \end{align} \end{thm} \begin{proof} Using the equation (\ref{eqn:1.2.1}) repeatedly, we get {\small \begin{equation*} Z_{\scalebox{0.5}{\rm FKMT}}(s_1,\dots,s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q) = Z_{\scalebox{0.5}{\rm FKMT}}(s_1+\cdots+s_p) \cdots Z_{\scalebox{0.5}{\rm FKMT}}(s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1+\cdots+t_q)\cdots Z_{\scalebox{0.5}{\rm FKMT}}(t_q). \end{equation*} By putting $u_i = \left\{\begin{array}{cc} s_i+\cdots+s_p & (1\leq i \leq p), \\ t_{i-p}+\cdots+t_q & (p+1\leq i \leq p+q), \end{array}\right.$ and applying the equation (\ref{eqn:1.1}) to the above equation, we have \begin{align*} &Z_{\scalebox{0.5}{\rm FKMT}}(s_1,\dots,s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q) \\ =& Z_{\scalebox{0.5}{\rm FKMT}}(s_1,\dots,s_{p-1},s_p-t_1-\cdots-t_q,t_1,\dots,t_q) \\ =& \sum_{k_1,\dots,k_p\geq0}\frac{(-s_1)^{k_1}\cdots(-s_{p-1})^{k_{p-1}}(-s_p+t_1+\cdots+t_q)^{k_p}}{k_1!\cdots k_{p-1}!k_p!} \\ &\hspace{5em}\cdot\sum_{j_1,\dots,j_q\geq0}\frac{(-t_1)^{j_1}\cdots(-t_q)^{j_q}}{j_1!\cdots j_q!}\zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p,-j_1,\dots,-j_q) \\ =& \sum_{\substack{k_1,\dots,k_{p-1}\geq0\\j_1,\dots,j_q\geq0}}\frac{(-s_1)^{k_1}\cdots(-s_{p-1})^{k_{p-1}}}{k_1!\cdots k_{p-1}!}\frac{(-t_1)^{j_1}\cdots(-t_q)^{j_q}}{j_1!\cdots j_q!} \\ &\hspace{5em}\cdot\sum_{k_p\geq0}\frac{(-s_p+t_1+\cdots+t_q)^{k_p}}{k_p!}\zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p,-j_1,\dots,-j_q).
\\ \intertext{Using Lemma \ref{lmm:1.2}, we get} &Z_{\scalebox{0.5}{\rm FKMT}}(s_1,\dots,s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q) \\ =& \sum_{\substack{k_1,\dots,k_{p-1}\geq0\\j_1,\dots,j_q\geq0}}\frac{(-s_1)^{k_1}\cdots(-s_{p-1})^{k_{p-1}}}{k_1!\cdots k_{p-1}!}\frac{(-t_1)^{j_1}\cdots(-t_q)^{j_q}}{j_1!\cdots j_q!} \\ &\hspace{3em}\cdot\sum_{k_p,i_1,\dots,i_q\geq0}\frac{(-s_p)^{k_p}t_1^{i_1}\cdots t_q^{i_q}}{k_p!i_1!\cdots i_q!}\zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p-i_1-\cdots-i_q,-j_1,\dots,-j_q) \\ =& \sum_{k_1,\dots,k_p\geq0}\frac{(-s_1)^{k_1}\cdots(-s_p)^{k_p}}{k_1!\cdots k_p!} \\ &\hspace{1em}\cdot\sum_{\substack{i_1,\dots,i_q\geq0\\j_1,\dots,j_q\geq0}}\frac{t_1^{i_1}\cdots t_q^{i_q}}{i_1!\cdots i_q!}\frac{(-t_1)^{j_1}\cdots(-t_q)^{j_q}}{j_1!\cdots j_q!}\zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p-i_1-\cdots-i_q,-j_1,\dots,-j_q) \end{align*} \begin{align*} &Z_{\scalebox{0.5}{\rm FKMT}}(s_1,\dots,s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q) \\ =& \sum_{k_1,\dots,k_p\geq0}\frac{(-s_1)^{k_1}\cdots(-s_p)^{k_p}}{k_1!\cdots k_p!} \\ &\hspace{0em}\cdot\sum_{\substack{i_1,\dots,i_q\geq0\\j_1,\dots,j_q\geq0}}\frac{(-t_1)^{i_1+j_1}\cdots(-t_q)^{i_q+j_q}}{i_1!\cdots i_q!j_1!\cdots j_q!}(-1)^{i_1+\cdots+i_q}\zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p-i_1-\cdots-i_q,-j_1,\dots,-j_q) \\ =& \sum_{\substack{k_1,\dots,k_p\geq0\\l_1,\dots,l_q\geq0}}\frac{(-s_1)^{k_1}\cdots(-s_p)^{k_p}}{k_1!\cdots k_p!}\frac{(-t_1)^{l_1}\cdots(-t_q)^{l_q}}{l_1!\cdots l_q!} \\ &\hspace{0em}\cdot\sum_{\substack{i_1 + j_1=l_1\\ \scalebox{0.5}{\rotatebox{90}{$\cdots$}}\\i_q + j_q=l_q}}\prod_{a=1}^q\binom{l_a}{i_a}(-1)^{i_a} \zeta_{p+q}^{\rm des}(-k_1,\dots,-k_p-i_1-\cdots-i_q,-j_1,\dots,-j_q). \\ \end{align*}} On the other hand, by the definition of $Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q)$, we have {\small \begin{align*} Z_{\scalebox{0.5}{\rm FKMT}}&(s_1,\dots,s_p)Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_q) \\ =& \left\{\sum_{k_1,\dots,k_p\geq0}\frac{(-s_1)^{k_1}\cdots(-s_p)^{k_p}}{k_1!\cdots k_p!}\zeta_p^{\rm des}(-k_1,\dots,-k_p)\right\} \\ &\hspace{10em}\cdot\left\{\sum_{l_1,\dots,l_q\geq0}\frac{(-t_1)^{l_1}\cdots(-t_q)^{l_q}}{l_1!\cdots l_q!}\zeta_q^{\rm des}(-l_1,\dots,-l_q)\right\} \\ =& \sum_{\substack{k_1,\dots,k_p\geq0\\l_1,\dots,l_q\geq0}}\frac{(-s_1)^{k_1}\cdots(-s_p)^{k_p}}{k_1!\cdots k_p!}\frac{(-t_1)^{l_1}\cdots(-t_q)^{l_q}}{l_1!\cdots l_q!}\zeta_p^{\rm des}(-k_1,\dots,-k_p)\zeta_q^{\rm des}(-l_1,\dots,-l_q). \\ \end{align*}} Therefore, we obtain the equation (\ref{eqn:1.2}). \end{proof} Here are examples for $(p,q)=(1,1),\ (1,2)$. \begin{exa} For $a,b,c\in\mathbb{N}_0$, we have \begin{align*} \zeta_1^{\rm des}(-a)\zeta_1^{\rm des}(-b)&=\sum_{i_1+j_1=b}(-1)^{i_1}\binom{b}{i_1}\zeta_2^{\rm des}(-a-i_1,-j_1), \\ \zeta_1^{\rm des}(-a)\zeta_2^{\rm des}(-b,-c)&=\sum_{\substack{i_1+j_1=b \\ i_2+j_2=c}}(-1)^{i_1+i_2}\binom{b}{i_1}\binom{c}{i_2}\zeta_3^{\rm des}(-a-i_1-i_2,-j_1,-j_2). \end{align*} \end{exa} \begin{rem} In order to prove Theorem \ref{prop:2.1}, we essentially used only the property (\ref{eqn:1.2.1}) of $Z_{\scalebox{0.5}{\rm FKMT}}(t_1,\dots,t_r)$, which also holds for $Z_{\scalebox{0.5}{\rm EMS}}(t_1,\dots,t_r)$; hence $\zeta_r^{\rm des}(-k_1,\dots,-k_r)$ satisfies the same shuffle-type product formula as $\zeta_{\scalebox{0.5}{\rm EMS}}(-k_1,\dots,-k_r)$ introduced in \cite{EMS1}.
\end{rem} \section{More general product formulae} In this section, we prove a generalization of the equation (\ref{eqn:0.9}) in Proposition \ref{thm:3.1} and a {\it general} ``shuffle product'' between $\zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-1})$ and $\zeta_1^{\rm des}(-l)$ in Proposition \ref{crl:3.2}. We assume $r\in\mathbb{N}_{\geq2}$ in this section. We start with the following lemma on a property of the Pochhammer symbol. \begin{lmm}\label{lmm:3.2} For $a,b\in\mathbb{C}$ and $n\in\mathbb{N}_0$, we have \begin{equation*} (a+b)_n=\sum_{i+j=n}\binom{n}{i}(a)_i(b)_j. \end{equation*} \end{lmm} \begin{proof} By considering the Taylor expansion of $(1-t)^{-a}$, we get \begin{equation*} (1-t)^{-a}=\sum_{n\geq0}\frac{(a)_n}{n!}t^n. \end{equation*} Because we have $(1-t)^{-a-b}=(1-t)^{-a}(1-t)^{-b}$, by comparing the coefficients of $t^n$ on both sides, we obtain the claim. \end{proof} The above lemma is used in the proof of Proposition \ref{prp:3.2}. \\ Next, we prove a property of $\mathcal{G}_r((u_j);(v_j))$ defined by the equation (\ref{eqn:1.1.3}). \begin{prp}\label{prp:3.1} We have \begin{align}\label{eqn:3.1} \mathcal{G}_r&\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) \\ &=(z+1)\mathcal{G}_{r-1}(u_1,\dots,u_{r-2},u_{r-1}+u_r+z; v_1,\dots,v_{r-1}). \nonumber \end{align} \end{prp} \begin{proof} By the definition of $\mathcal{G}_r((u_j);(v_j))$, we have \begin{align*} &\mathcal{G}_r\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) \\ =&\prod_{j=1}^{r-1}\left\{1-\left(u_jv_j+\cdots+u_{r-1} v_{r-1}+u_r\frac{u_r+z}{u_r}v_{r-1}\right)(v_j^{-1}-v_{j-1}^{-1})\right\} \\ &\cdot \left\{1-u_r\frac{u_r+z}{u_r}v_{r-1}\left(\left(\frac{u_r+z}{u_r}v_{r-1}\right)^{-1}-v_{r-1}^{-1}\right)\right\} \\ =&\prod_{j=1}^{r-1}\left\{1-\left(u_jv_j+\cdots+u_{r-1} v_{r-1}+(u_r+z)v_{r-1}\right)(v_j^{-1}-v_{j-1}^{-1})\right\} \\ &\cdot \left\{1-(u_r+z)v_{r-1}\left(\frac{u_r}{u_r+z}-1\right)v_{r-1}^{-1}\right\} \end{align*} \begin{align*} =&\prod_{j=1}^{r-1}\left\{1-\left(u_jv_j+\cdots+u_{r-2} v_{r-2}+(u_{r-1}+u_r+z)v_{r-1}\right)(v_j^{-1}-v_{j-1}^{-1})\right\} \\ &\cdot \left\{1-\left(u_r-(u_r+z)\right)\right\}\\ =&(z+1)\mathcal{G}_{r-1}(u_1,\dots,u_{r-2},u_{r-1}+u_r+z; v_1,\dots,v_{r-1}). \end{align*} \end{proof} It is easy to prove the following lemma by comparing the coefficients $a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$m$}}$ in the equations (\ref{eqn:1.1.3}) and (\ref{eqn:1.1.4}). \begin{lmm}\label{lmm:3.1} Let $\Bold{$l$}:=(l_j)\in\mathbb{N}_0^r$ and $\Bold{$m$}:=(m_j)\in\mathbb{Z}^r$. If $m_r\notin\{l_r-1,\ l_r\}$ or $m_r<0$, then we have \begin{equation} a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$m$}}=0. \end{equation} \end{lmm} For simplicity, we employ the following notation: \begin{notation} Let $s_1,\dots,s_r$ and $z$ be indeterminates. For an $r$-tuple of symbols $\Bold{$s$}:=(s_1,\dots,s_r)$, the symbols $\Bold{$s$}'$, $\Bold{$s$}^-$ and $|\Bold{$s$}|$ are defined by \begin{align*} \Bold{$s$}'&:=(s_1,\dots,s_{r-2},s_{r-1}+s_r), \\ \Bold{$s$}^-&:=(s_1,\dots,s_{r-1}), \\ |\Bold{$s$}|&:=s_1+\cdots+s_r, \end{align*} and we define $\Bold{$z$}:=(\underbrace{0,\dots,0}_{r-1},z)$.
\end{notation} \begin{lmm}\label{lmm:3.3} For the functions $f:\mathbb{Z}^r\rightarrow\mathbb{C}$ and $g:\mathbb{N}_0^{r+1}\rightarrow\mathbb{C}$ with \begin{equation*} \#\{\Bold{$n$}\in\mathbb{Z}^r\ |\ f(\Bold{$n$})\neq0\}<\infty \ \mbox{and} \ \#\{\Bold{$a$}\in\mathbb{N}_0^{r+1}\ |\ g(\Bold{$a$})\neq0\}<\infty, \end{equation*} we have \begin{align}\label{eqn:3.7} \sum_{\substack{\Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}}f(\Bold{$n$}) &=\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}}\sum_{\substack{p+q=m_{r-1} \\ p,q\in\mathbb{Z}}}f(\Bold{$m$}^-,p,q), \\ \label{eqn:3.8} \sum_{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r}g(\Bold{$l$}',l_{r-1},l_r) &=\sum_{\Bold{\footnotesize$k$}=(k_j)\in\mathbb{N}_0^{r-1}}\sum_{\substack{p+q=k_{r-1} \\ p,q\in\mathbb{N}_0}}g(\Bold{$k$},p,q). \end{align} \end{lmm} \begin{proof} We only prove the equation (\ref{eqn:3.7}), because the equation (\ref{eqn:3.8}) can be proved in the same way as the equation (\ref{eqn:3.7}). We have \begin{align*} \sum_{\substack{\Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}}f(\Bold{$n$}) &=\sum_{n_1,\dots,n_{r-2},n_{r-1}\in\mathbb{Z}}f(n_1,\dots,n_{r-2},n_{r-1},-n_1-\cdots-n_{r-2}-n_{r-1}) \\ &=\sum_{m_1,\dots,m_{r-2}\in\mathbb{Z}}\sum_{n_{r-1}\in\mathbb{Z}}f(m_1,\dots,m_{r-2},n_{r-1},-m_1-\cdots-m_{r-2}-n_{r-1}). \\ \intertext{When we put $m_{r-1}:=-m_1-\cdots-m_{r-2}$, then $m_{r-1}$ can run over all integers. So we get} &=\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}}\sum_{n_{r-1}\in\mathbb{Z}}f(m_1,\dots,m_{r-2},n_{r-1},m_{r-1}-n_{r-1}). \\ \intertext{When we put $p:=n_{r-1}$ and $q:=m_{r-1}-n_{r-1}$, then $p$ and $q$ run over all integers with $p+q=m_{r-1}$. So we obtain} &=\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}}\sum_{\substack{p+q=m_{r-1} \\ p,q\in\mathbb{Z}}}f(\Bold{$m$}^-,p,q). \end{align*} \end{proof} Using Proposition \ref{prp:3.1} together with Lemma \ref{lmm:3.1} and Lemma \ref{lmm:3.3}, we get the following corollary. \begin{crl}\label{crl:3.1} For $\Bold{$l$}:=(l_1,\dots,l_r)\in\mathbb{N}_0^r$ and $\Bold{$m$}:=(m_1,\dots,m_{r-1})\in\mathbb{Z}^{r-1}$ with $|\Bold{$m$}|=0$, we have \begin{align}\label{eqn:3.2} a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}+a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}&=\binom{l_{r-1}+l_r}{l_{r-1}}a^{r-1}_{\Bold{\footnotesize$l$}',\Bold{\footnotesize$m$}} \\ &=-a^r_{\left(\Bold{\footnotesize$l$}^-,l_r+1\right),(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}. \nonumber \end{align} \end{crl} \begin{proof} Let $r\in\mathbb{N}_{\geq2}$. By the equation (\ref{eqn:1.1.4}) (the definition of the coefficient $a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$m$}}$ of the function $\mathcal{G}_r$), we have {\small \begin{align}\label{eqn:3.3} \mathcal{G}_r\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) =\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}} a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$n$}}\left(\prod_{j=1}^ru_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{n_j}\right)\left(\frac{u_r+z}{u_r}v_{r-1}\right)^{n_r}.
\end{align}} By using the equation (\ref{eqn:3.7}) of Lemma \ref{lmm:3.3}, we have \begin{align*} &\mathcal{G}_r\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \sum_{\substack{p+q=m_{r-1}\\ p,q\in\mathbb{Z}}} a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,p,q)}\left(\prod_{j=1}^ru_j^{l_j}\right)\left(\prod_{j=1}^{r-2}v_j^{m_j}\right)v_{r-1}^p\left(\frac{u_r+z}{u_r}v_{r-1}\right)^q \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \sum_{\substack{p+q=m_{r-1}\\ p,q\in\mathbb{Z}}} a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,p,q)}\left(\prod_{j=1}^ru_j^{l_j}\right)\left(\frac{u_r+z}{u_r}\right)^q\left(\prod_{j=1}^{r-1}v_j^{m_j}\right). \end{align*} By Lemma \ref{lmm:3.1}, we get $a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,p,q)}=0$ for $q\neq l_r-1,l_r$. So we have \begin{align*} &\mathcal{G}_r\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)u_r(u_r+z)^{l_r-1}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right. \nonumber\\ &\hspace{3.5cm}\left.+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right\} \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(-z)(u_r+z)^{l_r-1}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right. \nonumber\\ &\hspace{3.2cm}+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right) \nonumber\\ &\hspace{3.2cm}\left.+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right\}. \end{align*} By Lemma \ref{lmm:3.1}, we get $a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}+1,-1)}=0$ (i.e. the case of $l_r=0$). 
By replacing $l_r-1$ with $l_r$ in the first term, we have \begin{align*} &\mathcal{G}_r\left(u_1,\dots,u_r; v_1,\dots,v_{r-1},\frac{u_r+z}{u_r}v_{r-1}\right) \\ &=-z\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} a^r_{(\Bold{\footnotesize$l$}^-,l_r+1),(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right\} \\ &\hspace{2.2cm}+\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} \left(a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\right)\right.\\ &\hspace{6cm}\left.\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)\right\}. \end{align*} On the other hand, we have \begin{align*} &(z+1)\mathcal{G}_{r-1}(u_1,\dots,u_{r-2},u_{r-1}+u_r+z; v_1,\dots,v_{r-1}) \\ &=(z+1)\sum_{\substack{\Bold{\footnotesize$k$}=(k_j)\in\mathbb{N}_0^{r-1} \\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} a^{r-1}_{\Bold{\footnotesize$k$},\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-2}u_j^{k_j}\right)(u_{r-1}+u_r+z)^{k_{r-1}}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right) \hspace{1.8cm} \\ &=(z+1)\sum_{\substack{\Bold{\footnotesize$k$}=(k_j)\in\mathbb{N}_0^{r-1}\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} a^{r-1}_{\Bold{\footnotesize$k$},\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-2}u_j^{k_j}\right) \sum_{\substack{p+q=k_{r-1} \\ p,q\in\mathbb{N}_0}} \binom{k_{r-1}}{p}u_{r-1}^p(u_r+z)^q\left(\prod_{j=1}^{r-1}v_j^{m_j}\right). \end{align*} By using the equation (\ref{eqn:3.8}) of Lemma \ref{lmm:3.3}, we have {\small \begin{align}\label{eqn:3.4} &(z+1)\mathcal{G}_{r-1}(u_1,\dots,u_{r-2},u_{r-1}+u_r+z; v_1,\dots,v_{r-1}) \\ &=(z+1)\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \binom{l_{r-1}+l_r}{l_{r-1}} a^{r-1}_{\Bold{\footnotesize$l$}',\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)(u_r+z)^{l_r}\left(\prod_{j=1}^{r-1}v_j^{m_j}\right). \hspace{1.cm} \nonumber \end{align}} Since the expansions (\ref{eqn:3.3}) and (\ref{eqn:3.4}) are equal by Proposition \ref{prp:3.1} (the equation (\ref{eqn:3.1})), comparing their coefficients yields (\ref{eqn:3.2}). \end{proof} \noindent By tracing the proof of Corollary \ref{crl:3.1} backwards, we get the following proposition. \begin{prp}\label{prp:3.2} For $s_1,\dots,s_r,z\in\mathbb{C}$, the following identity holds away from singularities: \begin{align}\label{eqn:3.5} \sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}} a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$n$}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)&\frac{\Gamma(s_r+n_r+z)\Gamma(-z)}{\Gamma(s_r+n_r)}\zeta_{r-1}(\Bold{$s$}'+\Bold{$n$}'+\Bold{$z$}') \\ &=(1+z)\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}'). \nonumber \end{align} \end{prp} \begin{proof} Let $s_1,\dots,s_r,z\in\mathbb{C}$.
Using Corollary \ref{crl:3.1}, we have {\small \begin{align}\label{eqn:3.6} &(z+1)\sum_{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r} \binom{l_{r-1}+l_r}{l_{r-1}} a^{r-1}_{\Bold{\footnotesize$l$}',\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right)(s_r+z)_{l_r} \\ &=-z\sum_{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} a^r_{(\Bold{\footnotesize$l$}^-,l_r+1),(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\right\} \nonumber\\ &\quad+\sum_{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} \left(a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\right)\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\right\}. \nonumber \end{align}} \normalsize{Multiplying both sides of (\ref{eqn:3.6}) by the function $\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')$ and summing over $\Bold{$m$}\in\mathbb{Z}^{r-1}$ with $|\Bold{$m$}|=0$, we have} {\footnotesize \begin{align*} &\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \mbox{\normalsize{(R.H.S. of (\ref{eqn:3.6}))}}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \\ &=-z\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} a^r_{(\Bold{\footnotesize$l$}^-,l_r+1),(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right\}\\ &\hspace{1cm}+\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{\vphantom{\cdot\left(\prod_{j=1}^{r-1}u_j^{l_j}\right)\left(\prod_{j=1}^{r-1}v_j^{m_j}\right)(u_r+z)^{l_r}} \left(a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\right)\right. \\ &\hspace{5cm}\left.\cdot\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right\} \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (-z)(s_r+z)_{l_r-1}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right.
\\ &\hspace{2.5cm}+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \\ &\hspace{3cm}\left.+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right\} \end{align*} \begin{align*} &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \left\{a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r+1,l_r-1)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+l_r-1)(s_r+z)_{l_r-1}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right. \nonumber \\ &\hspace{3cm}\left.+ a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,m_{r-1}-l_r,l_r)}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right) (s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}')\right\}. \nonumber \\ \end{align*}} By Lemma \ref{lmm:3.1}, we get $a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,p,q)}=0$ for $q\neq l_r-1,l_r$. So we have \begin{align*} &\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \mbox{\normalsize{(R.H.S. of (\ref{eqn:3.6}))}}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \nonumber\\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \sum_{\substack{p+q=m_{r-1}\\ p,q\in\mathbb{Z}}} a^r_{\Bold{\footnotesize$l$},(\Bold{\footnotesize$m$}^-,p,q)}\left(\prod_{j=1}^r(s_j)_{l_j}\right) \frac{(s_r+z)_q}{(s_r)_q}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}'). \end{align*} By using the equation (\ref{eqn:3.7}) of Lemma \ref{lmm:3.3}, we have \begin{align}\label{eqn:4.10} &\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \mbox{\normalsize{(R.H.S. of (\ref{eqn:3.6}))}}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}} a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$n$}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)\frac{(s_r+z)_{n_r}}{(s_r)_{n_r}}\zeta_{r-1}(\Bold{$s$}'+\Bold{$n$}'+\Bold{$z$}'). \nonumber \end{align} By the relation $\Gamma(s+1)=s\Gamma(s)$, we have $\Gamma(s+n)=(s)_n\Gamma(s)$ for $s\in\mathbb{C}$ and $n\in\mathbb{N}_0$; for $n<0$ we understand $(s)_n:=\Gamma(s+n)/\Gamma(s)$, so that the ratio $(s_r+z)_{n_r}/(s_r)_{n_r}$ makes sense for all $n_r\in\mathbb{Z}$. Multiplying the equation (\ref{eqn:4.10}) by ${\Gamma(s_r+z)\Gamma(-z)}/{\Gamma(s_r)}$, we obtain \begin{align}\label{eqn:4.11} &\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\cdot (\mbox{L.H.S. of (\ref{eqn:4.10})}) \\ &=\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$n$}=(n_j)\in\mathbb{Z}^r \\ |\Bold{\footnotesize$n$}|=0}} a^r_{\Bold{\footnotesize$l$},\Bold{\footnotesize$n$}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)\frac{\Gamma(s_r+n_r+z)\Gamma(-z)}{\Gamma(s_r+n_r)}\zeta_{r-1}(\Bold{$s$}'+\Bold{$n$}'+\Bold{$z$}'). \nonumber \end{align} \normalsize{On the other hand, we have} \begin{align*} &\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \mbox{(L.H.S.
of (\ref{eqn:3.6}))}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \\ &=(z+1)\sum_{\substack{\Bold{\footnotesize$l$}=(l_j)\in\mathbb{N}_0^r\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \binom{l_{r-1}+l_r}{l_{r-1}} a^{r-1}_{\Bold{\footnotesize$l$}',\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-1}(s_j)_{l_j}\right)(s_r+z)_{l_r}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}'). \intertext{By using the equation (\ref{eqn:3.8}) of Lemma \ref{lmm:3.3}, we have} &=(z+1)\sum_{\substack{\Bold{\footnotesize$k$}=(k_j)\in\mathbb{N}_0^{r-1}\\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \sum_{\substack{p+q=k_{r-1} \\ p,q\in\mathbb{N}_0}} \binom{k_{r-1}}{p} a^{r-1}_{\Bold{\footnotesize$k$},\Bold{\footnotesize$m$}} \\ &\hspace{4.5cm}\cdot\left(\prod_{j=1}^{r-2}(s_j)_{k_j}\right)(s_{r-1})_p(s_r+z)_q\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}'). \end{align*} Using Lemma \ref{lmm:3.2}, we have {\small \begin{align}\label{eqn:4.12} &\sum_{\substack{\Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} \mbox{(L.H.S. of (\ref{eqn:3.6}))}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}') \\ &=(z+1)\sum_{\substack{\Bold{\footnotesize$k$}=(k_j)\in\mathbb{N}_0^{r-1} \\ \Bold{\footnotesize$m$}=(m_j)\in\mathbb{Z}^{r-1} \\ |\Bold{\footnotesize$m$}|=0}} a^{r-1}_{\Bold{\footnotesize$k$},\Bold{\footnotesize$m$}}\left(\prod_{j=1}^{r-2}(s_j)_{k_j}\right)(s_{r-1}+s_r+z)_{k_{r-1}}\zeta_{r-1}(\Bold{$s$}'+\Bold{$m$}+\Bold{$z$}'). \nonumber \end{align}} Multiplying the equation (\ref{eqn:4.12}) by ${\Gamma(s_r+z)\Gamma(-z)}/{\Gamma(s_r)}$ and using the representation (\ref{eqn:1.1.5}) of the desingularized function $\zeta_r^{\rm des}(\Bold{$s$})$, we obtain \begin{align}\label{eqn:4.13} \frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\cdot(\mbox{R.H.S. of (\ref{eqn:4.12})})=(1+z)\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}'). \end{align} Since both sides of (\ref{eqn:3.6}) agree, we have $(\ref{eqn:4.10})=(\ref{eqn:4.12})$; combining this with the equations (\ref{eqn:4.11}) and (\ref{eqn:4.13}), we obtain the equation (\ref{eqn:3.5}). \end{proof} \begin{prp}\label{thm:3.1} For $s_1,\dots,s_{r-1}\in\mathbb{C}$ and $k\in\mathbb{N}_0$, we have \begin{equation}\label{eqn:4.2} \zeta_r^{\rm des}(s_1,\dots,s_{r-1},-k)=\sum_{i+j=k}\binom{k}{i}\zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-2},s_{r-1}-i)\zeta_1^{\rm des}(-j). \end{equation} \end{prp} \begin{proof} Let $\Bold{$s$}:=(s_1,\dots,s_r)\in\mathbb{C}^r$. By the Mellin--Barnes integral formula, we have the following formula (\cite[equation (3.7)]{Matsumoto}): \begin{equation*} \zeta_r(\Bold{$s$})=\frac{1}{2\pi i}\int_{(c)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}(\Bold{$s$}'+\Bold{$z$}')\zeta(-z)dz, \end{equation*} for $\Re{(s_j)}>1\ (1\leq j\leq r)$ and $-\Re{(s_r)}<c<0$, where the path of integration is the vertical line $\Re{(z)}=c$. By this formula and the definition of $\zeta_r^{\rm des}(\Bold{$s$})$, we have \begin{align*} \zeta_r^{\rm des}(\Bold{$s$}) =&\sum_{\substack{\mbox{\boldmath {\footnotesize$l$}}=(l_j)\in\mathbb{N}_0^r\\ \mbox{\boldmath {\footnotesize$n$}}=(n_j)\in\mathbb{Z}^r \\ |\mbox{\boldmath {\footnotesize$n$}}|=0}}a^r_{\mbox{\boldmath {\footnotesize$l$}},\mbox{\boldmath {\footnotesize$n$}}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)\zeta(\Bold{$s$}+\Bold{$n$}).
\\ =&\frac{1}{2\pi i}\int_{(c)}\sum_{\substack{\mbox{\boldmath {\footnotesize$l$}}=(l_j)\in\mathbb{N}_0^r\\ \mbox{\boldmath {\footnotesize$n$}}=(n_j)\in\mathbb{Z}^r \\ |\mbox{\boldmath {\footnotesize$n$}}|=0}}a^r_{\mbox{\boldmath {\footnotesize$l$}},\mbox{\boldmath {\footnotesize$n$}}}\left(\prod_{j=1}^r(s_j)_{l_j}\right)\frac{\Gamma(s_r+n_r+z)\Gamma(-z)}{\Gamma(s_r+n_r)} \\ &\hspace{5cm}\cdot\zeta_{r-1}(\Bold{$s$}'+\Bold{$n$}'+\Bold{$z$}')\zeta(-z)dz. \\ \intertext{Using Proposition \ref{prp:3.2}, we get} =&\frac{1}{2\pi i}\int_{(c)}(1+z)\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}')\zeta(-z)dz. \\ \intertext{By Proposition \ref{prp:1.1}, we have the formula $\zeta_1^{\rm des}(s)=(1-s)\zeta(s)$, so we obtain} =&\frac{1}{2\pi i}\int_{(c)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}')\zeta_1^{\rm des}(-z)dz. \\ \intertext{For $M\in\mathbb{N}$ and sufficiently small $\varepsilon>0$, we set $\mathcal{D}:=\{z\in\mathbb{C}\ |\ c<\Re{(z)}<M-\varepsilon\}$. For $z\in\mathcal{D}$, we have $\Re{(s_r+z)}>0$ by $-\Re{(s_r)}<c<0$. So the only singularities of the above integrand lying in $\mathcal{D}$ are $z=0,1,2,\dots,M-1$. By using the residue theorem, we get} =&-\sum_{j=0}^{M-1}{\rm Res}\left[\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}')\zeta_1^{\rm des}(-z),z=j\right] \\ &+\frac{1}{2\pi i}\int_{(M-\varepsilon)}\frac{\Gamma(s_r+z)\Gamma(-z)}{\Gamma(s_r)}\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}')\zeta_1^{\rm des}(-z)dz. \intertext{By the same arguments as those of \cite{Matsumoto}, the second term above converges. By using the fact that the residue of the gamma function $\Gamma(s)$ at $s=-j$ is $\frac{(-1)^j}{j!}$, we get} =&\sum_{j=0}^{M-1}\binom{-s_r}{j}\zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-2},s_{r-1}+s_r+j)\zeta_1^{\rm des}(-j) \\ &+\frac{1}{2\pi i\Gamma(s_r)}\int_{(M-\varepsilon)}\Gamma(s_r+z)\Gamma(-z)\zeta_{r-1}^{\rm des}(\Bold{$s$}'+\Bold{$z$}')\zeta_1^{\rm des}(-z)dz. \\ \end{align*} Setting $s_r=-k$ and $M=k+1$ for $k\in\mathbb{N}_0$, we obtain \begin{equation*} \zeta_r^{\rm des}(s_1,\dots,s_{r-1},-k)=\sum_{j=0}^{k}\binom{k}{j}\zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-2},s_{r-1}-k+j)\zeta_1^{\rm des}(-j), \end{equation*} because $1/\Gamma(-k)=0$ for $k\in\mathbb{N}_0$. \end{proof} \begin{rem} In the case $r=2$, the above proposition recovers the equation \begin{equation*} \zeta_2^{\rm des}(s,-N)=\sum_{i+j=N} \binom{N}{i}\zeta_1^{\rm des}(s-i)\zeta_1^{\rm des}(-j) \end{equation*} shown in \cite[Proposition 4.3]{FKMT2}. \end{rem} By Proposition \ref{thm:3.1}, we obtain the following proposition. \begin{prp}\label{crl:3.2} For $s_1,\dots,s_{r-1}\in\mathbb{C}$ and $l\in\mathbb{N}_0$, we have \begin{equation}\label{eqn:4.1} \zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-1})\zeta_1^{\rm des}(-l)=\sum_{i+j=l}(-1)^i\binom{l}{i}\zeta_r^{\rm des}(s_1,\dots,s_{r-2},s_{r-1}-i,-j). \end{equation} \end{prp} \begin{proof} We prove this claim by induction on $l$. It is clear that the case of $l=0$ follows from the case of $k=0$ of Proposition \ref{thm:3.1}. By putting $k=l_0\ (\geq1)$ in the equation (\ref{eqn:4.2}), we get \begin{align*} \zeta_{r-1}^{\rm des}(s_1&,\dots,s_{r-1})\zeta_1^{\rm des}(-l_0) \\ &=\zeta_r^{\rm des}(s_1,\dots,s_{r-1},-l_0)-\sum_{j=0}^{l_0-1}\binom{l_0}{j}\zeta_{r-1}^{\rm des}(s_1,\dots,s_{r-2},s_{r-1}-l_0+j)\zeta_1^{\rm des}(-j). \end{align*} In the second term of the right hand side of this equation, by using our induction hypothesis (i.e.
the equation (\ref{eqn:4.1}) in the case of $0\leq l\leq l_0-1$), we obtain the equation (\ref{eqn:4.1}) for $l=l_0$. \end{proof} Putting $p=r-1$ and $q=1$ in Theorem \ref{prop:2.1}, we obtain \begin{equation*} \zeta_{r-1}^{\rm des}(-k_1,\dots,-k_{r-1})\zeta_1^{\rm des}(-l)= \sum_{i + j=l}(-1)^{i}\binom{l}{i} \zeta_{r}^{\rm des}(-k_1,\dots,-k_{r-2},-k_{r-1}-i,-j) \end{equation*} for $k_1,\dots,k_{r-1},l\in\mathbb{N}_0$. Therefore the equation (\ref{eqn:4.1}) can be regarded as a generalization of this equation. \begin{rem} In our forthcoming paper \cite{Komi2}, we will show a more general formula which extends both Theorem \ref{prop:2.1} and Proposition \ref{crl:3.2}. \end{rem} \bigskip \thanks{ {\it Acknowledgements}. The author is cordially grateful to Professor H. Furusho for guiding him towards this topic and for giving him useful suggestions. He greatly appreciates the referee's numerous and helpful comments. This work was supported by JSPS KAKENHI Grant Number JP18J14774.}
{ "timestamp": "2020-02-26T02:13:28", "yymm": "1804", "arxiv_id": "1804.05568", "language": "en", "url": "https://arxiv.org/abs/1804.05568" }
\section{Introduction} Exotic hadrons have been proposed to be important probes in understanding the fundamentals of the strong interaction in hadron physics \cite{Jaffe:1976ig,Jaffe:1976ih}. The excitement in the subject was rekindled by the observation of the $D_{sJ}(2317)$ \cite{exp1} and the $X(3872)$ \cite{exp2}, whose masses did not fit well within conventional potential model approaches, and continues to the present day with the recent observation of the $P_c(4380)^+$ and $P_c(4450)^+$ \cite{exp3}. Detailed theoretical studies on the structure and properties of these states have been reported using various models \cite{review1,review2,review3,review4}. Moreover, it has been argued that relativistic heavy ion collisions provide an excellent venue to produce some of these and previously proposed exotic states because they contain heavy quarks, which are profusely produced in these experiments \cite{exhic1, exhic2, exotic}. Among many exotic hadrons, we focus here on the proposed doubly charmed tetraquark $T_{cc}(cc\bar{u}\bar{d}=DD^*)$ with the quantum number $I(J^P)=0(1^+)$ \cite{potential,Lipkin:1986dw,Manohar:1992nd}. There are several reasons why the $T_{cc}$ is of particular interest. First of all, it is a flavor-exotic tetraquark, a type that has never been observed before. Second, with the recent discovery of the doubly charmed baryon at CERN \cite{Aaij:2017ueg}, the possibility of observing a similar doubly charmed hadron, with the light quark replaced by a strongly correlated light anti-diquark, seems quite plausible. Finally, analyzing the structure of this particle in the constituent quark model, one finds that it is the only candidate for which there is a strong attraction in the compact configuration compared to two separated mesons. This is so because, while previously observed exotic candidates such as the $X(3872)$ are composed of $q\bar{q} Q\bar{Q}$, where $q,Q$ are light and heavy quarks respectively, the proposed $T_{cc}$ state is composed of $QQ \bar{q} \bar{q}$ quarks. The latter quark structure favors a compact tetraquark configuration, as the additional light anti-diquark structure $\bar{q} \bar{q}$ in the isospin-zero channel provides an attraction larger than that of the two $Q \bar{q}$ pairs in a separated two-meson configuration \cite{Park:2013fda,Hyodo:2017hue,Luo:2017eub}. Hence, the $T_{cc}$ is a unique multiquark candidate state that could be compact. The measured yields of ground state particles and their ratios from relativistic heavy ion collisions can be well described by statistical models \cite{stat1,stat,Stachel:2013zma}. On the other hand, there are indications that the yields of resonances with structures different from ground states deviate from the statistical model prediction \cite{Kanada-Enyo:2006dxd,Cho:2014xha}. In particular, it was argued that the yield of a compact multiquark configuration would be suppressed by an order of magnitude compared to that of a molecular configuration or, if allowed, a usual hadron with the same quantum numbers and mass, both of which should follow the statistical model prediction \cite{exhic1,exhic2,exotic}. However, these results were obtained without considering the hadronic effects, which could change the initial production rate at the chemical freeze-out due to the interaction with other particles during the hadronic expansion before the kinetic freeze-out.
The importance of this effect has been confirmed for states with large intrinsic width such as the $K^*$, which has been observed both at RHIC and LHC with yield ratios to the $K$ that are systematically reduced compared to the statistical model predictions \cite{Adam:2017zbf}. If the hadronic effects are large, the hope of using production yields to discriminate the structure of an exotic particle could thus be compromised. In fact, for similar reasons, the hadronic effects on exotic candidates have been estimated for the $D_{sJ}(2317)$ \cite{ko} and the $X(3872)$ \cite{cho,pion}. In this work, we estimate the hadronic effects on the $T_{cc}$ yield in heavy ion collisions to assess whether the initial yield at the hadronization point is maintained, so that its structure can be discriminated. We also solve a hydrodynamic model based on the lattice equation of state with and without viscosity, and parameterize the resulting time dependence of the temperature and volume during the hadronic phase at both RHIC and LHC, which will be used in this and in similar future works. This work is organized as follows. In Section \ref{hydro}, we introduce a simplified hydrodynamic model to calculate and parameterize the time dependence of the temperature and volume of the hadronic phase at RHIC and LHC. In Section \ref{hadronization}, we discuss the hadronization in relativistic heavy ion collisions and the $T_{cc}$ yields calculated in two possible scenarios, where the $T_{cc}$ is either a compact configuration with suppressed yield estimated within the coalescence model or a weakly bound molecular configuration that should follow the statistical model prediction. In Section \ref{cross_section}, the cross sections for the $T_{cc}$ absorption by pions are calculated in the quasifree approximation. In Section \ref{evolution}, the time evolution of the $T_{cc}$ abundance is studied by solving the rate equation in the two possible scenarios. In Section \ref{finalstates}, we give possible final states that can be used to observe these states in heavy ion collisions. Finally, we summarize our results in Section \ref{summary}. \section{Hydrodynamic equation for the hadronic phase} \label{hydro} The hydrodynamic equations are given by $\partial_\mu T^{\mu\nu}=0$, where the energy-momentum tensor is $T^{\mu\nu}=(e+p)u^\mu u^\nu-pg^{\mu\nu}+\pi^{\mu\nu}$, with $e$, $p$, $u^\mu$, and $\pi^{\mu\nu}$ being, respectively, the energy density, pressure, four-velocity of the flow, and the traceless symmetric shear tensor. For simplicity, we assume boost invariance and consider central collisions, that is, symmetric expansion in the transverse plane. Then there are only two independent hydrodynamic equations \cite{Heinz:2005bw}: \begin{eqnarray} \frac{1}{\tau}\partial_\tau(\tau T^{\tau \tau})+\frac{1}{r} \partial_r(r T^{r \tau})&=&-\frac{1}{\tau}(p+\tau^2\pi^{\eta\eta})\, , \label{energy6}\\ \frac{1}{\tau}\partial_\tau(\tau T^{\tau r})+\frac{1}{r} \partial_r(r T^{r r})&=&\frac{1}{r}(p+r^2\pi^{\phi\phi}) \, , \label{momentum} \end{eqnarray} in the $(\tau, r, \phi, \eta)$ coordinate system defined by \begin{eqnarray} \tau&=&\sqrt{t^2-z^2} \, , ~~~~~\eta=\frac{1}{2}\ln \frac{t+z}{t-z} \, ,\nonumber\\ r&=&\sqrt{x^2+y^2} \, , ~~~~\phi=\tan^{-1}(y/x) \, .
\end{eqnarray} The nonvanishing components of the energy-momentum tensor and of the shear tensor are respectively expressed as \cite{Heinz:2005bw} \begin{eqnarray} T^{\tau\tau}&=&(e+P_r)u_\tau^2 -P_r \, ,\nonumber\\ T^{\tau r}&=&(e+P_r)u_\tau u_r \, ,\nonumber\\ T^{r r}&=&(e+P_r)u_r^2+P_r \, , \end{eqnarray} where $P_r\equiv p-\tau^2\pi^{\eta\eta}-r^2\pi^{\phi\phi}$ is the effective radial pressure, and \begin{eqnarray} \pi^{\tau r}&=&v_r\pi^{rr} \, ,\nonumber\\ \pi^{\tau\tau}&=&v_r\pi^{\tau r}=v_r^2\pi^{rr} \, ,\nonumber\\ \pi^{rr}&=&-\gamma_r^2(r^2\pi^{\phi\phi}+\tau^2\pi^{\eta\eta}) \, , \end{eqnarray} with $v_r$ being the radial velocity and the shear tensor components $\pi^{\phi\phi}$ and $\pi^{\eta\eta}$ being the only independent ones. The components $\pi^{\phi\phi}$ and $\pi^{\eta\eta}$ are boost-invariant in the radial direction and satisfy the following simplified Israel-Stewart equations: \begin{eqnarray} (\partial_\tau +v_r \partial_r)\pi^{\eta \eta}&=&-\frac{1}{\gamma_r \tau_\pi}\bigg[\pi^{\eta \eta}-\frac{2\eta_s}{\tau^2}\bigg(\frac{ \theta}{3}-\frac{\gamma_r}{\tau}\bigg)\bigg] \, ,\label{shear1a}\\ (\partial_\tau +v_r \partial_r)\pi^{\phi \phi} &=&-\frac{1}{\gamma_r \tau_\pi}\bigg[\pi^{\phi \phi}- \frac{2\eta_s}{r^2}\bigg(\frac{\theta}{3}-\frac{\gamma_r v_r}{r}\bigg)\bigg] \, , \label{shear1b} \end{eqnarray} where \begin{eqnarray} \theta=\partial\cdot u=\frac{1}{\tau}\partial_\tau (\tau \gamma_r)+ \frac{1}{r}\partial_r(rv_r \gamma_r) \, ,\nonumber \end{eqnarray} with $\eta_s$ and $\tau_\pi$ being the shear viscosity and the relaxation time for the particle distributions, respectively. Furthermore, the condition $u_\mu T_{;\nu}^{\nu \mu}=0$, where $T_{;\nu}^{\nu \mu}$ denotes the covariant derivative of the energy-momentum tensor, together with the flow velocity $(u_\tau, u_r, u_\phi, u_\eta)=(\gamma/\cosh\eta,\gamma v_r,0,0)$, which reduces to $(\gamma_r,\gamma_r v_r,0,0)$ with $\gamma_r=1/\sqrt{1-v_r^2}$ at midrapidity, leads to \begin{eqnarray} &&\frac{1}{\tau}\partial_\tau (\tau s \gamma_r)+\frac{1}{r}\partial_r (rs\gamma_r v_r)=-\frac{1}{T}\bigg[\frac{u_\tau}{\tau}\tau^2\pi^{\eta\eta} \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad +\frac{u_r}{r}r^2\pi^{\phi\phi}-(\partial_\tau u_\tau+\partial_r u_r) (r^2\pi^{\phi\phi}+\tau^2\pi^{\eta \eta})\bigg] \, , \label{entropy6} \end{eqnarray} where $s=(e+p)/T$ is the local entropy density in the hot dense matter. Eq.~(\ref{entropy6}) shows that the total entropy is not conserved in the presence of nonzero shear tensors. Integrating Eqs.
(\ref{energy6}), (\ref{shear1a}), (\ref{shear1b}), and (\ref{entropy6}) over the transverse plane, we have \cite{Song:2010fk} \begin{eqnarray} &&\partial_\tau (A\tau \langle T^{\tau \tau}\rangle)=-(p+\pi^\eta_\eta)A \, ,\label{energy7}\\ \nonumber\\ &&\frac{T}{\tau}\partial_\tau (A\tau s \langle \gamma_r\rangle)= -A\bigg\langle\frac{\gamma_r v_r}{r}\bigg\rangle \pi^\phi_\phi- \frac{A\langle \gamma_r\rangle}{\tau}\pi^\eta_\eta\nonumber\\ &&~~~~~~~\qquad\qquad\qquad\qquad\qquad +\bigg[\partial_\tau(A\langle \gamma_r\rangle)-\frac{\gamma_R \dot{R}}{R}A\bigg](\pi^\phi_\phi+\pi^\eta_\eta) \, ,\label{entropy7}\\ \nonumber\\ &&\partial_\tau (A\langle \gamma_r\rangle \pi^\eta_\eta) -\bigg[ \partial_\tau(A\langle\gamma_r\rangle)+2\frac{A\langle\gamma_r\rangle} {\tau} \bigg]\pi^\eta_\eta\nonumber\\ &&~~~~\qquad\qquad\qquad\qquad\qquad =-\frac{A}{\tau_\pi}\bigg[\pi^\eta_\eta-2\eta_s\bigg(\frac{ \langle\theta\rangle}{3}-\frac{\langle\gamma_r\rangle}{\tau}\bigg) \bigg] \, ,\label{entropy7b}\\ \nonumber\\ &&\partial_\tau(A\langle\gamma_r\rangle~ \pi^\phi_\phi)-\bigg[\partial_\tau(A\langle\gamma_r\rangle)+2A\bigg\langle\frac{\gamma_r v_r} {r}\bigg\rangle\bigg]\pi^\phi_\phi\nonumber\\ &&~~~~\qquad\qquad\qquad\qquad\qquad =-\frac{A}{\tau_\pi}\bigg[ \pi^\phi_\phi-2\eta_s \bigg(\frac{ \langle\theta\rangle}{3}-\bigg\langle\frac{\gamma_r v_r}{r}\bigg \rangle\bigg)\bigg]\label{shear7b} \, , \end{eqnarray} where $A=\pi R^2$ ($R$ is the radius of the nuclear matter), $\langle T^{\tau\tau}\rangle=\int dA \, T^{\tau\tau}/A=(e+p)\langle \gamma_r^2\rangle-p, \, \langle u^\tau \rangle=\langle \gamma_r \rangle$, $\pi^\eta_\eta\equiv\tau^2\pi^{\eta\eta}$, and $\pi^\phi_\phi\equiv r^2\pi^{\phi\phi}$. We note that the total derivatives with respect to $r$ vanish due to the boundary condition. Assuming that the radial flow velocity is a linear function of the radial distance from the center, that is, $\gamma_r v_r=\gamma_R \dot{R}(r/R)$, where $\dot{R}=\partial R/\partial \tau$ and $\gamma_R=1/\sqrt{1-\dot{R}^2}$, we have \begin{eqnarray} \langle\gamma_r^2\rangle&=&1+\frac{\gamma_R^2 \dot{R}^2}{2} \, ,\nonumber\\ \langle\gamma_r^2 v_r^2\rangle&=&\frac{\gamma_R^2 \dot{R}^2}{2} \, ,\nonumber\\ \langle\gamma_r\rangle&=&\frac{2}{3\gamma_R^2 \dot{R}^2} \left(\gamma_R^3-1\right) \, ,\nonumber\\ \bigg\langle\frac{\gamma_r v_r}{r}\bigg\rangle&=&\frac{\gamma_R \dot{R}}{R} \, . \label{gamma} \end{eqnarray} Here, we make the assumption that the nuclear matter has a definite boundary and that $e$, $s$, and $p$ are uniform inside. In real hydrodynamic simulations, the energy-momentum tensor is numerically calculated for all space-time cells, leading to a different temperature for each cell, so that the hypersurface of constant temperature has a complex structure in the $xy\tau$-space. At the same time, however, one finds that most of the points composing the hypersurface are located on a plane of roughly constant $\tau$ \cite{Song:2011kw}. That is why the blast-wave model was successful and widely used before sophisticated hydrodynamic simulations became popular, and this is the basis for our approximation. We then numerically solve the coupled Eqs. (\ref{energy7}) to (\ref{shear7b}) by using the lattice equation of state \cite{Song:2010fk, Borsanyi:2010cj}. The ratio of the shear viscosity to the entropy density is taken to be $1/(4\pi)$ for the QGP \cite{Kovtun:2004de}, and ten times this value for the hadron gas \cite{Demir:2008tr}. For the relaxation time $\tau_\pi$, we assume $\eta/\tau_\pi=sT/3$ for both the QGP and the hadron gas \cite{Song:2009gc}.
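As a quick check of the first and third averages in Eq. (\ref{gamma}) (a short derivation of ours), using $\gamma_r^2=1+\gamma_r^2v_r^2=1+\gamma_R^2\dot{R}^2 r^2/R^2$ and $1+\gamma_R^2\dot{R}^2=\gamma_R^2$, we find \begin{eqnarray} \langle\gamma_r^2\rangle&=&\frac{1}{\pi R^2}\int_0^R 2\pi r\left(1+\gamma_R^2\dot{R}^2\frac{r^2}{R^2}\right)dr=1+\frac{\gamma_R^2 \dot{R}^2}{2} \, ,\nonumber\\ \langle\gamma_r\rangle&=&\frac{2}{R^2}\int_0^R r\sqrt{1+\gamma_R^2\dot{R}^2\frac{r^2}{R^2}}\,dr=\frac{2}{3\gamma_R^2\dot{R}^2}\left(\gamma_R^3-1\right) \, .\nonumber \end{eqnarray}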
The initial thermalization time for the hydrodynamic simulations is assumed to be 0.5 fm/c, and the initial radius is given by the transverse area where the local temperature is above 150 MeV. Although the hydrodynamic approach is marginal in the hadron gas phase, it has successfully reproduced a wealth of experimental data from relativistic heavy ion collisions \cite{Kolb:2000sd,Schenke:2010nt}. According to the hydrodynamic calculations, the temperature and volume during the hadronic phase at LHC and RHIC change with time as shown in Fig. \ref{TVfig}. We now parameterize the results for the $\tau$ dependence of the volume and temperature using the following form \cite{TVmodel,ko}: \begin{eqnarray} \label{TVeq} V(\tau)&=&\pi\left[R+v(\tau-\tau_C)+\frac{a}{2}(\tau-\tau_C)^2\right]^2 c\tau \, , \nonumber\\ T(\tau)&=&T_C-(T_H-T_F)\left(\frac{\tau-\tau_H}{\tau_F-\tau_H}\right)^\alpha \qquad\mbox{for $\tau>\tau_H$} \, , \end{eqnarray} with $T_C \, (\tau_C)$, $T_H \, (\tau_H)$, and $T_F\, (\tau_F)$ being the critical, hadronization, and kinetic freeze-out temperatures (times), respectively. In Eq. (\ref{TVeq}), we take $T_H=156\, (162)$ MeV and $T_F=115 \, (119)$ MeV for LHC (RHIC), and $T_C=T_H$, following the first scenario of Ref. \cite{exotic}. The quantities $R$, $v$, $a$, and $\alpha$ are treated as fitting parameters. All the parameters used in the model are given in Table \ref{TVcoeff}. \begin{table} \centering \caption{ Parameters used in the phenomenological model of Eq. (\ref{TVeq}). } \newcolumntype{C}{>{\centering\arraybackslash}p{9.1ex}} \begin{tabular}{ C C C C C C C C C C} \hline \hline & & $T_C=T_H$ & $T_F$ & $\tau_C=\tau_H$ & $\tau_F$ & $R$ & $v$ & $a$ & $\alpha$\\ & & (MeV) & (MeV) & (fm/c) & (fm/c) & (fm)& (c)& ($c^2$/fm) &\\ \hline LHC &ideal & 156 & 115 & 8.1 & 18.3 & 12.1 & 0.70 & 0.022 & 0.95 \\ &viscous & 156 & 115 & 8.3 & 19.5 & 11.9 & 0.67 & 0.020 & 0.93 \\ RHIC & ideal&162 & 119 & 6.1 & 15.1 & 9.9 & 0.59 & 0.030 & 0.85 \\ & viscous &162 & 119 & 6.1 & 15.7 & 9.8 & 0.58 & 0.024 & 0.79 \\ \hline \hline \end{tabular} \label{TVcoeff} \end{table} \begin{figure} \includegraphics[width=0.45\textwidth]{temp.eps} \includegraphics[width=0.45\textwidth]{vol.eps} \caption{(a) Temperature and (b) volume for LHC and RHIC during the hadronic expansion \cite{exotic}. } \label{TVfig} \end{figure} \section{Hadronization in relativistic heavy ion collisions} \label{hadronization} We assume that conventional hadrons such as the $\pi$, $D$ and $D^*$ are in chemical and thermal equilibrium when they are produced at the chemical freeze-out. The abundance of a particle in equilibrium is statistically given by \cite{statrev} \begin{eqnarray} \label{stateq} N_i^{eq}(\tau)&=&g_i\gamma_iV(\tau)\int \frac{d^3\bm{p}}{(2\pi)^3}f(\bm{p}) \, , \nonumber\\ &=& \frac{1}{2\pi^2} \, g_i \gamma_i m_i^2 \, V(\tau) \, T(\tau) \, K_2\left(\frac{m_i}{T(\tau)}\right) \, , \end{eqnarray} where $g_i=(2S_i+1)(2I_i+1)$ is the spin-isospin degeneracy and $\gamma_i$ is the fugacity. In the second line, the Boltzmann distribution $f(\bm{p})=\exp[-\sqrt{\bm{p}^2+m_i^2}/T(\tau)]$ has been used, and $K_2$ is the modified Bessel function of the second kind. For simplicity, we ignore the shear-viscosity correction term to $f(\bm{p})$. Since the production and annihilation cross sections of charm quarks are small \cite{lag,charm1,charm2}, the number of charm quarks is conserved during the time evolution of the hadronic matter.
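As a rough numerical illustration of how Eqs. (\ref{TVeq}) and (\ref{stateq}) are combined (a sketch of ours, not part of the actual calculation; the pion effective fugacity of 1.4 is introduced below), one can evaluate the fireball volume at the RHIC kinetic freeze-out and the corresponding equilibrium pion number:
\begin{verbatim}
import numpy as np
from scipy.special import kn  # modified Bessel function of the second kind K_n

HBARC = 0.1973  # GeV*fm; converts number densities from GeV^3 to fm^-3

def volume(tau, R, v, a, tau_C):
    # Fireball volume V(tau) of Eq. (TVeq) in fm^3 (units with c = 1)
    dt = tau - tau_C
    return np.pi * (R + v * dt + 0.5 * a * dt**2) ** 2 * tau

def n_eq(m, T, g, gamma):
    # Equilibrium number density of Eq. (stateq) in fm^-3 (m, T in GeV)
    return g * gamma * m**2 * T * kn(2, m / T) / (2.0 * np.pi**2) / HBARC**3

# RHIC ideal-fluid parameters from Table TVcoeff
R, v, a, tau_C, tau_F, T_F = 9.9, 0.59, 0.030, 6.1, 15.1, 0.119
V_F = volume(tau_F, R, v, a, tau_C)   # ~1.3e4 fm^3
# pion: g = 3 (isospin triplet, spin 0), effective fugacity 1.4 (see text)
N_pi = n_eq(0.1373, T_F, 3, 1.4) * V_F
print(f"V(tau_F) = {V_F:.0f} fm^3, N_pi = {N_pi:.0f}")  # ~9e2, cf. 926 below
\end{verbatim}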
From the total number of charm quarks, $N_c=11 \, (4.1)$ \cite{exotic}, the charm fugacity is determined to be $\gamma_c=51 \, (22)$ for LHC (RHIC). Here the charm fugacity is slightly different from that in Ref.~\cite{exotic} because we use only the $D,D^*,D_s,\, \mbox{and}\, D^*_s$ to saturate the charm quarks, as in Eq.~(\ref{charm-fugacity}). Following Refs. \cite{ko,pion}, the number of pions at RHIC is set to $926$ at the kinetic freeze-out. For that purpose, we introduce a pion chemical potential with an effective fugacity of 1.4, and we use the same factor at LHC. This effectively includes the feed-down contributions from excited states such as the $\omega$, $\Delta$, and $K^*$. Although these pions will have only a limited contribution to the absorption during the hadronic phase, we include them in the calculation to allow for the maximum effect. If the $T_{cc}$ is a molecular configuration composed of a weakly bound $D D^*$, its production yield is expected to follow the statistical model prediction, as the production yields of light nuclei do. In such a case, the number of the doubly charmed $T_{cc}$ is given by Eq. (\ref{stateq}) with $\gamma_c^2$, $V(\tau_H)$ and $T(\tau_H)$ for the fugacity, volume and temperature, respectively. On the other hand, if the $T_{cc}$ is a compact multiquark state with the size of a usual hadron, then the production yield would be suppressed compared to the statistical model prediction. In that case, the production yield has been estimated with the coalescence model, whose parameters have been fitted to reproduce the ground state hadron yields \cite{exotic}. The two cases are summarized in Table \ref{NTcc}. Throughout this paper, we use the average masses: $m_\pi=137.3$ MeV, $m_D=1867.2$ MeV, and $m_{D^*}=2008.6$ MeV \cite{pdg}. \begin{table} \centering \caption{$T_{cc}$ yields at hadronization.} \newcolumntype{C}{>{\centering\arraybackslash}p{22ex}} \begin{tabular}{ C C C} \hline \hline & molecular & compact multiquark \\ \hline LHC & $2.0\times10^{-3}$ & $1.1\times10^{-4}$ \\ RHIC & $5.1\times10^{-4}$ & $5.0\times10^{-5}$ \\ \hline \hline \end{tabular} \label{NTcc} \end{table} \section{$T_{cc}$ absorption cross sections} \label{cross_section} The $T_{cc}$ can be produced or destroyed by interactions with other comoving particles during the hadronic expansion stage. Since pions are the most abundant particles and have a small mass, the interaction with them gives the main contribution to the change of the $T_{cc}$ abundance. In this section, we calculate the absorption cross sections of the $T_{cc}$ by pions in the quasifree approximation.
\begin{figure} \begin{center} \begin{picture}(350,90)(0,70) \Text(21,130)[r]{\small(a)} \Line(-20,120)(5,90) \Line(25,90)(50,120) \Line(-20,90)(50,90) \Line(-20,70)(50,70) \Text(-20,125)[r]{\small$\pi$} \Text(58,125)[r]{\small$\pi$} \Text(23,82)[r]{\small$D^*$} \Text(-22,90)[r]{\small$D$} \Text(-20,70)[r]{\small$D^*$} \Text(60,90)[r]{\small$D$} \Text(65,70)[r]{\small$D^*$} \Text(131,130)[r]{\small(b)} \Line(95,120)(140,90) \Line(110,90)(155,120) \Line(90,90)(160,90) \Line(90,70)(160,70) \Text(95,125)[r]{\small$\pi$} \Text(163,125)[r]{\small$\pi$} \Text(133,82)[r]{\small$D^*$} \Text(88,90)[r]{\small$D$} \Text(90,70)[r]{\small$D^*$} \Text(171,90)[r]{\small$D$} \Text(176,70)[r]{\small$D^*$} \Text(241,130)[r]{\small(c)} \Line(200,120)(225,90) \Line(245,90)(270,120) \Line(200,90)(270,90) \Line(200,70)(270,70) \Text(200,125)[r]{\small$\pi$} \Text(278,125)[r]{\small$\pi$} \Text(240,82)[r]{\small$D$} \Text(200,90)[r]{\small$D^*$} \Text(200,70)[r]{\small$D$} \Text(285,90)[r]{\small$D^*$} \Text(281,70)[r]{\small$D$} \Text(351,130)[r]{\small(d)} \Text(315,125)[r]{\small$\pi$} \Text(383,125)[r]{\small$\pi$} \Text(350,82)[r]{\small$D$} \Text(312,90)[r]{\small$D^*$} \Text(310,70)[r]{\small$D$} \Text(396,90)[r]{\small$D^*$} \Text(392,70)[r]{\small$D$} \Line(315,120)(360,90) \Line(330,90)(375,120) \Line(310,90)(380,90) \Line(310,70)(380,70) \end{picture} \end{center} \caption{Diagrams contributing to the $T_{cc}$ abundance. In the quasifree approximation, (a) and (b) correspond to the elastic scattering $D+\pi\rightarrow D+\pi$, and (c) and (d) to $D^*+\pi\rightarrow D^*+\pi$. } \label{diagram} \end{figure} The quasifree approximation has been previously used to estimate the dissociation of charmonia by partons \cite{ralf}. The approximation was shown to be valid when the binding energies of charmonia are small at high temperature, and $c$ and $\bar{c}$ quarks inside charmonia can be treated like quasifree particles \cite{quasifree} (see Appendix \ref{appendix-23} for the details). In fact, for the charmonium case, it can be seen that an exact next to leading order QCD calculation allowing for the compact size gives a similar result for the thermal width \cite{Park:2007zza} as that obtained using the quasifree approximation when the process involves the same number of initial and final states. Here, we estimate the dissociation cross section of the $T_{cc}$ by pions by estimating the $D$ and $D^*$ components of the $T_{cc}$ in two possible scenarios under the quasifree approximation. In the quasifree approximation, the cross section of $T_{cc}+\pi\rightarrow D+D^*+\pi$ can be evaluated by adding the elastic scattering $D+\pi\rightarrow D+\pi$ and $D^*+\pi\rightarrow D^*+\pi$ (see Fig. \ref{diagram}). For the effective interaction vertices, we use the following interaction Lagrangian \cite{lag}: \begin{equation} \label{Leff} \mathcal{L}_{\pi DD^*}=ig_{\pi DD^*}D^{*\mu} \bm{\tau}\cdot(\bar{D} \partial_\mu \bm{\pi}-\partial_\mu\bar{D}\bm{\pi})+\mbox{h.c.} \, , \end{equation} where $\bm{\tau}$ are the Pauli matrices, $\bm{\pi}$ is the pion isospin triplet, and $D=(D^0, D^+)$ and $D^*=(D^{*0},D^{*+})$ are the pseudoscalar and vector charm meson doublets, respectively. The meson coupling $g_{\pi DD^*}$ is determined from the $D^*\rightarrow D\pi$ decay width \begin{equation} \Gamma_{D^*\rightarrow D\pi}= \frac{g_{\pi DD^*}^2 p_{cm}^3}{2\pi m_{D^*}^2} \, , \end{equation} where $p_{cm}$ is the momentum in the center of mass frame. 
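As a minimal numerical check of this determination (assuming Python with NumPy), one can invert the width formula using the isospin-averaged masses quoted above and the experimental full width used in the next paragraph.
\begin{verbatim}
import numpy as np

m_pi, m_D, m_Dstar = 137.3, 1867.2, 2008.6   # isospin-averaged masses (MeV)

def p_cm(M, m1, m2):
    # two-body momentum in the rest frame of a particle of mass M
    return np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

# invert Gamma = g^2 p_cm^3 / (2 pi m_D*^2) for g_{pi D D*},
# with the experimental full width quoted in the following paragraph
Gamma = 83.4e-3                              # MeV
p = p_cm(m_Dstar, m_D, m_pi)
g = np.sqrt(2.0 * np.pi * m_Dstar**2 * Gamma / p**3)
print(p, g)                                  # g comes out close to 7.8
\end{verbatim}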
Comparing with the experimental value of the full width, $\Gamma_{D^*\rightarrow D\pi}=83.4$ keV \cite{pdg}, we obtain $g_{\pi DD^*}\simeq7.8$. The scattering amplitude of the process $D(p_1)+\pi(p_2)\rightarrow D(p_3)+\pi(p_4)$ is then given as
\begin{equation} \mathcal{M}_{D\pi\rightarrow D\pi}= \mathcal{M}^{(a)}+\mathcal{M}^{(b)} \, , \end{equation}
where
\begin{eqnarray} \label{mtx1} \mathcal{M}^{(a)}&=&\frac{2^{N(\pi^\pm)/2}g_{\pi DD^*}^2}{s-m_{D^*}^2} \left[-g^{\mu\nu}+\frac{(p_1+p_2)^\mu(p_1+p_2)^\nu}{m_{D^*}^2}\right] (p_1-p_2)_\mu (p_3-p_4)_\nu \, , \nonumber\\ \mathcal{M}^{(b)}&=&\frac{2^{N(\pi^\pm)/2}g_{\pi DD^*}^2}{u-m_{D^*}^2} \left[-g^{\mu\nu}+\frac{(p_1-p_4)^\mu(p_1-p_4)^\nu}{m_{D^*}^2}\right] (p_1+p_4)_\mu (p_2+p_3)_\nu \, . \end{eqnarray}
Here, $N(\pi^\pm)$ is the number of charged pions involved in the initial and final states of the process (see Table \ref{process}). For $D^*(p_1)+\pi(p_2)\rightarrow D^*(p_3)+\pi(p_4)$, we have
\begin{equation} \mathcal{M}_{D^*\pi\rightarrow D^*\pi}= \mathcal{M}^{(c)}+\mathcal{M}^{(d)} \, , \end{equation}
with
\begin{eqnarray} \label{mtx2} \mathcal{M}^{(c)}&=&-\frac{2^{N(\pi^\pm)/2}g_{\pi DD^*}^2 \epsilon_1^\mu\epsilon_3^{*\nu}} {s-m_{D}^2}(p_1+2p_2)_\mu(p_3+2p_4)_\nu \, , \nonumber\\ \mathcal{M}^{(d)}&=&-\frac{2^{N(\pi^\pm)/2}g_{\pi DD^*}^2 \epsilon_1^\mu\epsilon_3^{*\nu}} {u-m_{D}^2}(-p_1+2p_4)_\mu(2p_2-p_3)_\nu \, . \end{eqnarray}
In the center-of-mass frame, the spin- and isospin-averaged cross section is
\begin{equation} \label{sigma} \sigma=\frac{1}{64\pi^2g_1g_2s}\frac{|\bm{p}_f|}{|\bm{p}_i|}\int d\Omega \sum_{S,I}|\mathcal{M}|^2F^4 \, , \end{equation}
where $g_1$ and $g_2$ are the degeneracies of the initial particles, $\bm{p}_i$ ($\bm{p}_f$) is the spatial momentum of the initial (final) particles, and the summation is over the spins and isospins of both initial and final particles. The relevant processes are listed in Table \ref{process}. At each interaction vertex, we have used the following form factors:
\begin{equation} F=\frac{\Lambda^2}{\Lambda^2+(\omega^2-m_{ex}^2)} \qquad \mbox{and} \qquad \frac{\Lambda^2}{\Lambda^2+\bm{q}^2} \, , \end{equation}
for the $s$- and $u$-channels, respectively. Here, the cutoff $\Lambda=1.0$ GeV is used, $m_{ex}$ is the mass of the exchanged particle, $\omega$ is the total energy of the incoming particles in the $s$-channel, and $\bm{q}$ is the momentum transfer in the $u$-channel in the center-of-mass frame. With these form factors, the cross sections do not grow with the total center-of-mass energy.
\begin{table} \centering \caption{$2\rightarrow 2$ processes contributing to the spin- and isospin-averaged cross section of Eq. (\ref{sigma}). With the effective Lagrangian of Eq. (\ref{Leff}), the matrix elements involve the factor $2^{N(\pi^\pm)/2}$ in Eqs. (\ref{mtx1}) and (\ref{mtx2}). } \newcolumntype{C}{>{\centering\arraybackslash}p{20ex}} \begin{tabular}{ C C C C } \hline \hline process & diagram & process & diagram \\ \hline $D^+\pi^0\rightarrow D^+\pi^0$ & (a)+(b) & $D^{*+}\pi^0\rightarrow D^{*+}\pi^0$ & (c)+(d) \\ $D^+\pi^0\rightarrow D^0\pi^+$ & (a)+(b) & $D^{*+}\pi^0\rightarrow D^{*0}\pi^+$ & (c)+(d) \\ $D^+\pi^-\rightarrow D^+\pi^-$ & (a) & $D^{*+}\pi^-\rightarrow D^{*+}\pi^-$ & (c) \\ $D^+\pi^-\rightarrow D^0\pi^0$ & (a)+(b) & $D^{*+}\pi^-\rightarrow D^{*0}\pi^0$ & (c)+(d) \\ $D^+\pi^+\rightarrow D^+\pi^+$ & (b) & $D^{*+}\pi^+\rightarrow D^{*+}\pi^+$ & (d) \\ $D^0\pi^+\rightarrow D^0\pi^+$ & (a) & $D^{*0}\pi^+\rightarrow D^{*0}\pi^+$ & (c) \\ $D^0\pi^+\rightarrow D^+\pi^0$ & (a)+(b) & $D^{*0}\pi^+\rightarrow D^{*+}\pi^0$ & (c)+(d) \\ $D^0\pi^0\rightarrow D^0\pi^0$ & (a)+(b) & $D^{*0}\pi^0\rightarrow D^{*0}\pi^0$ & (c)+(d) \\ $D^0\pi^0\rightarrow D^+\pi^-$ & (a)+(b) & $D^{*0}\pi^0\rightarrow D^{*+}\pi^-$ & (c)+(d) \\ $D^0\pi^-\rightarrow D^0\pi^-$ & (b) & $D^{*0}\pi^-\rightarrow D^{*0}\pi^-$ & (d) \\ \hline \hline \end{tabular} \label{process} \end{table}
To take into account the thermal effects, we define $\langle \sigma_{ab\rightarrow cd}v_{ab}\rangle$, the product of the cross section of two-body scattering ($ab\rightarrow cd$) and the relative velocity between the initial particles, $v_{ab}=\sqrt{(p_a\cdot p_b)^2-m_a^2m_b^2}/(E_aE_b)$, averaged over the thermal momentum distributions of the initial particles \cite{sv1,sv2}:
\begin{eqnarray} \langle \sigma_{ab\rightarrow cd}v_{ab}\rangle(\tau) &=&\frac{\int d^3\bm{p}_ad^3\bm{p}_b \, f_a(\bm{p}_a)f_b(\bm{p}_b) \, \sigma_{ab\rightarrow cd}v_{ab}} {\int d^3\bm{p}_a d^3\bm{p}_b \, f_a(\bm{p}_a)f_b(\bm{p}_b)} \, , \nonumber\\ &=&\left[4\left(\frac{m_a}{T(\tau)}\right)^2\left(\frac{m_b}{T(\tau)}\right)^2 \, K_2\left(\frac{m_a}{T(\tau)}\right) K_2\left(\frac{m_b}{T(\tau)}\right) \right]^{-1} \int_{z_0}^{\infty} dz\, \sigma(\sqrt{s}=zT(\tau)) \nonumber\\ &&\qquad \times \left[z^2-\left(\frac{m_a+m_b}{T(\tau)}\right)^2\right] \left[z^2-\left(\frac{m_a-m_b}{T(\tau)}\right)^2\right] K_1(z) \, , \label{thermal-average} \end{eqnarray}
where $z_0=\mbox{max}[(m_a+m_b)/T(\tau),(m_c+m_d)/T(\tau)]$. It should be noted, however, that we are approximating $\sigma_{T_{cc} \pi \rightarrow DD^* \pi}$ by $\sigma_{D \pi \rightarrow D \pi}$ and $\sigma_{D^* \pi \rightarrow D^* \pi}$. Hence, when taking the thermal distribution, the distribution $f_a( \bm{p}_a)$ should be that of the $T_{cc}$. Furthermore, the threshold should also reflect the replacement $m_D+m_{D^*} \rightarrow m_{T_{cc}}$. This amounts to taking $m_a=m_c= m_{T_{cc}}$ instead of $m_D$ or $m_{D^*}$. The same approximation will be taken when calculating the inverse process. The derivation of this formula is given in Appendix \ref{appendix-rate}.
\begin{figure} \includegraphics[width=0.45\textwidth]{sig.eps} \includegraphics[width=0.45\textwidth]{sv.eps} \caption{ (a) The cross sections as functions of the total center of mass energy and (b) the thermally averaged cross sections contributing to the absorption of the $T_{cc}$ by pions. } \label{cross} \end{figure}
Fig. \ref{cross} shows the cross sections and the thermally averaged cross sections for the elastic scattering $D(D^*)+\pi\rightarrow D(D^*)+\pi$. The cross section of the $s$-channel in the process $D+\pi\rightarrow D+\pi$ has a peak near the threshold energy $\sqrt{s}_0=m_D+m_\pi$ since $m_D+m_\pi\approx m_{D^*}$. Similarly, the cross section of the $u$-channel in $D^*+\pi\rightarrow D^*+\pi$ diverges near $\sqrt{s}_0=m_{D^*}+m_\pi$.
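At a fixed temperature, Eq. (\ref{thermal-average}) can be evaluated with standard quadrature. The sketch below uses a toy constant cross section for illustration; for the actual quasifree application, $m_a$ would be replaced by $m_{T_{cc}}$ and the threshold adjusted as described above.
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

def thermal_average(sigma, m_a, m_b, T):
    # <sigma v> of Eq. (thermal-average) at fixed temperature T (MeV) for a
    # 2 -> 2 process; sigma(sqrt_s) takes sqrt(s) in MeV and sets the output
    # units. The threshold z_0 = (m_a + m_b)/T is assumed (elastic case).
    xa, xb = m_a / T, m_b / T
    norm = 4.0 * xa**2 * xb**2 * kn(2, xa) * kn(2, xb)
    integrand = lambda z: (sigma(z * T) * (z**2 - (xa + xb)**2)
                           * (z**2 - (xa - xb)**2) * kn(1, z))
    z0 = xa + xb
    value, _ = quad(integrand, z0, z0 + 80.0)   # integrand decays like exp(-z)
    return value / norm

# toy usage: a constant 6 mb elastic D pi cross section at T = 150 MeV;
# the result is sigma times the mean relative velocity (in units of c)
print(thermal_average(lambda s: 6.0, m_a=1867.2, m_b=137.3, T=150.0))
\end{verbatim}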
\section{Time evolution of the $T_{cc}$ abundance} \label{evolution}
We consider the time evolution of the $T_{cc}$ abundance governed by (see Appendix \ref{appendix-rate})
\begin{multline} \label{evolve} \frac{dN_{T_{cc}}(\tau)}{d\tau}= \langle \sigma_{T_{cc}\pi\rightarrow DD^*\pi} v_{T_{cc}\pi}\rangle(\tau) \, n_\pi(\tau) \bigg[- N_{T_{cc}}(\tau) +N^{eq}_{T_{cc}}(\tau) \, \frac{N_D(\tau) \, N_{D^*}(\tau)}{N^{eq}_D(\tau) \, N^{eq}_{D^*}(\tau)} \, \bigg] , \end{multline}
where $n_\pi(\tau)=N_\pi(\tau)/V(\tau)$ and the superscript $eq$ denotes the corresponding number in equilibrium. In the quasifree approximation, the absorption of the $T_{cc}$ can be taken into account by using the two-body scattering $D(D^*)+\pi\rightarrow D(D^*)+\pi$,
\begin{equation} \langle \sigma_{T_{cc}\pi\rightarrow DD^*\pi} v_{T_{cc}\pi}\rangle(\tau)= c_1 \langle \sigma_{D\pi\rightarrow D\pi} v_{T_{cc}\pi}\rangle(\tau) + c_1 \langle \sigma_{D^*\pi\rightarrow D^*\pi} v_{T_{cc}\pi}\rangle(\tau) \, , \end{equation}
where the factor $c_1$ will depend on the configuration of the $T_{cc}$, for which we will consider the following two cases.
\begin{enumerate}
\item {\it Compact configuration}: A compact configuration is expected when the $T_{cc}$ is composed dominantly of a color triplet $\bar{q}\bar{q}$ state and a color anti-triplet $cc$ state \cite{potential,Lipkin:1986dw}. Then the decomposition into two $c \bar{q}$ states will result in the color decomposition given as \cite{Park:2013fda}
\begin{eqnarray} T_{cc} = \frac{1}{\sqrt{3}} \left(D_1D_1^{*}\right) -\sqrt{\frac{2}{3}}\left(D_8D_8^{*}\right)\, , \label{color-decom} \end{eqnarray}
where $D_1, D_8$ respectively denote the singlet and octet components of the $c \bar{q}$ color state. Hence, due to the coupling to color singlet states, we will take $c_1=\frac{1}{3}$.
\item {\it Molecular configuration}: If the diquark correlation is not strong enough, the $T_{cc}$ could be a molecular configuration of $D,D^*$ arising from the long-range pion exchange \cite{Manohar:1992nd,Hyodo:2017hue}. For this case, we take $c_1=1$.
\end{enumerate}
The production term of Eq. (\ref{evolve}) has three bodies in the initial state, and we have approximated it using the equilibrium condition as derived in Appendix~\ref{appendix-rate}.
\begin{figure} \includegraphics[width=0.45\textwidth]{result.eps} \caption{The expected time evolution of the $T_{cc}$ abundance in Pb+Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV at LHC and Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV at RHIC. } \label{result} \end{figure}
To obtain the abundance of the $T_{cc}$, we have solved the rate equation, Eq. (\ref{evolve}), with the initial yields $N_{T_{cc}}(\tau_H)$ given in Table \ref{NTcc}. Using the equilibrium values for $N_D(\tau)$ and $N_{D^*}(\tau)$, we obtain the numerical results shown in Fig. \ref{result}. Here we have used the time dependencies obtained by ideal hydrodynamic calculations. Those obtained using viscous hydrodynamics give almost the same result. In the first term of Eq. (\ref{evolve}), the absorption rate of the $T_{cc}$ is approximately $0.06$ c/fm because $\langle\sigma_{D\pi\rightarrow D\pi}v\rangle(\tau)\sim 6$ mb in Fig. \ref{cross} (b) and $n_\pi(\tau)\sim 0.1\, \mbox{fm}^{-3}$. This alone would lead to about a 45\% reduction of the abundance, as the typical lifetime of the hadronic phase is 10 fm/c.
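The estimate above can be checked by a direct integration of the rate equation. The following sketch is an illustration, not the production code behind Fig. \ref{result}: it assumes a constant $\langle\sigma v\rangle = 6$ mb and pion density $0.1~\mathrm{fm}^{-3}$, and switches off the production term.
\begin{verbatim}
import numpy as np

def evolve_Tcc(N0, tau_H, tau_F, sv, n_pi, N_eq, ratio_DDstar, dtau=0.01):
    # Euler integration of the rate equation (evolve).
    # sv(tau): <sigma v> in fm^2 (1 mb = 0.1 fm^2); n_pi(tau): fm^-3;
    # N_eq(tau): equilibrium T_cc number; ratio_DDstar(tau): the factor
    # N_D N_D* / (N_D^eq N_D*^eq), equal to 1 in chemical equilibrium.
    N = N0
    for tau in np.arange(tau_H, tau_F, dtau):
        rate = sv(tau) * n_pi(tau)               # absorption rate in c/fm
        N += rate * (-N + N_eq(tau) * ratio_DDstar(tau)) * dtau
    return N

# rough numbers from the text: <sigma v> ~ 6 mb and n_pi ~ 0.1 fm^-3 give a
# rate ~ 0.06 c/fm; with the production term switched off this reproduces
# the quoted ~45% loss over a ~10 fm/c hadronic phase
N_end = evolve_Tcc(N0=2.0e-3, tau_H=8.1, tau_F=18.3,
                   sv=lambda t: 0.6, n_pi=lambda t: 0.1,
                   N_eq=lambda t: 0.0, ratio_DDstar=lambda t: 1.0)
print(N_end, 1.0 - N_end / 2.0e-3)
\end{verbatim}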
On the other hand, the production rate is smaller than the absorption rate by a factor of order $10^{-4}$, which can be seen easily from the factor $N_{T_{cc}}^{eq}(\tau)/[N_D^{eq}(\tau)N_{D^*}^{eq}(\tau)]$. Hence, its contribution becomes important only at high density, when the numbers of $D,D^*$ mesons are large. Effectively, the production depends on the relative abundance between $N_{T_{cc}}(\tau)$ and $N_{T_{cc}}^{eq}(\tau)$. For molecular configurations, while $N_{T_{cc}}(\tau_H)=N_{T_{cc}}^{eq}(\tau_H)$, the equilibrium number decreases, and hence the number of $T_{cc}$ decreases (by less than $42\%$) with time. For a compact multiquark state with relatively small initial yields, the number of the $T_{cc}$ increases, but it remains an order of magnitude smaller than that for a molecular configuration, as the cross section for production is as small as that for absorption. The final yield of the $T_{cc}$ depends strongly on the initial number at hadronization. Because of the large initial yield, the expected abundance of the $T_{cc}$ at LHC is larger than that at RHIC. These results mean that the numbers of charm quarks and of the $T_{cc}$ produced from the quark-gluon plasma phase are important in determining the final abundance of the $T_{cc}$. We can conclude that for both the RHIC and LHC experiments, the large difference between the statistical and coalescence expectations, obtained assuming that the $T_{cc}$ is a compact multiquark or molecular configuration, remains until the kinetic freeze-out. We have also considered the case that $D$ and $D^*$ are not in chemical equilibrium. This is important as the total number of charm quarks is expected to be conserved during the hadronic phase. The processes in which the numbers of $D,D^*$ change are those in Eq.~(\ref{evolve}): the absorption of the $T_{cc}$ is accompanied by the production of $D,D^*$, and vice versa for the inverse process. However, instead of solving the coupled rate equations involving charmed hadrons, we will consider two extreme cases.
\begin{enumerate}
\item After the chemical freeze-out, the numbers of $D$ and $D^*$ will be assumed to be constant. \\
\item We will assume that the inelastic cross sections involving light hadrons are large, so that the ratios of charmed hadrons follow the equilibrium ones until the kinetic freeze-out point. While extreme, such a scenario seems to be consistent with the experimental findings for the $K^*/K$ ratios from heavy ion collisions \cite{Cho:2015qca}. This scenario is easily implemented by allowing the fugacity $\gamma_c(\tau)$ to depend on time during the hadronic phase using the following condition:
\begin{eqnarray} \sum_{D_i=D,D^*,D_s,D_s^*}N_{D_i}(\tau) \, &=& \gamma_c(\tau) \bigg[ N^{0}_D(\tau)+N^{0}_{D^*}(\tau) +N^0_{D_s}(\tau)+N^{0}_{D_s^*}(\tau) \bigg] \, , \nonumber\\ &=& \mbox{total number of charm quarks} \, , \label{charm-fugacity} \end{eqnarray}
where $N_{D_i}^{0}(\tau)$'s are the equilibrium numbers given in Eq.~(\ref{stateq}) without the fugacity. Once $\gamma_c(\tau)$ is obtained, one can assume that the individual numbers also satisfy similar relations at each time,
\begin{eqnarray} N_D(\tau) & = & \gamma_c(\tau) \, N^{0}_D(\tau) \, , \nonumber \\ N_{D^*}(\tau) & = & \gamma_c(\tau) \, N^{0}_{D^*}(\tau) \, . \label{limitingsolution} \end{eqnarray}
This assumes that the charm-anticharm annihilation processes are small, so that the total numbers of charmed and anticharmed mesons remain constant throughout the hadronic phase.
\end{enumerate}
The correct numbers would lie somewhere between the two extreme cases. Fig. \ref{result_mc} shows the results for the two cases. When $D$ and $D^*$ are not in equilibrium, the $T_{cc}$ is more likely to be produced for both molecular and compact configurations. In fact, the number of the $T_{cc}$ is largest in the limit of Eq. (\ref{charm-fugacity}). However, we still find that even in this extreme limit, the abundance for a compact multiquark state at the end of the hadronic phase remains a factor of 5 smaller than that for a molecular configuration.
\begin{figure} \includegraphics[width=0.45\textwidth]{result_lm.eps} \includegraphics[width=0.45\textwidth]{result_lc.eps} \caption{Dependence of the $T_{cc}$ yields on $N_{D,D^*}(\tau)$ for (a) molecular and (b) compact configurations. } \label{result_mc} \end{figure}
\section{Final states} \label{finalstates}
Here we will list the possible final states that could be measured to reconstruct the $T_{cc}$ from heavy ion collisions. The model calculations at present vary in the exact value of the binding energy. Therefore, we will probe all possibilities \cite{Lee:2007tn}. It should be noted that one could also look at the charge conjugate final states and search for $T_{\bar{c}\bar{c}}$ mesons.
\begin{enumerate}
\item $m_{T_{cc}} \ge m_D+m_{D^*}$: In this case,
\begin{equation} \label{case1} T_{cc}~~ \rightarrow ~~{\rm (a)\,}D^0 +D^{*+} ~~~{\rm or}~~~ {\rm (b)\,}D^+ +D^{*0} ~~~{\rm or}~~~ {\rm (c)\,}D^++D^++\pi^-. \end{equation}
As $D^{*+} \rightarrow D^0+\pi^+$ and $D^0 \rightarrow K^-+\pi^+$, (a) can be reconstructed with vertex detectors. $D^{*0}$ in (b) may not be easy to detect directly. \\
\item $m_D+m_{D^*} \ge m_{T_{cc}} \ge m_D+m_{D} +m_\pi $: This would be the most likely case for a compact multiquark state. Then, the virtual $D^{*+}$ component can decay into $D^0 +\pi^+$, so that a detectable final state would be
\begin{equation} \label{case2} T_{cc} ~~\rightarrow~~ D^0 +D^0 +\pi^+ . \end{equation}
The final state involving $T_{cc} \rightarrow D^0 +D^+ +\pi^0$ would be harder to identify. We note that the final state of Eq. (\ref{case2}) is not distinguishable from that of Eq. (\ref{case1}) (a). \\
\item $ m_{T_{cc}} \le m_D+m_{D} +m_\pi $: In this case, the virtual $D^*$ component should also decay into $D +\pi$, so that a detectable final state would be
\begin{equation} T_{cc} ~~\rightarrow~~ D^0 +K^-+\pi^+ +\pi^+ ~~~{\rm or}~~~ D^+ +K^-+\pi^++\pi^+ +\pi^- . \end{equation}
\end{enumerate}
Among all the above cases, Eqs. (\ref{case1}) (c) $(D^++D^++\pi^-)$ and (\ref{case2}) $( D^0 +D^0 +\pi^+)$ seem to be the most promising channels for reconstructing the $T_{cc}$.
\section{Summary} \label{summary}
We have investigated the hadronic effects on the $cc\bar{q}\bar{q}$ tetraquark state by focusing on the $T_{cc}$ multiplicity during the hadronic phase at RHIC and LHC. In particular, we have considered the absorption by pions and the inverse process within the quasifree approximation, where the $T_{cc}$ is considered as a $D,D^*$ state with an appropriate coupling strength depending on whether it has a compact multiquark or molecular structure. We have extracted the time dependence of the volume and temperature of the hadronic phase for both RHIC and LHC from the hydrodynamic calculations based on the lattice equation of state with or without viscosity.
By solving the rate equation for the $T_{cc}$ and estimating the changes in the $D$ and $D^*$ numbers, we have calculated how much the structure-dependent initial number changes during the hadronic phase. Furthermore, we have also considered all the possible final states that could be measured to reconstruct the $T_{cc}$ from heavy ion collisions. Among all the cases, we find $D^++D^++\pi^-$ and $D^0 +D^0 +\pi^+$ to be the most promising channels for reconstructing the $T_{cc}$. For a molecular configuration, where the initial number of the $T_{cc}$ is expected to follow the statistical model prediction, the absorption effect is larger than production and reduces the abundance by about $42\%$. When a compact tetraquark structure is assumed, the initial number estimated from a coalescence model is an order of magnitude smaller than the statistical model estimate, and hence production is relatively more important. However, we find that, due to the small cross section of about 5 mb, the rate of change is not large enough, so the initial order-of-magnitude difference in the assumed abundance is maintained at the end of the hadronic phase. This suggests that measuring the $T_{cc}$ from heavy ion collisions could also tell us about the nature of its structure, which could either be a compact multiquark state or a loosely bound molecular configuration.
\section*{Acknowledgements}
This work was supported by the Korea National Research Foundation under the grant number 2016R1D1A1B03930089, by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1C1B1016270), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1C1B6008119).
{ "timestamp": "2018-04-17T02:10:08", "yymm": "1804", "arxiv_id": "1804.05336", "language": "en", "url": "https://arxiv.org/abs/1804.05336" }
\section*{Abstract}
In this work we find that at the high polar magnetic fields of magnetars, $B \sim 10^{14-15}$ G, the outermost crust ($\rho < 10^7$ g/cc) of the star can become a transverse insulator and a filamentary crystal along the field direction. Also, the transverse conductivity in the crust goes inversely as the square of the polar magnetic field (as $1/B^2$). At these high fields the transverse crustal currents associated with the polar magnetic field can then dissipate more effectively via Ohm's law.
\newline
\section{Introduction}
Till recently, the majority of neutron stars were considered to be pulsars which carry an inherited or fossil polar magnetic field, of the order of $B \sim 10^{9-12}$ G, from their erstwhile collapse. However, we now have an increasing population of magnetars, which are usually isolated neutron stars with the largest known polar magnetic fields ($10^{14} - 10^{15}$ G) and much smaller spin-down ages of $10^{3}$ - $10^{5}$ years. Unlike pulsars, they are distinguished by the emission of a quiescent radiative X-ray luminosity of $10^{34}$ - $10^{36}$ erg/s. Besides, some of them emit repeated flares or bursts of energy typically of $10^{42} - 10^{44}$ erg, and at times of even higher intensity \cite{Hurley,Palmer}. The periods of magnetars fall in a surprisingly narrow window of 2-12 s. \emph{However, there are exceptional magnetars which may not share all these features.}\\
At such large periods, the energy emitted in both quiescent emission and flares far exceeds the loss in their rotational energy through dipole radiation. The most likely energy source for these emissions is their magnetic energy, yet there is no evidence of a decrease in their surface (polar) magnetic fields with time \cite{Thompson}. There have been many attempts to explain some of this physics, of which the most canonical is the magnetar model of Duncan and Thompson \cite{DuncanThompson,ThompsonDuncan}, which is known as the {\em dynamo mechanism for magnetars}. This model requires the collapse of a large mass progenitor to a star which starts its life with a period close to a millisecond. At the high temperatures generated by the collapse process, such a fast rotation can amplify the inherited pulsar valued field of $10^{12}$ G to $10^{15}$ G or more. However, as described below, several observations on magnetars are hard to understand from such a model.\\
If magnetars are born with their high surface magnetic fields, then after a flare, one would expect a decrease in the magnetic energy and consequently a {\em decrease} in the polar magnetic field of the magnetar, accompanied by a fall in the dipole radiation mediated spin-down rate. However, this is not the case and sometimes the opposite is seen: the spin-down rate {\em increases} after the flare, indicating an increase in the surface magnetic field \cite{DarDR,Dar,Marsden,Kaspi}. Further, in spite of the magnetic energy loss from steady X-ray emission, the magnetic field of magnetars appears to remain high all the way till the end of spin down, when the stars have the largest periods.\\
In an earlier work \cite{DipSoni,mag}, it was shown that it may be possible to explain many unusual features of magnetars if they have a core with a large magnetic moment density, created by a strong interaction phase transition in the high density core at birth. Initially, the core magnetic field is shielded by screening currents set up by the change of flux in the electron plasma in and around the core.
In time, the screening currents dissipate till finally the field emerges at the surface of the star. In this model, the field increases with time till it reaches its relaxed state after the screening currents have dissipated.\\
Whereas it is the dynamo currents that dissipate in the first model, in the screened core model \cite{DipSoni,mag,Soni3} above it is the screening currents that dissipate. The effect on the polar magnetic field is the opposite. In the dynamo model, $B_{polar}$ goes down through dissipation with time. In the screened core model, $B_{polar}$ goes up as the screening currents dissipate.\\
We note that at the high polar crustal and surface magnetic fields, $B_{polar} > 10^{13-14}$ G, the Landau radius becomes smaller than the Bohr radius. When the magnetic fields go up to $10^{15(14)}$ G, the electrons in the outer shell of the crust (for density $\sim 10^7$ g/cc) are localised approximately within a Landau radius, which is much smaller than the inter-ionic spacing. We then get deformed, cigar-like ions which are almost neutral. The Coulomb repulsion between the ions comes down, making the crystal binding weaker. The screening length becomes smaller than the inter-atomic distance \cite{ShaRed,Bed} in the transverse direction. The overlap between the electron wavefunctions between sites in the transverse direction diminishes, with a simultaneous drop in the transverse conductivity. We find that at such high magnetic fields the outermost crust ($\rho < 10^7$ g/cc) of the star can become a transverse insulator and a filamentary crystal along the field direction.\\
Further, we find that at the typical high polar magnetic fields of magnetars, $B \sim 10^{14-15}$ G, the transverse conductivity in the crust goes inversely as the square of the polar magnetic field (as $1/B^2$). At these high fields, this indicates that the transverse crustal currents associated with the polar magnetic field can then dissipate more effectively via Ohm's law. This could explain the anomalously high X-ray luminosity of magnetars.\\
\section{The ground state at high polar magnetic fields ($B \sim 10^{14-15}$ G): Landau levels}
When the external magnetic field rises above $10^{13}$ -- $10^{14}$ G, the magnetic field dominates the motion in the direction transverse to the field as the atoms transform to a cylindrical shape along the field. This happens when the Landau radius in the lowest Landau level, $r_L = \sqrt{hc/(2 \pi B e)}$, becomes smaller than the Bohr radius of the most tightly bound electron orbit, $a_0/Z = \hbar^2/(e^2 m_e) \times (1/Z)$. However, the field starts influencing the electron dynamics even at $B \sim 10^{12}$ G.\\
In the presence of a constant magnetic field, the electron states in the plane transverse to the magnetic field are called Landau levels. The lowest Landau level has a large degeneracy.
This degeneracy is given by $ABe/(hc)$, where $A$ is the area over which the magnetic field is applied, and $\Phi_0 = hc/e$ is the flux quantum.\\
First we need to write down the electron density, $n_e$, when only the lowest Landau level is occupied in the plane that is transverse to the magnetic field and the ground state is a one dimensional Fermi sea along the direction of the field,
\begin{equation} n_e = n_{t{2d}} \times n_{1d} \end{equation}
where $n_{1d} = k^f/(2\pi)$ is the 1 dimensional electron Fermi sea density along the direction of the magnetic field and $n_{t{2d}} = Be/(hc)$ is the 2 dimensional transverse electron density or the number of electrons per unit area. Also, $k^f = n_e h c\cdot 2\pi/( B e)$ and the Fermi momentum, $p_{z}^f = (h/2\pi) k^f$.\\
Now the expression for the relativistic electron energy eigenvalues in an external magnetic field is given in Shapiro and Dong Lai \cite{shapiro} and Dong Lai \cite{Dong Lai},
\begin{equation} E_{\nu,p_z} = \sqrt{(p_z c)^2 + m^2c^4 (1 + \nu \frac{2B}{B_c})} \end{equation}
where $m$ is the electron mass, $B$ is the external magnetic field, $\nu$ is the order of the Landau level that takes integer values starting from $\nu = 0$, and $B_c = m^2 c^3 /(\hbar e)$.\\
The Fermi energy for the zeroth Landau level is given by,
\begin{equation} E_f ^0 = \sqrt{(p_z^f c)^2 + m^2 c^4} \end{equation}
where $p_z^f c = h c\, k^f/(2\pi) = n_e h^2 c^2/( B e)$, and the energy gap between the first and the zeroth Landau level is,
\begin{equation} \Delta = \sqrt{ (p_z^f c)^2 + m^2c^4 \left(1 + \frac{2B \hbar e}{m^2c^3}\right)} - \sqrt{(p_z^f c)^2 + m^2c^4 } \end{equation}
The condition for the sole occupation of the lowest Landau level is then
\begin{equation*} \frac{E_f^0}{\Delta} < 1 \end{equation*}
\begin{center} or \end{center}
\begin{equation*} \sqrt{ 1 + \frac{2B \hbar e}{m^2 c^3 \left(1 + (p_z^f c)^2/ ( m^2 c^4)\right)}} - 1 >1 \end{equation*}
After a little algebra, this condition may be written as
\begin{equation} 2 \leq \sqrt{1 + \frac{2B \hbar e}{m^2 c^3 (1 +\kappa)}} \end{equation}
where $\kappa = n_e^2 h^4/(B^2 e^2 m^2)$.\\
For $B = 10^{15}$ G, this yields
\begin{equation} n_e \leq 3.8 \times 10^{31}~\text{/cc} \end{equation}
This translates to an average mass density,
\begin{equation} \rho \leq 6.3 \times 10^7~\text{g/cc} \end{equation}
At this density, we find that the Fermi energy, $E_f$, is 1.97 MeV, which falls in the relativistic regime. This further implies that for fields $B = 10^{15}$ G, we are in the neutron capture regime between $^{62}$Ni and $^{64}$Ni nuclei in the crust.\\
Following the same steps for fields $B = 10^{14}$ G, we find that the condition for the sole occupation of the lowest Landau level is
\begin{equation} n_e \leq 0.7 \times 10^{30}~\text{/cc} \end{equation}
which translates to a mass density,
\begin{equation} \rho \leq 1.15 \times 10^6~\text{g/cc} \end{equation}
We note that the value of the kinetic energy, $p_{z}^f c \sim 0.36$ MeV, is somewhat less than the electron rest energy. This establishes that here we are not in a relativistic regime. This falls in the outer crust, where $^{56}$Fe is the favoured nucleus.\\
We have found that under the condition that only the lowest (zeroth) Landau level is occupied, we have an upper limit for the nucleon density which is compatible with only the outer shell of the crust of the neutron star, $\rho \sim 1.2 \times 10^{6} - 6.3 \times 10^{7}$ g/cc. At higher densities the next Landau level gets occupied. We shall discuss the significance of this shortly.
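These threshold values can be verified numerically. A minimal sketch in Gaussian cgs units (Python assumed; the constants are rounded) evaluates the condition derived above:
\begin{verbatim}
import numpy as np

# Gaussian cgs constants (rounded)
h, c, e, m_e = 6.626e-27, 3.0e10, 4.803e-10, 9.109e-28
hbar = h / (2.0 * np.pi)
MeV = 1.602e-6                                # erg

def lowest_level_threshold(B):
    # maximum n_e (per cc) for sole occupation of the nu = 0 Landau level,
    # from 2 <= sqrt(1 + 2 B hbar e / (m^2 c^3 (1 + kappa)))
    B_c = m_e**2 * c**3 / (hbar * e)          # ~ 4.4e13 G
    kappa = (2.0 * B / B_c) / 3.0 - 1.0       # kappa = (p_z^f c / m c^2)^2
    p_f = np.sqrt(kappa) * m_e * c            # 1d Fermi momentum
    n_e = (B * e / (h * c)) * (p_f / h)       # n_2d * n_1d
    E_f = np.sqrt((p_f * c)**2 + (m_e * c**2)**2) / MeV
    return n_e, E_f

for B in (1.0e15, 1.0e14):
    print(B, lowest_level_threshold(B))
# gives n_e ~ 3.8e31 /cc with E_f ~ 2.0 MeV at 1e15 G,
# and n_e ~ 0.7e30 /cc at 1e14 G, as quoted above
\end{verbatim}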
\newline
\subsection{The Crystal}
The atoms in the crust are strongly compressed by gravity and get completely ionised, so that the electrons form a degenerate Fermi sea at magnetic fields characteristic of pulsars, $\sim 10^{12}$ G. In the outer crust, a little less than half the number (Z) of nucleons in the ions are protons (charge neutrality requires that the number of protons be the same as the number of conducting electrons) and the rest are neutrons. This results in a strong Coulomb repulsion between the ions. It is the interplay between gravity and Coulomb repulsion that makes the crystalline crust. The nucleon density is a little more than twice the electron density. Starting with $^{56}$Fe ($10^6$ g/cc) at the outer crust, as density increases inward, we go through the neutron capture regime to $^{64}$Ni and then all the way to neutron rich $^{118}$Kr ($10^{11}$ g/cc) at the inner crust (see \cite{Baym} and references therein).\\
Now let us examine the inter-ionic distance in the crust and compare it with the Landau radius at the threshold density for sole occupation of the lowest Landau level.\\
For external fields of $B= 10^{15}$ G, the electron density at the threshold of occupation of only the lowest Landau level is $n_e \leq 3.8 \times 10^{31}$ /cc. Given that this neutron capture regime is made up of $^{63}$Ni nuclei, we have a nucleon density of $2.25$ times the electron density and an inter-ionic distance of $\sim 900$ fm, whereas the Landau radius, which is independent of the electron density as it depends only on $B$, is $r_L \sim 80$ fm. We thus expect that all the electrons can be attracted to a smaller cylinder around the positive ions that are in a one dimensional array along the magnetic field lines. The size of the electron wave functions is of the same order as the Landau radius \cite{Dong Lai}.\\
For external fields of $B= 10^{14}$ G, the electron density at the threshold of occupation of only the lowest Landau level is $n_e \leq 0.7 \times 10^{30}$ /cc. Given that this crustal regime is made up of $^{56}$Fe nuclei, we have a nucleon density of $2.15$ times the electron density and an inter-ionic distance of $\sim 3300$ fm, whereas the Landau radius is only $\sim 300$ fm. We thus expect that all the electrons can be attracted to a smaller cylinder around the positive ions along the magnetic field lines.\\
This implies that for those densities where only the lowest Landau level is occupied we are in a regime where the Landau radius is about ten times smaller than the inter-ionic distance. Though the electrons in the Landau level do not have a fixed centre or axis, in this case they will crowd about the ions, as they are attracted to the ions and can thus reduce the Coulomb energy of the ions. One effect of this is that the original crystal is lost: it recasts itself into a one dimensional crystal along the field but loses its moorings in the transverse direction, so that it resembles strands of spaghetti and also turns from a conductor into an insulator. This large distance between the strands in the transverse direction means that the neutral cylindrical atoms do not feel much Coulomb repulsion between them in the transverse plane. They can then flop around or oscillate, and will feel the Coulomb repulsion if they approach their transverse neighbours. In this regime, the system is termed strongly (Landau) quantizing.
Since the electrons are tightly bound to the magnetic field lines, the transverse conductivity is very low.\\
At densities higher than the threshold density, further inside the crust, the next Landau level will get occupied, marginally restoring some conductivity. Even at higher density, when the second Landau level is occupied, the system continues to be Landau quantizing -- in other words, the magnetic field strongly influences the dynamics. The wavefunction spreads out somewhat but the conductivity stays low. However, at even higher density, when many Landau levels are occupied, the system will become non quantizing and the effect of the magnetic field diminishes. The wavefunctions spread out, mimicking a Fermi sea and restoring conductivity. In fact, a detailed account of this phenomenon is given in Dong Lai's paper \cite{Dong Lai}. We now go on to look at the behaviour of the transverse conductivity at high polar fields, which determines the Ohmic dissipation in the crust. This is responsible for a large part of the X-ray luminosity of magnetars.
\newline
\subsection{Transverse Conductivity Dependence on Magnetic Field}
In this section we will briefly review the electrical conductivity at high magnetic fields. Whatever the model for magnetars, we know that it is the dissipation of the magnetic field energy that fuels the X-ray luminosity, which is a distinguishing feature of magnetars. Further, the polar magnetic field depends on currents that run in the plane transverse to the field. Hence it is the transverse conductivity that will determine the dissipation of these currents, which in turn influence the polar field. In an early work, Haensel et al.\ \cite{Haensel} find that the electrical conductivity has an inverse dependence on the ambient magnetic field. We point to some further work on this subject below.\\
The main charge transport in the crust is due to electron currents. In what follows, we write the equations governing electron motion in a medium with electric and magnetic fields. These results are for the single particle equations where the direct effects of the Fermi sea have not been factored in, but are replaced by an average equilibrium velocity, called the drift velocity, that describes the system. Such a situation is often described by the Drude conductivity. For high polar magnetic fields in the magnetar ballpark, $B_p > 10^{14}$ G, the isotropic Fermi sea gets deformed into Landau levels in the direction transverse to the magnetic field but continues as a one dimensional Fermi sea along the direction of the magnetic field. Above, we have considered the threshold densities below which only the lowest Landau level is occupied for magnetar-strength magnetic fields. Due to the large degeneracy of the lowest Landau level, all the electrons in this level are in the same state. Thus, a single particle model for electrons in a magnetic and electric field can be used (Drude model) to describe the dynamics \cite{Soni3}.
We have
\begin{equation} m^* \Big(\frac{d\vec v}{dt} +\frac{ \vec v}{\tau}\Big) = -e\Big(\vec E + \frac{\vec v}{c} \times \vec B\Big) \end{equation}
where $\vec v$ is the drift velocity, $\tau$ is the relaxation time (or collision time), and $m^*$ is the effective mass.\\
In the steady state, $\frac{d\vec v}{dt} = 0$.\\
Defining $\sigma_0 = n e \lambda$ as the isotropic conductivity in the absence of the magnetic field, where $n$ is the electron density, $e$ is the electric charge and $\lambda = e \tau/ {m^*}$, we can use the following transverse conductivities as given in \cite{Soni3}, with $\alpha = (1 + \beta^2 )$ and $\beta = \lambda B_z/c$:
\begin{align} \sigma_{xx} & =\sigma_{yy} = \frac{\sigma_0}{\alpha}\\ \sigma_{xy} & = -\sigma_{yx} = -\frac{\sigma_0 \lambda B_z}{c \alpha} \end{align}
It is to be noted that the transverse conductivities depend on the magnetic field $B_z$, whereas the isotropic conductivity does not. We can now write down the Ohmic magnetic field decay times,
\begin{align} {\tau^D}_{ohm} &= \frac{ 4\pi\sigma_0 {L^2}}{ c^2 \alpha} \sim \frac{4\pi n^2 e^2 {L^2}}{\sigma_0 B^2}~\text{and}\\ \sigma_{xx} &=\sigma_{yy} = \frac{\sigma_0}{ \alpha} \end{align}
where ${\tau^D}_{ohm}$ is the diagonal transverse Ohmic dissipation time and we have used $\lambda =\sigma_0/(n e)$. Similarly,
\begin{align} \tau^{ND}_{ohm} &= \frac{ 4\pi\sigma_0 L^2}{c^2 \alpha} \times\beta~\text{and}\\ \sigma_{xy} &= -\sigma_{yx} = -\frac{\sigma_0 \lambda B_z}{c \alpha} \end{align}
where ${\tau^{ND}}_{ohm}$ is the non-diagonal (Hall) transverse dissipation time.\\
From the above, we find that the conductivity tensor in this Landau quantized regime is highly anisotropic. The parameter that controls the conductivities is $\beta = \frac{\lambda B_z}{c}$. If $\beta \gg 1$, then we can neglect the factor of unity in $\alpha = (1 + \beta^2)$ and the conductivity becomes magnetic field dependent. For large fields, it is the second term in $\alpha = (1 + \beta^2 )$ that dominates; e.g., the diagonal conductivities $\sigma_{xx} =\sigma_{yy} = \sigma_0/\alpha$ go inversely as the \textsl{square} of the magnetic field ($1/B^2$). On the other hand, the non-diagonal conductivity, $\sigma_{xy} = -\sigma_{yx}$, goes as ($1/B$).\\
There is also work that considers the conductivity at typical magnetic fields in the range $B \sim 10^{12-14}$ G, in the non-Landau quantizing regime. It is interesting that Harutyunyan and Sedrakian \cite{Israeli} find (Fig.~8) that even in the range $B \sim 10^{12-14}$ G, in the non quantizing regime, the above $B$ dependences hold approximately for the transverse conductivity: $\sigma_{xx} = \sigma_{yy} \sim 1/B^2$ and $\sigma_{xy} = -\sigma_{yx}$ goes as $1/B$ \cite{Israeli}. Furthermore, we also find from their figure that the diagonal transverse conductivity is roughly proportional to the square of the mass density whereas the non-diagonal transverse conductivity is proportional to the mass density.
\newline
\section{Discussion}
In any magnetar model the currents that determine the polar magnetic fields are in the plane transverse to the field. We find that the diagonal transverse conductivity has a $1/B^2$ dependence. Such a strong dependence on the magnetic field would imply that magnetar crusts can have a conductivity even $10^6$ times lower than that of pulsars.
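A minimal numerical illustration of these scalings follows; the values of $\sigma_0$ and $n$ are placeholders chosen only to make $\beta \gg 1$, not crustal values taken from the literature.
\begin{verbatim}
import numpy as np

def transverse_conductivities(sigma_0, n, B_z, e=4.803e-10, c=3.0e10):
    # Drude-like transverse conductivities used above (cgs units):
    # lambda = sigma_0 / (n e), beta = lambda B_z / c, alpha = 1 + beta^2
    lam = sigma_0 / (n * e)
    beta = lam * B_z / c
    alpha = 1.0 + beta**2
    return sigma_0 / alpha, -sigma_0 * beta / alpha   # sigma_xx, sigma_xy

# illustration of the scaling (sigma_0 and n are placeholder values, not
# crustal values from the literature): for beta >> 1 a tenfold increase in
# B suppresses sigma_xx ~ 100x (1/B^2) and sigma_xy ~ 10x (1/B)
for B in (1.0e13, 1.0e14, 1.0e15):
    print(B, transverse_conductivities(sigma_0=1.0e22, n=1.0e30, B_z=B))
\end{verbatim}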
At pulsar-valued magnetic fields ($\sim 10^{12}$ G) the outer electrons form a Fermi sea, as is the case for metals, and the crust is a crystal in which the gravitational pressure is balanced by the Coulomb repulsion between the ions. But when the magnetic fields go up to $10^{15(14)}$ G, the electrons in the outer shell of the crust (for density $\sim 10^7$ g/cc) are localised approximately within a Landau radius, which is much smaller than the inter-ionic spacing. We then get deformed, cigar-like ions which are almost neutral and thus deprived of adequate Coulomb repulsion. So we have a spaghetti-like 1D crystal along the field lines that is not tethered in the transverse plane. Since these localised electrons cannot support transverse currents, the outer shell of the crust behaves like a transverse insulator. Along the field lines we have a regular 1D conductor. Therefore, if we have an initial state with a high polar magnetic field, it will be supported by transverse currents that exist only below this thin shell at the surface. However, the resistivity will come down as we move to the interior of higher density. These currents will suffer Ohmic dissipation and simultaneously the polar fields will gradually diminish. The heat so generated can escape only along the direction of the field lines via electron motion. Thus, thermal transport is suppressed as it is highly anisotropic and one directional. Furthermore, the temperatures usually associated with magnetar surfaces, $< 10^7$ K, are much lower than the Fermi energy, which is of order 1 MeV $\sim 10^{10}$ K -- thus there is also a Fermi suppression factor, $k_B T/E_f$. This heat would then accumulate below the shell, heating it up from below. As the sub-surface temperature grows, thermal agitation from the heated layer below could set up phonon-like modes in the spaghetti to dissipate the heat, which could also destabilize the spaghetti and give rise to bursting phenomena. Gradually, as the field goes down below $10^{14}$ G, the spaghetti insulator will also thin out and allow heat to exit. In the dynamo model, we start with a high polar magnetic field, and so we expect the ground state in the outer crust to be a spaghetti insulator, where we may expect to see the above phenomena. On the other hand, in the screened core model, the field rises as it emerges out, cleaving the crystal and flaring. It is only in the later evolution that the crust becomes a mushy spaghetti insulator. In the screened core model, only when the polar field gets high does this effect step in, but by then most of the heat and screening currents have dissipated. We have given a simple model of the ground state of matter in magnetar crusts and of the magnetic field dependence of the conductivity of the crust. It also provides new and interesting insights into the properties of matter at very high magnetic fields, particularly in the outer crust of magnetars.
\newline
\section{Acknowledgement}
We thank G. Baskaran, Dipankar Bhattacharya, Sameer Patel and Sajal Gupta for discussions.
\bibliographystyle{abbrv}
{ "timestamp": "2019-05-28T02:20:01", "yymm": "1804", "arxiv_id": "1804.05343", "language": "en", "url": "https://arxiv.org/abs/1804.05343" }
\section*{Abstract}
Mendelian randomization uses genetic variants to make causal inferences about a modifiable exposure. Subject to a genetic variant satisfying the instrumental variable assumptions, an association between the variant and outcome implies a causal effect of the exposure on the outcome. Complications arise with a binary exposure that is a dichotomization of a continuous risk factor (for example, hypertension is a dichotomization of blood pressure). This can lead to violation of the exclusion restriction assumption: the genetic variant can influence the outcome via the continuous risk factor even if the binary exposure does not change. Provided the instrumental variable assumptions are satisfied for the underlying continuous risk factor, causal inferences for the binary exposure are valid for the continuous risk factor. Causal estimates for the binary exposure assume the causal effect is a stepwise function at the point of dichotomization. Even then, estimation requires further parametric assumptions. Under monotonicity, the causal estimate represents the average causal effect in `compliers', individuals for whom the binary exposure would be present if they have the genetic variant and absent otherwise. Unlike in randomized trials, genetic compliers are unlikely to be a large or representative subgroup of the population. Under homogeneity, the causal effect of the exposure on the outcome is assumed constant in all individuals; often an unrealistic assumption. We here provide methods for causal estimation with a binary exposure (although subject to all the above caveats). Mendelian randomization investigations with a dichotomized binary exposure should be conceptualized in terms of an underlying continuous variable.
\clearpage
\setstretch{1}
Mendelian randomization is the use of genetic variants as instrumental variables to test for or estimate the causal effect of a risk factor (referred to here as an exposure) on an outcome using observational data \citep{daveysmith2003, burgess2015book}. The primary objective of Mendelian randomization is to find modifiable exposures that are worthwhile therapeutic targets and can be intervened on to improve health outcomes. An instrumental variable must be associated with the exposure of interest (relevance), must only affect the outcome through the exposure (exclusion restriction), and must not share any causes with the outcome (exchangeability). Recently, several Mendelian randomization studies have employed binary measures as the exposure variable. Examples include analyses assessing the causal effect of cannabis initiation on schizophrenia (and of schizophrenia on cannabis initiation) \citep{gage2016, vaucher2016}, and of diabetes status on endometrial cancer \citep{nead2015}. In this short manuscript, we discuss issues relating to causal estimation in the Mendelian randomization setting with a binary exposure. For ease of presentation, we initially assume a single genetic variant is used as an instrumental variable; this restriction is later relaxed. The intended primary audience of this manuscript is Mendelian randomization practitioners, and the aim of the manuscript is to communicate the practical consequences of these methodological issues for Mendelian randomization investigations. As such, we focus on methods and approaches that are likely to be the most relevant to scenarios that are common in applied practice.
In particular, we focus on methods that can be performed using summarized data, which comprise genetic associations with the exposure estimated using regression methods, and which are routinely reported by large consortia \citep{burgess2013genepi}. Although our focus is on practitioners, we also provide technical asides and references for methodologically-focused readers.
\subsection*{Random assignment in a trial as a paradigm instrumental variable}
Consider a double-blind, placebo-controlled randomized trial with two time-fixed treatment arms (referred to as treatment and control) and complete follow-up data. An intention-to-treat effect estimate is typically reported: the causal effect of allocation to treatment as opposed to control. When there is substantial non-compliance, investigators may be interested in testing whether the treatment itself has an effect on the outcome (as opposed to simply allocation to treatment), or in estimating the causal effect of the treatment itself. Testing for a treatment or `per-protocol' effect can be achieved through the intention-to-treat analysis: unless random assignment somehow affects the outcome directly (e.g., because blinding is broken or a placebo effect is present), an association between treatment allocation and the outcome will only arise if the treatment has a causal effect on the outcome \citep{didelez2007}. Estimating the average treatment effect in the full study population further requires additional homogeneity conditions \citep{hernan2006, aronow2013, wang2018}; sufficient conditions are linearity of the instrumental variable--exposure, instrumental variable--outcome and exposure--outcome relationships with no effect heterogeneity. Without additional conditions, only bounds for the average treatment effect are obtainable \citep{balke1997}. These bounds can also be used to assess the validity of a genetic variant as an instrumental variable \citep{ramsahai2011, swanson2018}, although this approach is rarely informative in practice, and alternative ways of assessing instrument validity (such as understanding the biological role of the genetic variant, and assessing its associations with known confounders) are more likely to be fruitful \citep{burgess2014twosample}. Alternatively, investigators often estimate an effect in a subgroup of the population under a weaker assumption. Specifically, we consider the subgroup of the population consisting of `compliers' -- individuals who would receive the treatment if allocated to treatment, and would not receive treatment if allocated to not receive treatment. The effect in this subgroup can be estimated under the assumption that there are no defiers -- individuals who would only take treatment if randomly allocated not to do so, and who would not take treatment if allocated to take it \citep{angrist1996}. This is known as the monotonicity assumption -- allocation to taking the treatment can only increase the value of the exposure, not decrease it. This effect, which can be estimated using standard instrumental variable techniques, is known as the local average treatment effect (LATE) or the complier average causal effect (CACE) in the literature \citep{yau2001}. Of note, we cannot identify individual compliers as we cannot see individuals' treatment levels under both levels of treatment allocation.
However, it is possible to identify the proportion of the study population who are compliers, and to describe relative characteristics of the compliers compared to non-compliers using measured baseline covariates \citep{angrist2009}. In well-designed randomized trials, compliers are likely to be common, and the assumption that there are no defiers is often considered reasonable.
\subsection*{Who are the genetic `compliers'?}
Monotonicity in the context of Mendelian randomization means that increasing the number of variant alleles for an individual can only increase the exposure from absent to present (or leave it constant), and can never decrease it. The analogues of `compliers' in Mendelian randomization are individuals who would have the exposure present if they possess an exposure-increasing genetic variant, but not otherwise. As genetic variants tend to have small effects on phenotypic variables, such compliers are likely to be uncommon. This means that the group of genetic compliers is not likely to be representative of the general population. Also, the group of compliers may well differ greatly between different study populations. As an example, folate deficiency has been hypothesized as a causal risk factor for coronary heart disease \citep{lewis2005}. The complier population (and therefore the instrumental variable estimate) would differ greatly in a population where large numbers of people are borderline folate deficient compared with a population where relatively few people are folate deficient. (A similar problem would occur in randomized trials conducted in different populations.) The analogous assumption in Mendelian randomization to the `no defiers' assumption is that increases in the genotype variable would lead to increases (or no change) in the exposure for all individuals in the population (or equivalently, decreases or no change in the exposure for all individuals) \citep{hernan2006}. With a genetic variant that takes multiple values, the equivalent assumption is that the exposure is a non-decreasing (or non-increasing) function of the genetic variant. In this case (and in the case with multiple genetic variants), the instrumental variable estimate is a weighted average of LATEs \citep{angrist2000}. In the context of randomized trials, even if individual compliers cannot be identified, the subgroup of compliers may be of interest either because it represents a large or representative subgroup of the population, or because patterns of non-compliance in the trial are anticipated to be repeated outside the trial setting. However, in Mendelian randomization, the subgroup of genetic `compliers' is unlikely to represent those individuals in the population who would respond to a treatment that influences the target exposure, particularly if the treatment has a greater effect on the risk factor than the genetic variant. Hence, under the `no defiers' assumption, the interpretation of a causal estimate in a Mendelian randomization investigation in which the instrumental variable assumptions are satisfied is that of an average causal effect in those individuals whose exposure status would vary depending on whether they have a particular genetic variant or not. We additionally note that the subgroup of genetic compliers would differ between genetic variants. This provides yet another reason why causal estimates based on different genetic variants may vary even if all the genetic variants are valid instruments.
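To make this concrete, under monotonicity the proportion of compliers is identified by the difference in exposure prevalence across instrument levels. The simulation below (illustrative numbers only; Python assumed) shows how a variant with a modest effect on an underlying continuous risk factor yields only a small complier subgroup for the dichotomized exposure.
\begin{verbatim}
import numpy as np

def complier_proportion(x, z):
    # under monotonicity (no defiers), for binary instrument z and binary
    # exposure x: P(complier) = P(X = 1 | Z = 1) - P(X = 1 | Z = 0)
    x, z = np.asarray(x, float), np.asarray(z, float)
    return x[z == 1].mean() - x[z == 0].mean()

# toy data: a variant with a small effect on an underlying continuous risk
# factor, which is then dichotomized (all numbers illustrative only)
rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.3, n)              # carrier of the effect allele
u = rng.normal(size=n)                   # underlying continuous risk factor
x = (u + 0.1 * z > 1.5).astype(int)      # dichotomized binary exposure
print(complier_proportion(x, z))         # around 1-2%: compliers are rare
\end{verbatim}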
\subsection*{What is the true risk factor underlying the exposure?} The above interpretation assumes that the instrumental variable assumptions are satisfied. These assumptions imply that the only influence of the instrumental variable on the outcome is via the exposure -- if the instrumental variable changes, but the exposure stays the same, then the outcome should not change. However, for most binary exposures used in Mendelian randomization investigations, there is an underlying continuous risk factor for which the binary variable is a dichotomization. As a simple example, the binary exposure hypertension is a dichotomization of the continuous risk factor blood pressure. In more complex examples, an underlying continuous latent variable can be hypothesized even if it cannot be measured, such as a continuous spectrum of sub-clinical mental health problems for the binary exposure schizophrenia. If the binary exposure is a dichotomization of a continuous risk factor, then the instrumental variable assumptions are likely to be violated. For the example of hypertension, if elevated blood pressure is a causal risk factor for a particular outcome then genetic variants that are associated with blood pressure will be associated with the outcome even in a population where no-one suffers from clinically-defined hypertension. Hence, changes in the genetic variants will lead to increases in blood pressure and consequently to changes in the outcome even if the exposure status for hypertension remains fixed for all individuals in the population. An instrumental variable for a continuous exposure can only be an instrumental variable for the dichotomization of the exposure if the exposure--outcome causal relationship is a strict stepwise threshold at the point of dichotomization (in which case the dichotomized exposure is a representation of the true risk factor). However, provided that the instrumental variable assumptions are satisfied for the continuous risk factor, testing for an association with the outcome is still a valid test of the causal null hypothesis for the binary exposure. There are two main consequences of this. First, such a Mendelian randomization study should be conceptualized as an investigation into the (possibly latent) underlying continuous risk factor, rather than the binary dichotomization of this variable. At minimum, the instrumental variable assumptions should be assessed with the continuous risk factor in mind. Second, a causal estimate from a Mendelian randomization investigation with a dichotomized binary exposure does not have a clear interpretation due to the binary exposure variable not capturing the true causal relationship. There are several reasons why a Mendelian randomization estimate may differ from the effect of an intervention even for a continuous exposure (for example, genetic variants have long-term influences acting from the beginning of life, whereas interventions are more short-term and are applied to mature individuals) \citep{burgess2012bmj, swanson2017}. With a binary exposure, these concerns are even greater. \subsection*{Causal estimation with a binary exposure} Despite this, suppose that we want to calculate a causal effect with a binary exposure, under the assumption that the exposure has a stepwise effect on the outcome. 
This may be because we truly believe in the homogeneity assumptions, or we truly believe in the monotonicity assumption and regard the genetic compliers as a worthwhile subgroup of the population in which to estimate an average causal effect. Or, more likely, because a causal effect estimate is required for pragmatic reasons, such as to perform a power calculation or to inform policymakers of the expected impact of intervention on the exposure. Other reasons for estimating a causal parameter include efficient testing of the causal null hypothesis with multiple instrumental variables (under the homogeneity assumptions, the two-stage least squares estimate, or equivalently the inverse-variance weighted estimate, is the optimally efficient combination of the instruments for testing for a causal effect \citep{wooldridge2009ch15}) and use of a robust method with multiple genetic variants (such as the MR-Egger method \citep{bowden2015} or weighted median method \citep{bowden2015median} -- these methods make weaker assumptions, not requiring all genetic variants to satisfy the instrumental variable assumptions). If the binary exposure is a dichotomization of a continuous risk factor, then power calculations are likely to be conservative, as the effect of the genetic variant on the outcome will not be fully captured by the binary exposure. Two options for causal estimation are: i) estimating the effect on the outcome per (say) 1\% absolute increase in the probability of the exposure; ii) estimating the effect on the outcome per (say) doubling of the probability (or odds) of the exposure. We concentrate on estimation methods based on regression (usually linear or logistic) for several reasons. First, often researchers perform their analyses using summarized association estimates -- beta-coefficients from regression analyses of the exposure and outcome on a genetic variant -- and do not have access to individual-level data. These beta-coefficients represent the average change in the trait (exposure or outcome) per additional copy of the effect allele. Secondly, these approaches result in causal estimates with a simple and relevant interpretation, and which can be compared to estimates in the literature from other analytical approaches. Thirdly, often there are technical restrictions on the data analysis -- for example, it may be necessary to fit a mixed model to account for relatedness between individuals, to adjust for several principal components of ancestry, or to provide a coordinated approach to analysis across different datasets. These restrictions are easiest to accommodate in a regression framework. These estimation procedures require strict linearity and homogeneity assumptions; full details are available elsewhere \citep{hernan2006, didelez2007}. The parametric assumptions for these two options are mutually incompatible. Additionally, regression coefficients will generally be variation dependent on the baseline risk, a nuisance parameter \citep{richardson2017}. If individual-level data are available, then alternative approaches to estimation can be taken \citep{aronow2013, wang2018}. If the genetic associations with the exposure are estimated using linear regression, then they represent absolute changes in the prevalence of the exposure. This enables estimation of the causal effect of an intervention in the prevalence of the exposure on an absolute scale. 
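Both options can be computed from summarized beta-coefficients alone. The following is a minimal sketch (the coefficient values are hypothetical and chosen for illustration only; the rescaling of the log odds estimate by the factor $\log_e 2$ to obtain a per-doubling effect is explained in the next paragraph):
\begin{verbatim}
import math

# Hypothetical per-allele summarized association estimates:
beta_exp_linear = 0.08   # change in exposure prevalence (linear regression)
beta_exp_logit  = 0.15   # change in log odds of exposure (logistic regression)
beta_out        = 0.02   # change in outcome

# Option (i): effect per 1% absolute increase in exposure prevalence.
per_unit      = beta_out / beta_exp_linear   # per 0% -> 100% increase
per_1_percent = per_unit * 0.01

# Option (ii): effect per doubling of the odds of the exposure.
per_e_fold   = beta_out / beta_exp_logit     # per 2.72-fold increase in odds
per_doubling = per_e_fold * math.log(2)      # rescale by log_e 2 = 0.693

print(per_1_percent, per_doubling)
\end{verbatim}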
It is sensible to scale the causal effect to consider a modest increase in the prevalence of the exposure (say a 1\% or a 10\% increase), as a unit increase would represent the average causal effect of a population intervention from 0\% prevalence of the exposure to 100\% prevalence -- an unrealistic intervention in practice. However, absolute associations with a binary variable do not make sense in case-control settings (where cases are those with the exposure), as they depend on the ratio of cases to controls chosen by the investigator. If the genetic associations with the exposure are estimated using logistic regression, then they represent log odds ratios. The causal estimate would then represent the change in the outcome per unit change in the exposure on the log odds scale. A unit increase in the log odds of a variable corresponds to a 2.72 ($= \exp 1$)-fold multiplicative increase in the odds of the variable. If the exposure is rare then the odds of the exposure is approximately equal to the probability of the exposure. The causal estimate represents the average change in the outcome per 2.72-fold increase in the prevalence of the exposure (for example, an increase in the exposure prevalence from 1\% to 2.72\%). It may be more interpretable to think instead about the average change in the outcome per doubling (2-fold increase) in the prevalence of the exposure. This can be obtained by multiplying the causal estimate by 0.693 ($= \log_e 2$). \subsection*{Discussion} In this short manuscript, we have discussed statistical issues for Mendelian randomization with a binary exposure. A summary of the arguments made in the paper is provided as Figure~\ref{summ}. Under the more plausible assumption of monotonicity, the estimate from a Mendelian randomization study with a binary exposure represents the average causal effect in `compliers'; the subgroup of individuals for whom the presence or absence of the genetic variant used as an instrument determines whether individuals have the exposure present or not. Under the less plausible assumption of homogeneity, the estimate of the causal effect only makes sense if the effect of the exposure on the outcome has a strict stepwise form -- only changes in whether the binary exposure is present or absent will affect the outcome. If the binary exposure is a dichotomization of a continuous variable, then the causal estimate does not have a clear interpretation. In such a case, causal inferences will only be valid provided that the instrumental variable assumptions are satisfied for the continuous risk factor -- in particular, if the effect of the genetic variant on the outcome is completely mediated via the continuous risk factor. However, as the effect of the genetic variant on the outcome is not completely mediated via the binary exposure, power calculations are likely to be conservative. In summary, applying Mendelian randomization with a binary exposure requires careful consideration. When the binary exposure is a dichotomization of an underlying continuous risk factor, causal assumptions should be assessed and causal inferences should be conceptualized with respect to the underlying continuous risk factor. Tests for causal effects may be achieved readily without using the exposure information, but estimation procedures for a binary exposure require strong assumptions that are unlikely to be biologically plausible in common Mendelian randomization settings. 
\vspace{6mm} \noindent \textbf{Funding:} Stephen Burgess is supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (Grant Number 204623/Z/16/Z). \\ \noindent \textbf{Conflict of Interest:} The authors declare that they have no conflict of interest. \bibliographystyle{DeGruyter}
{ "timestamp": "2018-04-17T02:14:19", "yymm": "1804", "arxiv_id": "1804.05545", "language": "en", "url": "https://arxiv.org/abs/1804.05545" }
\section{Introduction} Graph eigenvalues play a powerful role in the study of random walks. In particular, eigenvalues are a primary tool for bounding a number of key random walk parameters, such as mixing time. Consequently, bounds on graph eigenvalues are not only of interest in themselves, but may also have immediate implications for the behavior of the random walk (for a survey, see \cite{lovasz1993random}). In the case of the {\it relaxation time} of a discrete reversible Markov chain, eigenvalues themselves define the quantity of interest. In this paper, we examine an extremal problem concerning the normalized Laplacian spectral gap, the reciprocal of which defines the relaxation time of a random walk. The normalized Laplacian matrix $\mathcal{L}$ of a graph $G$ is \begin{align*} \mathcal{L}&= I-T^{-1/2}AT^{-1/2}, \end{align*} where $T$ denotes the diagonal degree matrix with $(u,u)$ entry equal to $d(u)$ and $A$ denotes the adjacency matrix. Throughout, we assume $G$ is simple, meaning $G$ has no loops or multiple edges. We write the eigenvalues of $\mathcal{L}$ in increasing order, where \[ 0=\lambda_0\leq\lambda_1\leq\dots\leq\lambda_{n-1} \leq 2. \] It is well-known (cf.\ \cite{chung1997spectral}) that the second eigenvalue, or spectral gap, of $\mathcal{L}$ is nonzero if and only if $G$ is connected, and can be characterized as \[ \lambda_1 = \inf_{\substack{f \\ \sum_{u}f(u)d(u)=0}}\frac{\displaystyle\sum_{u \sim v} (f(u)-f(v))^2}{\displaystyle\sum_{v}f(v)^2 d(v)}, \] with corresponding eigenvector $g=T^{1/2}f$. We call the nontrivial function $f$ achieving the above infimum the {\it harmonic eigenfunction} of $\mathcal{L}$. Landau and Odlyzko proved the following lower bound on $\lambda_1$. \begin{theorem}[Landau, Odlyzko \cite{Land81}] \label{thm:land} For a connected graph on $n$ vertices with maximum degree $\Delta$ and diameter $D$, we have \[ \lambda_1 \geq \frac{1}{n \Delta (D+1)}. \] \end{theorem} In \cite{chung1997spectral}, Chung gives an improved lower bound on $\lambda_1$ in terms of the graph's diameter and volume, where $\mathrm{vol}(G)= \sum_{u \in V(G)} d(u)$. \begin{theorem}[Chung \cite{chung1997spectral}] \label{fansBound} For a connected graph $G$ with diameter $D$, we have \[ \lambda_1 \geq \displaystyle \frac{1}{D \cdot \mathrm{vol} (G)}. \] \end{theorem} For highly symmetric graphs, stronger lower bounds may be obtained. For example, Chung showed that for a vertex-transitive graph with degree $k$ and diameter $D$, we have \[ \lambda_1 \geq \frac{1}{kD^2}. \] In this paper, we have two main results. First, we improve the constant in the statement of Theorem \ref{fansBound}. \begin{theorem}\label{fanImprovement} For a connected graph $G$ with diameter $D$, we have \[ \lambda_1 \geq \frac{4}{D \cdot \mathrm{vol}({G})}. \] \end{theorem} The above lower bound is in fact asymptotically best possible (see the further discussion in Remark \ref{rem:sharp}). Second, we examine the minimal value of $\lambda_1$ over all connected graphs on $n$ vertices. \begin{theorem} \label{min54} The minimum normalized Laplacian spectral gap $\alpha(n)$, defined by \[ \alpha(n)=\min\{\lambda_1(G): G \mbox{ is a simple, connected graph on $n$ vertices} \} \] satisfies \[ \alpha(n) \sim \frac{54}{n^3}. \] \end{theorem} As an immediate consequence of Theorem \ref{min54}, we confirm a conjecture of Aldous and Fill on {\it relaxation time}.
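As a brief computational aside (a sketch in Python using numpy and networkx, included only for illustration; the choice of test graph is arbitrary), $\lambda_1$ can be computed directly from the definition of $\mathcal{L}$ and compared against the lower bound of Theorem \ref{fanImprovement}:
\begin{verbatim}
import numpy as np
import networkx as nx

def normalized_laplacian_gap(G):
    # lambda_1 of L = I - T^{-1/2} A T^{-1/2}; lambda_0 = 0 when G is connected.
    A = nx.to_numpy_array(G)
    d = A.sum(axis=1)
    T_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - T_inv_sqrt @ A @ T_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))[1]

G = nx.path_graph(30)                  # any connected graph will do
D = nx.diameter(G)
vol = sum(deg for _, deg in G.degree())
print(normalized_laplacian_gap(G))     # the spectral gap lambda_1
print(4 / (D * vol))                   # lower bound 4 / (D * vol(G))
\end{verbatim}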
The {\it relaxation time} $\tau$ of a random walk on a (connected) graph $G$ with probability transition matrix $P=T^{-1}A$ is defined as \[ \tau(G) = \frac{1}{1-\rho_{n-1}}, \] where $\rho_1\leq \dots \leq \rho_{n-1} < \rho_n=1$ denote the eigenvalues of $P$. A central problem in the study of random walks is to determine the {\it mixing time}, the required number of steps in the random walk guaranteeing closeness to the stationary distribution. As seen throughout the literature \cite{aldous2002reversible, chung1997spectral, levin2017markov}, the eigenvalue $\rho_{n-1}$, and hence the relaxation time, is the primary term controlling the mixing time. Therefore, relaxation time is directly associated with the rate of convergence for a random walk. At least as early as 1994, Aldous and Fill \cite[Problem 6.13, p.~216]{aldous2002reversible} conjectured the following concerning relaxation time: \begin{conjecture}[Aldous and Fill, c.~1994] \label{conj:af} The maximum relaxation time $\beta(n)$, defined by \[ \beta(n)=\max\{\tau(G): G \mbox{ is a simple, connected graph on $n$ vertices} \}, \] satisfies \[ \beta(n) \sim \frac{n^3}{54}. \] \end{conjecture} In \cite{aldous2002reversible}, Aldous and Fill showed that $\beta(n)$ is bounded above by $(1+o(1))\frac{2n^3}{27}$. In general, Conjecture \ref{conj:af} fits into a body of work addressing extremal problems for random walk parameters. For example, Brightwell and Winkler \cite{brightwell1990maximum} found the maximum hitting time between two vertices over all $n$-vertex graphs and determined that the extremal graphs are lollipop graphs. Relatedly, Mazo considered the maximum and minimum mean hitting time \cite{mazo1982some}. Furthermore, Feige obtained sharp upper bounds on cover time \cite{feige1995tight, feige1996collecting}, and Coppersmith, Tetali, and Winkler found the maximum commute time \cite{coppersmith1993collisions}. It is easy to see that $T^{-1/2}\mathcal{L} T^{1/2}=I-T^{-1}A$, and hence $\lambda$ is an eigenvalue of $\mathcal{L}$ if and only if $1-\lambda$ is an eigenvalue of $T^{-1}A$. Consequently, the relaxation time of a graph may equivalently be written as $\tau=1/{\lambda_1}$ and so Theorem \ref{min54} confirms Conjecture \ref{conj:af}. \begin{corollary} The maximum relaxation time $\beta(n)$ for the random walk on a simple, connected graph on $n$ vertices satisfies $\beta(n) \sim n^3/54$. The extremal value $\beta(n)$ is achieved asymptotically by a double kite graph, $DK(\frac{n}{3}, \frac{n}{3})$. \end{corollary} The double kite graph can be defined as follows: \begin{definition} A {\it double kite graph}, denoted $DK(r,s)$, consists of two copies of the $r$-vertex complete graph $K_r$ and a path connecting them, $p_0,p_1,\dots,p_s,p_{s+1}$, where $p_0$ is a selected vertex from one copy of $K_r$ and $p_{s+1}$ is a selected vertex from the other copy of $K_r$. See Figure $\ref{kitePic}$ for an illustration.
\end{definition} \begin{figure}[h] \[\begin{tikzpicture}[scale=0.9] \node[vertex] (c1) at (45:1){}; \node[vertex] (c2) at (90:1){}; \node[vertex] (c3) at (135:1){}; \node[vertex] (c4) at (180:1){}; \node[vertex](c5) at (225:1){}; \node[vertex](c6) at (270:1){}; \node[vertex](c7) at (315:1){}; \node[vertex](c8) at (360:1){}; \node[vertex] (d2) at (3,0){}; \node[vertex] (d3) at (4,0){}; \node[vertex] (d1) at (2,0){}; \node[vertex] (d4) at (5,0){}; \node[vertex] (d5) at (6,0){}; \node[vertex] (d6) at (7,0){}; \node[vertex] (e1) at (8,0){}; \node[vertex] (e2) at (9.7071, 0.7071){}; \node[vertex] (e3) at (9,1){}; \node[vertex] (e4) at (8.2929,0.7071){}; \node[vertex] (e5) at (8.2929,-0.7071){}; \node[vertex] (e6) at (9,-1){}; \node[vertex] (e7) at (9.7071,-0.7071){}; \node[vertex] (e8) at (10,0){}; \path (c1) edge (c2) (c1) edge (c3) (c1) edge (c4) (c1) edge (c5) (c1) edge (c6) (c1) edge (c7) (c1) edge (c8) (c2) edge (c1) (c2) edge (c3) (c2) edge (c4) (c2) edge (c5) (c2) edge (c6) (c2) edge (c7) (c2) edge (c8) (c3) edge (c2) (c3) edge (c1) (c3) edge (c4) (c3) edge (c5) (c3) edge (c6) (c3) edge (c7) (c3) edge (c8) (c4) edge (c2) (c4) edge (c3) (c4) edge (c1) (c4) edge (c5) (c4) edge (c6) (c4) edge (c7) (c4) edge (c8) (c5) edge (c2) (c5) edge (c3) (c5) edge (c4) (c5) edge (c1) (c5) edge (c6) (c5) edge (c7) (c5) edge (c8) (c6) edge (c2) (c6) edge (c3) (c6) edge (c4) (c6) edge (c5) (c6) edge (c1) (c6) edge (c7) (c6) edge (c8) (c7) edge (c2) (c7) edge (c3) (c7) edge (c4) (c7) edge (c5) (c7) edge (c6) (c7) edge (c1) (c7) edge (c8) (c8) edge (c2) (c8) edge (c3) (c8) edge (c4) (c8) edge (c5) (c8) edge (c6) (c8) edge (c1) (c8) edge (c7) (c8) edge (d1) (d1) edge (d2) (d2) edge (d3) (d3) edge (d4) (d4) edge (d5) (d5) edge (d6) (e1) edge (e2) (e1) edge (e3) (e1) edge (e4) (e1) edge (e5) (e1) edge (e6) (e1) edge (e7) (e1) edge (e8) (e2) edge (e1) (e2) edge (e3) (e2) edge (e4) (e2) edge (e5) (e2) edge (e6) (e2) edge (e7) (e2) edge (e8) (e3) edge (e2) (e3) edge (e1) (e3) edge (e4) (e3) edge (e5) (e3) edge (e6) (e3) edge (e7) (e3) edge (e8) (e4) edge (e2) (e4) edge (e3) (e4) edge (e1) (e4) edge (e5) (e4) edge (e6) (e4) edge (e7) (e4) edge (e8) (e5) edge (e2) (e5) edge (e3) (e5) edge (e4) (e5) edge (e1) (e5) edge (e6) (e5) edge (e7) (e5) edge (e8) (e6) edge (e2) (e6) edge (e3) (e6) edge (e4) (e6) edge (e5) (e6) edge (e1) (e6) edge (e7) (e6) edge (e8) (e7) edge (e2) (e7) edge (e3) (e7) edge (e4) (e7) edge (e5) (e7) edge (e6) (e7) edge (e1) (e7) edge (e8) (e8) edge (e2) (e8) edge (e3) (e8) edge (e4) (e8) edge (e5) (e8) edge (e6) (e8) edge (e1) (e8) edge (e7) (d6) edge (e1) ; \end{tikzpicture} \] \caption{The double kite graph $DK(8,6)$.} \label{kitePic} \end{figure} \begin{remark} In \cite{aldous2002reversible}, Aldous and Fill call $DK(r,s)$ the {\it barbell graph}. The specific cases of $DK(\frac{n}{2},0)$ as well as $DK(\frac{n}{3},\frac{n}{3})$ have also both been commonly referred to as the barbell graph (e.g., see \cite{ghosh2008minimizing} and \cite{wilf1989editor} respectively). \end{remark} \begin{remark} Landau and Odlyzko also consider the construction $DK(\frac{n}{3},\frac{n}{3})$ to show that the $n^3$ order of magnitude implied by their bound (Theorem \ref{thm:land}) is best possible. Applying their bound to this construction yields $\lambda_1 \geq (1+o(1))\frac{9}{n^3}$, while we show, $\lambda_1 \sim \frac{54}{n^3}$. 
\end{remark} \begin{remark} \label{rem:sharp} We note that the bound in Theorem \ref{fanImprovement} is asymptotically tight for $DK(\frac{n}{3},\frac{n}{3})$, yielding $\lambda_1 \geq (1+o(1))\frac{54}{n^3}$. In general, however, the lower bound $4/(D\cdot \mathrm{vol}(G))$ may be off by orders of magnitude. For example, applying the bound to the $d$-dimensional hypercube graph on $n=2^d$ vertices yields $\lambda_1 \geq \tfrac{4}{n \cdot \log_2^2(n)}$ yet $\lambda_1=\tfrac{2}{\log_2(n)}$. On the other hand, in Section \ref{sec:fanImprov} we show Theorem \ref{fanImprovement} is sharp in a strong sense: for a wide range of $D$ and $\mathrm{vol}(G)$ there is an infinite sequence of graphs for which it is tight asymptotically, including the multiplicative constant. \end{remark} In addition to its interpretation in the random walk setting, Theorem \ref{min54} is also part of the literature surrounding extremal spectral graph theory, where one optimizes a spectral invariant over a fixed family of graphs. Such problems were first formalized by Brualdi and Solheid \cite{brualdi1986spectral} and since then have attracted attention from many researchers. Rather than give a broad survey of such work, we briefly mention a few results directly relevant to ours. For the adjacency matrix, Stanic \cite{stanic2013graphs} proved lower bounds on the spectral gap and conjectured that double kite graphs minimize the adjacency spectral gap. For the combinatorial Laplacian, Fallat and Kirkland \cite{fallat1998extremizing} determined the graphs minimizing the algebraic connectivity over all $n$-vertex trees with given diameter. Brand, Guiduli, and Imrich \cite{brand2007characterization} minimized $\lambda_1$ of the Laplacian over all $3$-regular graphs, and characterized the extremal graphs. For the general case, \cite{biyikouglu2012graphs} showed that the $n$-vertex graphs minimizing algebraic connectivity must consist of a chain of cliques. The remainder of the paper is structured as follows: in Section \ref{sec:fanImprov}, we prove a lemma from which Theorem \ref{fanImprovement} follows as a corollary and show Theorem \ref{fanImprovement} is sharp for a wide range of values of $D$ and $\mathrm{vol}(G)$. In Section \ref{sec:mainThm}, we apply this lemma, among others, to also prove Theorem \ref{min54}. In Section \ref{sec:conc}, we conclude by mentioning related open problems. \section{Proof of Theorem \ref{fanImprovement}} \label{sec:fanImprov} In this section, we establish the lemma from which Theorem \ref{fanImprovement} will follow as a corollary. To establish this lemma, we first require the solution to a related optimization problem. \begin{proposition}\label{optimization} Fix $(d_1,\ldots, d_n) \in \mathbb{N}^n$. Let $(f_1,\ldots, f_n)$ be a sequence minimizing the quantity \[ (f_n-f_1)^2 \] subject to the constraints \begin{align} \sum_{i=1}^n f_id_i = 0, \label{const:1}\\ \sum_{i=1}^n f_i^2d_i = 1\label{const:2}, \end{align} and \[ f_1 \leq f_k \leq f_n \] for all $k$. Then for all $k$ either $f_1 = f_k$ or $f_n = f_k$. \end{proposition} \begin{proof} First we consider the optimization problem without the constraint that $f_1 \leq f_k \leq f_n$. In this case, consider the Lagrangian \[ (f_n-f_1)^2-\alpha\left(\sum_{i=1}^n f_id_i \right) - \beta \left(\sum_{i=1}^n f_i^2 d_i-1\right).
\] We show that either we are on the boundary where there exists a $k$ such that $f_1 = f_k$ or $f_n = f_k$, or the critical point of this Lagrangian maximizes the objective function $(f_n - f_1)^2$, and so the minimum must occur on the boundary. A critical point of the Lagrangian occurs when \begin{align} 2\left(f_n-f_1\right)-\alpha d_n - 2\beta f_n d_n &= 0 \label{lag:1} \\ -2(f_n-f_1)-\alpha d_1 - 2\beta f_1 d_1 &= 0 \label{lag:2}\\ \alpha d_i + 2 \beta f_i d_i &= 0, \label{lag:3} \end{align} for $i=2,\dots, n-1$. If $\beta=0$, then from Eq.~$(\ref{lag:3})$, $\alpha=0$, in which case subtracting Eq.~$(\ref{lag:1})$ from Eq.~$(\ref{lag:2})$ yields $f_1=f_n$. But from the definitions of $f$ and $d$ and Eq.~$(\ref{const:1})$, it is clear $f_n>0$ and $f_1 <0$. So $\beta \not = 0$ and $f_i=-\frac{\alpha}{2 \beta}$ for $i=2,\dots,n-1$. Applying this fact and rewriting Eqs.~$(\ref{const:1})$ and $(\ref{const:2})$ yields \begin{align} f_1d_1+f_nd_n &= \frac{\alpha}{2\beta}\sum_{i=2}^{n-1} d_i, \label{eq:firstlast} \\ f_1^2 d_1 + f_n^2 d_n &= 1- \frac{\alpha^2}{4 \beta^2} \sum_{i=2}^{n-1} d_i. \label{eq:square} \end{align} Adding Eqs.~$(\ref{lag:1})$ and $(\ref{lag:2})$, then applying Eq.~$(\ref{eq:firstlast})$ yields \[ \alpha \sum_{i=1}^n d_i =0, \] from which we can see that $\alpha=0$. Now, Eqs.~$(\ref{lag:3}),(\ref{eq:firstlast}),(\ref{eq:square})$ tell us $f_i =0$ for $i=2,\dots,n-1$, and \begin{align*} f_1 d_1 + f_n d_n &= 0, \\ f_1^2 d_1 + f_n^2 d_n &=1. \end{align*} Rewriting the former equation above, we get $f_1 = -c \cdot d_n$ and $f_n=c \cdot d_1$ for $ c\coloneqq {f_n}/{d_1}$. Plugging this into the latter, we find \[ c^2=\frac{1}{d_1d_n(d_1+d_n)}. \] Finally, we have \[ (f_n-f_1)^2=c^2 (d_1+d_n)^2 = \frac{1}{d_1}+\frac{1}{d_n}. \] We claim that this is the maximum value of $(f_n-f_1)^2$ subject to the constraints. To see this, note that letting \[ f_1 = f_2 = -\sqrt{\frac{d_n}{(d_1+d_2)(d_1+d_2+d_n)}}, \quad \quad f_n = \sqrt{\frac{d_1+d_2}{d_n(d_1+d_2+d_n)}}, \] satisfies all of the constraints and gives \[ (f_n-f_1)^2 = \frac{1}{d_1+d_2} + \frac{1}{d_n}, \] which is smaller than $\frac{1}{d_1} + \frac{1}{d_n}$ since $d_2\geq 1$. Therefore, the only critical point of the Lagrangian interior to the boundary is a maximum, and thus the minimum must occur when there is a $k$ such that $f_1 = f_k$ or $f_n = f_k$. In this case, we may substitute for $f_k$, and we are left with a similar optimization problem in $n-1$ variables, where we have eliminated the variable $f_k$ and replaced $d_1$ with $d_1+d_k$ if $f_1=f_k$ or $d_n$ by $d_n + d_k$ if $f_n = f_k$. We may use this argument repeatedly to show that the minimum must occur on the boundary until there are only $2$ variables remaining. At this point, the objective function is constant subject to the constraints, and we are done. \end{proof} We now prove the lemma from which Theorem \ref{fanImprovement} will follow. Let $G$ be a connected graph with normalized Laplacian eigenvalues $\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$, and let $f$ be a harmonic eigenvector for $\lambda_1$. Once $f$ is fixed, let $u$ and $v$ be vertices corresponding to minimum and maximum entries of $f$ respectively. That is, for all $z\in V(G)$ we have $f(u) \leq f(z) \leq f(v)$. Further, let \begin{align*} \mathrm{vol}_P &= \sum_{z: f(z) \geq 0} d(z),\\ \mathrm{vol}_N &= \sum_{z: f(z) < 0} d(z).
\end{align*} \begin{lemma}\label{lowerBoundLemma} Let $G$ be a connected graph with $f$ a harmonic eigenvector for $\lambda_1$ of its normalized Laplacian. Let $u$ and $v$ be vertices which minimize and maximize $f$ respectively, and let $\mathrm{vol}_P$ and $\mathrm{vol}_N$ be defined as above. Then \[ \lambda_1 \geq \frac{2}{\mathrm{dist}(u,v) \sqrt{\mathrm{vol}_P \cdot \mathrm{vol}_N}}. \] \end{lemma} \begin{proof} Let $f$ be a harmonic eigenvector for $\lambda_1$, and let $u$ and $v$ be vertices which minimize and maximize $f$ respectively, so $f(u) \leq f(z) \leq f(v)$ for all $z\in V(G)$. Let $S$ be a shortest path from $u$ to $v$. Then, \begin{align*} \lambda_1 &= \frac{\sum_{x\sim y} (f(x)-f(y))^2}{\sum_x (f(x))^2d(x)}\\ &\geq \frac{\sum_{xy\in S} (f(x)-f(y))^2}{\sum_x (f(x))^2d(x)} \\ & \geq \frac{\frac{1}{|S|} (f(u) - f(v))^2}{\sum_x (f(x))^2d(x)}, \end{align*} where the last inequality is by Cauchy-Schwarz. Now, since $f$ is a harmonic eigenvector, we have \[ \sum_x f(x)d(x) = 0. \] We may without loss of generality scale $f$ so that \[ \sum_x (f(x))^2d(x) = 1. \] By Proposition \ref{optimization}, we have that the quantity $(f(u)-f(v))^2$ is bounded below by $(c_2-c_1)^2$ where $c_1$ and $c_2$ satisfy \[ \sum_{x\in N} c_1d(x) + \sum_{x\in P} c_2d(x) = 0, \] and \[ \sum_{x\in N} c_1^2 d(x) + \sum_{x\in P} c_2^2 d(x) = 1. \] If $c_1$ and $c_2$ satisfy this system, then we have \[ c_1 = - \sqrt{\frac{\mathrm{vol}_P}{\mathrm{vol}_N^2 + \mathrm{vol}_P\mathrm{vol}_N}}, \quad \quad c_2 = \sqrt{\frac{\mathrm{vol}_N}{\mathrm{vol}_P^2 + \mathrm{vol}_P\mathrm{vol}_N}}. \] Thus we have \[ \lambda_1 \geq \frac{1}{\mathrm{dist}(u,v)} \left(\sqrt{\frac{\mathrm{vol}_N}{\mathrm{vol}_P^2 + \mathrm{vol}_P \mathrm{vol}_N}} + \sqrt{\frac{\mathrm{vol}_P}{\mathrm{vol}_N^2 + \mathrm{vol}_P \mathrm{vol}_N}}\right)^2. \] Using calculus, one can see that \[ \left(\sqrt{\frac{\mathrm{vol}_N}{\mathrm{vol}_P^2 + \mathrm{vol}_P\mathrm{vol}_N}} + \sqrt{\frac{\mathrm{vol}_P}{\mathrm{vol}_N^2 + \mathrm{vol}_P\mathrm{vol}_N}}\right)^2 \geq \frac{2}{\sqrt{\mathrm{vol}_P\mathrm{vol}_N}}. \] \end{proof} As a corollary of this, we can now prove Theorem \ref{fanImprovement}. \begin{proof}[Proof of Theorem \ref{fanImprovement}] Note that $\mathrm{vol}(G) = \mathrm{vol}_P + \mathrm{vol}_N$, and so the AM-GM inequality gives us \[ \frac{\mathrm{vol}(G)}{2} \geq \sqrt{\mathrm{vol}_P\cdot \mathrm{vol}_N}. \] Now, if $D$ is the diameter of $G$, we have by Lemma \ref{lowerBoundLemma} that \[ \lambda_1 \geq \frac{2}{\mathrm{dist}(u,v) \sqrt{\mathrm{vol}_P\mathrm{vol}_N}} \geq \frac{2}{D \sqrt{\mathrm{vol}_P\mathrm{vol}_N}} \geq \frac{4}{D\cdot \mathrm{vol}(G)}. \] \end{proof} Next we give a family of constructions showing that Theorem \ref{fanImprovement} is sharp. \begin{proposition}\label{prop:construction} Let $D$ and $d$ be fixed, and let $n-D+1$ be divisible by $4$. Let $H_1$ and $H_2$ be $d$-regular graphs on $\frac{n-D+1}{2}$ vertices, and let $H$ be the graph obtained by joining $H_1$ and $H_2$ by a path of length $D$. Then \[ \lambda_1(H) \leq \frac{4}{Dd(n-D)}. \] \end{proposition} \begin{proof} Label the vertices on the path between $H_1$ and $H_2$ as $p_0, p_1,\ldots, p_D$, where the terminal vertices $p_0$ and $p_D$ belong to $H_1$ and $H_2$ respectively. Define $f: V(H) \to \mathbb{R}$ by \[ f(u) = \begin{cases} 1 & \mbox{if } u\in H_1, \\ -1 & \mbox{if } u\in H_2, \\ 1-\frac{2i}{D} &\mbox{if } u =p_i. 
\end{cases} \] One may check that $\sum_{u} f(u)d(u) = 0$, and hence \begin{align*} \lambda_1 \leq \frac{\sum_{u \sim v} (f(u)-f(v))^2}{\sum_{v}f(v)^2 d(v)} \leq \frac{\sum_{u \sim v} (f(u)-f(v))^2}{(n-D)d} &= \frac{\sum_{i=1}^D (f(p_i) - f(p_{i-1}))^2}{(n-D)d} \\ &= \frac{D\left(\frac{2}{D}\right)^2}{(n-D)d}. \end{align*} \end{proof} Now, given $H$ we have that $\mathrm{vol}(H) = (n-D+1)d + 2D$ and the diameter of $H$ is at most $D +\mathrm{diam}(H_1) + \mathrm{diam}(H_2)$. Therefore, as long as we have $d(n-D+1)+2D \sim d(n-D)$ and $\mathrm{diam}(H_1) + \mathrm{diam}(H_2) = o(D)$, then the lower bound in Theorem \ref{fanImprovement} is asymptotically tight for $\lambda_1(H)$ as $n$ goes to infinity. Since we may choose $d$-regular graphs with diameter $O(\log n)$, for any $D$ and $V$ satisfying $D \gg \log n$ and $n\ll V \leq \frac{n^2}{2}$, there is a sequence of graphs with diameter asymptotic to $D$ and volume asymptotic to $V$ for which the bound in Theorem \ref{fanImprovement} is asymptotically sharp. \section{Proof of Theorem \ref{min54}} \label{sec:mainThm} We first prove an upper bound on $\alpha(n)$, which is straightforward by considering the double kite graph. \begin{claim} \label{upperBound} \[ \alpha(n) \leq (1+o(1))\frac{54}{n^3}. \] \end{claim} \begin{proof} Consider $G=DK(\frac{n}{3},\frac{n}{3})$. By Proposition \ref{prop:construction} we have $\lambda_1(G) \leq (1+o(1))\frac{54}{n^3}$. \end{proof} It remains to prove that $\alpha(n) \geq (1+o(1))\frac{54}{n^3}$. To do so, we will use Lemma \ref{lowerBoundLemma} from Section \ref{sec:fanImprov}, as well as an additional lemma below that establishes a key property of the extremal graphs. Henceforth, we assume $G$ achieves $\alpha(n)$ with harmonic eigenvector $f$ satisfying \[ \lambda_1 = \frac{\sum_{x\sim y}(f(x)-f(y))^2}{\sum_x (f(x))^2d(x)}. \] Let \begin{align*} P &= \{z\in V(G): f(z) \geq 0\},\\ N &=\{z\in V(G): f(z) < 0\}. \end{align*} Further, let $u$ and $v$ satisfy $f(u) \leq f(z) \leq f(v)$ for all $z\in V(G)$ and let $S$ be a shortest path from $u$ to $v$. \begin{lemma}\label{NPedges} If $G$ achieves $\alpha(n)$, then the number of edges with one endpoint in $N$ and the other in $P$ satisfies \[ 1\leq e(N,P) \leq n-1. \] \end{lemma} \begin{proof} Since $f$ is a harmonic eigenvector, we have $\sum_x f(x)d(x) = 0$ and so $f(u) < 0 < f(v)$. Therefore, there must be an edge in $S$ that has one endpoint in $N$ and the other in $P$. To see the upper bound, we claim that any edge with one endpoint in $N$ and the other in $P$ must be a bridge. To see this, let \[ a = \sum_{x\sim y} (f(x) - f(y))^2, \] and \[ b = \sum_x (f(x))^2d(x), \] so that $\lambda_1 = \frac{a}{b}$. Now let $e=wz$ be an edge with one endpoint in $N$ and the other in $P$, and let $G' = G\setminus \{e\}$. Furthermore, let $d'(x)$ be the degree sequence of $G'$, and let $f'(x) = f(x) + c$ where $c$ is chosen so that $\sum_x f'(x) d'(x) = 0$. So \begin{eqnarray*} 0 = \sum_x \left( f(x) + c \right) d'(x) & = & \sum_x \left( f(x) + c \right) d(x) - f(w) - c - f(z) - c \\ & = & \sum_x f(x) d(x) + c \sum_x d(x) - f(z) - f(w) - 2c \\ & = & c \sum_x d(x) - 2c - f(z) - f(w). \end{eqnarray*} We get \begin{equation}\label{c_expression} c = \frac{f(z) + f(w)}{\sum_x d(x) - 2}. \end{equation} If $R_G(f)$ is the Rayleigh quotient of graph $G$ with harmonic eigenfunction $f$, then define $c_1, c_2$ so that \[ R_{G'}(f') = \frac{a -c_1 }{b - c_2}, \] where $c_1, c_2 > 0$.
It is easily seen that \[ \frac{a-c_1}{b-c_2} < \frac{a}{b} \] if and only if \[ \lambda_1 = \frac{a}{b} < \frac{c_1}{c_2}.\] By definition of $f'$ and $G'$, we have $c_1 = (f(w) - f(z))^2 > f(w)^2 + f(z)^2$, since $f(w)f(z) < 0$. Also, \begin{eqnarray*} c_2 & = & \sum_x f(x)^2 d(x) - \sum_x f'(x)^2d'(x) \\ & = & \sum_x f(x)^2 d(x) - \left(\sum_x (f(x)+c)^2 d(x) - (f(z) + c)^2 - (f(w) + c)^2 \right) \\ & = & f(z)^2 + f(w)^2 + 2c (f(z) + f(w)) - c^2 \left(\sum_x d(x) - 2\right). \end{eqnarray*} Using Expression~\ref{c_expression} we get \[ c_2 = f(z)^2 + f(w)^2 + \frac{(f(z) + f(w))^2}{\sum_x d(x) - 2} \leq f(z)^2 + f(w)^2 + \frac{f(z)^2 + f(w)^2}{\sum_x d(x) - 2}, \] again using the fact that $f(w)f(z) < 0$. Combining these, we get \[ \frac{c_1}{c_2} > \frac{f(z)^2 + f(w)^2}{f(z)^2 + f(w)^2 + \frac{f(z)^2 + f(w)^2}{\sum_x d(x) - 2}} = \frac{1}{1 + (\sum_x d(x) - 2)^{-1}}. \] If $G'$ is connected, we have the (very weak) bound $\sum_x d(x) - 2 > 2n - 4$, so for any $\varepsilon > 0$ if $n$ is large enough we have $\frac{c_1}{c_2} > 1 - \varepsilon > \lambda_1$. Therefore deleting this edge would decrease $\lambda_1$. By minimality we conclude that $e$ is a bridge. Now, given a connected graph, take any connected spanning tree. Since any edge not on this spanning tree cannot disconnect the graph, there can be at most $n-1$ bridges, giving us the upper bound. \end{proof} We are now in a position to prove a lower bound on $\alpha(n)$, which completes our proof of Theorem \ref{min54}. \begin{claim} \[ \alpha(n)\geq (1+o(1))\frac{54}{n^3}. \] \end{claim} \begin{proof} Assume $G$ achieves $\alpha(n)$. Let $P' = P\setminus S$ and $N' = N\setminus S$, and let $|P'| = \alpha_1 n$, $|N'| = \alpha_2 n$, and $|S| = \alpha_3 n$. So $\alpha_1 + \alpha_2 + \alpha_3 = 1$. Now, since $S$ is a shortest path from $u$ to $v$, we have that any vertex in $V(G) \setminus S$ may have at most $3$ neighbors on $S$, and any vertex in $S$ may have at most $2$ neighbors in $S$. Letting $G_P$ and $G_N$ be the graphs induced by $P$ and $N$, respectively, note that \[ \mathrm{vol}_P = 2e(G_P) + e(N,P), \] and \[ \mathrm{vol}_N = 2e(G_N) + e(N,P). \] Putting these facts together, we have \[ 2e(G_P) \leq \sum_{z\in P} d(z) \leq |P'|^2 + 2e(P',S) + 2|S| \leq |P'|^2 + 6|P'| + 2|S| \leq \alpha_1^2n^2 + 8n. \] By Lemma \ref{NPedges} we have that $\mathrm{vol}_P \leq \alpha_1^2n^2 + 9n$. Similarly, $\mathrm{vol}_N \leq \alpha_2^2 n^2 + 9n$. By Lemma \ref{lowerBoundLemma}, we have \[ \lambda_1 \geq \frac{2}{|S| \sqrt{\mathrm{vol}_P \mathrm{vol}_N}} \geq (1+o(1))\frac{2}{\alpha_1\alpha_2\alpha_3 n^3}. \] Since $\alpha_1+\alpha_2 + \alpha_3 = 1$, this quantity is minimized when $\alpha_1 = \alpha_2= \alpha_3 = \frac{1}{3}$, and so \[ \lambda_1 \geq (1+o(1))\frac{54}{n^3}. \] \end{proof} \section{Problems and remarks} \label{sec:conc} In this paper, we proved an asymptotically sharp lower bound on the normalized Laplacian spectral gap of a connected graph. However, many questions remain unanswered. Here we mention several related problems: \begin{itemize} \item Characterize the extremal graphs for which $\lambda_1 = \alpha(n)$. One might guess that all such extremal graphs are double kite graphs for large enough $n$, but we were not able to prove this. \item Prove the corresponding theorem for the adjacency matrix: Stanic \cite{stanic2013graphs} conjectured that double kite graphs minimize the adjacency spectral gap. \item Minimize $\lambda_1$ of the normalized Laplacian over the family of all regular graphs. 
Aldous and Fill \cite{aldous2002reversible} conjectured that the minimum is $(1+o(1))\frac{2\pi^2}{3n^2}$ and is achieved by a necklace graph. An affirmative answer to this conjecture was given for 3-regular graphs by \cite{brand2007characterization}, but the general case is still open. \end{itemize} \bibliographystyle{siam}
{ "timestamp": "2018-07-12T02:00:56", "yymm": "1804", "arxiv_id": "1804.05500", "language": "en", "url": "https://arxiv.org/abs/1804.05500" }
\section{Introduction} \label{sec:intro} Short-texts are abundant on the Web and appear in various formats. For example, in Twitter, users are constrained to a $140$ character upper limit when posting their tweets~\cite{Kwak:WWW:2010}. Even when there are no strict upper limits, users tend to provide brief answers in QA forums, review sites, SMS, email, and chat messages~\cite{Cong:SIGIR:2008,Thelwall:2010}. Unlike lengthy responses that take time both to compose and to read, short responses have gained popularity particularly in social media contexts. Considering the steady growth of mobile devices that are physically restricted to compact keyboards, which are suboptimal for entering lengthy text inputs, it is safe to predict that the amount of short-texts will continue to grow in the future. Considering the importance and the quantity of short-texts in various web-related tasks, such as text classification~\cite{Wang:JZUS:2012,dossantos-gatti:2014:Coling}, and event prediction~\cite{Sakaki:WWW:2010}, it is important to be able to accurately represent and classify short-texts. Compared to performing text mining on longer texts~\cite{Yogatama:ICML:2014,Su:ICML:2011,Guan:WWW:2009}, for which dense and diverse feature representations can be created relatively easily, handling of shorter texts poses several challenges. First, the number of features that are actually present in a short-text will be a small fraction of the set of all features that exist in all of the train instances. Although this \emph{feature sparseness} is problematic even for longer texts, it is critical for shorter texts. In particular, when the diversity of the feature space increases as with longer $n$-gram lexical features, (a) the number of occurrences of a feature in a given instance (i.e., term frequency), as well as (b) the number of instances in which a particular feature occurs (i.e., document frequency), will be small. Therefore, it is difficult to reliably estimate the salience of a feature in a particular class in supervised learning tasks. Second, the shorter length means that there is \emph{less redundancy} in terms of the features that exist in a short-text. Consequently, most of the related words of a particular word might be missing in a short-text. For example, consider a review on \emph{iPhone 6} that says ``\emph{I liked the larger screen size of iPhone 6 compared to that of its predecessor}''. Although \emph{iPhone 6 plus}, a product similar to \emph{iPhone 6}, also has a larger screen compared to its predecessors, this information is not included in this short review. On the other hand, we might observe such positive sentiments associated with \emph{iPhone 6 plus} but not with \emph{iPhone 6} in other train instances, which will result in a high positive score for \emph{iPhone 6 plus} in a classifier trained from those train reviews. Unfortunately, we will not be able to infer that this particular user would also likely be satisfied with \emph{iPhone 6 plus}, and will therefore fail to recommend \emph{iPhone 6 plus} to this user. To overcome the above-mentioned challenges encountered when handling short-texts, we propose a \emph{feature expansion} method analogous to the query expansion methods used in information retrieval (IR)~\cite{IR_book} to improve the agreement between search queries input by the users and documents indexed by the search engine~\cite{Carpineto:2012}. We assume short-texts are already represented using some feature vectors, which we refer to as \emph{instances} in this paper.
Lexical features such as unigrams or bigrams of words, part-of-speech (POS) tag sequences, and dependency relations have been frequently used in prior work on text classification. Our proposed method does not assume any particular type of features, and can be used with any discrete feature set. First, we train binary classifiers, which we call \emph{feature predictors}, to predict whether a particular feature $v_i$ occurs in a given instance $\vec{x}$. For example, given the previously discussed short review, we would like to predict whether iPhone 6 plus is likely to occur in this review. The training instances required to learn feature predictors are automatically selected from unlabeled texts. Specifically, given a feature $v_i$, we select texts in which $v_i$ occurs as the positive training instances for learning a feature predictor for $v_i$. On the other hand, negative training instances for learning the feature predictor for $v_i$ are randomly sampled from the unlabeled texts, where $v_i$ does not occur. Using those positive and negative training instances we learn a binary classifier to predict whether $v_i$ occurs in a given instance. Any binary classification algorithm, such as support vector machines, logistic regression, or naive Bayes, can be used for this purpose, and it is not limited to linear classifiers. We define \emph{ClassiNet} as a directed weighted graph $\cG(\cV, \cE, \mat{W})$ of feature predictors, where each vertex $v_i \in \cV$ corresponds to a feature predictor. The directed edge $e_{ij} \in \cE$ from $v_i$ to $v_j$ is assigned the weight $1 \geq w_{ij} \geq 0$, which is the conditional probability that given $v_i$ is predicted for a particular instance, $v_j$ is also predicted for the same instance. It is noteworthy that we obtain both positive and negative instances for learning feature predictors from unlabeled data, and do not require any labeled data for the target task. For example, consider the case that we are creating a ClassiNet to find missing features in sentiment classification. In this case, the target task is sentiment classification. However, we do not require any labeled data for the target task, such as sentiment-annotated reviews, when creating the ClassiNet that we are subsequently going to use for finding missing features. Therefore, the training of ClassiNets can be conducted in a purely unsupervised manner, without requiring any manually labeled data for the target task. Moreover, the decoupling of ClassiNet training from the target task enables us to use the same ClassiNet to expand feature vectors for different target tasks. As we discuss later in Section~\ref{sec:classi-cooc}, ClassiNets can be seen as a generalized version of the word co-occurrence graphs that have been well-studied in the NLP community~\cite{Rada:2011}. However, ClassiNets consider both explicit and implicit co-occurrences of words in some context, whereas word co-occurrence graphs are limited to explicit co-occurrences. Given a ClassiNet created from unlabeled data as described above, we propose several strategies for finding related features for a given instance that do not occur in the original instance.
Specifically, we compare both \emph{local} feature expansion methods, which consider the nearest neighbours of a particular feature in an instance (Section~\ref{sec:local}), and \emph{global} feature expansion methods, which propagate the features that exist in an instance over the entire set of vertices in the ClassiNet (Section~\ref{sec:global}). We evaluate the performance of the proposed feature expansion methods on short-text classification benchmark datasets. Our experimental results show that the proposed global feature expansion method significantly outperforms several local feature expansion methods, as well as several sentence-level embedding methods, on multiple benchmark datasets proposed for evaluating short-text classification methods. Considering that (a) ClassiNets can be created using unlabeled data, (b) the same ClassiNet can in principle be used for predicting features for different target tasks, and (c) arbitrary features, not limited to lexical features, can be used in the feature predictors, we believe that ClassiNets can be applied to a broad range of machine learning tasks, not limited to short-text classification. Our contributions in this paper can be summarised as follows: \begin{itemize} \item We propose a method for learning a network of feature predictors that can predict missing features in feature vectors. The proposed network, which we refer to as the ClassiNet, can be learnt in an unsupervised manner, without requiring any labeled data for the target task in which we are going to apply the ClassiNet to expand features (Section~\ref{sec:classinet:learn}). \item We propose an efficient method to learn ClassiNets from large datasets. Specifically, we show that the edge-weights of ClassiNets can be computed efficiently using locality sensitive hashing (Section~\ref{sec:project}). \item Having proposed ClassiNets, we describe their relationship to word co-occurrence graphs that have a long history in the NLP community. We show that ClassiNets can be considered as a generalised version of word co-occurrence graphs (Section~\ref{sec:classi-cooc}). \item We propose several methods for finding related features for a given instance using the created ClassiNet. In particular, we consider both \emph{local methods} (Section~\ref{sec:local}), which consider the nearest neighbours in ClassiNet of the features that exist in an instance, and \emph{global methods} (Section~\ref{sec:global}), which consider all vertices in the ClassiNet. \end{itemize} \section{Related Work} \label{sec:related} Feature sparseness is a common problem that is encountered in various text mining tasks. Two main approaches for overcoming the feature sparseness problem in short-texts can be identified in the literature: (a) embedding the train/test instances in a dense, lower-dimensional feature space thereby reducing the number of zero-valued features in the instances, and (b) predicting the values of the missing features. Next, we discuss prior work that belongs to each of those two approaches. An effective technique frequently used in prior work on short-texts to overcome the feature sparseness problem is to represent the texts in some lower-dimensional dense space, thereby reducing the feature sparseness. Several methods have been used to obtain such lower-dimensional representations, such as topic-models~\cite{Yan:WWW:2013,yang-EtAl:2015:NAACL-HLT2,Wang:JZUS:2012}, clustering~\cite{Dai:2013,Rangrej:WWW:2011}, and dimensionality reduction~\cite{Blitzer:EMNLP:2006,Pan:WWW:2010}.
Wang et al.~\cite{Wang:JZUS:2012} used latent Dirichlet allocation (LDA) to identify features that are useful for identifying a particular class. Higher weights are assigned to the identified features, thereby increasing their contribution towards the classification decision. However, applying LDA at sentence-level is problematic because the number of words in a sentence is much smaller than that in a document. Consequently, Yan et al.~\cite{Yan:WWW:2013} proposed the bi-term topic model that models the co-occurrence patterns between words accumulated over the entire corpus. An alternative solution that uses an external knowledge-base in the form of a phrase list is proposed by Yang et al.~\cite{yang-EtAl:2015:NAACL-HLT2} to overcome the feature sparseness problem when learning topics from short-texts. The phrase list is automatically extracted from the entire collection of short-texts in a pre-processing step. Cluster-based methods have been proposed for representing documents to overcome the feature sparseness problem. First, a clustering algorithm is used to group the documents into clusters. Next, each document is represented by the clusters to which it belongs. Dai et al.~\cite{Dai:2013} used a hierarchical clustering algorithm with purity control to generate a set of clusters, and used the similarity between a document and each of the clusters as augmented features to enrich the document representation. Their method significantly improves the classification accuracy for short web snippets in a support vector machine classifier. Feature mismatch is a fundamental problem in domain adaptation, where we must learn a classifier using labeled data from a source domain and apply it to predict labels for the test instances in a different target domain. Pan et al.~\cite{Pan:WWW:2010} proposed Spectral Feature Alignment (SFA), a method to overcome the feature mismatch problem in cross-domain sentiment classification. They created a bi-partite graph between domain-specific and domain-independent features, and then used a spectral clustering method to obtain a domain-independent lower-dimensional embedding. In structural correspondence learning (SCL)~\cite{Blitzer:ACL:2007,Blitzer:EMNLP:2006}, a set of features that are common to both the source and the target domains, referred to as \emph{pivots}, is identified using mutual information with the sentiment label. Next, linear classifiers that can predict those pivots are learnt from unlabeled reviews. The weight vectors corresponding to the learnt linear classifiers are arranged as rows in a matrix, on which singular value decomposition is subsequently applied to compute a lower-dimensional projection. Feature vectors representing train source reviews are projected into this lower-dimensional space, in which a binary sentiment classifier is trained. During test time, feature vectors representing test target reviews are also projected to the same lower-dimensional space and the trained binary classifier is used to predict the sentiment labels. However, domain adaptation methods such as SCL and SFA require data from at least two (source vs. target) different domains (e.g. reviews on products in different categories) to overcome the missing feature problem, whereas in this work we assume the availability of data from one domain only.
Instead of representing documents using lexical features, which often results in high-dimensional and sparse feature vectors, by embedding documents in low-dimensional dense spaces we can effectively overcome the feature sparseness problem~\cite{Lu:NIPS:2013,dossantos-gatti:2014:Coling,Le:ICML:2014}. These methods jointly learn character-level or word-level embeddings as well as document-level embeddings~\cite{Kiros:2015,Hill:NAACL:2016} such that the learnt embeddings capture the similarity constraints satisfied by a collection of short-texts. First, each word in the vocabulary is assigned a fixed dimensional word vector. We can initialize the word vectors randomly or using pre-trained word representations. Next, the word vectors are updated such that we can accurately predict the co-occurrences of words in some context, such as a window of tokens, a sentence, a paragraph, or a document. Different loss functions encoding different co-occurrence measures have been proposed for this purpose~\cite{Pennington:EMNLP:2014,Milkov:2013}. As shown later in Section~\ref{sec:sentemb}, ClassiNets perform competitively against sentence-level embedding methods on several short-text classification tasks. A single word can have multiple senses. For example, the word \emph{bank} could mean a \emph{financial institution} or a \emph{river bank}. Therefore, it is inadequate to represent different senses of a word using a single embedding~\cite{Reisinger:NAACL:2010,Iacobacci:ACL,Song:2016,camachocollados-pilehvar-navigli:2015:NAACL-HLT,johansson-nietopina:2015:NAACL-HLT,li-jurafsky:2015:EMNLP,hu-zhang-zheng:2016:COLING}. Several solutions have been proposed in the literature to overcome this limitation and learn \emph{sense embeddings}, which capture the sense-related information of words. For example, \citet{Reisinger:NAACL:2010} proposed a method for learning sense-specific high dimensional distributional vector representations of words, which was later extended by \citet{Huang:ACL:2012} using global and local context to learn multiple sense embeddings for an ambiguous word. \citet{neelakantan-EtAl:2014:EMNLP2014} proposed multi-sense skip-gram (MSSG), an online cluster-based method for learning sense-specific word representations, by extending skip-gram with negative sampling (SGNS)~\cite{Milkov:2013}. Unlike SGNS, which updates the gradient of the word vector according to the context, MSSG predicts the nearest sense first, and then updates the gradient of the sense vector. The aforementioned methods apply a form of word sense discrimination by clustering word contexts, and then learn a fixed number of sense embeddings for each word based on the induced clusters. In contrast, a nonparametric version of MSSG (NP-MSSG)~\cite{neelakantan-EtAl:2014:EMNLP2014} estimates the number of senses per word and learns the corresponding sense embeddings. On the other hand, \citet{iacobacci-pilehvar-navigli:2015:ACL-IJCNLP} used a Word Sense Disambiguation (WSD) tool to sense annotate a large text corpus and then used an existing prediction-based word embedding learning method to learn sense and word embeddings with the help of sense information obtained from the BabelNet~\cite{iacobacci-pilehvar-navigli:2015:ACL-IJCNLP} sense inventory. Similarly, \citet{camachocollados-pilehvar-navigli:2015:NAACL-HLT} used the knowledge in two different lexical resources: WordNet~\cite{WordNet} and Wikipedia.
They use the contextual information of a particular concept from Wikipedia and WordNet synsets prior to learning two separate vector representations for each concept. A single word can be related to multiple different topics, without necessarily corresponding to different senses of the word. Revisiting our previous example, we might have a collection of documents about \emph{retail banks}, \emph{commercial banks}, \emph{investment banks} and \emph{central banks}. All these different banks are related to the financial sense of the word bank. However, in a particular task (e.g., classifying documents related to the different types of financial banks), we might require different embeddings for the different topics in which the word bank appears. \citet{Liu:AAAI:2015} proposed three methods for learning \emph{topical word embeddings}, where they first cluster words into different topics using LDA~\cite{Blei:JMLR:2003} and then learn word embeddings using SGNS. \citet{Liu:IJCAI:2015} modelled the interactions among topics, contexts and words using a tensor and obtained topical word embeddings via tensor factorisation. Instead of clustering words prior to embedding learning, \citet{Shi:2017} proposed a method to jointly learn both words and topics, thereby considering the correlations between multiple senses of different words that occur in different topics. TopicVec~\cite{TopicVec} learns vector representations for topics in a document by modelling the co-occurrence between a target word and a context word considering both words' word embeddings as well as the topic embedding of the context word. Our proposed methods for feature expansion using ClassiNets can be seen as \emph{explicit} feature prediction methods, whereas methods that learn lower-dimensional dense embeddings of texts can be seen as \emph{implicit} feature prediction methods. For example, if we use lexical features such as unigrams or bigrams to create a ClassiNet, then the features predicted by that ClassiNet will also be lexicalised features, which are easier to interpret than dimensions in a latent embedded space. Although for text classification purposes it is sufficient to represent short-texts in implicit feature spaces, there are numerous tasks that require explicit interpretable predictions, such as query suggestion in information retrieval~\cite{Carpineto:2012}, reverse dictionary mapping~\cite{Hill:TACL:2016}, and hashtag suggestion in social media~\cite{weston-chopra-adams:2014:EMNLP2014}. Therefore, the potential applications of ClassiNets as an explicit feature expansion method go beyond short-text classification. It would be an interesting future research direction to combine implicit and explicit feature expansion methods to construct better representations for texts. Recently there have been several methods proposed for learning embeddings (lower-dimensional implicit feature representations) for the vertices of undirected or directed (and weighted) graphs~\cite{DeepWalk,li-zhu-zhang:2016:P16-1,LINE}. For example, in \emph{language graphs}~\cite{LINE}, the vertices can correspond to words and the weight of the edge between two vertices represents the strength of the co-occurrences between two words in a corpus. Alternatively, in a \emph{co-author network}, the vertices correspond to authors and the edges represent the number of papers two people have co-authored.
DeepWalk~\cite{DeepWalk} performs a random walk over an undirected graph to generate a pseudo-corpus, which is then used to learn word (vertex) embeddings using skip-gram with negative sampling (SGNS)~\cite{Milkov:2013}. Li et al.~\cite{li-zhu-zhang:2016:P16-1} proposed a discriminative version of DeepWalk by including a discriminative supervised loss that evaluates how well the learnt vertex embeddings perform on some supervised tasks. Tang et al.~\cite{LINE} used both first-order and second-order co-occurrences in a graph to learn separate vertex embeddings, which were subsequently concatenated to create a single vertex embedding. Although in this paper we consider graphs where vertices correspond to words, the objective of creating ClassiNets is fundamentally different from the above-mentioned vertex embedding methods. In graph (vertex) embedding, we are given a graph and a goal is to learn embeddings for the vertices such that structural information of the graph is preserved in the learnt embeddings. On the other hand, in ClassiNets, we learn feature predictors which can be used to predict whether a particular feature is missing in a given context. The connection between co-occurrence graphs and ClassiNets is further discussed in Section~\ref{sec:classi-cooc}. Moreover, in Section~\ref{sec:classinet:expand}, we propose and evaluate several methods for expanding feature vectors using the ClassiNets we create, which is not relevant for vertex embedding methods. \section{ClassiNets} \label{sec:classinets} \subsection{Overview} \label{sec:overview} Our proposed method for classifying short-texts consists of two steps. First, we create a network of classifiers which we refer to as the \emph{ClassiNet} in this paper. In Section~\ref{sec:classinet:learn}, we describe the details of the method we propose to create ClassiNets. In Section~\ref{sec:classinet:expand}, we describe several methods for using the learnt ClassiNet to expand feature vectors to overcome the feature sparseness problem. We define a ClassiNet as a directed weighted graph $\cG(\cV, \cE, \mat{W})$, in which a vertex $v_i \in \cV = \{v_1, \ldots, v_n \}$ corresponds to a binary classifier (feature predictor) $h_i$ that predicts the occurrence of a feature $v_i$ in an instance. We assume that each train/test instance $x$ is already represented by a $d$-dimensional vector $\vec{x} = (x_1, x_2, \ldots, x_d)\T$, in which the $i$-th dimension corresponds to the value $x_i$ of the $i$-th feature representing the instance $x$. The label predicted by $h_i$ for an instance $\vec{x}$ is denoted by $h_i(\vec{x}) \in \{0,1\}$. The weight $w_{ij}$ associated with the edge $e_{ij}$ connecting the vertex $v_i$ to $v_j$ represents the conditional probability, $p(h_j(\vec{x}) = 1| h_i(\vec{x}) = 1)$, that $v_j$ is predicted to occur in $\vec{x}$, given that $v_i$ is also predicted to occur in $x$. Several remarks can be made about the ClassiNets. First, there is a one-to-one correspondence between the vertices $v_i$ in the ClassiNet and the feature predictors $h_i$. Therefore, a ClassiNet can be seen as a network of binary classifiers, as is implied by its name. In general, the set of features $\cS$ that we use for representing instances $x$ (hence for learning feature predictors), and the set of vertices $\cV$ in ClassiNet need not be the same. As we discuss later, vertices in the ClassiNet are used as expansion features to augment instances $x$, thereby overcoming the feature sparseness problem in short-text classification. 
Therefore, we are free to select a subset of features from all the features used for representing instances as the vertices in ClassiNet. For example, we might use the most frequent features in the train data as vertices in the ClassiNet, thereby setting $\cV \subset \cS$ ($n < d$). Alternatively, we could use all the features in the feature space of the instances as vertices in the ClassiNet, where we have $\cV = \cS$ (and $n = d$). In the remainder of the paper, we consider the general case where we have $\cV \subseteq \cS$ ($n \leq d$). Second, as we discuss later in Section~\ref{sec:classinet:learn}, we \emph{do not} require labeled data for the target task when creating ClassiNets. For example, let us consider binary sentiment classification of product reviews as the target task. We might have both sentiment rated reviews (labeled instances), and reviews without sentiment ratings (unlabeled instances) at our disposal. We can use both those types of reviews, and ignore the label information when computing the ClassiNet. This is particularly attractive for two reasons: (a) obtaining unlabeled instances is often easier for most tasks compared to obtaining labeled instances, (b) because a ClassiNet created from a particular corpus is independent of the label information unique to a target task, in principle, the same ClassiNet can be used to expand features for different target tasks. The second property is attractive in multi-task learning settings, where we must perform different tasks on the same data. For example, consider the two tasks: (a) predicting whether a given tweet is positive or negative in sentiment, and (b) predicting whether a given tweet would get favorited or not. Both those tasks can be seen as binary classification tasks. We could learn two binary classifiers -- one for predicting the sentiment and the other for predicting whether a tweet would get favorited. However, to overcome the feature sparseness problem in both those tasks, we can use the same ClassiNet. As long as an instance (for example a sentence or a document) is represented using any bag-of-features (unigrams, bigrams, trigrams, dependency paths, syntactic paths, POS sequences, semantic roles, frames etc.) we can use the proposed method to create a ClassiNet. The first step in creating a ClassiNet is to learn feature predictors (Section~\ref{sec:classinet:learn}). The feature predictors use the features available in an instance to train a binary classifier. Therefore, it does not matter whether these features are $n$-grams or more complex types of features as listed above. The remainder of the steps in the proposed method (measuring the correlations between feature predictors to build the ClassiNet, applying feature expansion) use only the learnt feature predictors. Therefore, our proposed method can be used with \emph{any} feature representation of instances, not limited to lexical $n$-gram features. \subsection{Learning ClassiNets} \label{sec:classinet:learn} Let us assume that we are given a set $\cD_{u} = \{\vec{x}^{(k)}\}_{k=1}^{N}$ of unlabeled feature vectors $\vec{x}^{(k)} \in \R^d$ representing $N$ short-texts. Given $\cD_{u}$ we construct a ClassiNet in two steps: (a) learn feature predictors $h_i$ for each vertex $v_i \in \cV$, and (b) compute the conditional probabilities $p(h_j(\vec{x}) = 1| h_i(\vec{x}) = 1)$ using the labels predicted by the feature predictors $h_i$ and $h_j$ for an instance $\vec{x}$.
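To make this two-step construction concrete, the following minimal sketch (in Python, assuming scikit-learn and NumPy; all function and variable names are illustrative rather than part of our implementation) learns a feature predictor for a single vertex and estimates an edge weight from the predicted labels:
\begin{verbatim}
# Minimal sketch of the two-step ClassiNet construction (illustrative;
# assumes a dense instance matrix X of shape (N, d)).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_feature_predictor(X, i):
    # Step (a): learn h_i, predicting whether feature i occurs in an
    # instance. The target feature is hidden from its own predictor.
    y = (X[:, i] != 0).astype(int)
    X_masked = X.copy()
    X_masked[:, i] = 0
    return LogisticRegression(penalty='l2').fit(X_masked, y)

def edge_weight(h_i, h_j, X_eval):
    # Step (b): estimate w_ij = p(h_j(x) = 1 | h_i(x) = 1) from the
    # labels predicted on a held-out set of instances X_eval.
    p_i, p_j = h_i.predict(X_eval), h_j.predict(X_eval)
    m11 = np.sum((p_i == 1) & (p_j == 1))
    m10 = np.sum((p_i == 1) & (p_j == 0))
    return m11 / max(m11 + m10, 1)
\end{verbatim}
The sampling strategies used to select the training and evaluation instances for these two steps are detailed next.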
As positive training instances for learning a binary feature predictor for a feature $v_i$, we randomly select a set $\cD_i^{(+)} \subset \cD_{u}$ of $N^{(+)}_i$ instances where $v_i$ occurs, and remove $v_i$ from those selected instances. Likewise, we randomly select a set $\cD_i^{(-)} \subset \cD_{u}$ of $N^{(-)}_i$ instances where $v_i$ does not occur. Instances that have few features are not informative for learning accurate feature predictors. Therefore, we select instances that have more non-zero features than the average number of non-zero features in an instance in $\cD_{u}$. We found that, on average, there are ca. $15$ features in an instance. Compared to the number of instances containing a particular feature $v_i$ in the dataset, the number of instances that do not contain $v_i$ is significantly larger. Considering that we are randomly sampling negative instances from a larger set of instances, it is likely that those selected negative instances are not very informative about why $v_i$ is missing in a given instance. In other words, the randomly sampled negative instances might already be far from the decision hyperplane, and therefore do not provide sufficient specialization in the hypothesis space. Consequently, prior work that uses pseudo-negative instances for training classifiers~\cite{Bollegala_WWW_2007} has shown that it is effective to select a larger number of pseudo-negative instances than positive instances (i.e., $N^{(+)}_i < N^{(-)}_i$). We note that it is possible to set the number of positive and negative train instances dynamically for each feature $v_i$. For example, some features might be popular in the dataset resulting in a larger positive sample than the others. For simplicity, in this paper, we select all instances in which a particular feature occurs as the positive training instances for that feature, and select twice that number of negative instances from the remainder of the instances (i.e., $N^{(-)}_i = 2N^{(+)}_i$). An extensive study of different sampling methods and $N^{(-)}_i / N^{(+)}_i$ ratios is beyond the scope of the current paper. Once we have selected $\cD_i^{(+)}$, and $\cD_i^{(-)}$ as described above, we train a binary classifier to predict whether $v_i$ occurs in a given instance. We note that any binary classification algorithm, not limited to linear classifiers, can be used for this purpose. In our experiments, we use $\ell_2$ regularized logistic regression for its simplicity. We tune the regularization coefficient in each feature predictor using $5$-fold cross-validation. Being a probabilistic discriminative classifier, it is possible to obtain not only the predicted labels but also the class conditional probabilities from the trained logistic regression classifier. However, we only require the predicted labels for constructing the edge weights in ClassiNets as we describe next. Therefore, in theory, we can use even binary classifiers that do not produce confidence scores for creating ClassiNets, which extends the applicability of ClassiNets to wider contexts. Let us denote the label predicted by the feature predictor $h_i$ for an instance $\vec{x}$ by $h_i(\vec{x}) \in \{0,1\}$. For two features $v_i$ and $v_j$, we compute the confusion matrix $\mat{M}$ shown in Table~\ref{tbl:conf}. Here, $M_{ab}$ denotes the number of instances $\vec{x}$ for which $h_i(\vec{x}) = a$ and $h_j(\vec{x}) = b$.
In particular, $M_{11}$ is the number of instances where both $v_i$ and $v_j$ are predicted to be co-occurring by the learnt feature predictors. \begin{table}[t] \centering \caption{Confusion matrix for the labels predicted by the feature predictors learnt for two features $v_i$ and $v_j$.} \label{tbl:conf} \begin{tabular}{|c|c|c|}\hline & $h_j(\vec{x}) = 1$ & $h_j(\vec{x}) = 0$ \\ \hline $h_i(\vec{x}) = 1$ & $M_{11}$ & $M_{10}$ \\ \hline $h_i(\vec{x}) = 0$ & $M_{01}$ & $M_{00}$ \\ \hline \end{tabular} \end{table} Given the counts in Table~\ref{tbl:conf}, $w_{ij}$ is computed as follows: \begin{equation} \label{eq:weight} w_{ij} = \frac{M_{11}}{M_{11} + M_{10}} \end{equation} Several practical issues must be considered when estimating the edge-weights using \eqref{eq:weight}. First, the set of instances we use for predicting labels when computing the confusion matrix in Table~\ref{tbl:conf} must contain at least some instances in which $v_i$ or $v_j$ occur (i.e., $M_{11} + M_{10} > 0$, and $M_{11} + M_{01} > 0$). Otherwise, even if the feature predictors $h_i$, $h_j$ are accurately learnt, we will still get unreliable sparse counts for $M_{11}$ and $M_{10}$. Therefore, we randomly sample a set of instances $\cD_{(i,j)} \subseteq \cD_{u}$ such that it contains equal numbers of instances containing $v_i$, instances containing $v_j$, and instances containing neither of the two features. Let the total number of elements in $\cD_{(i,j)}$ be $d'$. We use those $d'$ instances when computing the values in the confusion matrix shown in Table~\ref{tbl:conf}. We ensure that there is no overlap between the test instances $\cD_{(i,j)}$ and the train instances we use to learn feature predictors. This is important because if the feature predictors are overfitting we will not get accurate predictions using the ClassiNet during test time. Using non-overlapping train and test instance sets, we can check whether the learnt feature predictors are overfitting. Although we use a ratio of one-third when sampling $\cD_{(i,j)}$ above, we can use different ratios for sampling as long as both $v_i$ and $v_j$ are sufficiently represented in $\cD_{(i,j)}$. \subsection{Efficient Computation of ClassiNets} \label{sec:project} ClassiNets can be learnt offline during the training stage, prior to expanding test instances. Therefore, we are allowed to perform more computationally intensive processing steps compared to what we are allowed at test time, which is required to be real-time for most tasks that involve short-texts such as tweet classification. Nevertheless, we propose several methods to speed up the construction process when the number of vertices $n$ in the ClassiNet grows. Compared to learning feature predictors for the vertices we use in the ClassiNet, which is linear in the number of vertices $n$ in the ClassiNet, to compute weights $w_{ij}$ we must consider all pairwise combinations between the vertices in the ClassiNet. If we assume the cost of learning a binary classifier for a vertex to be a constant $c$ that is independent of the feature, then the overall computational complexity of creating a ClassiNet can be estimated as $\cO(cn + N n^2 d)$. The first term is simply the complexity of computing $n$ feature predictors at the constant cost of $c$. This operation can be easily parallelised because each feature predictor can be learnt independently of the others. Moreover, it is linear in the number of vertices in ClassiNet. Therefore, the first term can be ignored in most practical scenarios.
In cases where the computational cost of the linear predictors is non-negligible, we can use several techniques to speed up this computation. First, we could resort to more computationally efficient linear classifiers such as the perceptron. Perceptrons can be trained in an online manner, without having to load the entire training dataset to the memory. Second, note that only the features $v_{j}$ that co-occur with a particular vertex $v_{i}$ in any train instance will be useful for predicting the occurrence of $v_{i}$. Therefore, we can limit the features that we use in the predictor for $v_{i}$ to the set of features $v_{j}$ that co-occur with $v_{i}$ at least once in the training data. We can efficiently compute such feature co-occurrences by building an inverted search index. We can further speed up this computation by resorting to approximate methods where we require a context feature $v_{j}$ to co-occur a predefined minimum number of times with the target feature $v_{i}$ for which we must compute a predictor. Setting this cut-off threshold to higher values will result in smaller, sparser, and less noisy feature spaces and speed up the predictor computation. However, larger cut-off thresholds are likely to remove important contextual features, thereby decreasing the accuracy of the feature predictors. The optimal cut-off threshold could be determined using cross-validation or held-out data. On the other hand, the second term corresponds to learning edge-weights, and involves three factors: (a) $n^2$, the number of pairwise comparisons we must perform between the $n$ vertices in the ClassiNet, (b) $N$, the maximum number of instances for which we must predict labels for each pair of feature predictors when we compute the confusion matrices as shown in Table~\ref{tbl:conf}, and (c) $d$, the number of features we must consider when computing the label of a predictor. For example, if we use linear classifiers as feature predictors, during test time we must compute the inner-product between the weight vector of the classifier and the feature vector of the instance to be classified, both of which are $d$-dimensional. The dimensionality $d$ of the vectors that represent instances will depend on the type of features we use. For example, if we limit to lexical features from the short-text, then the number of non-zero features in any given instance will be small. However, if we use dense features such as word embeddings, then the number of non-zero features in an instance might be large. However, the factors (a) and (b) require careful consideration. First, we must compare all pairs of predictors, which is quadratic in the number of vertices in the ClassiNet. Second, to obtain the label for an instance we must classify that instance using the learnt prediction model. For example, in the case of linear classifiers we must compute the inner-product between two $d$-dimensional vectors: the feature vector representing the instance to be classified, and the weight vector corresponding to the feature predictor. For nonlinear classifiers such as the ones that use polynomial kernels, the number of feature combinations can grow exponentially, resulting in slower prediction times for large batches of test instances. As a solution to this problem, we first represent each feature predictor $h_i$ by a $d'\,(< d)$ dimensional vector $\vec{h}_i(\cD_{(i,j)})$, where each element corresponds to the label predicted for a particular instance $\vec{x} \in \cD_{(i,j)}$.
We randomly sample $\cD_{(i,j)} \subseteq \cD_{u}$ following the procedure detailed in Section~\ref{sec:classinet:learn}, where we include equal numbers of instances that contain $v_i$, $v_j$, and neither of those two. Therefore, $\vec{h}_i(\cD_{(i,j)}) \in \{0,1\}^{d'}$, the set of $d'$-dimensional binary vectors. We name $\vec{h}_i(\cD_{(i,j)})$ the \emph{label vector} because it is the vector of labels predicted by $h_i$, the feature predictor learnt for the feature $v_i$, for all the instances in $\cD_{(i,j)}$. We can explicitly compute the label vector for the $i$-th feature predictor as follows: \begin{equation} \label{eq:label-vector} \vec{h}_i(\cD_{(i,j)}) = \left( h_i(\vec{x}_1), \ldots, h_i(\vec{x}_{d'}) \right)\T \end{equation} In practice, $d' \ll N$ because only a small number of instances in $\cD_{u}$ will contain $v_i$ or $v_j$, and we select equal proportions of instances that contain neither of the two features. The following theorem states the relationship between neighbouring feature predictors in the original $d$-dimensional space and the projected $d'$-dimensional space. \begin{theorem} \label{th:LSH} Consider two (possibly nonlinear) feature predictors $h_{i}(\vec{x}) = \sigma(\vec{\mu}_{i}\T\vec{x})$, and $h_{j}(\vec{x}) = \sigma(\vec{\mu}_{j}\T\vec{x})$, parametrized by $\vec{\mu}_{i}, \vec{\mu}_{j} \in \R^{d}$, and a transformation function $\sigma(\cdot)$ with outputs in $\{0,1\}$. Let $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ be the angle between $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$. The following relation holds between $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ and the probability of agreement $p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)$, \[ \theta(\vec{\mu}_{i}, \vec{\mu}_{j}) = \pi \left(1 - {p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}^{1/d'}\right) . \] \end{theorem} The proof of Theorem~\ref{th:LSH} is given below, and follows from the properties of locality sensitive hashing (LSH)~\cite{He:NIPS:2003,Andoni:CACM:2008,Indyk:STOC:98}. \subsection*{Proof of Theorem~1} Let us consider the agreement of the feature predictors $h_{i}$ and $h_{j}$ on the $k$-th instance $\vec{x}_{k} \in \cD_{(i,j)}$. The probability of agreement can be written as, \begin{equation} \label{eq:agreement} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) = 1 - p\left( h_{i}(\vec{x}_{k}) \neq h_{j}(\vec{x}_{k}) \right) . \end{equation} By symmetry, the disagreement probability on the right side of \eqref{eq:agreement} can be written as twice the probability that $\vec{x}_{k}$ falls on the positive side of one parameter vector and the negative side of the other, given by: \begin{equation} \label{eq:double} p\left( h_{i}(\vec{x}_{k}) \neq h_{j}(\vec{x}_{k}) \right) = 2 p\left( \vec{\mu}_{i}\T\vec{x}_{k} \geq 0, \vec{\mu}_{j}\T\vec{x}_{k} < 0 \right) \end{equation} For this event to occur, the vector $\vec{x}_{k}$ must lie inside the dihedral angle $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ formed by the intersection of the two half-planes defined by $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$. Therefore, the probability in \eqref{eq:double} can be estimated as the ratio between angles given by, \begin{equation} \label{eq:angle} p\left( \vec{\mu}_{i}\T\vec{x}_{k} \geq 0, \vec{\mu}_{j}\T\vec{x}_{k} < 0 \right) = \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{2\pi} .
\end{equation} From \eqref{eq:agreement}, \eqref{eq:double}, and \eqref{eq:angle}, we obtain, \begin{equation} \label{eq:full} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) = 1 - \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{\pi} . \end{equation} If we assume that the instances in $\cD_{(i,j)}$ are i.i.d., then the agreement probability of the two entire $d'$-dimensional label vectors can be computed as the product of the agreement probabilities of each dimension, given by, \begin{eqnarray} \label{eq:prod} p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right) &=& \prod_{k=1}^{d'} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) \nonumber \\ &=& {\left( 1 - \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{\pi} \right)}^{d'} . \end{eqnarray} From \eqref{eq:prod} it follows that, \[ \theta(\vec{\mu}_{i}, \vec{\mu}_{j}) = \pi \left(1 - {p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}^{1/d'} \right) \qed \] Theorem~\ref{th:LSH} states that we can measure the agreement between the labels predicted by two feature predictors using the angle between their corresponding parameter vectors. More importantly, Theorem~\ref{th:LSH} provides us with a heuristic to approximately find the nearest neighbours of each vertex without having to compute the confusion matrices for all pairs of vertices in the ClassiNet. We compute the nearest neighbours for each feature predictor in the $d'$-dimensional space. Computation of ${p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}$ is closely related to the calculation of the Hamming distance between the label vectors $\vec{h}_{i}(\cD_{(i,j)})$ and $\vec{h}_{j}(\cD_{(i,j)})$. The Point Location in Equal Balls (PLEB) algorithm~\cite{Indyk:STOC:98} can be used to compute the Hamming distance in an efficient manner. This algorithm considers random permutations of the bit streams and sorts them to find the vectors with the smallest Hamming distances~\cite{Charikar:STOC:2002}. We use the variant of this algorithm proposed by Ravichandran and Hovy~\cite{Ravichandran:ACL:2005} that extends the original algorithm to find the $k$-nearest neighbours. Specifically, we use this algorithm to find the $k$-nearest neighbours for each feature $v_i$, and compute edge-weights $w_{ij}$ for each $v_i$ and its nearest neighbours $v_j$ using the confusion matrix in Table~\ref{tbl:conf}. Note that although we find the nearest neighbours using the approximate method described above, the edge-weights computed between the selected neighbours are precise because they are based on the confusion matrix. To estimate the size of the neighbourhood $k$ that we must select in order to obtain a reliable approximation of the neighbours that we would have in the original $d$-dimensional space, we use the following procedure. First, we randomly select a small number $\alpha$ $(\ll n)$ of vertices from the trained ClassiNet, and compute the confusion matrices with each of those $\alpha$ vertices and the remainder of the vertices in the ClassiNet. We then compute the weights $w_{ij}$ of the edges that connect the selected $\alpha$ vertices to the rest of the vertices in the ClassiNet. Following this procedure, we compute the nearest neighbours of each of the $\alpha$ selected vertices without using the projection trick described above. Second, we apply the projection method described above for all the vertices in the ClassiNet, and compute the nearest neighbours of the $\alpha$ vertices that we selected. We then compare the overlap between the two sets of neighbourhoods.
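To illustrate Theorem~\ref{th:LSH} numerically, the following sketch (Python with NumPy; all names are hypothetical) recovers an estimate of the angle between two linear predictors from the fraction of agreeing dimensions of their label vectors, which is the empirical estimate of the $d'$-th root of the agreement probability in the theorem:
\begin{verbatim}
# Illustrative numerical check of Theorem 1 (hypothetical names).
import numpy as np

def label_vector(mu, X_eval):
    # d'-dimensional vector of labels predicted by the linear
    # predictor mu for the evaluation instances in X_eval.
    return (X_eval @ mu > 0).astype(int)

def estimated_angle(lv_i, lv_j):
    # The per-dimension agreement rate estimates 1 - theta / pi.
    return np.pi * (1.0 - np.mean(lv_i == lv_j))

rng = np.random.default_rng(0)
X_eval = rng.normal(size=(10000, 50))    # d' = 10000 instances, d = 50
mu_i, mu_j = rng.normal(size=50), rng.normal(size=50)
theta_hat = estimated_angle(label_vector(mu_i, X_eval),
                            label_vector(mu_j, X_eval))
theta = np.arccos((mu_i @ mu_j) /
                  (np.linalg.norm(mu_i) * np.linalg.norm(mu_j)))
print(theta_hat, theta)                  # the two values should be close
\end{verbatim}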
In our preliminary experiments, we found that setting the neighbourhood size to $k = 10$ provides an admissible trade-off between the accuracy and the speed of the neighbourhood computation. Therefore, all experiments described in the paper use edge-weights computed with this $k$ value. \subsection{ClassiNets vs. Co-occurrence Graphs} \label{sec:classi-cooc} Before we describe how to use the trained ClassiNets to classify short-texts, it is worth discussing the connection between word co-occurrence graphs and ClassiNets. Representing the association between words using co-occurrence graphs has a long history in NLP~\cite{Rada:2011}. Word co-occurrences could be measured using symmetric measures, such as the Pointwise Mutual Information (PMI), Log-Likelihood Ratio (LLR), or asymmetric measures such as KL-divergence, or conditional probability~\cite{FSNLP}. In a co-occurrence graph, vertices correspond to words, and the weight of the edge connecting two vertices represents the strength of association between the corresponding two words. However, in a co-occurrence graph, for two words $v_i$ and $v_j$ to be connected by an edge, they must explicitly co-occur within the same context. On the other hand, in ClassiNets, we have edges not only between words that co-occur within the same context, but also between words that are predicted to occur in the same instance, even though none of those features might actually be occurring in that instance. For example, for an instance $\vec{x}$ where $x_i = x_j = 0$, we might still have $h_i(\vec{x}) = h_j(\vec{x}) = 1$. Therefore, ClassiNets consider implicit occurrences of features which would not be captured by co-occurrence graphs. In fact, ClassiNets can be thought of as a generalized version of co-occurrence graphs that subsumes explicit co-occurrences. To see this, let us define feature predictors $h_i$ and $h_j$ as follows: \begin{eqnarray} h_i(\vec{x}) = \vec{1}[x_i \neq 0] \\ h_j(\vec{x}) = \vec{1}[x_j \neq 0] \end{eqnarray} Here, $\vec{1}$ is the indicator function defined as follows: \begin{equation} \label{eq:indicator} \vec{1}(\delta) = \begin{cases} 1 & \delta = \text{TRUE} \\ 0 & \delta = \text{FALSE} \end{cases} \end{equation} Then, $M_{11}$ in Table~\ref{tbl:conf} can be written as, \begin{equation} M_{11} = \sum_{\vec{x} \in \cD_{(i,j)}} \vec{1}[x_i \neq 0] \vec{1}[x_j \neq 0] , \end{equation} which is the number of instances in which both features $v_i$ and $v_j$ co-occur. Therefore, ClassiNet reduces to a co-occurrence graph when the feature predictor is simply the indicator function for a single feature. However, in general, feature predictors consider not just a single feature but a (potentially non-linear) combination of multiple features, thereby capturing broader information than a word co-occurrence graph. \section{Feature Expansion} \label{sec:classinet:expand} In this section, we describe several methods that use the ClassiNets created in Section~\ref{sec:classinets} to predict missing features in instances, thereby overcoming the feature sparseness problem. We refer to this operation as \emph{feature expansion}. Given a train or a test instance $\vec{x} = (x_1, \ldots, x_d)\T$, we use the non-zero features $x_i \neq 0$ in $\vec{x}$ to find related vertices $v_j \in \cV$ in the created ClassiNet.
In Section~\ref{sec:local}, we describe \emph{local feature expansion} methods that consider only the nearest neighbours of the vertices in the ClassiNet that correspond to non-zero features in an instance, whereas in Section~\ref{sec:global} we propose a \emph{global feature expansion} method that propagates the original features across the ClassiNet to predict the related features. \subsection{Local Feature Expansion} \label{sec:local} Given a ClassiNet, we propose several feature expansion methods that consider the local neighbourhood of the non-zero features that occur in an instance. We refer to such methods collectively as \emph{local feature expansion} methods. \subsubsection{Independent Expansion} \label{sec:expand:independent} The first local feature expansion method we propose expands each feature in an instance independently of the others. Specifically, we predict whether $v_i$ occurs in a given instance $\vec{x}$ using the feature predictor $h_i$ we trained from the unlabeled instances. If $h_i(\vec{x}) = 1$, then we append $v_i$ as an expansion feature to $\vec{x}$, otherwise we ignore $v_i$. We repeat this process for all the vertices $v_i \in \cV$ and append the positively predicted vertices to the original instance $\vec{x}$. If the $i$-th feature $x_i$ already appears in $\vec{x}$ and is also predicted by $h_i$, then we set its feature value to $x_i + h_i(\vec{x})$. In the case where we have binary feature representations we will have $x_i \in \{0,1\}$. Therefore, in the binary feature setting, if a feature that already exists in an instance is also predicted, its feature weight is doubled (because $x_i + h_i(\vec{x}) = 1 + 1 = 2$). Moreover, with a probabilistic classifier such as logistic regression, we can use the posterior probability instead of the predicted label as $h_i(\vec{x})$ to compute the feature values for the expansion features. \subsubsection{Local Path Expansion} \label{sec:expand:local} This method extends the independent expansion method described in Section~\ref{sec:expand:independent} by including all the vertices along the shortest paths that connect predicted features to the original features over the ClassiNet. For example, let us assume that a feature $x_i = 0$ in an instance $\vec{x}$. If $h_i(\vec{x}) = 1$, we will append $v_i$ as well as all the vertices along the shortest paths that connect $v_i$ to each feature $x_j \neq 0$ that exists in the instance $\vec{x}$. Because all expanded features are connected to the original non-zero features that exist in the instance via some local path, we refer to this approach as the \emph{local path expansion}. By construction, the set of expansion candidates produced by the local path expansion method subsumes that of the independent expansion method. \subsubsection{All Neighbour Expansion} \label{sec:expand:nn} In this expansion method, first, we use edge-weights to find the $k$-nearest neighbours of each vertex $v_i$, and connect all the neighbours for each vertex to create a $k$-nearest neighbour graph from the trained ClassiNet. The $k$-nearest neighbour graph that we create from the ClassiNet in this manner is a subgraph of the ClassiNet. Two vertices $v_i$ and $v_j$ are connected by an edge in this $k$-nearest neighbour graph if and only if $v_i$ is among the top $k$ most similar vertices to $v_j$, and $v_j$ is among the top $k$ most similar vertices to $v_i$. The weights of all the edges in this $k$-nearest neighbour graph are set to $1$.
Next, for each non-zero feature in an instance $\vec{x}$, we use its nearest neighbours as expansion features. This method ignores the absolute values of the edge-weights in the ClassiNet, and considers only their relative strengths. If we increase the value of $k$, we will have a larger set of candidate expansion features. However, it will also result in considering features that are less relevant to the original features. Therefore, there exists a trade-off between the number of expansion candidates we can use for feature vector expansion, and the relevancy of the expansion features to the original features. Using development data, we constructed $k$-nearest neighbour graphs for varying $k$ values, and found that $k > 4$ settings often result in noisy neighbourhoods. Consequently, when using neighbour expansion, we set $k = 4$. \subsubsection{Mutual Neighbour Expansion} \label{sec:expand:mutual} The mutual neighbour expansion method also uses the same $k$-nearest neighbour graph as used by the all neighbour expansion method described in Section~\ref{sec:expand:nn}. The mutual neighbour expansion method selects a vertex $v_j$ in the ClassiNet as an expansion candidate only if $v_j$ is a nearest neighbour of at least two distinct vertices $v_i$ and $v_k$ for which $x_i \neq 0$ and $x_k \neq 0$ in the instance $\vec{x}$ to be expanded. This method can be seen as a conservative version of the all neighbour expansion method described in Section~\ref{sec:expand:nn} because we would ignore vertices $v_j$ that are nearest neighbours of only a single feature in the original feature vector. The mutual neighbour expansion method addresses an issue associated with the previously described local feature expansion methods, which select expansion candidates separately for each non-zero feature in the feature vector to be expanded, ignoring the fact that the feature vector represents a single coherent short-text. However, this conservative expansion candidate selection strategy of the mutual neighbour expansion method means that we will have a smaller set of expansion candidates in comparison to, for example, the all neighbour expansion method. \subsection{Global Feature Expansion} \label{sec:global} The local feature expansion methods described in Section~\ref{sec:local} consider only the vertices in the ClassiNet that are \emph{directly connected} to a feature in an instance as expansion candidates. Even in the case of local path expansion (Section~\ref{sec:expand:local}), the expansion candidates are limited to the local neighbours of the original features and the predicted features. Considering that ClassiNet is a directed graph, we can perform label propagation on the ClassiNet to find features that are neither directly connected to, nor appearing in the local neighbourhood of, a feature in a short-text, but are still relevant. For example, assume that \emph{Google} and \emph{Microsoft} are not local neighbours in a ClassiNet. Consequently, none of the local neighbour expansion methods will be able to predict \emph{Microsoft} as a relevant feature for expanding a short-text containing \emph{Google}. However, if \emph{Bing}, a Web search engine similar to \emph{Google}, appears in the local neighbourhood of \emph{Google} in the ClassiNet, and if we can propagate from \emph{Bing} to its parent company \emph{Microsoft} via the ClassiNet, then we will be able to predict \emph{Microsoft} as a relevant feature for \emph{Google}. The propagation might be over multiple hops, thereby reaching beyond the local neighbourhood of a feature.
Propagation over the ClassiNet can also help to reduce the ambiguity in feature expansion. For example, consider the sentence ``\emph{Microsoft and Apple are competing for the tablet computer market.}''. If we do not perform word sense disambiguation prior to feature expansion, and we expand each feature independently of the others, then it is likely that we might incorrectly expand \emph{apple} by other types of fruits such as \emph{banana} or \emph{orange}. Such phenomena are observed in prior work on set expansion and are referred to as \emph{semantic drift}~\cite{Kozareva:NAACL:2010}. However, if we find the expansion candidates jointly, such that they are relevant to all the features (words) in the sentence, then they must be relevant to both \emph{Microsoft} as well as \emph{Apple}, which favours other IT companies, such as \emph{Google} or \emph{Yahoo}, as expansion candidates. All local feature expansion methods described in Section~\ref{sec:local} except the independent expansion method address this issue by ranking expansion candidates depending on how well they are related to all the features in a short-text. Label propagation can solve this ambiguity problem in a more systematic manner by converging multiple random walks initiated at the different features that exist in a short-text. Next, we describe a \emph{global feature expansion} method based on propagation over the ClassiNet. \begin{figure}[t] \centering \includegraphics[height=50mm]{global.pdf} \caption{Computing the feature value of an expansion feature $v^*$ for an instance that has $v_1 = x_1$ and $v_2 = x_2$ as non-zero features.} \label{fig:global} \end{figure} First, let us describe the proposed global feature expansion method using the ClassiNet shown in Figure~\ref{fig:global}. Here, we consider expanding an instance $\vec{x} = (x_1, x_2)\T$ with two non-zero features $v_1 = x_1$ and $v_2 = x_2$ ($x_1 \neq 0$, and $x_2 \neq 0$). We would like to compute the likelihood $p(v^*|\vec{x})$ of a vertex $v^*$ as an expansion candidate for the instance $\vec{x}$. From Figure~\ref{fig:global} we see that there are two possible paths reaching $v^*$ starting from the original features $x_1$ and $x_2$. Assuming that the two paths are independent, we compute $p(v^*|\vec{x})$ as follows: \begin{equation} p(v^*|\vec{x}) = p(x_1)p(v_3|x_1)p(v^*|v_3) + p(x_2)p(v_4|x_2)p(v^*|v_4) \label{eq:example} \end{equation} The computation described in Figure~\ref{fig:global} can be generalized for an arbitrary ClassiNet $\cG(\cV, \cE, \mat{W})$, and an instance $\vec{x} = (x_1, \ldots, x_d)\T$. For this purpose, let us define the set of non-cyclic paths connecting two vertices $v_i$, $v_j$ in $\cG$ to be $\Gamma(v_i, v_j)$. For the example shown in Figure~\ref{fig:global} we have the two paths $x_1 \rightarrow v_3 \rightarrow v^*$, and $x_2 \rightarrow v_4 \rightarrow v^*$. We compute the likelihood $p(v^* | \vec{x})$ of a vertex $v^* \in \cV$ being an expansion candidate of $\vec{x}$ as follows: \begin{equation} p(v^*|\vec{x}) = \sum_{k=1}^{d} \left( x_k \, p(x_k) \sum_{\gamma \in \Gamma(v_k, v^*)} \prod_{(a,b) \in \gamma} p(b|a) \right) \label{eq:global} \end{equation} If a feature $x_k = 0$, then the likelihoods corresponding to paths starting from $x_k$ will be ignored in the computation of \eqref{eq:global}. The prior probabilities of features $p(x_k)$ can be estimated from train data by dividing the number of instances that contain $x_k$ by the total number of instances. Alternatively, we could set a uniform prior for $p(x_k)$, thereby considering all the words that occur in an instance equally. We follow the latter approach in our experiments. The sum-product computation over paths can be performed efficiently by observing that it can be modeled as a label propagation problem over a directed weighted graph, where an instance $\vec{x}$ is the initial state vector and the transition probabilities are given by the weight matrix $\mat{W}$. Vertices that can be reached after $q$ hops are given by $\sum_{i=1}^{q}\mat{W}^{i}\vec{x}$. Neighbours that are distantly located in the ClassiNet are less reliable as expansion candidates. To reduce the noise due to distant (and potentially irrelevant) vertices during the propagation, we introduce a damping factor $0 < \gamma \leq 1$ in the summation, $\sum_{i=1}^{q}\gamma^i \mat{W}^{i} \vec{x}$. In Section~\ref{sec:damp}, we experimentally study the effect of the level of damping on short-text classification accuracy.
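To make the propagation concrete, a minimal sketch (in Python with NumPy; the names are illustrative and not part of our implementation) of computing the damped sum $\sum_{i=1}^{q}\gamma^i \mat{W}^{i} \vec{x}$ by repeated multiplication with the weight matrix is given below:
\begin{verbatim}
# Sketch of global feature expansion as damped label propagation
# (illustrative; W is the edge-weight matrix of the ClassiNet and
# x is the feature vector to be expanded).
import numpy as np

def global_expand(W, x, gamma=0.5, q=3):
    expanded = np.zeros(len(x))
    hop = np.asarray(x, dtype=float)
    for _ in range(q):
        hop = gamma * (W @ hop)   # one damped propagation step
        expanded += hop           # accumulates sum_i gamma^i W^i x
    return expanded
\end{verbatim}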
The feature expansion methods we described above are used to predict missing features for both train and test instances. We expand the feature vectors representing the train/test instances, and assign unique identifiers to the expansion features, thereby distinguishing between the original features and the expanded features. For example, given the positive sentiment labeled train sentence ``\emph{I love dogs}'', we can represent it using the feature vector, [(\emph{I}, 1), (\emph{love}, 1), (\emph{dog}, 1)]. Here, we assume that lemmatization has been conducted on the input and the feature \emph{dogs} has been converted to its singular form \emph{dog}. Let us further assume that from the trained ClassiNet we were able to predict that \emph{cat} is a related feature for \emph{dog}, and the candidate score $p(cat|dog) = 0.8$. Next, we add the feature (\emph{EXP=cat}, 0.8) to the feature vector representing this train instance, where the prefix \emph{EXP=} indicates that it is a feature introduced by the expansion method and not a feature that existed in the original train instance. Distinguishing original vs. expansion features is useful when we would like to learn different weights for the same feature depending on whether it is expanded or not. For example, if a particular feature is not very useful as an expansion feature, it will be assigned a lower weight, thereby effectively pruning that feature out from the model learnt by the classifier. The first step of learning a ClassiNet is learning the feature predictors. In this regard, any method that can predict the occurrence of a word in a given context, such as a word embedding learning method, can be used for the purpose of learning feature predictors. Once the feature predictors are learnt, we can create a ClassiNet in the same manner as we propose in this paper and use the ClassiNet created to perform feature expansion using the local/global feature expansion methods we propose in the paper. This view of ClassiNets illustrates the general applicability of the proposed method. \section{A Theoretical Analysis of ClassiNets} \label{sec:theory} Before we empirically evaluate the performance of the proposed ClassiNets for feature expansion in short-text classification, let us analyze some interesting properties of ClassiNets. To simplify the analysis, let us assume that we are using a ClassiNet for learning a linear classifier $\vec{\phi} \in \R^{d}$ for a binary classification task.
Specifically, let us assume that we are given a train dataset $\{(\vec{x}^{(k)}, y^{(k)})\}_{k=1}^{N}$ consisting of $N$ instances, where each train instance $k$ is represented by a feature vector $\vec{x}^{(k)} \in \R^{d}$. The binary target label assigned to the $k$-th train instance is denoted by $y^{(k)} \in \{1, -1\}$. For correctly classified train instances $\vec{x}^{(k)}$ we have $y^{(k)}\vec{\phi}\T\vec{x}^{(k)} > 0$. We use the trained linear classifier $\vec{\phi}$, and predict the label $\hat{y}$ of an unseen test instance $\hat{\vec{x}}$ as follows: \begin{eqnarray} \label{eq:pred} \hat{y} = \begin{cases} 1 & \text{if } \vec{\phi}\T\hat{\vec{x}} > 0 \\ -1 & \text{otherwise} \end{cases} \end{eqnarray} Let us assume that we have learnt a feature predictor $h_{i}$ that predicts whether the $i$-th feature exists in a given instance. As described in Section~\ref{sec:overview}, we can use any classification algorithm to learn the feature predictors. However, as a concrete case, let us consider linear classifiers in this analysis. In the case of linear classifiers, we can represent the feature predictor learnt for the $i$-th feature by the vector $\vec{\mu}_{i}$. Following the notation introduced in Section~\ref{sec:overview}, we can write the feature predictor $h_{i}$ as follows: \begin{equation} h_{i} (\vec{x}) = \begin{cases} 1 & \text{if } \vec{\mu}_{i}\T\vec{x} > 0 \\ -1 & \text{otherwise} \end{cases} \end{equation} In the ClassiNets described in the paper so far, we used the predicted discrete labels as the values of the predicted features during feature expansion. However, in this analysis let us consider the more general case where we use the actual prediction score, $\vec{\mu}_{i}\T\vec{x}$, as the contribution of the feature expansion towards the $i$-th feature. We can construct the expanded feature vector, $\vec{x}^{*} \in \R^{d}$, of the feature vector $\vec{x} \in \R^{d}$ considering the inner-product between $\vec{x}$ and each of the feature predictors $\vec{\mu}_{i}$ as in \eqref{eq:expand}. \begin{equation} \label{eq:expand} \vec{x}^{*} = [ (x_{1} + \vec{\mu}_{1}\T\vec{x}), \ldots, (x_{i} + \vec{\mu}_{i}\T\vec{x}), \ldots, (x_{d} + \vec{\mu}_{d}\T\vec{x})]\T \end{equation} Here, we denote the $i$-th dimension of the feature vector $\vec{x}$ by $x_{i}$. We can transform the given train dataset $\{(\vec{x}^{(k)}, y^{(k)})\}_{k=1}^{N}$ by expanding each feature vector separately using \eqref{eq:expand}, and use the expanded feature vectors to train a binary linear classifier $\vec{\phi}^{*}$. Following \eqref{eq:pred}, we can use $\vec{\phi}^{*}$ to predict the label for a test instance $\vec{x}^{*}$ based on the prediction score given by \begin{eqnarray} \vec{\phi}^{*}\T\vec{x}^{*} &=& \sum_{i=1}^{d} \phi_{i}^{*} \left( x_{i} + \vec{\mu}_{i}\T\vec{x} \right) \nonumber \\ &=& \sum_{i=1}^{d} \phi_{i}^{*} x_{i} + \sum_{i=1}^{d} \phi_{i}^{*} \vec{\mu}_{i}\T\vec{x} \nonumber \\ &=& \vec{\phi}^{*}\T \vec{x} + \vec{\phi}^{*}\T \mat{L} \vec{x} \label{eq:exp2} \\ &=& \vec{\phi}^{*}\T \left(\mat{I} + \mat{L} \right) \vec{x} \label{eq:exp3} \end{eqnarray} Here, $\mat{I} \in \R^{d \times d}$ is the identity matrix, and $\mat{L} \in \R^{d \times d}$ is the matrix formed by arranging the feature predictors $\vec{\mu}_{i}$ in rows. In other words, $\mat{L} = [\vec{\mu}_{1} \ldots \vec{\mu}_{d}]\T$. The first term in \eqref{eq:exp2} corresponds to classifying the non-expanded (original) instance $\vec{x}$ using the classifier trained using the expanded train dataset.
The second term in \eqref{eq:exp2} represents the prediction score due to feature expansion. From \eqref{eq:exp3} we see that performing feature expansion on a feature vector $\vec{x}$ is equivalent to multiplying $\vec{x}$ by the matrix $\left(\mat{I} + \mat{L} \right)$. Therefore, the local feature expansion methods described in Section~\ref{sec:local} can be seen as projecting the train feature vectors into the same $d$-dimensional feature space spanned by the features that exist in the train instances. As a special case, we see that when we do not learn feature predictors we have $\mat{L} = \mat{0}$, for which \eqref{eq:exp2} reduces to the prediction score $\vec{\phi}^{*}\T\vec{x}$ of the binary linear classifier trained using non-expanded train instances. \subsection{Edge weights of ClassiNets} Recall that $w_{ij}$, the weight of the edge connecting vertex $v_i$ to vertex $v_j$ in a ClassiNet, was defined by \eqref{eq:weight}. For the binary linear feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ considered above, let us estimate the value of $w_{ij}$. Using the indicator function $\vec{1}$ defined by \eqref{eq:indicator}, we compute $M_{11}$ and $(M_{11} + M_{10})$ in \eqref{eq:weight} as follows: {\small \begin{eqnarray} && M_{11} = \sum_{k=1}^{N} \vec{1}[(y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{i}\mkern-5mu>\mkern-5mu0) \land (y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{j} \mkern-5mu > \mkern-5mu 0)] \label{eq:M11} \\ && M_{11} + M_{10} = \sum_{k=1}^{N} \vec{1}[(y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{i} > 0)] \label{eq:M*} \end{eqnarray} } Let us assume that we sample instances $\vec{x}$ from the train dataset randomly according to the distribution $p(\vec{x})$. Then the expected values $\hat{M}_{11}$ and $\hat{M}_{11} + \hat{M}_{10}$ of the counts in \eqref{eq:M11} and \eqref{eq:M*} can be expressed using the expected numbers of correct classifications made by the feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ as follows: {\small \begin{eqnarray} && \hat{M}_{11} = \Ep_{p(\vec{x})}\left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0) \land (y\vec{x}\T\vec{\mu}_{j} > 0)] \right] \label{eq:M11:hat} \\ && \hat{M}_{11} + \hat{M}_{10} = \Ep_{p(\vec{x})} \left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0)] \right] \label{eq:M*:hat} \end{eqnarray} } Using the expected counts given by \eqref{eq:M11:hat} and \eqref{eq:M*:hat} we can compute the approximate value of the edge weight $\hat{w}_{ij}$ as follows: \begin{equation} \label{eq:weight:approx} \hat{w}_{ij} = \frac{\Ep_{p(\vec{x})}\left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0) \land (y\vec{x}\T\vec{\mu}_{j} > 0)] \right]} { \Ep_{p(\vec{x})} \left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0)] \right]} \end{equation} If we have a sufficiently large train dataset, then \eqref{eq:weight:approx} provides an alternative procedure for estimating the edge weights. We could randomly select samples from the train dataset, predict the features $i$ and $j$ for those samples, and compute the expectations as count ratios. We can repeat this procedure many times to obtain better approximations of the edge weights. Although this is a theoretically feasible procedure for approximately computing the edge weights, it can be slow in practice and might require many samples before we obtain a reliable approximation of the edge weights. Therefore, the edge weight computation method described in Section~\ref{sec:project} is more appropriate for practical purposes.
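For completeness, a minimal sketch of this sampling-based approximation (in Python with NumPy; the names are illustrative) is given below:
\begin{verbatim}
# Sketch of the Monte Carlo estimate of the edge weight in
# Eq. (eq:weight:approx). Assumes instances X of shape (N, d),
# labels y in {-1, +1}, and linear predictors mu_i, mu_j.
import numpy as np

def approx_edge_weight(mu_i, mu_j, X, y, n_samples=10000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, X.shape[0], size=n_samples)  # draws from p(x)
    s_i = y[idx] * (X[idx] @ mu_i) > 0   # indicator for predictor mu_i
    s_j = y[idx] * (X[idx] @ mu_j) > 0   # indicator for predictor mu_j
    denom = np.mean(s_i)
    return np.mean(s_i & s_j) / denom if denom > 0 else 0.0
\end{verbatim}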
\subsection{Analysis of the Global Feature Expansion Method} We already showed in \eqref{eq:exp3} that local feature expansion methods can be considered as feature vector transformation methods given by the matrix $(\mat{I} + \mat{L})$. However, an important strength of ClassiNet is that we can propagate the predicted features over the network using the global feature expansion method described in Section~\ref{sec:global}. Let us denote the edge-weight matrix of the ClassiNet $\cG$ by $\mat{W}$. The $(i,j)$-th element of $\mat{W}$ is denoted by $w_{ij}$. The connection between the edge weights $w_{ij}$ and the feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ is given by \eqref{eq:weight:approx}. In the global feature expansion method, we repeatedly propagate the predicted features across the network, which can be seen as a repeated multiplication using $\gamma \mat{W}$, where $\gamma$ is the damping factor described in Section~\ref{sec:global}. Observing this connection, we can derive the prediction score under the global feature expansion method similar to \eqref{eq:exp3} as follows: \begin{eqnarray} \vec{\phi}^{*}\T\vec{x}^{*}&=& \vec{\phi}^{*}\T \left(\mat{I} + \gamma\mat{W} + \ldots + \gamma^{q} \mat{W}^{q} \right) \vec{x} \nonumber \\ &=& \vec{\phi}^{*}\T (\mat{I} - \gamma \mat{W})\inv (\mat{I} - \gamma^{q+1} \mat{W}^{q+1}) \vec{x} \label{eq:exp4} \end{eqnarray} For the summation shown in \eqref{eq:exp4} to hold, and the matrix $(\mat{I} - \gamma \mat{W})$ to be invertible, we require $\gamma |\lambda_{r}| < 1$ for all eigenvalues $\lambda_{r}$ of $\mat{W}$. This requirement can be met in practice by a sufficiently small damping factor. For example, we could set $\gamma = 1/(1 + |\lambda_{\max}|)$, where $\lambda_{\max}$ is the eigenvalue of $\mat{W}$ with the maximum absolute value. As a special case where we propagate the features without truncating, we have $q \rightarrow \infty$, for which we obtain the prediction score given in \eqref{eq:inf}. \begin{equation} \label{eq:inf} \vec{\phi}^{*}\T\vec{x}^{*} = \vec{\phi}^{*}\T (\mat{I} - \gamma \mat{W})\inv \vec{x} \end{equation} From \eqref{eq:inf}, we see that, similar to the local feature expansion methods, the global feature expansion method can also be seen as projecting the input feature vector $\vec{x}$ using the matrix $(\mat{I} - \gamma \mat{W})\inv$. \section{Experiments} \label{sec:exp} We create a ClassiNet using 257,306 unlabeled sentences from the Large Movie Review dataset\footnote{\url{http://ai.stanford.edu/~amaas/data/sentiment/}}. Each word in this dataset is uniquely represented by a vertex in the ClassiNet. We learn a linear predictor for each feature using automatically selected positive (reviews where the target feature appears) and negative (reviews where the target feature does not appear) training instances. The ClassiNet created from this dataset contains $489,000$ vertices. This ClassiNet is used in all the experiments described in the remainder of this paper.
For evaluation purposes we use four binary classification datasets: the Stanford sentiment treebank (\textbf{TR})\footnote{\url{http://nlp.stanford.edu/sentiment/treebank.html}} (903 positive and 908 negative test instances), the movie reviews dataset (\textbf{MR})~\cite{Pang:ACL:2005} (5331 positive instances and 5331 negative instances), the customer reviews dataset (\textbf{CR})~\cite{Hu:KDD:2004} (925 positive instances and 569 negative instances), and the subjectivity dataset (\textbf{SUBJ})~\cite{Pang+Lee:04a} (5000 positive instances and 5000 negative instances). We perform five-fold cross-validation in all datasets, except in the Stanford sentiment treebank where there exists a pre-defined train and test split. In each dataset, we use the train portion to learn a binary classifier. Next, we use the trained ClassiNet to expand the feature vectors for the test instances. We then measure the classification accuracy of the binary classifier on the expanded test instances. If high classification accuracies are obtained using a particular feature expansion method, then that feature expansion method is considered superior. We use a CPU server containing 48 cores of 2.5GHz Intel Xeon CPU and 512GB RAM in our experiments. The entire training pipeline of training feature predictors, building the ClassiNet, and expanding training instances using the global feature expansion method takes approximately 1.5 hours. The testing phase is significantly faster because we can use the created ClassiNet to expand test instances and use the trained model to make predictions. For example, for the \textbf{SUBJ} dataset, which is the largest among all datasets used in our experiments, it takes only 5 minutes to both expand (using global feature expansion) and predict (using logistic regression). \subsection{Binary Classification of Short-Texts} \label{sec:sentiment} Direct evaluation of the features predicted by the ClassiNet is difficult because there is no gold standard for feature expansion. Instead, we perform an extrinsic evaluation of the created ClassiNet by using it to expand feature vectors representing sentences in several binary text classification tasks. If we can observe any increase (or decrease) in classification accuracy for the target classification task when we use the features predicted by the ClassiNet, then it can be directly associated with the effectiveness of the ClassiNet. For the purpose of training a binary classifier, we represent a sentence by a real-valued vector, in which elements correspond to the unigrams extracted from that sentence. The feature values are computed using the tfidf measure. We train a binary logistic regression model, where the $L_{2}$ regularisation coefficient is tuned using development data selected from the Stanford sentiment treebank dataset. We use classification accuracy, which is defined as the ratio between the number of correctly classified test sentences and the total number of test sentences in the Stanford sentiment treebank. In addition to reporting the overall classification accuracies, we report classification accuracies separately for the positively labeled and the negatively labeled sentences. Because this is a binary classification task, a random classifier would obtain an accuracy of $50\%$. There are $903$ positive and $908$ negative sentiment labeled test sentences in the Stanford sentiment treebank test dataset. Therefore, a baseline that assigns the majority label would obtain an accuracy of $50.13\%$ on this dataset.
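For reference, this basic classifier training and evaluation setup (before feature expansion) can be sketched as follows (Python, assuming scikit-learn; the data shown are placeholders):
\begin{verbatim}
# Sketch of the baseline classifier: unigram tf-idf features and an
# l2-regularised logistic regression classifier (placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train_sentences = ["I love this movie", "I hate this movie"]
train_labels = [1, 0]
test_sentences = ["love it", "hate it"]
test_labels = [1, 0]

vec = TfidfVectorizer(ngram_range=(1, 1))      # unigram tf-idf features
X_train = vec.fit_transform(train_sentences)
X_test = vec.transform(test_sentences)
clf = LogisticRegression(penalty='l2', C=1.0)  # C tuned on dev data
clf.fit(X_train, train_labels)
print(accuracy_score(test_labels, clf.predict(X_test)))
\end{verbatim}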
Table~\ref{tbl:sentiment} compares the sentiment classification accuracies obtained by the following methods: \textbf{No Expansion:} This baseline does not perform any feature expansion. It trains a binary logistic regression classifier using the train sentences, and applies it to classify the sentiment of the test sentences. This baseline demonstrates the level of performance we would obtain if we had not performed any feature expansion. It can be seen as a lower-baseline for this task. \textbf{Independent Expansion:} This method is described in Section~\ref{sec:expand:independent}. \textbf{Local Path Expansion:} This method is described in Section~\ref{sec:expand:local}. \textbf{All neighbour Expansion:} This method is described in Section~\ref{sec:expand:nn}. \textbf{Mutual neighbour Expansion:} This method is described in Section~\ref{sec:expand:mutual}. \textbf{WordNet:} Using lexical resources such as thesauri to find related words is a popular technique used in query expansion~\cite{Fang:ACL:2008,Gong:2005}. To simulate the performance that we would obtain if we had used an external resource such as the WordNet to find the expansion candidates, we implement the following baseline. In the WordNet, words that are semantically related are grouped into clusters called \emph{synsets}. For each feature in a test instance, we search the WordNet for that feature, and use all words listed in the synsets for that feature as its expansion candidates. We consider all synonyms in a synset to be equally relevant as expansion candidates of a feature. \textbf{SCL:} Domain adaptation methods attempt to overcome the feature mismatch between source and target domains by predicting missing features and/or learning a lower-dimensional embedding common to the two domains. Although we do not have two domains in our setting, we can still apply domain adaptation methods such as the structural correspondence learning (SCL) method proposed by Blitzer et al.~\cite{Blitzer:EMNLP:2006} to predict missing features in a given short-text. SCL was described in detail in Section~\ref{sec:related}. Specifically, we train SCL using the same set of vertices as used by the ClassiNet as pivots. This enables us to conduct a fair comparison because any performance difference between SCL and the methods that use ClassiNet can be directly attributed to the projection method used in SCL, and not to differences in the expansion set. We then train linear predictors for those pivots using logistic regression. We arrange the trained linear predictors as rows in a matrix, on which we subsequently perform singular value decomposition to obtain a lower-dimensional projection. Following the recommendations in \cite{Blitzer:EMNLP:2006}, we set the dimensionality of the projection to $50$. Both train and test instances are first projected to this lower-dimensional space and we append the projected features to the original feature vectors. Next, we train a binary sentiment classifier using logistic regression with $\ell_{2}$ regularisation. The regularisation coefficient is set using a held-out set of review sentences. \textbf{FTS:} FTS is the frequent term sets method proposed by Man~\cite{Man:2014}. First, co-occurrence and class-orientation relations are defined among features (terms). Next, terms that occur in those relations more frequently than a pre-defined threshold (support) are selected as expansion candidates.
Finally, for each feature in a short-text, the frequent term sets containing that feature are appended as expansion features to the original feature vector representing the short-text. FTS can be considered as a method that uses clusters of features induced from the data instances to overcome the feature sparseness problem. \textbf{CBOW:} To compare the explicit feature expansion approach used by ClassiNets against implicit text representation methods, we use pre-trained word embeddings to represent a short-text in a lower-dimensional space. Specifically, we create $300$-dimensional continuous bag-of-words (CBOW)~\cite{Milkov:2013} word embeddings using the same corpus as used by ClassiNets, and add the word embedding vectors of all the words in a short-text to create a $300$-dimensional vector that represents the given short-text. \textbf{Global Feature Expansion:} This method propagates the original features across the trained ClassiNet, and is described in Section~\ref{sec:global}. It is the main method proposed in this paper. \begin{table}[t] \caption{Binary classification accuracies.} \begin{center} \begin{tabular}{l c c c c} \toprule Method & \textbf{TR} & \textbf{MR} & \textbf{CR} & \textbf{SUBJ} \\ \midrule No Expansion & $76.31$ & $73.35$ & $81.54$ & $88.95$ \\ Independent Expansion & $75.32$ & $74.11$ & $78.19$ & $87.15$ \\ Local Path Expansion & $76.97$ & $73.73$ & $81.87$ & $88.05$ \\ All neighbour Expansion & $77.36$ & $72.93$ & $82.55$ & $88.75$ \\ Mutual neighbour Expansion & $77.13$ & $74.15$ & $80.87$ & $88.95$ \\ WordNet & $76.58$ & $66.09$ & $79.86$ & $77.95$ \\ SCL~\cite{Blitzer:EMNLP:2006} & $78.02$ & $74.44$ & $81.20$ & $89.25$ \\ FTS~\cite{Man:2014} & $76.47$ & $66.83$ & $62.41$ & $50.15$ \\ CBOW & $77.52$ & $73.31$ & $79.87$ & $88.88$ \\ Global Feature Expansion & $\mathbf{78.30}$ & $\mathbf{81.20}^{*}$ & $\mathbf{83.89}^{*}$ & $\mathbf{89.70}$ \\ \bottomrule \end{tabular} \end{center} \label{tbl:sentiment} \end{table} We summarise the classification accuracies obtained by the different approaches discussed above on the four test datasets in Table~\ref{tbl:sentiment}. For each dataset we indicate the best performing method using boldface font, whereas an asterisk indicates that the best performance reported is statistically significantly better than that of the second best method on the same dataset according to a two-tailed paired t-test at the $0.01$ significance level. From Table~\ref{tbl:sentiment}, we see that the proposed \textbf{Global Feature Expansion} method obtains the best performance in all four datasets. Moreover, in the \textbf{MR} and \textbf{CR} datasets its performance is significantly better than that of the second best methods on those two datasets (respectively, \textbf{SCL} and \textbf{All neighbour Expansion}). Among the four local expansion methods, \textbf{All neighbour Expansion} reports the best performance in the \textbf{TR} and \textbf{CR} datasets, whereas \textbf{Mutual neighbour Expansion} reports the best performance in the \textbf{MR} and \textbf{SUBJ} datasets. The \textbf{Independent Expansion} method performs worse than the \textbf{No Expansion} baseline in the \textbf{TR}, \textbf{CR}, and \textbf{SUBJ} datasets, indicating that by individually expanding each feature in a short-text we introduce a significant level of noise into the short-text. This result shows the importance for a feature expansion method to consider all the features in an instance when adding related features to an instance.
None of the local feature expansion methods are able to outperform the global feature expansion method in any of the datasets. In particular, in the \textbf{SUBJ} dataset we see that none of the local feature expansion methods outperform the \textbf{No Expansion} baseline. This result implies that it is not sufficient to simply create a ClassiNet, but it is also important to use an appropriate feature expansion method on the built ClassiNet to find expansion features to overcome the feature sparseness problem in short-text classification. The \textbf{FTS} method performs poorly in all our experiments. This indicates that the frequency of a feature is not a good indicator of its effectiveness as an expansion candidate. On the other hand, the \textbf{WordNet} method that uses synsets as expansion candidates performs much better than the \textbf{FTS} method. Not surprisingly, this result shows that synonyms are useful as expansion candidates. However, a prerequisite of this approach is the availability of a thesaurus that is either manually or semi-automatically created. Such linguistic resources might be unavailable or incomplete for some languages. On the other hand, our proposed method does not require such linguistic resources. The \textbf{CBOW} and \textbf{SCL} methods perform competitively with the \textbf{Global Feature Expansion} method in all datasets. Given that both \textbf{CBOW} and \textbf{SCL} are using word-level embeddings to compute a representation for a short text, this result shows the effectiveness of word-level embeddings as a method to overcome feature sparseness in short-text classification tasks. We compare non-compositional sentence-level embedding methods against the proposed \textbf{Global Feature Expansion} method later in Section~\ref{sec:sentemb}. \subsection{Comparisons against sentence-level embeddings} \label{sec:sentemb} An alternative direction for representing short-texts is to project the entire text directly to a lower-dimensional space, without applying any compositional operators to word-level embeddings. The expectation is that the overlap between short-texts in the projected space will be higher than in the original space, such as a bag-of-words representation of a short-text. Skip-thought vectors~\cite{Kiros:2015}, FastSent~\cite{Hill:NAACL:2016}, and Paragraph2Vec~\cite{Le:ICML:2014} are popular sentence-level embedding methods that have reported state-of-the-art performance on text classification tasks. In contrast to our proposed method, which explicitly appends features to the original feature vectors to overcome the feature sparseness problem, sentence-level embedding methods can be seen as implicit feature representation methods. In Table~\ref{tbl:sentemb}, we compare the proposed method against the state-of-the-art sentence-level embedding methods. We use the published results in \cite{Kiros:2015} on the \textbf{MR}, \textbf{CR}, and \textbf{SUBJ} datasets for Skip-thought, FastSent, and Paragraph2Vec, without re-training those methods. All three methods are trained on the Toronto books corpus~\cite{moviebook}. Performance of these methods on the \textbf{TR} dataset was not available. As a multiclass classification setting, we use the \textbf{TREC} question-type classification dataset. In this dataset, each question is manually classified into 6 question types depending on the information asked in the question, such as abbreviation, entity, description, human, location, and numeric.
We use the same ClassiNet as in the binary classification tasks to predict features for the 5500 train and 500 test questions. A multiclass logistic regression classifier is trained on the train feature vectors with the missing features predicted, and tested on the feature vectors for the test questions with the missing features predicted. Next, we briefly describe the methods compared in Table~\ref{tbl:sentemb}. \textbf{Skip-thought}~\cite{Kiros:2015} is a sequence-to-sequence model that encodes sentences using a Recurrent Neural Network (RNN) with Gated Recurrent Units (GRUs)~\cite{Cho:SSST:2014}. \textbf{FastSent}~\cite{Hill:NAACL:2016} is similar to \textbf{Skip-thought} in that both models predict the words in the next and previous sentences given the current sentence. However, unlike \textbf{Skip-thought}, which considers the word order in a sentence, \textbf{FastSent} models a sentence as a bag-of-words. \textbf{Paragraph2Vec}~\cite{Le:ICML:2014} learns a vector for every short-text (e.g., a sentence) in a corpus jointly with word embeddings for every word in that corpus such that the word embeddings are shared across all short-texts in the corpus. Sequential Denoising Autoencoder (\textbf{SDAE})~\cite{Hill:NAACL:2016} is an encoder-decoder model with a Long Short-Term Memory (LSTM)~\cite{Hochreiter:1997} unit. We use the \textbf{SDAE} version that uses pre-trained CBOW embeddings to initialise the word embeddings because of its superior performance over the \textbf{SDAE} version that uses randomly initialised word embeddings. We use Convolutional Neural Networks (\textbf{CNN}) for creating sentence-level embeddings as a baseline. For this purpose, we follow the model architecture proposed by \citet{kim:2014:EMNLP2014}. Specifically, each word $v_{i}$ in a sentence is represented by a $d$-dimensional word embedding $\vec{v}_{i} \in \R^{d}$, and the word embeddings are concatenated to create a fixed-length sentence embedding. The maximum length $n$ of a sentence is used to determine the length of this initial sentence-level embedding, where sentences with fewer words than this maximum length are padded using null vectors. Next, a convolution operator defined by a filter $\vec{w} \in \R^{hd}$ is applied on windows of $h$ consecutive tokens in sentences to produce new feature vectors for the sentences. We use several convolutional filters by varying the window size. Next, max-over-time pooling~\cite{Collobert:2011} is applied on this feature map to select the maximum value corresponding to a particular feature. This operation produces a sentence-level embedding that is independent of the length of the sentence (the convolution and pooling steps are sketched below). Finally, a fully connected layer with dropout~\cite{Srivastava:2014} and a softmax output unit is applied on top of this sentence representation to predict the class label of a sentence. Pre-trained CBOW embeddings are used in the CNN-based sentence encoder as well. From Table~\ref{tbl:sentemb} we see that the proposed \textbf{Global Feature Expansion} method obtains the best classification accuracies on the \textbf{MR} and \textbf{CR} datasets with statistically significant improvements over the corresponding second-best methods, whereas \textbf{Skip-thought} reports the best results on the \textbf{SUBJ} and \textbf{TREC} datasets. However, unlike \textbf{Skip-thought}, which is trained for two weeks on a GPU cluster, ClassiNets can be trained in less than 6 hours end-to-end on a single core CPU.
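To make the convolution and max-over-time pooling steps concrete, the following minimal numpy sketch shows how a single filter of window size $h$ yields one length-independent feature. This is illustrative only: the actual baseline follows \citet{kim:2014:EMNLP2014} with multiple filter widths, a dropout layer, and a softmax output.
\begin{verbatim}
import numpy as np

def conv_max_pool(sent_emb, w, h):
    """sent_emb: (n, d) matrix of word embeddings for an n-word sentence.
    w: (h * d,) convolution filter over windows of h consecutive words.
    Returns one scalar feature via max-over-time pooling."""
    n, d = sent_emb.shape
    # Convolution: one activation per window of h consecutive words.
    activations = [np.tanh(w @ sent_emb[i:i + h].reshape(-1))
                   for i in range(n - h + 1)]
    # Max-over-time pooling makes the result independent of sentence length.
    return max(activations)

sent = np.random.randn(7, 300)   # a 7-word sentence, d = 300
filt = np.random.randn(3 * 300)  # a filter with window size h = 3
feature = conv_max_pool(sent, filt, h=3)
\end{verbatim}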
The computational efficiency of ClassiNets is particularly attractive when continuously classifying large amounts of short-texts, such as sentiment classification of tweets arriving as a continuous data stream. \begin{table}[t] \caption{Comparison against sentence-level embedding methods.} \begin{center} \begin{tabular}{l c c c c} \toprule Method & \textbf{MR} & \textbf{CR} & \textbf{SUBJ} & \textbf{TREC}\\ \midrule Skip-thought & $76.5$ & $80.1$ & $\mathbf{93.6}^{*}$ & $92.2$ \\ Paragraph2Vec & $74.8$ & $78.1$ & $90.5$ & $59.4$ \\ FastSent & $70.8$ & $78.4$ & $88.7$ & $76.8$ \\ SDAE & $74.6$ & $78.0$ & $90.8$ & $77.6$ \\ CNN & $76.1$ & $79.8$ & $89.6$ & $83.4$\\ Global Feature Expansion & $\mathbf{81.2}^{*}$ & $\mathbf{83.89}^{*}$ & $89.7$ & $88.3$ \\ \bottomrule \end{tabular} \label{tbl:sentemb} \end{center} \end{table} \subsection{Qualitative evaluation} \label{sec:quality} \begin{table*}[t] \caption{Example short-reviews and the features predicted by ClassiNet. The correct label (+/-) is shown within brackets. All these instances were misclassified when classified using the original features. However, when we use the features predicted by the ClassiNet, all those instances are correctly classified.} \begin{center} \begin{tabular}{|p{7cm}|p{7cm}|} \hline Review & Predicted features \\ \hline \hline On its own cinematic terms, it successfully showcases the passions of both the director and novelist Byatt. (+) & \emph{writer, played, excellent, thriller, story, writing, subject, script, animation, films, role, storyline, experience, episode, cinematography.} \\ \hline What Jackson has accomplished here is amazing on a technical level. (+) & \emph{beautiful, perfect, fantastic, good, brilliant, great, wonderful, excellent, fine, strong.} \\ \hline This is art playing homage to art. (+) & \emph{cinema, modern, theme, theater, reality, style, experience, British, drama, documentary, history, period, acting, cinematography.} \\ \hline About as satisfying and predictable as the fare at your local drive through. (-) & \emph{terrible, ridiculous, annoying, least, horrible, poor, slow, awful, dull, scary, boring, stupid, bad, silly.} \\ \hline \end{tabular} \end{center} \label{tbl:example} \end{table*}% In Table~\ref{tbl:example}, we show the expansion candidates predicted by the proposed \textbf{Global Feature Expansion} method for some randomly selected short-reviews. The gold standard sentiment labels associated with each short review in the test dataset are shown within brackets. All the reviews shown in Table~\ref{tbl:example} are misclassified when we use only the features in the original review. However, by appending the expansion features found from the ClassiNet, we can correctly predict the sentiment for those short reviews. From Table~\ref{tbl:example}, we see that many semantically related features are found by the proposed method. \begin{figure}[t] \centering \includegraphics[height=6cm]{my.pdf} \caption{Portion of the created ClassiNet from movie reviews. Vertices denote features and the edge-weights are shown on arrows.} \label{fig:classinet} \end{figure} Figure~\ref{fig:classinet} shows an extract from the ClassiNet we create from the Large Movie Review dataset. To avoid cluttering the edges, we show only the edges for a sparse $k=4$ mutual neighbour graph created from the original densely connected ClassiNet. First, for each vertex $v_i$ in the ClassiNet we compute its top $k$ similar vertices according to the edge weights.
Next, we connect a vertex $v_i$ to a vertex $v_j$ in the $k$-mutual neighbour graph if $v_j$ is among the top $k$ similar vertices of $v_i$, and $v_i$ is among the top $k$ similar vertices of $v_j$. We see that synonyms such as \emph{awful} and \emph{horrible} are connected by highly weighted edges in Figure~\ref{fig:classinet}. It is interesting to see that antonyms such as \emph{good} and \emph{bad} are also among the mutual nearest neighbours because those terms frequently occur in similar contexts (e.g., \emph{good movie} vs. \emph{bad movie}). Moreover, Figure~\ref{fig:classinet} shows the importance of propagating over the ClassiNet, instead of simply considering the directly connected vertices as the expansion candidates. For example, although they are highly related features, there is no direct connection from \emph{horrible} to \emph{boring} in the ClassiNet. However, if we consider two-hop connections then we can find a path through \emph{awful}. \subsection{Effect of the Damping Factor} \label{sec:damp} To empirically study the effect of the damping factor on the classification accuracy of short-texts under the \textbf{Global Feature Expansion} method, we randomly select $1000$ positive and $1000$ negative sentiment labeled sentences from the Large Movie Review dataset as validation data, and evaluate the sentiment classification accuracy of the \textbf{Global Feature Expansion} method with different $\gamma$ values. The result is shown in Figure~\ref{fig:damp}. Note that smaller $\gamma$ values reduce the propagation compared to larger $\gamma$ values, restricting the expansion candidates to a smaller local neighbourhood surrounding the original features. From Figure~\ref{fig:damp} we see that initially when increasing $\gamma$ the classification accuracy increases and reaches a peak at $\gamma = 0.85$. This shows that it is indeed important to find expansion neighbours by propagating over the ClassiNet as done by the global feature expansion method. However, setting $\gamma > 0.85$ results in a drop of classification accuracy, which is due to distant and potentially irrelevant expansion candidates. Interestingly, $\gamma = 0.85$ has been found to be the optimal value for different graph-based propagation tasks such as the PageRank~\cite{PageRank}. \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{damp.pdf} \caption{The effect of the damping factor on the classification accuracy.} \label{fig:damp} \end{center} \end{figure} \subsection{Number of Expansion Features} \label{sec:featcount} In this section we analyse the number of features appended to train/test instances by the different feature expansion methods using a fixed ClassiNet. Recall that none of the feature expansion methods we proposed has any predefined number of expansion features. In contrast, the number of expansion features depends on several factors: (a) the number of features in the original (prior to expansion) feature vector, (b) the size and the connectivity of the ClassiNet, and (c) the feature expansion method. For example, if a particular feature vector has $n$ features, which are all present in the ClassiNet, then under the All Neighbour Expansion method we will append on average $dn$ features to this instance, where $d$ is the average out degree of the ClassiNet. More precisely, the actual number of expansion features will be different from $dn$ due to several reasons. First, some vertices in ClassiNet might have different numbers of neighbours, not necessarily equal to the out degree.
Second, the out degree takes into account the weights of the edges and not simply the number of distinct vertices connected via outbound edges. Third, some of the expansion features might already be in the original feature vector, thereby not increasing the number of features. Finally, the same expansion feature might be suggested by different vertices, which leads to double counting in the $dn$ estimate. To empirically analyse the number of expansion features, we build a ClassiNet containing 700 vertices and count the number of features expanded on the \textbf{SUBJ} train dataset. The out degree $d$ is given by \eqref{eq:out-degree}. \begin{equation} \label{eq:out-degree} d = \frac{1}{N} \sum_{i} \sum_{j \in \cN(v_{i})} w_{ij} \end{equation} Here, $N$ is the total number of vertices in the ClassiNet, $\cN(v_{i})$ is the set of neighbours connected to $v_{i}$ by an outbound link, and $w_{ij}$ is the weight of the edge connecting vertex $v_{i}$ to $v_{j}$. Figure~\ref{fig:degree} shows the degree distribution for the ClassiNet with average out degree $d = 263.35$. We see that most vertices are connected to $240$--$300$ other vertices in the ClassiNet. Given that this ClassiNet contains 700 vertices, this is a tightly connected, dense graph. For each train instance in the \textbf{SUBJ} dataset, we compute the expansion ratio, i.e., the ratio between the number of features after and before feature expansion, for the All Neighbour Expansion (Figure~\ref{fig:all-neighb}) and Global Feature Expansion (Figure~\ref{fig:global}). We see that the expansion ratio is higher for the global feature expansion (ca. 25--30) compared to that for all neighbour expansion (ca. 1.5--2.5). Given that the global feature expansion considers a broader neighbourhood surrounding the initial features in an instance, this is not surprising. Moreover, it provides an explanation for the superior performance of the global feature expansion. Although expanding too aggressively, using not only relevant nearby features but also potentially irrelevant broader neighbourhoods, is likely to degrade performance, we see that at the level of expansion performed by the global feature expansion method this is not an issue. Therefore, we conclude that under the global feature expansion method, we do not need to impose any predefined limitations on the number of expansion features. \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{outdegree.png} \caption{Out degree distribution of the ClassiNet.} \label{fig:degree} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{all-neighb.png} \caption{All neighbour Expansion.} \label{fig:all-neighb} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{global-ratio.png} \caption{Global Feature Expansion.} \label{fig:global} \end{center} \end{figure} \section{Conclusion} \label{sec:conclusion} We proposed ClassiNet, a network of binary classifiers for predicting missing features to overcome the feature sparseness problem observed in short-text classification. We select positive and negative training instances for learning the feature predictors using unlabeled data. In ClassiNets, the weight of the edge connecting the vertex $v_i$ to $v_j$ represents the probability that given $v_i$ is predicted to occur in an instance, $v_j$ is also predicted to occur in the same instance. We proposed an efficient method using locality sensitive hashing to approximately compute the neighbourhood of a vertex, thereby avoiding all-pair computation of confusion matrices.
We proposed local and global methods for feature expansion using ClassiNets. Our experimental results show that the global feature expansion method significantly improves the classification accuracy of sentence-level sentiment classification tasks, outperforming previously proposed methods such as structural correspondence learning (SCL), frequent term sets (FTS), Skip-thought vectors, FastSent, and Paragraph2Vec on multiple datasets. Moreover, close inspection of the expanded feature vectors shows that features that are related to an instance are found as expansion candidates for that instance. In the future, we plan to apply ClassiNets to other tasks that require missing feature prediction such as recommendation systems. \bibliographystyle{ACM-Reference-Format} \section{Introduction} \label{sec:intro} Short-texts are abundant on the Web and appear in various formats. For example, in Twitter, users are constrained to a $140$ character upper limit when posting their tweets~\cite{Kwak:WWW:2010}. Even when there are no strict upper limits, users tend to provide brief answers in QA forums, review sites, SMS, email, and chat messages~\cite{Cong:SIGIR:2008,Thelwall:2010}. Unlike lengthy responses that take time to both compose and read, short responses have gained popularity particularly in social media contexts. Considering the steady growth of mobile devices that are physically restricted to compact keyboards, which are suboptimal for entering lengthy text inputs, it is safe to predict that the amount of short-texts will continue to grow in the future. Considering the importance and the quantity of the short-texts in various web-related tasks, such as text classification~\cite{Wang:JZUS:2012,dossantos-gatti:2014:Coling}, and event prediction~\cite{Sakaki:WWW:2010}, it is important to be able to accurately represent and classify short-texts. Compared to performing text mining on longer texts~\cite{Yogatama:ICML:2014,Su:ICML:2011,Guan:WWW:2009}, for which dense and diverse feature representations can be created relatively easily, handling of shorter texts poses several challenges. First, the number of features that are actually present in a short-text will be a small fraction of the set of all features that exist in all of the train instances. Although this \emph{feature sparseness} is problematic even for longer texts, it is critical for shorter texts. In particular, when the diversity of the feature space increases as with longer $n$-gram lexical features, (a) the number of occurrences of a feature in a given instance (i.e., term frequency), as well as (b) the number of instances in which a particular feature occurs (i.e., document frequency), will be small. Therefore, it is difficult to reliably estimate the salience of a feature in a particular class in supervised learning tasks. Second, the shorter length means that there is \emph{less redundancy} in terms of the features that exist in a short-text. Consequently, most of the related words of a particular word might be missing in a short-text. For example, consider a review on \emph{iPhone 6} that says ``\emph{I liked the larger screen size of iPhone 6 compared to that of its predecessor}''. Although \emph{iPhone 6 plus}, a product similar to \emph{iPhone 6}, has also a larger screen compared to its predecessors, this information is not included in this short review.
On the other hand, we might observe such positive sentiments associated with \emph{iPhone 6 plus} but not with \emph{iPhone 6} in other train instances, which will result in a high positive score for \emph{iPhone 6 plus} in a classifier trained from those train reviews. Unfortunately, we will not be able to infer that this particular user would also likely be satisfied with \emph{iPhone 6 plus}, thereby not recommending \emph{iPhone 6 plus} for this user. To overcome the above-mentioned challenges encountered when handling short-texts, we propose a \emph{feature expansion} method analogous to the query expansion methods used in information retrieval (IR)~\cite{IR_book} to improve the agreement between search queries input by the users and documents indexed by the search engine~\cite{Carpineto:2012}. We assume short-texts are already represented using some feature vectors, which we refer to as \emph{instances} in this paper. Lexical features such as unigrams or bigrams of words, part-of-speech (POS) tag sequences, and dependency relations have been frequently used in prior work on text classification. Our proposed method does not assume any particular type of features, and can be used with any discrete feature set. First, we train binary classifiers which we call \emph{feature predictors} for predicting whether a particular feature $v_i$ occurs in a given instance $\vec{x}$. For example, given the previously discussed short review, we would like to predict whether \emph{iPhone 6 plus} is likely to occur in this review. The training instances required to learn feature predictors are automatically selected from unlabeled texts. Specifically, given a feature $v_i$, we select texts in which $v_i$ occurs as the positive training instances for learning a feature predictor for $v_i$. On the other hand, negative training instances for learning the feature predictor for $v_i$ are randomly sampled from the unlabeled texts, where $v_i$ does not occur. Using those positive and negative training instances we learn a binary classifier to predict whether $v_i$ occurs in a given instance. Any binary classification algorithm, such as support vector machines, logistic regression, or naive Bayes, can be used for this purpose; it is not limited to linear classifiers. We define \emph{ClassiNet} as a directed weighted graph $\cG(\cV, \cE, \mat{W})$ of feature predictors, where each vertex $v_i \in \cV$ corresponds to a feature predictor. The directed edge $e_{ij} \in \cE$ from $v_i$ to $v_j$ is assigned the weight $0 \leq w_{ij} \leq 1$, which is the conditional probability that given $v_i$ is predicted for a particular instance, $v_j$ is also predicted for the same instance. It is noteworthy that we obtain both positive and negative instances for learning feature predictors from unlabeled data, and do not require any labeled data for the target task. For example, consider the case where we are creating a ClassiNet to find missing features in sentiment classification. In this case, the target task is sentiment classification. However, we do not require any labeled data for the target task such as sentiment annotated reviews when creating the ClassiNet that we are subsequently going to use for finding missing features. Therefore, the training of ClassiNets can be conducted in a purely unsupervised manner, without requiring any manually labeled data for the target task.
Moreover, the decoupling of ClassiNet training from the target task enables us to use the same ClassiNet to expand feature vectors for different target tasks. As we discuss later in Section~\ref{sec:classi-cooc}, ClassiNets can be seen as a generalised version of the word co-occurrence graphs that have been well-studied in the NLP community~\cite{Rada:2011}. However, ClassiNets consider both explicit as well as implicit co-occurrences of words in some context, whereas word co-occurrence graphs are limited to explicit co-occurrences. Given a ClassiNet created from unlabeled data as described above, we propose several strategies for finding related features for a given instance that do not occur in the original instance. Specifically, we compare both \emph{local} feature expansion methods that consider the nearest neighbours of a particular feature in an instance (Section~\ref{sec:local}), as well as \emph{global} feature expansion methods that propagate the features that exist in an instance over the entire set of vertices in ClassiNet (Section~\ref{sec:global}). We evaluate the performance of the proposed feature expansion methods on short-text classification benchmark datasets. Our experimental results show that the proposed global feature expansion method significantly outperforms several local feature expansion methods, as well as several sentence-level embedding methods, on multiple benchmark datasets proposed for evaluating short-text classification methods. Considering that (a) ClassiNets can be created using unlabeled data, (b) the same ClassiNet can be used in principle for predicting features for different target tasks, (c) arbitrary features could be used in the feature predictors, not limited to lexical features, we believe that ClassiNets can be applied to a broad range of machine learning tasks, not limited to short-text classification. Our contributions in this paper can be summarised as follows: \begin{itemize} \item We propose a method for learning a network of feature predictors that can predict missing features in feature vectors. The proposed network, which we refer to as the ClassiNet, can be learnt in an unsupervised manner, without requiring any labeled data for the target task in which we are going to apply the ClassiNet to expand features (Section~\ref{sec:classinet:learn}). \item We propose an efficient method to learn ClassiNets from large datasets. Specifically, we show that the edge-weights of ClassiNets can be computed efficiently using locality sensitive hashing (Section~\ref{sec:project}). \item Having proposed ClassiNets, we describe its relationship to word co-occurrence graphs that have a long history in the NLP community. We show that ClassiNets can be considered as a generalised version of word co-occurrence graphs (Section~\ref{sec:classi-cooc}). \item We propose several methods for finding related features for a given instance using the created ClassiNet. In particular, we consider both \emph{local methods} (Section~\ref{sec:local}) that consider the nearest neighbours in ClassiNet of the features that exist in an instance, as well as \emph{global methods} (Section~\ref{sec:global}) that consider all vertices in the ClassiNet. \end{itemize} \section{Related Work} \label{sec:related} Feature sparseness is a common problem that is encountered in various text mining tasks.
Two main approaches for overcoming the feature sparseness problem in short-texts can be identified in the literature: (a) embedding the train/test instances in a dense, lower-dimensional feature space, thereby reducing the number of zero-valued features in the instances, and (b) predicting the values of the missing features. Next, we discuss prior work that belongs to each of those two approaches. An effective technique frequently used in prior work on short-texts to overcome the feature sparseness problem is to represent the texts in some lower-dimensional dense space, thereby reducing the feature sparseness. Several methods have been used to obtain such lower-dimensional representations such as topic-models~\cite{Yan:WWW:2013,yang-EtAl:2015:NAACL-HLT2,Wang:JZUS:2012}, clustering~\cite{Dai:2013,Rangrej:WWW:2011}, and dimensionality reduction~\cite{Blitzer:EMNLP:2006,Pan:WWW:2010}. Wang et al.~\cite{Wang:JZUS:2012} used latent Dirichlet allocation (LDA) to identify features that are useful for identifying a particular class. Higher weights are assigned to the identified features, thereby increasing their contribution towards the classification decision. However, applying LDA at sentence-level is problematic because the number of words in a sentence is much smaller than that in a document. Consequently, Yan et al.~\cite{Yan:WWW:2013} proposed the bi-term topic model that models the co-occurrence patterns between words accumulated over the entire corpus. An alternative solution that uses an external knowledge-base in the form of a phrase list is proposed by Yang et al.~\cite{yang-EtAl:2015:NAACL-HLT2} to overcome the feature sparseness problem when learning topics from short-texts. The phrase list is automatically extracted from the entire collection of short-texts in a pre-processing step. Cluster-based methods have been proposed for representing documents to overcome the feature sparseness problem. First, some clustering algorithm is used to cluster the documents into a group of clusters. Next, each document is represented by the clusters to which it belongs. Dai et al.~\cite{Dai:2013} used a hierarchical clustering algorithm with purity control to generate a set of clusters, and used the similarity between a document and each of the clusters as augmented features to enrich the document representation. Their method significantly improves the classification accuracy for short web snippets with a support vector machine classifier. Feature mismatch is a fundamental problem in domain adaptation, where we must learn a classifier using labeled data from a source domain and apply it to predict labels for the test instances in a different target domain. Pan et al.~\cite{Pan:WWW:2010} proposed Spectral Feature Alignment (SFA), a method to overcome the feature mismatch problem in cross-domain sentiment classification. They created a bi-partite graph between domain-specific and domain-independent features, and then used a spectral clustering method to obtain a domain-independent lower-dimensional embedding. In structural correspondence learning (SCL)~\cite{Blitzer:ACL:2007,Blitzer:EMNLP:2006}, a set of features that are common to both the source and the target domains, referred to as \emph{pivots}, is identified using mutual information with the sentiment label. Next, linear classifiers that can predict those pivots are learnt from unlabeled reviews.
The weight vectors corresponding to the learnt linear classifiers are arranged as rows in a matrix, on which singular value decomposition is subsequently applied to compute a lower-dimensional projection. Feature vectors representing train source reviews are projected into this lower-dimensional space, in which a binary sentiment classifier is trained. During test time, feature vectors representing test target reviews are also projected to the same lower-dimensional space and the trained binary classifier is used to predict the sentiment labels. However, domain adaptation methods such as SCL and SFA require data from at least two (source vs. target) different domains (e.g., reviews on products in different categories) to overcome the missing feature problem, whereas in this work we assume the availability of data from one domain only. Instead of representing documents using lexical features, which often results in high-dimensional and sparse feature vectors, by embedding documents in low-dimensional dense spaces we can effectively overcome the feature sparseness problem~\cite{Lu:NIPS:2013,dossantos-gatti:2014:Coling,Le:ICML:2014}. These methods jointly learn character-level or word-level embeddings as well as document-level embeddings~\cite{Kiros:2015,Hill:NAACL:2016} such that the learnt embeddings capture the similarity constraints satisfied by a collection of short-texts. First, each word in the vocabulary is assigned a fixed dimensional word vector. We can initialise the word vectors randomly or using pre-trained word representations. Next, the word vectors are updated such that we can accurately predict the co-occurrences of words in some context, such as a window of tokens, a sentence, a paragraph, or a document. Different loss functions encoding different co-occurrence measures have been proposed for this purpose~\cite{Pennington:EMNLP:2014,Milkov:2013}. As shown later in Section~\ref{sec:sentemb}, ClassiNets perform competitively against sentence-level embedding methods on several short-text classification tasks. A single word can have multiple senses. For example, the word \emph{bank} could mean a \emph{financial institution} or a \emph{river bank}. Therefore, it is inadequate to represent different senses of a word using a single embedding~\cite{Reisinger:NAACL:2010,Iacobacci:ACL,Song:2016,camachocollados-pilehvar-navigli:2015:NAACL-HLT,johansson-nietopina:2015:NAACL-HLT,li-jurafsky:2015:EMNLP,hu-zhang-zheng:2016:COLING}. Several solutions have been proposed in the literature to overcome this limitation and learn \emph{sense embeddings}, which capture the sense-related information of words. For example, \citet{Reisinger:NAACL:2010} proposed a method for learning sense-specific high dimensional distributional vector representations of words, which was later extended by \citet{Huang:ACL:2012} using global and local context to learn multiple sense embeddings for an ambiguous word. \citet{neelakantan-EtAl:2014:EMNLP2014} proposed the multi-sense skip-gram (MSSG), an online cluster-based sense-specific word representation learning method, by extending Skip-Gram with Negative Sampling (SGNS)~\cite{Milkov:2013}. Unlike SGNS, which updates the gradient of the word vector according to the context, MSSG predicts the nearest sense first, and then updates the gradient of the sense vector.
The aforementioned methods apply a form of word sense discrimination by clustering a word's contexts before learning sense-specific word embeddings based on the induced clusters, producing a fixed number of sense embeddings for each word. In contrast, a nonparametric version of MSSG (NP-MSSG)~\cite{neelakantan-EtAl:2014:EMNLP2014} estimates the number of senses per word and learns the corresponding sense embeddings. On the other hand, \citet{iacobacci-pilehvar-navigli:2015:ACL-IJCNLP} used a Word Sense Disambiguation (WSD) tool to sense-annotate a large text corpus and then used an existing prediction-based word embedding learning method to learn sense and word embeddings with the help of sense information obtained from the BabelNet~\cite{iacobacci-pilehvar-navigli:2015:ACL-IJCNLP} sense inventory. Similarly, \citet{camachocollados-pilehvar-navigli:2015:NAACL-HLT} used the knowledge in two different lexical resources: WordNet~\cite{WordNet} and Wikipedia. They use the contextual information of a particular concept from Wikipedia and WordNet synsets prior to learning two separate vector representations for each concept. A single word can be related to multiple different topics, without necessarily corresponding to different senses of the word. Revisiting our previous example, we might have a collection of documents about \emph{retail banks}, \emph{commercial banks}, \emph{investment banks} and \emph{central banks}. All these different banks are related to the financial sense of the word \emph{bank}. However, in a particular task (e.g., classifying documents related to the different types of financial banks), we might require different embeddings for the different topics in which the word \emph{bank} appears. \citet{Liu:AAAI:2015} proposed three methods for learning \emph{topical word embeddings}, where they first cluster words into different topics using LDA~\cite{Blei:JMLR:2003} and then learn word embeddings using SGNS. \citet{Liu:IJCAI:2015} modelled the interactions among topics, contexts and words using a tensor and obtained topical word embeddings via tensor factorisation. Instead of clustering words prior to embedding learning, \citet{Shi:2017} proposed a method to jointly learn both words and topics, thereby considering the correlations between multiple senses of different words that occur in different topics. TopicVec~\cite{TopicVec} learns vector representations for topics in a document by modelling the co-occurrence between a target word and a context word considering both words' word embeddings as well as the topic embedding of the context word. Our proposed methods for feature expansion using ClassiNets can be seen as \emph{explicit} feature prediction methods, whereas methods that learn lower-dimensional dense embeddings of texts can be seen as \emph{implicit} feature prediction methods. For example, if we use lexical features such as unigrams or bigrams to create a ClassiNet, then the features predicted by that ClassiNet will also be lexicalised features, which are easier to interpret than dimensions in a latent embedded space. Although for text classification purposes it is sufficient to represent short-texts in implicit feature spaces, there are numerous tasks that require explicit interpretable predictions such as query suggestion in information retrieval~\cite{Carpineto:2012}, reverse dictionary mapping~\cite{Hill:TACL:2016}, and hashtag suggestion in social media~\cite{weston-chopra-adams:2014:EMNLP2014}.
Therefore, the potential applications of ClassiNets as an explicit feature expansion method go beyond short-text classification. It would be an interesting future research direction to combine implicit and explicit feature expansion methods to construct better representations for texts. Recently, several methods have been proposed for learning embeddings (lower-dimensional implicit feature representations) for the vertices of undirected or directed (and weighted) graphs~\cite{DeepWalk,li-zhu-zhang:2016:P16-1,LINE}. For example, in \emph{language graphs}~\cite{LINE}, the vertices can correspond to words and the weight of the edge between two vertices represents the strength of the co-occurrence between two words in a corpus. Alternatively, in a \emph{co-author network}, the vertices correspond to authors and the edges represent the number of papers two people have co-authored. DeepWalk~\cite{DeepWalk} performs a random walk over an undirected graph to generate a pseudo-corpus, which is then used to learn word (vertex) embeddings using skip-gram with negative sampling (SGNS)~\cite{Milkov:2013}. Li et al.~\cite{li-zhu-zhang:2016:P16-1} proposed a discriminative version of DeepWalk by including a discriminative supervised loss that evaluates how well the learnt vertex embeddings perform on some supervised tasks. Tang et al.~\cite{LINE} used both first-order and second-order co-occurrences in a graph to learn separate vertex embeddings, which were subsequently concatenated to create a single vertex embedding. Although in this paper we consider graphs where vertices correspond to words, the objective of creating ClassiNets is fundamentally different from the above-mentioned vertex embedding methods. In graph (vertex) embedding, we are given a graph and the goal is to learn embeddings for the vertices such that structural information of the graph is preserved in the learnt embeddings. On the other hand, in ClassiNets, we learn feature predictors which can be used to predict whether a particular feature is missing in a given context. The connection between co-occurrence graphs and ClassiNets is further discussed in Section~\ref{sec:classi-cooc}. Moreover, in Section~\ref{sec:classinet:expand}, we propose and evaluate several methods for expanding feature vectors using the ClassiNets we create, which is not relevant for vertex embedding methods. \section{ClassiNets} \label{sec:classinets} \subsection{Overview} \label{sec:overview} Our proposed method for classifying short-texts consists of two steps. First, we create a network of classifiers which we refer to as the \emph{ClassiNet} in this paper. In Section~\ref{sec:classinet:learn}, we describe the details of the method we propose to create ClassiNets. In Section~\ref{sec:classinet:expand}, we describe several methods for using the learnt ClassiNet to expand feature vectors to overcome the feature sparseness problem. We define a ClassiNet as a directed weighted graph $\cG(\cV, \cE, \mat{W})$, in which a vertex $v_i \in \cV = \{v_1, \ldots, v_n \}$ corresponds to a binary classifier (feature predictor) $h_i$ that predicts the occurrence of a feature $v_i$ in an instance. We assume that each train/test instance $x$ is already represented by a $d$-dimensional vector $\vec{x} = (x_1, x_2, \ldots, x_d)\T$, in which the $i$-th dimension corresponds to the value $x_i$ of the $i$-th feature representing the instance $x$. The label predicted by $h_i$ for an instance $\vec{x}$ is denoted by $h_i(\vec{x}) \in \{0,1\}$.
The weight $w_{ij}$ associated with the edge $e_{ij}$ connecting the vertex $v_i$ to $v_j$ represents the conditional probability, $p(h_j(\vec{x}) = 1| h_i(\vec{x}) = 1)$, that $v_j$ is predicted to occur in $\vec{x}$, given that $v_i$ is also predicted to occur in $\vec{x}$. Several remarks can be made about the ClassiNets. First, there is a one-to-one correspondence between the vertices $v_i$ in the ClassiNet and the feature predictors $h_i$. Therefore, a ClassiNet can be seen as a network of binary classifiers, as is implied by its name. In general, the set of features $\cS$ that we use for representing instances $x$ (hence for learning feature predictors), and the set of vertices $\cV$ in ClassiNet need not be the same. As we discuss later, vertices in the ClassiNet are used as expansion features to augment instances $x$, thereby overcoming the feature sparseness problem in short-text classification. Therefore, we are free to select a subset of features from all the features used for representing instances as the vertices in ClassiNet. For example, we might use the most frequent features in the train data as vertices in ClassiNet, thereby setting $\cV \subset \cS$ ($n < d$). Alternatively, we could use all the features in the feature space of the instances as vertices in the ClassiNet, where we have $\cV = \cS$ (and $n = d$). In the remainder of the paper, we consider the general case where we have $\cV \subseteq \cS$ ($n \leq d$). Second, as we discuss later in Section~\ref{sec:classinet:learn}, we \emph{do not} require labeled data for the target task when creating ClassiNets. For example, let us consider binary sentiment classification of product reviews as the target task. We might have both sentiment rated reviews (labeled instances), and reviews without sentiment ratings (unlabeled instances) at our disposal. We can use both those types of reviews, and ignore the label information when computing the ClassiNet. This is particularly attractive for two reasons: (a) obtaining unlabeled instances is often easier for most tasks compared to obtaining labeled instances, (b) because a ClassiNet created from a particular corpus is independent of the label information unique to a target task, in principle, the same ClassiNet can be used to expand features for different target tasks. The second property is attractive in multi-task learning settings, where we must perform different tasks on the same data. For example, consider the two tasks: (a) predicting whether a given tweet is positive or negative in sentiment, and (b) predicting whether a given tweet would get favorited or not. Both those tasks can be seen as binary classification tasks. We could learn two binary classifiers -- one for predicting the sentiment and the other for predicting whether a tweet would get favorited. However, to overcome the feature sparseness problem in both those tasks, we can use the same ClassiNet. As long as an instance (for example a sentence or a document) is represented using any bag-of-features (unigrams, bigrams, trigrams, dependency paths, syntactic paths, POS sequences, semantic roles, frames, etc.), we can use the proposed method to create a ClassiNet. The first step in creating a ClassiNet is to learn feature predictors (Section~\ref{sec:classinet:learn}). The feature predictors use the features available in an instance to train a binary classifier. Therefore, it does not matter whether these features are $n$-grams or more complex types of features as listed above.
The remaining steps in the proposed method (measuring the correlations between feature predictors to build the ClassiNet, applying feature expansion) use only the learnt feature predictors. Therefore, our proposed method can be used with \emph{any} feature representation of instances, not limited to lexical $n$-gram features. \subsection{Learning ClassiNets} \label{sec:classinet:learn} Let us assume that we are given a set $\cD_{u} = \{\vec{x}^{(k)}\}_{k=1}^{N}$ of unlabeled feature vectors $\vec{x}^{(k)} \in \R^d$ representing $N$ short-texts. Given $\cD_{u}$ we construct a ClassiNet in two steps: (a) learn feature predictors $h_i$ for each vertex $v_i \in \cV$, and (b) compute the conditional probabilities $p(h_j(\vec{x}) = 1| h_i(\vec{x}) = 1)$ using the labels predicted by the feature predictors $h_i$ and $h_j$ for an instance $\vec{x}$. As positive training instances for learning a binary feature predictor for a feature $v_i$, we randomly select a set $\cD_i^{(+)} \subset \cD_{u}$ of $N^{(+)}_i$ instances where $v_i$ occurs, and remove $v_i$ from those selected instances. Likewise, we randomly select a set $\cD_i^{(-)} \subset \cD_{u}$ of $N^{(-)}_i$ instances where $v_i$ does not occur. Instances that have few features are not informative for learning accurate feature predictors. Therefore, we select instances that have more non-zero features than the average number of non-zero features in an instance in $\cD_{u}$. We found that, on average, there are ca. $15$ features in an instance. Compared to the number of instances containing a particular feature $v_i$ in the dataset, the number of instances that do not contain $v_i$ is significantly larger. Considering that we are randomly sampling negative instances from a larger set of instances, it is likely that those selected negative instances are not very informative about why $v_i$ is missing in a given instance. In other words, the randomly sampled negative instances might already be further from the decision hyperplane, and therefore do not provide sufficient specialization in the hypothesis space. Consequently, prior work that uses pseudo-negative instances for training classifiers~\cite{Bollegala_WWW_2007} has shown that it is effective to select a larger number of pseudo-negative instances than that of positive instances (i.e., $N^{(+)}_i < N^{(-)}_i$). We note that it is possible to set the number of positive and negative train instances dynamically for each feature $v_i$. For example, some features might be popular in the dataset resulting in a larger positive sample than the others. For simplicity, in this paper, we select all instances in which a particular feature occurs as the positive training instances for that feature, and select twice that number of negative instances from the remainder of the instances (i.e., $N^{(-)}_i = 2N^{(+)}_i$). An extensive study of different sampling methods and $N^{(-)}_i / N^{(+)}_i$ ratios is beyond the scope of the current paper. Once we have selected $\cD_i^{(+)}$, and $\cD_i^{(-)}$ as described above, we train a binary classifier to predict whether $v_i$ occurs in a given instance. We note that any binary classification algorithm, not limited to linear classifiers, can be used for this purpose. In our experiments, we use $\ell_2$ regularised logistic regression for its simplicity. We tune the regularisation coefficient in each feature predictor using $5$-fold cross-validation. A minimal sketch of this training procedure is shown below.
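The following sketch illustrates the training of a single feature predictor. It is a simplified illustration that assumes instances are given as a binary feature matrix; the instance filtering and cross-validated regularisation tuning described above are omitted, and all names are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_feature_predictor(X, i, rng, neg_ratio=2):
    """Learn a binary predictor h_i for feature v_i (column i of X).
    Positives are instances containing v_i (with v_i removed);
    negatives are sampled from instances where v_i does not occur."""
    pos = np.where(X[:, i] != 0)[0]
    neg_pool = np.where(X[:, i] == 0)[0]
    neg = rng.choice(neg_pool,
                     size=min(neg_ratio * len(pos), len(neg_pool)),
                     replace=False)
    X_train = X[np.concatenate([pos, neg])]   # fancy indexing copies X
    X_train[:, i] = 0                         # remove v_i from its own input
    y_train = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return LogisticRegression(penalty="l2").fit(X_train, y_train)

rng = np.random.default_rng(0)
X = (np.random.default_rng(1).random((1000, 50)) < 0.2).astype(float)
h_3 = train_feature_predictor(X, i=3, rng=rng)
\end{verbatim}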
Since logistic regression is a probabilistic discriminative classifier, it is possible to obtain not only the predicted labels but also the class conditional probabilities from the trained classifier. However, we only require the predicted labels for constructing the edge weights in ClassiNets as we describe next. Therefore, in theory, we can use even binary classifiers that do not produce confidence scores for creating ClassiNets, which extends the applicability of ClassiNets to wider contexts. Let us denote the label predicted by the feature predictor $h_i$ for an instance $\vec{x}$ by $h_i(\vec{x}) \in \{0,1\}$. For two features $v_i$ and $v_j$, we compute the confusion matrix $\mat{M}$ shown in Table~\ref{tbl:conf}. Here, $M_{ab}$ denotes the number of instances $\vec{x}$ for which $h_i(\vec{x}) = a$ and $h_j(\vec{x}) = b$. In particular, $M_{11}$ is the number of instances where both $v_i$ and $v_j$ are predicted to be co-occurring by the learnt feature predictors. \begin{table}[t] \centering \caption{Confusion matrix for the labels predicted by the feature predictors learnt for two features $v_i$ and $v_j$.} \label{tbl:conf} \begin{tabular}{|c|c|c|}\hline & $h_j(\vec{x}) = 1$ & $h_j(\vec{x}) = 0$ \\ \hline $h_i(\vec{x}) = 1$ & $M_{11}$ & $M_{10}$ \\ \hline $h_i(\vec{x}) = 0$ & $M_{01}$ & $M_{00}$ \\ \hline \end{tabular} \end{table} Given the counts in Table~\ref{tbl:conf}, $w_{ij}$ is computed as follows: \begin{equation} \label{eq:weight} w_{ij} = \frac{M_{11}}{M_{11} + M_{10}} \end{equation} Several practical issues must be considered when estimating the edge-weights using \eqref{eq:weight}. First, the set of instances we use for predicting labels when computing the confusion matrix in Table~\ref{tbl:conf} must contain at least some instances in which $v_i$ or $v_j$ occur (i.e., $M_{11} + M_{10} > 0$, and $M_{11} + M_{01} > 0$). Otherwise, even if the feature predictors $h_i$, $h_j$ are accurately learnt, we will still get unreliable sparse counts for $M_{11}$ and $M_{10}$. Therefore, we randomly sample a set of instances $\cD_{(i,j)} \subseteq \cD_{u}$ such that there exist equal numbers of instances containing $v_i$ and $v_j$. Let the total number of elements in $\cD_{(i,j)}$ be $d'$. We use those $d'$ instances when computing the values in the confusion matrix shown in Table~\ref{tbl:conf}. We ensure that there is no overlap between the test instances $\cD_{(i,j)}$ and the train instances we use to learn feature predictors. This is important because if the feature predictors are overfitting we will not get accurate predictions using the ClassiNet during test time. Using non-overlapping train and test instance sets, we can check whether the learnt feature predictors are overfitting. Although we use a ratio of one-third when sampling $\cD_{(i,j)}$ above, we can use different ratios for sampling as long as both $v_i$ and $v_j$ are sufficiently represented in $\cD_{(i,j)}$. \subsection{Efficient Computation of ClassiNets} \label{sec:project} ClassiNets can be learnt offline during the training stage, prior to expanding test instances. Therefore, we are allowed to perform more computationally intensive processing steps compared to what is possible at test time, which must be near real-time for most tasks that involve short-texts, such as tweet classification. Nevertheless, we propose several methods to speed up the construction process when the number of vertices $n$ in the ClassiNet grows.
Compared to learning feature predictors for the vertices we use in the ClassiNet, which is linear in the number of vertices $n$ in the ClassiNet, to compute weights $w_{ij}$ we must consider all pairwise combinations between the vertices in the ClassiNet. If we assume the cost of learning a binary classifier for a vertex to be a constant $c$ that is independent of the feature, then the overall computational complexity of creating a ClassiNet can be estimated as $\cO(cn + N n^2 d )$. The first term is simply the complexity of computing $n$ feature predictors at the constant cost of $c$. This operation can be easily parallelised because each feature predictor can be learnt independently of the others. Moreover, it is linear in the number of vertices in ClassiNet. Therefore, the first term can be ignored in most practical scenarios. In cases where the computational cost of the linear predictors is non-negligible, we can use several techniques to speed up this computation. First, we could resort to more computationally efficient linear classifiers such as the perceptron. Perceptrons can be trained in an online manner, without having to load the entire training dataset into memory. Second, note that only the features $v_{j}$ that co-occur with a particular vertex $v_{i}$ in any train instance will be useful for predicting the occurrence of $v_{i}$. Therefore, we can limit the features that we use in the predictor for $v_{i}$ to the set of features $v_{j}$ that co-occur with $v_{i}$ at least once in the training data. We can efficiently compute such feature co-occurrences by building an inverted search index. We can further speed up this computation by resorting to approximate methods where we require a context feature $v_{j}$ to co-occur a predefined minimum number of times with the target feature $v_{i}$ for which we must compute a predictor. Setting this cut-off threshold to higher values will result in smaller, sparser, and less noisy feature spaces and speed up the predictor computation. However, larger cut-off thresholds are likely to remove important contextual features, thereby decreasing the accuracy of the feature predictors. The optimal cut-off threshold could be determined using cross-validation or held-out data. On the other hand, the second term corresponds to learning edge-weights, and involves three factors: (a) $n^2$, the number of pairwise comparisons we must perform between the $n$ vertices in the ClassiNet, (b) $N$, the maximum number of instances for which we must predict labels for each pair of feature predictors when we compute the confusion matrices as shown in Table~\ref{tbl:conf}, and (c) $d$, the number of features we must consider when computing the label of a predictor. For example, if we use linear classifiers as feature predictors, during test time we must compute the inner-product between the weight vector of the classifier and the feature vector of the instance to be classified, both of which are $d$-dimensional. The dimensionality $d$ of the vectors that represent instances will depend on the type of features we use. For example, if we limit to lexical features from the short-text, then the number of non-zero features in any given instance will be small. However, if we use dense features such as word embeddings, then the number of non-zero features in an instance might be large. However, the factors (a) and (b) require careful consideration. First, we must compare all pairs of predictors, which is quadratic in the number of vertices in the ClassiNet.
Second, to obtain the label for an instance we must classify that instance using the learnt prediction model. For example, in the case of linear classifiers we must compute the inner-product between two $d$-dimensional vectors: the feature vector representing the instance to be classified, and the weight vector corresponding to the feature predictor. For nonlinear classifiers, such as the ones that use polynomial kernels, the number of feature combinations can grow exponentially, resulting in slower prediction times for large batches of test instances. As a solution to this problem, we first represent each feature predictor $h_i$ by a $d' (< d)$ dimensional vector $\vec{h}_i(\cD_{(i,j)})$, where each element corresponds to the label predicted for a particular instance $\vec{x} \in \cD_{(i,j)}$. We randomly sample $\cD_{(i,j)} \subseteq \cD_{u}$ following the procedure detailed in Section~\ref{sec:classinet:learn}, where we include equal numbers of instances that contain $v_i$, $v_j$, and neither of those two. Therefore, $\vec{h}_i(\cD_{(i,j)}) \in \{0,1\}^{d'}$, the set of $d'$-dimensional binary vectors. We call $\vec{h}_i(\cD_{(i,j)})$ the \emph{label vector} because it is a vector of predicted labels for all the instances in $\cD_{(i,j)}$ by $h_i$, the feature predictor learnt for the feature $v_i$. We can explicitly compute the label vector for the $i$-th feature predictor as follows: \begin{equation} \label{eq:label-vector} \vec{h}_i(\cD_{(i,j)}) = \left( h_i(\vec{x}_1), \ldots, h_i(\vec{x}_{d'}) \right)\T \end{equation} In practice, $d' \ll N$ because only a small number of instances in $\cD_{u}$ will contain $v_i$ or $v_j$, and we select equal proportions of instances that contain neither feature. The following theorem states the relationship between neighbouring feature predictors in the original $d$-dimensional space and the projected $d'$-dimensional space. \begin{theorem} \label{th:LSH} Consider two (possibly nonlinear) feature predictors $h_{i}(\vec{x}) = \sigma(\vec{\mu}_{i}\T\vec{x})$, and $h_{j}(\vec{x}) = \sigma(\vec{\mu}_{j}\T\vec{x})$, parametrized by $\vec{\mu}_{i}, \vec{\mu}_{j} \in \R^{d}$, and a transformation function $\sigma(\cdot)$ taking values in $\{0,1\}$. Let $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ be the angle between $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$. The following relation holds between $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ and the probability of agreement $p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)$, \[ \theta(\vec{\mu}_{i}, \vec{\mu}_{j}) = \pi \left(1 - {p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}^{1/d'}\right) . \] \end{theorem} The proof of Theorem~\ref{th:LSH} is given below, and follows from the properties of locality sensitive hashing (LSH)~\cite{He:NIPS:2003,Andoni:CACM:2008,Indyk:STOC:98}. \subsection*{Proof of Theorem~1} Let us consider the agreement of the feature predictors $h_{i}$ and $h_{j}$ on the $k$-th instance $\vec{x}_{k} \in \cD_{(i,j)}$. The probability of agreement can be written as, \begin{equation} \label{eq:agreement} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) = 1 - p\left( h_{i}(\vec{x}_{k}) \neq h_{j}(\vec{x}_{k}) \right) .
\end{equation} By symmetry, the disagreement probability on the right-hand side of \eqref{eq:agreement} can be written as twice the probability that the instance is classified as positive by one predictor and negative by the other, given by: \begin{equation} \label{eq:double} p\left( h_{i}(\vec{x}_{k}) \neq h_{j}(\vec{x}_{k}) \right) = 2 p\left( \vec{\mu}_{i}\T\vec{x}_{k} \geq 0, \vec{\mu}_{j}\T\vec{x}_{k} < 0 \right) \end{equation} For this disagreement to occur, the vector $\vec{x}_{k}$ must lie inside the dihedral angle $\theta(\vec{\mu}_{i}, \vec{\mu}_{j})$ formed by the intersection of the two half-planes defined by $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$. Therefore, the probability in \eqref{eq:double} can be estimated as the ratio between angles given by, \begin{equation} \label{eq:angle} p\left( \vec{\mu}_{i}\T\vec{x}_{k} \geq 0, \vec{\mu}_{j}\T\vec{x}_{k} < 0 \right) = \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{2\pi} . \end{equation} From \eqref{eq:agreement}, \eqref{eq:double}, and \eqref{eq:angle}, we obtain, \begin{equation} \label{eq:full} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) = 1 - \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{\pi} . \end{equation} If we assume that the instances in $\cD_{(i,j)}$ are i.i.d., then the probability that the two $d'$-dimensional label vectors agree in every dimension can be computed as the product of the agreement probabilities of the individual dimensions, given by, \begin{eqnarray} \label{eq:prod} p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right) &=& \prod_{k=1}^{d'} p\left( h_{i}(\vec{x}_{k}) = h_{j}(\vec{x}_{k}) \right) \nonumber \\ &=& {\left( 1 - \frac{\theta(\vec{\mu}_{i}, \vec{\mu}_{j})}{\pi} \right)}^{d'} . \end{eqnarray} From \eqref{eq:prod} it follows that, \[ \theta(\vec{\mu}_{i}, \vec{\mu}_{j}) = \pi \left(1 - {p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}^{1/d'} \right) \qed \] Theorem~\ref{th:LSH} states that we can measure the agreement between the labels predicted by two feature predictors using the angle between their corresponding parameter vectors. More importantly, Theorem~\ref{th:LSH} provides us with a heuristic to approximately find the nearest neighbours of each vertex without having to compute the confusion matrices for all pairs of vertices in the ClassiNet. We compute the nearest neighbours for each feature predictor in the $d'$-dimensional space. Computation of ${p\left( \vec{h}_{i}(\cD_{(i,j)}) = \vec{h}_{j}(\cD_{(i,j)}) \right)}$ is closely related to the calculation of the Hamming distance between the label vectors $\vec{h}_{i}(\cD_{(i,j)})$ and $\vec{h}_{j}(\cD_{(i,j)})$. The Point Location in Equal Balls (PLEB) algorithm~\cite{Indyk:STOC:98} can be used to find neighbours under the Hamming distance in an efficient manner. This algorithm applies random permutations to the bit streams and sorts them to find the vectors with the smallest Hamming distance~\cite{Charikar:STOC:2002}. We use the variant of this algorithm proposed by Ravichandran and Hovy~\cite{Ravichandran:ACL:2005} that extends the original algorithm to find the $k$-nearest neighbours. Specifically, we use this algorithm to find the $k$-nearest neighbours for each feature $v_i$, and compute edge-weights $w_{ij}$ for each $v_i$ and its nearest neighbours $v_j$ using the contingency table. Note that although we find the nearest neighbours using the approximate method described above, the edge-weights computed between the selected neighbours are precise because they are based on the confusion matrix.
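The angle--agreement relation in Theorem~\ref{th:LSH} can also be checked empirically. The following Python sketch assumes isotropic Gaussian instances, the setting in which \eqref{eq:angle} is exact; for sparse, non-negative text features the relation holds only approximately:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 100, 200000

# Two hypothetical linear feature predictors with random parameters.
mu_i = rng.normal(size=d)
mu_j = rng.normal(size=d)
cos = mu_i @ mu_j / (np.linalg.norm(mu_i) * np.linalg.norm(mu_j))
theta = np.arccos(np.clip(cos, -1.0, 1.0))

# Empirical per-instance agreement probability, cf. (eq:full).
X = rng.normal(size=(n_samples, d))
agree = np.mean((X @ mu_i >= 0) == (X @ mu_j >= 0))

print(1 - theta / np.pi, agree)  # the two values should be close
\end{verbatim}
The full-vector relation of Theorem~\ref{th:LSH} then follows by raising the per-instance agreement probability to the power $d'$, as in \eqref{eq:prod}.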
To estimate the size of the neighbourhood $k$ that we must select in order to obtain a reliable approximation of the neighbours that we would have in the original $d$-dimensional space, we use the following procedure. First, we randomly select a small number $\alpha (\ll N)$ of vertices from the trained ClassiNet, and compute the confusion matrices between each of those $\alpha$ vertices and the remaining vertices in the ClassiNet. We then compute the weights $w_{ij}$ of the edges that connect the selected $\alpha$ vertices to the rest of the vertices in the ClassiNet. Following this procedure, we compute the nearest neighbours of each of the $\alpha$ selected vertices without using the projection trick described above. Second, we apply the projection method described above to all the vertices in the ClassiNet, and compute the nearest neighbours of the $\alpha$ vertices that we selected. We then compare the overlap between the two sets of neighbourhoods. In our preliminary experiments, we found the neighbourhood size $k = 10$ to be an admissible trade-off between the accuracy of the neighbourhood computation and speed. Therefore, all experiments described in the paper use edge-weights computed with this $k$ value. \subsection{ClassiNets vs. Co-occurrence Graphs} \label{sec:classi-cooc} Before we describe how to use the trained ClassiNets to classify short-texts, it is worth discussing the connection between word co-occurrence graphs and ClassiNets. Representing the association between words using co-occurrence graphs has a long history in NLP~\cite{Rada:2011}. Word co-occurrences could be measured using symmetric measures, such as the Pointwise Mutual Information (PMI) or the Log-Likelihood Ratio (LLR), or asymmetric measures, such as the KL-divergence or conditional probability~\cite{FSNLP}. In a co-occurrence graph, vertices correspond to words, and the weight of the edge connecting two vertices represents the strength of association between the corresponding two words. However, for two words $v_i$ and $v_j$ to be connected by an edge in a co-occurrence graph, they must explicitly co-occur within the same context. On the other hand, in ClassiNets, we have edges between vertices not only for words that co-occur within the same context, but also for words that are predicted for the same instance, even when neither feature actually occurs in that instance. For example, for an instance $\vec{x}$ where $x_i = x_j = 0$, we might still have $h_i(\vec{x}) = h_j(\vec{x}) = 1$. Therefore, ClassiNets consider implicit occurrences of features that would not be captured by co-occurrence graphs. In fact, ClassiNets can be thought of as a generalized version of co-occurrence graphs that subsumes explicit co-occurrences. To see this, let us define feature predictors $h_i$ and $h_j$ as follows: \begin{eqnarray} h_i(\vec{x}) = \vec{1}[x_i \neq 0] \\ h_j(\vec{x}) = \vec{1}[x_j \neq 0] \end{eqnarray} Here, $\vec{1}$ is the indicator function defined as follows: \begin{equation} \label{eq:indicator} \vec{1}(\delta) = \begin{cases} 1 & \delta = \text{TRUE} \\ 0 & \delta = \text{FALSE} \end{cases} \end{equation} Then, $M_{11}$ in Table~\ref{tbl:conf} can be written as, \begin{equation} M_{11} = \sum_{\vec{x} \in \cD_{(i,j)}} \vec{1}[x_i \neq 0] \vec{1}[x_j \neq 0] , \end{equation} which is the number of instances in which both features $v_i$ and $v_j$ would co-occur.
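To make this concrete, the following hedged Python sketch estimates the confusion-matrix edge weight (cf.\ \eqref{eq:weight}) for arbitrary $0/1$-valued predictors, and instantiates the indicator predictors above on toy data; all names and numbers are illustrative:
\begin{verbatim}
def indicator(i):
    # Feature predictor that simply checks whether feature i occurs.
    return lambda x: 1 if x[i] != 0 else 0

def edge_weight(h_i, h_j, D):
    # w_ij estimated as M11 / (M11 + M10): among the instances in D
    # where h_i fires, the fraction where h_j fires as well.
    M11 = sum(1 for x in D if h_i(x) == 1 and h_j(x) == 1)
    M1 = sum(1 for x in D if h_i(x) == 1)   # = M11 + M10
    return M11 / M1 if M1 > 0 else 0.0

D = [(1, 1, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)]  # toy instances
print(edge_weight(indicator(0), indicator(1), D))  # prints 2/3
\end{verbatim}
With indicator predictors, \texttt{edge\_weight} reduces to the conditional co-occurrence probability of feature $j$ given feature $i$.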
Therefore, ClassiNet reduces to a co-occurrence graph when the feature predictor is simply the indicator function for a single feature. However, in general, feature predictors consider not just a single feature but a combination (potentially non-linear) of multiple features, thereby capturing broader information than a word co-occurrence graph. \section{Feature Expansion} \label{sec:classinet:expand} In this Section, we describe several methods that use the ClassiNets created in Section~\ref{sec:classinets} for predicting missing features in instances, thereby overcoming the feature sparseness problem. We refer to this operation as \emph{feature expansion}. Given a train or a test instance $\vec{x} = (x_1, \ldots, x_d)\T$, we use the non-zero features $x_i \neq 0$ in $\vec{x}$ to find similar vertices $v_j \in \cV$ in the created ClassiNet. In Section~\ref{sec:local}, we describe \emph{local feature expansion} methods that consider only the nearest neighbours of the vertices in the ClassiNet that correspond to non-zero features in an instance, whereas in Section~\ref{sec:global} we propose a \emph{global feature expansion} method that propagates the original features across the ClassiNet to predict the related features. \subsection{Local Feature Expansion} \label{sec:local} Given a ClassiNet, we propose several feature expansion methods that consider the local neighbourhood of the non-zero features that occur in an instance. We refer to such methods collectively as \emph{local feature expansion} methods. \subsubsection{Independent Expansion} \label{sec:expand:independent} The first local feature expansion method we propose expands each feature in an instance independently of the others. Specifically, we predict whether $v_i$ occurs in a given instance $\vec{x}$ using the feature predictor $h_i$ we trained from the unlabeled instances. If $h_i(\vec{x}) = 1$, then we append $v_i$ as an expansion feature to $\vec{x}$; otherwise we ignore $v_i$. We repeat this process for all the vertices $v_i \in \cV$ and append the positively predicted vertices to the original instance $\vec{x}$. If the $i$-th feature $x_i$ already appears in $\vec{x}$ and is also predicted by $h_i(\vec{x})$, then we set its feature value to $x_i + h_i(\vec{x})$. In the case where we have binary feature representations we will have $x_i \in \{0,1\}$. Therefore, in the binary feature setting, if a feature that already exists in an instance is also predicted, its weight is doubled ($x_i + h_i(\vec{x}) = 1 + 1 = 2$). Moreover, with a probabilistic classifier such as logistic regression, we can use the posterior probability instead of the predicted label as $h_i(\vec{x})$ when computing feature values for the expansion features. \subsubsection{Local Path Expansion} \label{sec:expand:local} This method extends the independent expansion method described in Section~\ref{sec:expand:independent} by including all the vertices along the shortest paths over the ClassiNet that connect predicted features to the original features. For example, let us assume that a feature $x_i = 0$ in an instance $\vec{x}$. If $h_i(\vec{x}) = 1$, we will append $v_i$ as well as all the vertices along the shortest paths that connect $v_i$ to each feature $x_j \neq 0$ that exists in the instance $\vec{x}$. Because all expanded features are connected to the original non-zero features that exist in the instance via some local path, we refer to this approach as the \emph{local path expansion}.
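A minimal sketch of this local path expansion step, written in Python with the \texttt{networkx} library, is given below; the names and the unweighted shortest-path choice are illustrative simplifications, not our actual implementation (weighted variants can pass a \texttt{weight} argument):
\begin{verbatim}
import networkx as nx

def local_path_expand(G, nonzero, predicted):
    # G: the ClassiNet as a directed graph; nonzero: features present
    # in the instance; predicted: features v_i with h_i(x) = 1.
    expansion = set(predicted)
    for v_i in predicted:
        for v_j in nonzero:
            try:
                # Collect every vertex on a shortest path v_i -> v_j.
                expansion.update(nx.shortest_path(G, source=v_i,
                                                  target=v_j))
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                pass  # no connecting path; nothing to add
    return expansion - set(nonzero)
\end{verbatim}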
By construction, the set of expansion candidates produced by the local path expansion method subsumes that of the independent expansion method. \subsubsection{All Neighbour Expansion} \label{sec:expand:nn} In this expansion method, we first use the edge-weights to find the $k$-nearest neighbours of each vertex $v_i$, and connect the neighbours to create a $k$-nearest neighbour graph from the trained ClassiNet. The $k$-nearest neighbour graph that we create from the ClassiNet in this manner is a subgraph of the ClassiNet. Two vertices $v_i$ and $v_j$ are connected by an edge in this $k$-nearest neighbour graph if and only if $v_i$ is among the top $k$ most similar vertices to $v_j$ and $v_j$ is among the top $k$ most similar vertices to $v_i$. The weights of all the edges in this $k$-nearest neighbour graph are set to $1$. Next, for each non-zero feature in an instance $\vec{x}$, we use its nearest neighbours as expansion features. This method ignores the absolute values of the edge-weights in the ClassiNet, and considers only their relative strengths. If we increase the value of $k$, we will have a larger set of candidate expansion features. However, it will also result in considering features that are less relevant to the original features. Therefore, there exists a trade-off between the number of expansion candidates we can use for feature vector expansion and the relevancy of the expansion features to the original features. Using development data, we constructed $k$-nearest neighbour graphs for varying $k$ values, and found that $k > 4$ settings often result in noisy neighbourhoods. Consequently, when using neighbour expansion, we set $k = 4$. \subsubsection{Mutual Neighbour Expansion} \label{sec:expand:mutual} The mutual neighbour expansion method also uses the same $k$-nearest neighbour graph as used by the all neighbour expansion method described in Section~\ref{sec:expand:nn}. The mutual neighbour expansion method selects a vertex $v_j$ in the ClassiNet as an expansion candidate if $v_j$ is a nearest neighbour of at least two distinct vertices $v_i$ and $v_k$ for which $x_i \neq 0$ and $x_k \neq 0$ in the instance $\vec{x}$ to be expanded. This method can be seen as a conservative version of the all neighbour expansion method described in Section~\ref{sec:expand:nn} because we ignore vertices $v_j$ that are nearest neighbours of only a single feature in the original feature vector. The mutual neighbour expansion method addresses an issue associated with the previously proposed local feature expansion methods, which select expansion candidates separately for each non-zero feature in the feature vector to be expanded, ignoring the fact that the feature vector represents a single coherent short-text. However, this conservative expansion candidate selection strategy of the mutual neighbour expansion method means that we will have a smaller set of expansion candidates in comparison to, for example, the all neighbour expansion method. \subsection{Global Feature Expansion} \label{sec:global} The local feature expansion methods described in Section~\ref{sec:local} consider only the vertices in the ClassiNet that are \emph{directly connected} to a feature in an instance as expansion candidates. Even in the case of local path expansion (Section~\ref{sec:expand:local}), the expansion candidates are limited to the local neighbours of the original features and the predicted features.
Considering that ClassiNet is a directed graph, we can perform label propagation on the ClassiNet to find features that are neither directly connected to, nor appear in the local neighbourhood of, a feature in a short-text, but are still relevant. For example, assume that \emph{Google} and \emph{Microsoft} are not local neighbours in a ClassiNet. Consequently, none of the local feature expansion methods will be able to predict \emph{Microsoft} as a relevant feature for expanding a short-text containing \emph{Google}. However, if \emph{Bing}, a Web search engine similar to \emph{Google}, appears in the local neighbourhood of \emph{Google} in the ClassiNet, and if we can propagate from \emph{Bing} to its parent company \emph{Microsoft} via the ClassiNet, then we will be able to predict \emph{Microsoft} as a relevant feature for \emph{Google}. The propagation might be over multiple hops, thereby reaching beyond the local neighbourhood of a feature. Propagation over the ClassiNet can also help to reduce the ambiguity in feature expansion. For example, consider the sentence ``\emph{Microsoft and Apple are competing for the tablet computer market.}''. If we do not perform word sense disambiguation prior to feature expansion, and we expand each feature independently of the others, then it is likely that we might incorrectly expand \emph{apple} with other types of fruits such as \emph{banana} or \emph{orange}. This phenomenon has been observed in prior work on set expansion, where it is referred to as \emph{semantic drift}~\cite{Kozareva:NAACL:2010}. However, if we find the expansion candidates jointly, such that they are relevant to all the features (words) in the sentence, then they must be relevant to both \emph{Microsoft} and \emph{Apple}, which favours other IT companies such as \emph{Google} or \emph{Yahoo}. All local feature expansion methods described in Section~\ref{sec:local}, except the independent expansion method, address this issue by ranking expansion candidates depending on how well they are related to all the features in a short-text. Label propagation can solve this ambiguity problem in a more systematic manner by combining multiple random walks initiated at the different features that exist in a short-text. Next, we describe a \emph{global feature expansion} method based on propagation over the ClassiNet. \begin{figure}[t] \centering \includegraphics[height=50mm]{global.pdf} \caption{Computing the feature value of an expansion feature $v^*$ for an instance that has $v_1 = x_1$ and $v_2 = x_2$ as non-zero features.} \label{fig:global} \end{figure} First, let us describe the proposed global feature expansion method using the ClassiNet shown in Figure~\ref{fig:global}. Here, we consider expanding an instance $\vec{x} = (x_1, x_2)\T$ with two non-zero features $v_1 = x_1$ and $v_2 = x_2$ ($x_1 \neq 0$, and $x_2 \neq 0$). We would like to compute the likelihood $p(v^*|\vec{x})$ of a vertex $v^*$ as an expansion candidate for the instance $\vec{x}$. From Figure~\ref{fig:global} we see that there are two possible paths reaching $v^*$ starting from the original features $x_1$ and $x_2$. Assuming that the two paths are independent, we compute $p(v^*|\vec{x})$ as follows: \begin{equation} p(v^*|\vec{x}) = p(x_1)p(v_3|x_1)p(v^*|v_3) + p(x_2)p(v_4|x_2)p(v^*|v_4) \label{eq:example} \end{equation} The computation described in Figure~\ref{fig:global} can be generalized for an arbitrary ClassiNet $\cG(\cV, \cE, \mat{W})$, and an instance $\vec{x} = (x_1, \ldots, x_d)\T$.
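Before generalizing, we can sanity-check \eqref{eq:example} numerically; the following numpy sketch uses made-up transition probabilities and a uniform prior on the observed features:
\begin{verbatim}
import numpy as np

# Vertices: 0 = v1, 1 = v2, 2 = v3, 3 = v4, 4 = v*.
# Convention: W[b, a] = p(b|a), so W @ x moves mass one hop forward.
W = np.zeros((5, 5))
W[2, 0] = 0.6   # p(v3|v1)
W[3, 1] = 0.5   # p(v4|v2)
W[4, 2] = 0.7   # p(v*|v3)
W[4, 3] = 0.4   # p(v*|v4)

x = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # uniform prior on v1, v2

# Two hops accumulate both paths of (eq:example):
# 0.6*0.7 + 0.5*0.4 = 0.62
print((W @ W @ x)[4])
\end{verbatim}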
To make the generalization precise, let us define the set of non-cyclic paths connecting two vertices $v_i$, $v_j$ in $\cG$ to be $\Gamma(v_i, v_j)$. For the example shown in Figure~\ref{fig:global} we have the two paths $x_1 \rightarrow v_3 \rightarrow v^*$, and $x_2 \rightarrow v_4 \rightarrow v^*$. We compute the likelihood $p(v^* | \vec{x})$ of a vertex $v^* \in \cV$ being an expansion candidate of $\vec{x}$ as follows: \begin{equation} p(v^*|\vec{x}) = \sum_{k=1}^{d} x_k \, p(x_k) \sum_{\pi \in \Gamma(v_k, v^*)} \prod_{(a,b) \in \pi} p(b|a) \label{eq:global} \end{equation} If a feature $x_k = 0$, then the likelihoods corresponding to paths starting from $v_k$ vanish in the computation of \eqref{eq:global}. The prior probabilities of features $p(x_k)$ can be estimated from train data by dividing the number of instances that contain $x_k$ by the total number of instances. Alternatively, we could set a uniform prior for $p(x_k)$, thereby considering all the words that occur in an instance equally. We follow the latter approach in our experiments. The sum-product computation over paths can be performed efficiently by observing that it can be modeled as a label propagation problem over a directed weighted graph, where the instance $\vec{x}$ provides the initial state vector and the transition probabilities are given by the weight matrix $\mat{W}$. Vertices that can be reached within $q$ hops receive the scores $\sum_{i=1}^{q}\mat{W}^{i}\vec{x}$. Neighbours that are distantly located in the ClassiNet are less reliable as expansion candidates. To reduce the noise due to distant (and potentially irrelevant) vertices during the propagation, we introduce a damping factor $0 < \gamma \leq 1$ in the summation, $\sum_{i=1}^{q}\gamma^i \mat{W}^{i} \vec{x}$. In Section~\ref{sec:damp}, we experimentally study the effect of the level of damping on the accuracy of short-text classification. The feature expansion methods we described above are used to predict missing features for both train and test instances. We expand the feature vectors representing the train/test instances, and assign unique identifiers to the expansion features, thereby distinguishing between the original features and the expanded features. For example, given the positive sentiment labeled train sentence ``\emph{I love dogs}'', we can represent it using the feature vector [(\emph{I}, 1), (\emph{love}, 1), (\emph{dog}, 1)]. Here, we assume that lemmatization has been conducted on the input and the feature \emph{dogs} has been converted to its singular form \emph{dog}. Let us further assume that from the trained ClassiNet we were able to predict that \emph{cat} is a related feature for \emph{dog}, and that the candidate score $p(cat|dog) = 0.8$. Next, we add the feature (\emph{EXP=cat}, 0.8) to the feature vector representing this train instance, where the prefix \emph{EXP=} indicates that it is a feature introduced by the expansion method and not a feature that existed in the original train instance. Distinguishing original vs. expansion features is useful when we would like to learn different weights for the same feature depending on whether it is expanded or not. For example, if a particular feature is not very useful as an expansion feature, it will be assigned a lower weight, thereby effectively pruning that feature out from the model learnt by the classifier. The first step of learning a ClassiNet is learning the feature predictors. In this regard, any word embedding learning method can be used for the purpose of learning feature predictors.
Once the feature predictors are learnt, we can create a ClassiNet in the manner proposed in this paper, and use it to perform feature expansion with the local or global feature expansion methods described above. This view of ClassiNets illustrates the general applicability of the proposed method. \section{A Theoretical Analysis of ClassiNets} \label{sec:theory} Before we empirically evaluate the performance of the proposed ClassiNets for feature expansion in short-text classification, let us analyze some interesting properties of ClassiNets. To simplify the analysis, let us assume that we are using a ClassiNet for learning a linear classifier $\vec{\phi} \in \R^{d}$ for a binary classification task. Specifically, let us assume that we are given a train dataset $\{(\vec{x}^{(k)}, y^{(k)})\}_{k=1}^{N}$ consisting of $N$ instances, where each train instance $k$ is represented by a feature vector $\vec{x}^{(k)} \in \R^{d}$. The binary target label assigned to the $k$-th train instance is denoted by $y^{(k)} \in \{1, -1\}$. For correctly classified train instances $\vec{x}^{(k)}$ we have $y^{(k)}\vec{\phi}\T\vec{x}^{(k)} > 0$. We use the trained linear classifier $\vec{\phi}$, and predict the label $\hat{y}$ of an unseen test instance $\hat{\vec{x}}$ as follows: \begin{eqnarray} \label{eq:pred} \hat{y} = \begin{cases} 1 & \text{if } \vec{\phi}\T\hat{\vec{x}} > 0 \\ -1 & \text{otherwise} \end{cases} \end{eqnarray} Let us assume that we have learnt a feature predictor $h_{i}$ that predicts whether the $i$-th feature exists in a given instance. As described in Section~\ref{sec:overview}, we can use any classification algorithm to learn the feature predictors. However, as a concrete case, let us consider linear classifiers in this analysis. In the case of linear classifiers, we can represent the feature predictor learnt for the $i$-th feature by the vector $\vec{\mu}_{i}$. Following the notation introduced in Section~\ref{sec:overview}, we can write the feature predictor $h_{i}$ as follows: \begin{equation} h_{i} (\vec{x}) = \begin{cases} 1 & \text{if } \vec{\mu}_{i}\T\vec{x} > 0 \\ -1 & \text{otherwise} \end{cases} \end{equation} In the ClassiNets described in the paper so far, we used the predicted discrete labels as the values of the predicted features during feature expansion. However, in this analysis let us consider the more general case where we use the actual prediction score, $\vec{\mu}_{i}\T\vec{x}$, as the contribution of the feature expansion towards the $i$-th feature. We can construct the expanded feature vector, $\vec{x}^{*} \in \R^{d}$, of the feature vector $\vec{x} \in \R^{d}$ by considering the inner-product between $\vec{x}$ and each of the feature predictors $\vec{\mu}_{i}$ as in \eqref{eq:expand}. \begin{equation} \label{eq:expand} \vec{x}^{*} = [ (x_{1} + \vec{\mu}_{1}\T\vec{x}), \ldots, (x_{i} + \vec{\mu}_{i}\T\vec{x}), \ldots, (x_{d} + \vec{\mu}_{d}\T\vec{x})]\T \end{equation} Here, we denote the $i$-th dimension of the feature vector $\vec{x}$ by $x_{i}$. We can transform the given train dataset $\{(\vec{x}^{(k)}, y^{(k)})\}_{k=1}^{N}$ by expanding each feature vector separately using \eqref{eq:expand}, and use the expanded feature vectors to train a binary linear classifier $\vec{\phi}^{*}$.
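Because the expansion in \eqref{eq:expand} is linear in $\vec{x}$, it can be written compactly as a matrix product, a fact we exploit in the derivation below; the following is a minimal numpy sketch with random (hypothetical) values verifying this:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 6
x = rng.normal(size=d)
L = rng.normal(size=(d, d))   # rows of L are the predictors mu_i
phi = rng.normal(size=d)      # classifier trained on expanded data

# Element-wise expansion as in (eq:expand) ...
x_star = x + L @ x
# ... scores identically to multiplying x by (I + L), cf. (eq:exp3).
assert np.allclose(phi @ x_star, phi @ (np.eye(d) + L) @ x)
\end{verbatim}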
Following \eqref{eq:pred}, we can use $\vec{\phi}^{*}$ to predict the label for a test instance $\vec{x}^{*}$ based on the prediction score given by \begin{eqnarray} \vec{\phi}^{*}\T\vec{x}^{*} &=& \sum_{i=1}^{d} \phi_{i}^{*} \left( x_{i} + \vec{\mu}_{i}\T\vec{x} \right) \nonumber \\ &=& \sum_{i=1}^{d} \phi_{i}^{*} x_{i} + \sum_{i=1}^{d} \phi_{i}^{*} \vec{\mu}_{i}\T\vec{x} \nonumber \\ &=& \vec{\phi}^{*}\T \vec{x} + \vec{\phi}^{*}\T \mat{L} \vec{x} \label{eq:exp2} \\ &=& \vec{\phi}^{*}\T \left(\mat{I} + \mat{L} \right) \vec{x} \label{eq:exp3} \end{eqnarray} Here, $\mat{I} \in \R^{d \times d}$ is the identity matrix, and $\mat{L} \in \R^{d \times d}$ is the matrix formed by arranging the feature predictors $\vec{\mu}_{i}$ in rows. In other words, $\mat{L} = [\vec{\mu}_{1} \ldots \vec{\mu}_{d}]\T$. The first term in \eqref{eq:exp2} corresponds to classifying the non-expanded (original) instance $\vec{x}$ using the classifier trained on the expanded train dataset. The second term in \eqref{eq:exp2} represents the prediction score due to feature expansion. From \eqref{eq:exp3} we see that performing feature expansion on a feature vector $\vec{x}$ is equivalent to multiplying $\vec{x}$ by the matrix $\left(\mat{I} + \mat{L} \right)$. Therefore, the local feature expansion methods described in Section~\ref{sec:local} can be seen as projecting the train feature vectors into the same $d$-dimensional feature space spanned by the features that exist in the train instances. As a special case, we see that when we do not learn feature predictors we have $\mat{L} = \mat{0}$, for which \eqref{eq:exp2} reduces to the prediction score $\vec{\phi}^{*}\T\vec{x}$ of the binary linear classifier trained using non-expanded train instances. \subsection{Edge weights of ClassiNets} Recall that the weight $w_{ij}$ of the edge connecting vertex $v_i$ to vertex $v_j$ in a ClassiNet was defined by \eqref{eq:weight}. For the binary linear feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ considered in the previous section, let us estimate the value of $w_{ij}$. Using the indicator function $\vec{1}$ defined by \eqref{eq:indicator}, we compute $M_{11}$ and $(M_{11} + M_{10})$ in \eqref{eq:weight} as follows: {\small \begin{eqnarray} && M_{11} = \sum_{k=1}^{N} \vec{1}[(y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{i}\mkern-5mu>\mkern-5mu0) \land (y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{j} \mkern-5mu > \mkern-5mu 0)] \label{eq:M11} \\ && M_{11} + M_{10} = \sum_{k=1}^{N} \vec{1}[(y^{(k)}\vec{x}^{(k)}\T\vec{\mu}_{i} > 0)] \label{eq:M*} \end{eqnarray} } Let us assume that we sample instances $\vec{x}$ from the train dataset randomly according to the distribution $p(\vec{x})$.
Then the expected values $\hat{M}_{11}$ and $\hat{M}_{11} + \hat{M}_{10}$ of the counts in \eqref{eq:M11} and \eqref{eq:M*} can be expressed using the expected number of correct classifications made by the feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ as follows: {\small \begin{eqnarray} && \hat{M}_{11} = \Ep_{p(\vec{x})}\left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0) \land (y\vec{x}\T\vec{\mu}_{j} > 0)] \right] \label{eq:M11:hat} \\ && \hat{M}_{11} + \hat{M}_{10} = \Ep_{p(\vec{x})} \left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0)] \right] \label{eq:M*:hat} \end{eqnarray} } Using the expected counts given by \eqref{eq:M11:hat} and \eqref{eq:M*:hat} we can compute the approximate value of the edge weight $\hat{w}_{ij}$ as follows: \begin{equation} \label{eq:weight:approx} \hat{w}_{ij} = \frac{\Ep_{p(\vec{x})}\left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0) \land (y\vec{x}\T\vec{\mu}_{j} > 0)] \right]} { \Ep_{p(\vec{x})} \left[ \vec{1}[(y\vec{x}\T\vec{\mu}_{i} > 0)] \right]} \end{equation} If we have a sufficiently large train dataset, then \eqref{eq:weight:approx} provides an alternative procedure for estimating the edge weights. We could randomly select samples from the train dataset, predict the features $i$ and $j$ for those samples, and compute the expectations as count ratios. We can repeat this procedure many times to obtain better approximations of the edge weights. Although this is a theoretically feasible procedure for approximately computing the edge weights, it can be slow in practice and might require many samples before we obtain a reliable approximation. Therefore, the edge weight computation method described in Section~\ref{sec:project} is more appropriate for practical purposes. \subsection{Analysis of the Global Feature Expansion Method} We already showed in \eqref{eq:exp3} that the local feature expansion methods can be viewed as transforming feature vectors by the matrix $(\mat{I} + \mat{L})$. However, an important strength of ClassiNet is that we can propagate the predicted features over the network using the global feature expansion method described in Section~\ref{sec:global}. Let us denote the edge-weight matrix of the ClassiNet $\cG$ by $\mat{W}$. The $(i,j)$-th element of $\mat{W}$ is denoted by $w_{ij}$. The connection between the edge weights $w_{ij}$ and the feature predictors $\vec{\mu}_{i}$ and $\vec{\mu}_{j}$ is given by \eqref{eq:weight:approx}. In the global feature expansion method, we repeatedly propagate the predicted features across the network, which can be seen as repeated multiplication by $\gamma \mat{W}$, where $\gamma$ is the damping factor described in Section~\ref{sec:global}. Observing this connection, we can derive the prediction score under the global feature expansion method, similarly to \eqref{eq:exp3}, as follows: \begin{eqnarray} \vec{\phi}^{*}\T\vec{x}^{*}&=& \vec{\phi}^{*}\T \left(\mat{I} + \gamma\mat{W} + \ldots + \gamma^{q} \mat{W}^{q} \right) \vec{x} \nonumber \\ &=& \vec{\phi}^{*}\T (\mat{I} - \gamma \mat{W})\inv (\mat{I} - \gamma^{q+1} \mat{W}^{q+1}) \vec{x} \label{eq:exp4} \end{eqnarray} For the closed form in \eqref{eq:exp4} to hold, the matrix $(\mat{I} - \gamma \mat{W})$ must be invertible, which requires $\gamma |\lambda_{r}| < 1$ for all eigenvalues $\lambda_{r}$ of $\mat{W}$. This requirement can be met in practice by a sufficiently small damping factor. For example, we could set $\gamma = 1/(1 + |\lambda_{\max}|)$, where $\lambda_{\max}$ is the eigenvalue of $\mat{W}$ with the largest absolute value.
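Both the damping condition and the closed form in \eqref{eq:exp4} are easy to verify numerically; a minimal numpy sketch with a random (hypothetical) weight matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, q = 20, 8
W = rng.random((n, n))
# Choose gamma from the spectral radius so that (I - gamma W)
# is invertible, as described above.
gamma = 1.0 / (1.0 + np.max(np.abs(np.linalg.eigvals(W))))

# Truncated propagation sum I + gamma W + ... + gamma^q W^q ...
S = sum(np.linalg.matrix_power(gamma * W, i) for i in range(q + 1))

# ... equals (I - gamma W)^{-1} (I - (gamma W)^{q+1}), cf. (eq:exp4).
I = np.eye(n)
closed = np.linalg.solve(I - gamma * W,
                         I - np.linalg.matrix_power(gamma * W, q + 1))
assert np.allclose(S, closed)
\end{verbatim}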
As a special case where we propagate the features without truncation, we have $q \rightarrow \infty$, for which we obtain the prediction score given in \eqref{eq:inf}. \begin{equation} \label{eq:inf} \vec{\phi}^{*}\T\vec{x}^{*} = \vec{\phi}^{*}\T (\mat{I} - \gamma \mat{W})\inv \vec{x} \end{equation} From \eqref{eq:inf}, we see that, similar to the local feature expansion methods, the global feature expansion method can also be seen as projecting the input feature vector $\vec{x}$ using the matrix $(\mat{I} - \gamma \mat{W})\inv$. \section{Experiments} \label{sec:exp} We create a ClassiNet using 257,306 unlabeled sentences from the Large Movie Review dataset\footnote{\url{http://ai.stanford.edu/~amaas/data/sentiment/}}. Each word in this dataset is uniquely represented by a vertex in the ClassiNet. We learn a linear predictor for each feature using automatically selected positive (reviews where the target feature appears) and negative (reviews where the target feature does not appear) training instances. The ClassiNet created from this dataset contains $489,000$ vertices. This ClassiNet is used in all the experiments described in the remainder of this paper. For evaluation purposes we use four binary classification datasets: the Stanford sentiment treebank (\textbf{TR})\footnote{\url{http://nlp.stanford.edu/sentiment/treebank.html}} (903 positive test instances and 903 negative test instances), the movie reviews dataset (\textbf{MR})~\cite{Pang:ACL:2005} (5331 positive instances and 5331 negative instances), the customer reviews dataset (\textbf{CR})~\cite{Hu:KDD:2004} (925 positive instances and 569 negative instances), and the subjectivity dataset (\textbf{SUBJ})~\cite{Pang+Lee:04a} (5000 positive instances and 5000 negative instances). We perform five-fold cross-validation in all datasets, except in the Stanford sentiment treebank where there exists a pre-defined train and test split. In each dataset, we use the train portion to learn a binary classifier. Next, we use the trained ClassiNet to expand the feature vectors for the test instances. We then measure the classification accuracy of the binary classifier on the expanded test instances. If high classification accuracies are obtained using a particular feature expansion method, then that feature expansion method is considered superior. We use a CPU server with 48 2.5GHz Intel Xeon cores and 512GB of RAM in our experiments. The entire training pipeline of training the feature predictors, building the ClassiNet, and expanding the training instances using the Global Feature Expansion method takes approximately 1.5 hours. The testing phase is significantly faster because we can use the created ClassiNet to expand test instances and use the trained model to make predictions. For example, for the \textbf{SUBJ} dataset, which is the largest among all datasets used in our experiments, it takes only 5 minutes to both expand (using Global Feature Expansion) and predict (using logistic regression). \subsection{Binary Classification of Short-Texts} \label{sec:sentiment} Direct evaluation of the features predicted by the ClassiNet is difficult because there is no gold standard for feature expansion. Instead, we perform an extrinsic evaluation of the created ClassiNet by using it to expand feature vectors representing sentences in several binary text classification tasks.
If we can observe any increase (or decrease) in classification accuracy for the target classification task when we use the features predicted by the ClassiNet, then it can be directly associated with the effectiveness of the ClassiNet. For the purpose of training a binary classifier, we represent a sentence by a real-valued vector, in which elements correspond to the unigrams extracted from that sentence. The feature values are computed using the tfidf measure. We train a binary logistic regression model, where the $L_{2}$ regularisation coefficient is tuned using development data selected from the Stanford sentiment treebank dataset. We use classification accuracy, defined as the ratio between the number of correctly classified test sentences and the total number of test sentences in the Stanford sentiment treebank, as the evaluation measure. In addition to reporting the overall classification accuracies, we report classification accuracies separately for the positively labeled and the negatively labeled sentences. Because this is a binary classification task, a random classifier would obtain an accuracy of $50\%$. There are $903$ positive and $908$ negative sentiment labeled test sentences in the Stanford sentiment treebank test dataset. Therefore, a baseline that assigns the majority label would obtain an accuracy of $50.13\%$ on this dataset. Table~\ref{tbl:sentiment} compares the sentiment classification accuracies obtained by the following methods: \textbf{No Expansion:} This baseline does not perform any feature expansion. It trains a binary logistic regression classifier using the train sentences, and applies it to classify the sentiment of the test sentences. This baseline demonstrates the level of performance we would obtain if we had not performed any feature expansion. It can be seen as a lower-baseline for this task. \textbf{Independent Expansion:} This method is described in Section~\ref{sec:expand:independent}. \textbf{Local Path Expansion:} This method is described in Section~\ref{sec:expand:local}. \textbf{All neighbour Expansion:} This method is described in Section~\ref{sec:expand:nn}. \textbf{Mutual neighbour Expansion:} This method is described in Section~\ref{sec:expand:mutual}. \textbf{WordNet:} Using lexical resources such as thesauri to find related words is a popular technique used in query expansion~\cite{Fang:ACL:2008,Gong:2005}. To simulate the performance that we would obtain if we had used an external resource such as the WordNet to find the expansion candidates, we implement the following baseline. In the WordNet, words that are semantically related are grouped into clusters called \emph{synsets}. For each feature in a test instance, we search the WordNet for that feature, and use all words listed in the synsets for that feature as its expansion candidates. We consider all synonyms in a synset to be equally relevant as expansion candidates of a feature. \textbf{SCL:} Domain adaptation methods attempt to overcome the feature mismatch between source and target domains by predicting missing features and/or learning a lower-dimensional embedding common to the two domains. Although we do not have two domains in our setting, we can still apply domain adaptation methods such as structural correspondence learning (SCL), proposed by Blitzer et al.~\cite{Blitzer:EMNLP:2006}, to predict missing features in a given short-text. SCL was described in detail in Section~\ref{sec:related}. Specifically, we train SCL using the same set of vertices as used by the ClassiNet as pivots.
This enables a fair comparison because any performance difference between SCL and the methods that use the ClassiNet can be directly attributed to the projection method used in SCL, and not to differences in the expansion set. We then train linear predictors for those pivots using logistic regression. We arrange the trained linear predictors as rows in a matrix, on which we subsequently perform singular value decomposition to obtain a lower-dimensional projection. Following the recommendations in \cite{Blitzer:EMNLP:2006}, we set the dimensionality of the projection to $50$. Both train and test instances are first projected to this lower-dimensional space, and we append the projected features to the original feature vectors. Next, we train a binary sentiment classifier using logistic regression with $\ell_{2}$ regularisation. The regularisation coefficient is set using a held-out set of review sentences. \textbf{FTS:} FTS is the frequent term sets method proposed by Man~\cite{Man:2014}. First, co-occurrence and class-orientation relations are defined among features (terms). Next, terms that occur in those relations more frequently than a pre-defined threshold (support) are selected as expansion candidates. Finally, for each feature in a short-text, the frequent term sets containing this feature are appended as expansion features to the original feature vector representing the short-text. FTS can be considered a method that uses clusters of features induced from the data instances to overcome the feature sparseness problem. \textbf{CBOW:} To compare the explicit feature expansion approach used by ClassiNets against implicit text representation methods, we use pre-trained word embeddings to represent a short-text in a lower-dimensional space. Specifically, we create $300$-dimensional continuous bag-of-words (CBOW)~\cite{Milkov:2013} embeddings from the same corpus used to build the ClassiNet, and add the embedding vectors of all the words in a short-text to create a $300$-dimensional vector that represents the given short-text. \textbf{Global Feature Expansion:} This method propagates the original features across the trained ClassiNet, and is described in Section~\ref{sec:global}. It is the main method proposed in this paper. \begin{table}[t] \caption{Binary classification accuracies.} \begin{center} \begin{tabular}{l c c c c} \toprule Method & \textbf{TR} & \textbf{MR} & \textbf{CR} & \textbf{SUBJ} \\ \midrule No Expansion & $76.31$ & $73.35$ & $81.54$ & $88.95$ \\ Independent Expansion & $75.32$ & $74.11$ & $78.19$ & $87.15$ \\ Local Path Expansion & $76.97$ & $73.73$ & $81.87$ & $88.05$ \\ All neighbour Expansion & $77.36$ & $72.93$ & $82.55$ & $88.75$ \\ Mutual neighbour Expansion & $77.13$ & $74.15$ & $80.87$ & $88.95$ \\ WordNet & $76.58$ & $66.09$ & $79.86$ & $77.95$ \\ SCL~\cite{Blitzer:EMNLP:2006} & $78.02$ & $74.44$ & $81.20$ & $89.25$ \\ FTS~\cite{Man:2014} & $76.47$ & $66.83$ & $62.41$ & $50.15$ \\ CBOW & $77.52$ & $73.31$ & $79.87$ & $88.88$ \\ Global Feature Expansion & $\mathbf{78.30}$ & $\mathbf{81.20}^{*}$ & $\mathbf{83.89}^{*}$ & $\mathbf{89.70}$ \\ \bottomrule \end{tabular} \end{center} \label{tbl:sentiment} \end{table} We summarise the classification accuracies obtained by the different approaches on the four test datasets in Table~\ref{tbl:sentiment}.
For each dataset we indicate the best performing method using boldface font, whereas an asterisk indicates that the best performance is statistically significantly better than that of the second-best method on the same dataset according to a two-tailed paired t-test at the $0.01$ significance level. From Table~\ref{tbl:sentiment}, we see that the proposed \textbf{Global Feature Expansion} method obtains the best performance in all four datasets. Moreover, in the \textbf{MR} and \textbf{CR} datasets, its performance is significantly better than that of the second-best methods (respectively, \textbf{SCL} and \textbf{All neighbour Expansion}) on those two datasets. Among the four local expansion methods, \textbf{All neighbour Expansion} reports the best performance in the \textbf{TR} and \textbf{CR} datasets, whereas \textbf{Mutual neighbour Expansion} reports the best performance in the \textbf{MR} and \textbf{SUBJ} datasets. The \textbf{Independent Expansion} method performs worse than the \textbf{No Expansion} baseline in the \textbf{TR}, \textbf{CR}, and \textbf{SUBJ} datasets, indicating that by individually expanding each feature in a short-text we introduce a significant level of noise into the short-text. This result shows the importance for a feature expansion method of considering all the features in an instance when adding related features to that instance. None of the local feature expansion methods are able to outperform the global feature expansion method in any of the datasets. In particular, in the \textbf{SUBJ} dataset we see that none of the local feature expansion methods outperform the \textbf{No Expansion} baseline. This result implies that it is not sufficient to simply create a ClassiNet; it is also important to use an appropriate feature expansion method on the built ClassiNet to find expansion features that overcome the feature sparseness problem in short-text classification. The \textbf{FTS} method performs poorly in all our experiments. This indicates that the frequency of a feature is not a good indicator of its effectiveness as an expansion candidate. On the other hand, the \textbf{WordNet} method that uses synsets as expansion candidates performs much better than the \textbf{FTS} method. Not surprisingly, this result shows that synonyms are useful as expansion candidates. However, a prerequisite of this approach is the availability of thesauri that are either manually or semi-automatically created. Such linguistic resources might be unavailable or incomplete for some languages. On the other hand, our proposed method does not require such linguistic resources. The \textbf{CBOW} and \textbf{SCL} methods perform competitively with the \textbf{Global Feature Expansion} method in all datasets. Given that both \textbf{CBOW} and \textbf{SCL} use word-level embeddings to compute a representation for a short-text, this result shows the effectiveness of word-level embeddings as a means to overcome feature sparseness in short-text classification tasks. We compare non-compositional sentence-level embedding methods against the proposed \textbf{Global Feature Expansion} method later in Section~\ref{sec:sentemb}. \subsection{Comparisons against sentence-level embeddings} \label{sec:sentemb} An alternative direction for representing short-texts is to project the entire text directly to a lower-dimensional space, without applying any compositional operators to word-level embeddings.
The expectation is that the overlap between short-texts in the projected space will be higher than that in the original space, such as a bag-of-words representation of a short-text. Skip-thought vectors~\cite{Kiros:2015}, FastSent~\cite{Hill:NAACL:2016}, and Paragraph2Vec~\cite{Le:ICML:2014} are popular sentence-level embedding methods that have reported state-of-the-art performance on text classification tasks. In contrast to our proposed method, which explicitly appends features to the original feature vectors to overcome the feature sparseness problem, sentence-level embedding methods can be seen as implicit feature representation methods. In Table~\ref{tbl:sentemb}, we compare the proposed method against the state-of-the-art sentence-level embedding methods. We use the published results in \cite{Kiros:2015} on the \textbf{MR}, \textbf{CR}, and \textbf{SUBJ} datasets for Skip-thought, FastSent, and Paragraph2Vec, without re-training those methods. All three methods are trained on the Toronto books corpus~\cite{moviebook}. Performance of these methods on the \textbf{TR} dataset was not available. As a multiclass classification setting, we used the \textbf{TREC} question-type classification dataset. In this dataset, each question is manually classified into six question types depending on the information asked in the question: abbreviation, entity, description, human, location, and numeric. We use the same ClassiNet as in the binary classification tasks to predict features for the 5500 train and 500 test questions. A multiclass logistic regression classifier is trained on the expanded feature vectors of the train questions, and tested on the expanded feature vectors of the test questions. Next, we briefly describe the methods compared in Table~\ref{tbl:sentemb}. \textbf{Skip-thought}~\cite{Kiros:2015} is a sequence-to-sequence model that encodes sentences using a Recurrent Neural Network (RNN) with Gated Recurrent Units (GRUs)~\cite{Cho:SSST:2014}. \textbf{FastSent}~\cite{Hill:NAACL:2016} is similar to \textbf{Skip-thought} in that both models predict the words in the next and previous sentences given the current sentence. However, unlike \textbf{Skip-thought}, which considers the word order in a sentence, \textbf{FastSent} models a sentence as a bag-of-words. \textbf{Paragraph2Vec}~\cite{Le:ICML:2014} learns a vector for every short-text (e.g., a sentence) in a corpus jointly with word embeddings for every word in that corpus, such that the word embeddings are shared across all short-texts in the corpus. The Sequential Denoising Autoencoder (\textbf{SDAE})~\cite{Hill:NAACL:2016} is an encoder-decoder model with a Long Short-Term Memory (LSTM)~\cite{Hochreiter:1997} unit. We use the \textbf{SDAE} version that uses pre-trained CBOW embeddings to initialise the word embeddings because of its superior performance over the \textbf{SDAE} version that uses randomly initialised word embeddings. We use Convolutional Neural Networks (\textbf{CNN}) for creating sentence-level embeddings as a baseline. For this purpose, we follow the model architecture proposed by \citet{kim:2014:EMNLP2014}. Specifically, each word $v_{i}$ in a sentence is represented by a $d$-dimensional word embedding $\vec{v}_{i} \in \R^{d}$, and the word embeddings are concatenated to create a fixed-length sentence embedding.
The maximum length $n$ of a sentence is used to determine the length of this initial sentence-level embedding, where sentences with fewer words than this maximum length are padded with null vectors. Next, a convolution operator defined by a filter $\vec{w} \in \R^{hd}$ is applied on windows of $h$ consecutive tokens in sentences to produce new feature vectors for the sentences. We use several convolutional filters by varying the window size. Next, max-over-time pooling~\cite{Collobert:2011} is applied on this feature map to select the maximum value corresponding to a particular feature. This operation produces a sentence-level embedding that is independent of the length of the sentence. Finally, a fully connected layer with dropout~\cite{Srivastava:2014} and a softmax output unit is applied on top of this sentence representation to predict the class label of a sentence. Pre-trained CBOW embeddings are used in the CNN-based sentence encoder as well. From Table~\ref{tbl:sentemb} we see that the proposed \textbf{Global Feature Expansion} method obtains the best classification accuracies on the \textbf{MR} and \textbf{CR} datasets, with statistically significant improvements over the corresponding second-best methods, whereas \textbf{Skip-thought} reports the best results on the \textbf{SUBJ} and \textbf{TREC} datasets. However, unlike \textbf{Skip-thought}, which is trained for two weeks on a GPU cluster, ClassiNets can be trained in less than 6 hours end-to-end on a single-core CPU. The computational efficiency of ClassiNets is particularly attractive when continuously classifying large volumes of short-texts, such as sentiment classification of tweets arriving as a continuous data stream. \begin{table}[t] \caption{Comparison against sentence-level embedding methods.} \begin{center} \begin{tabular}{l c c c c} \toprule Method & \textbf{MR} & \textbf{CR} & \textbf{SUBJ} & \textbf{TREC}\\ \midrule Skip-thought & $76.5$ & $80.1$ & $\mathbf{93.6}^{*}$ & $92.2$ \\ Paragraph2Vec & $74.8$ & $78.1$ & $90.5$ & $59.4$ \\ FastSent & $70.8$ & $78.4$ & $88.7$ & $76.8$ \\ SDAE & $74.6$ & $78.0$ & $90.8$ & $77.6$ \\ CNN & $76.1$ & $79.8$ & $89.6$ & $83.4$\\ Global Feature Expansion & $\mathbf{81.2}^{*}$ & $\mathbf{83.89}^{*}$ & $89.7$ & $88.3$ \\ \bottomrule \end{tabular} \label{tbl:sentemb} \end{center} \end{table} \subsection{Qualitative evaluation} \label{sec:quality} \begin{table*}[t] \caption{Example short-reviews and the features predicted by ClassiNet. The correct label (+/-) is shown within brackets. All these instances were misclassified when classified using the original features. However, when we use the features predicted by the ClassiNet, all these instances are correctly classified.} \begin{center} \begin{tabular}{|p{7cm}|p{7cm}|} \hline Review & Predicted features \\ \hline \hline On its own cinematic terms, it successfully showcases the passions of both the director and novelist Byatt. (+) & \emph{writer, played, excellent, thriller, story, writing, subject, script, animation, films, role, storyline, experience, episode, cinematography.} \\ \hline What Jackson has accomplished here is amazing on a technical level. (+) & \emph{beautiful, perfect, fantastic, good, brilliant, great, wonderful, excellent, fine, strong.} \\ \hline This is art playing homage to art.
(+) & \emph{cinema, modern, theme, theater, reality, style, experience, British, drama, documentary, history, period, acting, cinematography.} \\ \hline About as satisfying and predictable as the fare at your local drive through. (-) & \emph{terrible, ridiculous, annoying, least, horrible, poor, slow, awful, dull, scary, boring, stupid, bad, silly.} \\ \hline \end{tabular} \end{center} \label{tbl:example} \end{table*}% In Table~\ref{tbl:example}, we show the expansion candidates predicted by the proposed \textbf{Global Feature Expansion} method for some randomly selected short-reviews. The gold standard sentiment labels associated with each short-review in the test dataset are shown within brackets. All the reviews shown in Table~\ref{tbl:example} would be misclassified if we used only the features in the original review. However, by appending the expansion features found from the ClassiNet, we can correctly predict the sentiment of those short-reviews. From Table~\ref{tbl:example}, we see that many semantically related features are found by the proposed method. \begin{figure}[t] \centering \includegraphics[height=6cm]{my.pdf} \caption{Portion of the created ClassiNet from movie reviews. Vertices denote features and the edge-weights are shown on arrows.} \label{fig:classinet} \end{figure} Figure~\ref{fig:classinet} shows an extract from the ClassiNet we create from the Large Movie Review dataset. To avoid cluttering of edges, we show only the edges for a sparse $k=4$ mutual neighbour graph created from the original densely connected ClassiNet. First, for each vertex $v_i$ in the ClassiNet we compute its top $k$ similar vertices according to the edge weights. Next, we connect a vertex $v_i$ to a vertex $v_j$ in the $k$-mutual neighbour graph if $v_j$ is among the top $k$ similar vertices of $v_i$, and $v_i$ is among the top $k$ similar vertices of $v_j$. We see that synonyms, such as \emph{awful} and \emph{horrible}, are connected by edges with high weights in Figure~\ref{fig:classinet}. It is interesting to see that antonyms, such as \emph{good} and \emph{bad}, are also among the mutual nearest neighbours because those terms frequently occur in similar contexts (e.g., \emph{good movie} vs. \emph{bad movie}). Moreover, Figure~\ref{fig:classinet} shows the importance of propagating over the ClassiNet, instead of simply considering the directly connected vertices as the expansion candidates. For example, although they are highly related features, there is no direct connection from \emph{horrible} to \emph{boring} in the ClassiNet. However, if we consider two-hop connections, then we can find a path through \emph{awful}. \subsection{Effect of the Damping Factor} \label{sec:damp} To empirically study the effect of the damping factor on the classification accuracy of short-texts under the \textbf{Global Feature Expansion} method, we randomly select $1000$ positive and $1000$ negative sentiment labeled sentences from the Large Movie Review dataset as validation data, and evaluate the sentiment classification accuracy of the \textbf{Global Feature Expansion} method with different $\gamma$ values. The result is shown in Figure~\ref{fig:damp}. Note that smaller $\gamma$ values dampen the propagation more than larger $\gamma$ values, restricting the expansion candidates to a smaller local neighbourhood surrounding the original features. From Figure~\ref{fig:damp} we see that the classification accuracy initially increases with $\gamma$ and reaches a peak at $\gamma = 0.85$.
This shows that it is indeed important to find expansion neighbours by propagating over the ClassiNet, as done by the global feature expansion method. However, setting $\gamma > 0.85$ results in a drop of classification accuracy, which is due to distant and potentially irrelevant expansion candidates. Interestingly, $\gamma = 0.85$ has been found to be the optimal value for different graph-based propagation tasks such as PageRank~\cite{PageRank}. \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{damp.pdf} \caption{The effect of the damping factor on the classification accuracy.} \label{fig:damp} \end{center} \end{figure} \subsection{Number of Expansion Features} \label{sec:featcount} In this Section we analyse the number of features appended to train/test instances by the different feature expansion methods using a fixed ClassiNet. Recall that none of the feature expansion methods we proposed has a predefined number of expansion features. Instead, the number of expansion features depends on several factors: (a) the number of features in the original (prior to expansion) feature vector, (b) the size and the connectivity of the ClassiNet, and (c) the feature expansion method. For example, if a particular feature vector has $n$ features, all of which are present in the ClassiNet, then under the All Neighbour Expansion method we will append on average $dn$ features to this instance, where $d$ is the average out-degree of the ClassiNet. More precisely, the actual number of expansion features will differ from $dn$ for several reasons. First, some vertices in the ClassiNet might have different numbers of neighbours, not necessarily equal to the out-degree. Second, the out-degree considers the weights of the edges and not simply the number of distinct vertices connected via outbound edges. Third, some of the expansion features might already be in the original feature vector, thereby not increasing the number of features. Finally, the same expansion feature might be suggested by different vertices, therefore being counted more than once. To empirically analyse the number of expansion features, we build a ClassiNet containing 700 vertices and count the number of features expanded on the \textbf{SUBJ} train dataset. The out-degree $d$ is given by \eqref{eq:out-degree}. \begin{equation} \label{eq:out-degree} d = \frac{1}{N} \sum_{i} \sum_{j \in \cN(v_{i})} w_{ij} \end{equation} Here, $N$ is the total number of vertices in the ClassiNet, $\cN(v_{i})$ is the set of neighbours connected to $v_{i}$ by an outbound link, and $w_{ij}$ is the weight of the edge connecting vertex $v_{i}$ to $v_{j}$. Figure~\ref{fig:degree} shows the degree distribution for the ClassiNet, which has average out-degree $d = 263.35$. We see that most vertices are connected to $240$--$300$ other vertices in the ClassiNet. Given that this ClassiNet contains 700 vertices, this is a tightly connected, dense graph. For each train instance in the \textbf{SUBJ} dataset, we compute the expansion ratio, the ratio between the number of features after and before feature expansion, for the All Neighbour Expansion (Figure~\ref{fig:all-neighb}) and Global Feature Expansion (Figure~\ref{fig:global-ratio}) methods. We see that the expansion ratio is higher for the global feature expansion (ca. 25-30) compared to that for all neighbour expansion (ca. 1.5-2.5). Given that the global feature expansion considers a broader neighbourhood surrounding the initial features in an instance, this is not surprising.
Moreover, the higher expansion ratio helps to explain the superior performance of the global feature expansion. Although expanding too aggressively, using not only relevant nearby features but also potentially irrelevant broader neighbourhoods, is likely to degrade performance, we see that this is not an issue at the level of expansion performed by the global feature expansion method. Therefore, we conclude that under the global feature expansion method we do not need to impose any predefined limit on the number of expansion features. \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{outdegree.png} \caption{Out degree distribution of the ClassiNet.} \label{fig:degree} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{all-neighb.png} \caption{All Neighbour Expansion.} \label{fig:all-neighb} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=6cm]{global-ratio.png} \caption{Global Feature Expansion.} \label{fig:global} \end{center} \end{figure} \section{Conclusion} \label{sec:conclusion} We proposed ClassiNet, a network of binary classifiers for predicting missing features to overcome the feature sparseness problem observed in short-text classification. We select positive and negative training instances for learning the feature predictors using unlabeled data. In ClassiNets, the weight of the edge connecting the vertex $v_i$ to the vertex $v_j$ represents the probability that, given $v_i$ is predicted to occur in an instance, $v_j$ is also predicted to occur in the same instance. We proposed an efficient method using locality sensitive hashing to approximately compute the neighbourhood of a vertex, thereby avoiding the all-pairs computation of confusion matrices. We proposed local and global methods for feature expansion using ClassiNets. Our experimental results show that the global feature expansion method significantly improves the accuracy of sentence-level sentiment classification, outperforming previously proposed methods such as structural correspondence learning (SCL), frequent term sets (FTS), Skip-thought vectors, FastSent, and Paragraph2Vec on multiple datasets. Moreover, close inspection of the expanded feature vectors shows that features that are related to an instance are found as expansion candidates for that instance. In the future, we plan to apply ClassiNets to other tasks that require missing feature prediction, such as recommendation systems. \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2018-04-17T02:07:59", "yymm": "1804", "arxiv_id": "1804.05260", "language": "en", "url": "https://arxiv.org/abs/1804.05260" }
\section{Introduction} Let $X$ be a smooth projective variety defined over an algebraically closed field $k$ of arbitrary characteristic. In his epoch-making paper (see \cite{mori}), Shigefumi Mori established the following famous cone theorem \[ \overline\NE(X)=\overline\NE(X)_{K_X\geq 0}+\sum R_j, \] where $\overline\NE(X)$ denotes the Kleiman--Mori cone of $X$ and each $R_j$ is called a $K_X$-negative extremal ray of $\overline\NE(X)$. By the original proof of the above cone theorem, which is based on Mori's bend and break technique to create rational curves, we know that for each $K_X$-negative extremal ray $R$ there exists a (possibly singular) rational curve $C$ on $X$ such that the numerical equivalence class of $C$ spans $R$ and \[ 0<-K_X\cdot C\leq \dim X+1 \] holds. Let $X$ be a $\mathbb Q$-Gorenstein projective algebraic variety for which the cone theorem holds. Then for a $K_X$-negative extremal ray $R$ of $\overline\NE(X)$, we put \[ l(R):=\min_{[C]\in R}(-K_X\cdot C) \] and call it the {\em{length}} of $R$. It is well known that $l(R)$ is an important invariant and that some conditions on $l(R)$ determine the structure of the associated extremal contraction. In this paper, we are interested in the case where $X$ is a toric variety. We note that $\NE(X)=\overline\NE(X)$ holds when $X$ is a projective toric variety. This is because $\NE(X)$ is a rational polyhedral cone. We also note that the cone theorem holds for $\mathbb Q$-Gorenstein projective toric varieties without any extra assumptions. From now on, we will only treat $\mathbb{Q}$-factorial projective toric varieties defined over an algebraically closed field $k$ of arbitrary characteristic for simplicity. \medskip For a $\mathbb{Q}$-factorial projective toric $n$-fold $X$ of Picard number $\rho(X)=1$, there exists a unique extremal ray of $\NE(X)$. In this case, the following statements hold. \begin{thm}[{\cite[Proposition 2.9]{fujino-notes} and \cite[Proposition 2.1]{fujino-osaka}}]\label{rho1} Let $X$ be a $\mathbb{Q}$-factorial projective toric $n$-fold of Picard number $\rho(X)=1$ with $R=\NE(X)$. Then, the following statements hold. \begin{enumerate} \item If $l(R)>n$, then $X\simeq \mathbb{P}^n$. \item If $l(R)\ge n$ and $X\not\simeq \mathbb{P}^n$, then $X\simeq\mathbb{P}(1,1,2,\ldots, 2)$. \end{enumerate} \end{thm} For the case where the associated extremal contraction is birational, we have the following estimates, which are special cases of \cite[Theorem 3.2.1]{fujino-sato3}. \begin{thm}\label{lengthbirational} Let $X$ be a $\mathbb{Q}$-factorial projective toric $n$-fold, and let $R$ be a $K_X$-negative extremal ray of $\NE(X)$. Suppose that the contraction morphism $\varphi_R:X\to W$ associated to $R$ is birational. Then, we obtain \[ l(R)<d+1, \] where \[d=\max_{w\in W} \dim \varphi^{-1}_R(w)\leq n-1. \] When $d=n-1$, we have a sharper inequality \[ l(R)\leq d=n-1. \] In particular, if $l(R)=n-1$ holds, then $\varphi_R:X\to W$ can be described as follows. There exists a torus invariant smooth point $P\in W$ such that $\varphi_R:X\to W$ is a weighted blow-up at $P$ with the weight $(1, a, \ldots, a)$ for some positive integer $a$. In this case, the exceptional locus $E$ of $\varphi_R$ is a torus invariant prime divisor and is isomorphic to $\mathbb P^{n-1}$. \end{thm} This estimate shows that an extremal ray $R$ with $l(R)>n-1$ must be of fiber type. In this case, we can determine the structure of the associated contraction $\varphi_R$ as follows.
\begin{thm}\label{lengthfanotop1} Let $X$ be a $\mathbb Q$-factorial projective toric $n$-fold with $\rho (X)\geq 2$, and let $R$ be a $K_X$-negative extremal ray of $\NE(X)$. If $l(R)>n-1$, then the extremal contraction $\varphi_R:X\to W$ associated to $R$ is a $\mathbb P^{n-1}$-bundle over $\mathbb P^1$. \end{thm} \begin{rem} Theorem \ref{lengthfanotop1} holds for projective $\mathbb{Q}$-Gorenstein toric varieties (without the assumption that $X$ is $\mathbb{Q}$-factorial). For the details, please see \cite[Proposition 3.2.9]{fujino-sato3}. \end{rem} As a generalization of Theorem \ref{lengthfanotop1}, we prove the following theorem about the structure of extremal contractions of fiber type. More precisely, we will prove a sharper result in Section \ref{f-sec3} (see Theorem \ref{maintheorem}). Theorem \ref{intromain} is a direct and easy consequence of Theorem \ref{maintheorem} (see Corollary \ref{maincor}). \begin{thm}[Main theorem]\label{intromain} Let $X$ be a $\mathbb{Q}$-factorial projective toric $n$-fold. Let $\varphi_R:X\to W$ be a Fano contraction associated to a $K_X$-negative extremal ray $R\subset\NE(X)$ such that the dimension of a fiber of $\varphi_R$ is $d$, equivalently, $d=\dim X-\dim W$. If $l(R)>d$, then $\varphi_R$ is a $\mathbb{P}^d$-bundle over $W$. \end{thm} We show that this result is sharp in Examples \ref{primbutnotbdl} and \ref{wtproj}. We note that Theorem \ref{intromain} is nothing but Theorem \ref{rho1} (1) if $\dim W=0$. Therefore, we can see Theorem \ref{intromain} as a generalization of Theorem \ref{rho1} (1). \begin{ack} The authors would like to thank the referee for useful comments. \end{ack} \section{Preliminaries} In this section, we introduce some basic results and notation from toric geometry in order to prove the main theorem. For the details, please see \cite{cls}, \cite{fulton} and \cite{oda}. See also \cite{fujino-sato}, \cite[Chapter 14]{matsuki} and \cite{reid} for the toric Mori theory. \medskip Let $X=X_\Sigma$ be the toric $n$-fold associated to a fan $\Sigma$ in $N=\mathbb{Z}^n$ over an algebraically closed field $k$ of arbitrary characteristic. We will use the notation $\Sigma=\Sigma_X$ to denote the fan associated to a toric variety $X$. It is well known that there exists a one-to-one correspondence between the $r$-dimensional cones in $\Sigma$ and the torus invariant subvarieties of dimension $n-r$ in $X$. Let $\G(\Sigma)$ be the set of primitive generators of the $1$-dimensional cones in $\Sigma$. Thus, for $v\in\G(\Sigma)$, we have a torus invariant prime divisor corresponding to $v$. \medskip For an $r$-dimensional simplicial cone $\sigma\in\Sigma$, let $N_\sigma\subset N$ be the sublattice generated by $\sigma\cap N$ and let $\sigma\cap\G(\Sigma)=\{v_1,\ldots,v_r\}$, that is, $\sigma=\langle v_1,\ldots,v_r\rangle$, where $\langle v_1,\ldots,v_r\rangle$ is the $r$-dimensional strongly convex cone generated by $\{v_1,\ldots,v_r\}$. Put \[ \mult(\sigma):=[N_\sigma:\mathbb{Z}v_1+\cdots+\mathbb{Z}v_r], \] which is the index of the subgroup $\mathbb{Z}v_1+\cdots+\mathbb{Z}v_r$ in $N_\sigma$. The following property is fundamental. \begin{prop}\label{kotensu} Let $X$ be a $\mathbb{Q}$-factorial toric $n$-fold, and let $\tau\in\Sigma$ be an $(n-1)$-dimensional cone and $v\in\G(\Sigma)$. If $v$ and $\tau$ generate a maximal cone $\sigma$ in $\Sigma$, then \[ D\cdot C=\frac{\mult(\tau)}{\mult(\sigma)}, \] where $D$ is the torus invariant prime divisor corresponding to $v$, while $C$ is the torus invariant curve corresponding to $\tau$.
\end{prop} Let $X$ be a projective toric variety. We put \[ \mathrm{Z}_1(X):=\{1\text{-cycles of} \ X\}, \] and \[ \mathrm{Z}_1(X)_{\mathbb R}:= \mathrm{Z}_1(X)\otimes \mathbb R. \] Let \[ \Pic (X)\times \mathrm{Z}_1(X) \to \mathbb Z \] be a pairing defined by $(\mathcal L, C)\mapsto \deg _C\mathcal L$. By extending it by bilinearity, we have a pairing \[ (\Pic (X)\otimes \mathbb R)\times \mathrm{Z}_1(X)_{\mathbb R} \to \mathbb R. \] We define \[ \mathrm{N}^1(X):=(\Pic (X)\otimes \mathbb R)/\equiv \] and \[ \mathrm{N}_1(X):= \mathrm{Z}_1(X)_{\mathbb R}/\equiv, \] where the {\em numerical equivalence} $\equiv$ is by definition the smallest equivalence relation which makes $\mathrm{N}^1$ and $\mathrm{N}_1$ into dual spaces. Inside $\mathrm{N}_1(X)$ there is a distinguished cone of effective $1$-cycles of $X$, \[ {\NE}(X)=\left\{\, Z\, \left| \ Z\equiv \sum a_iC_i \ \text{with}\ a_i\in \mathbb R_{\geq 0}\right.\right\} \subset \mathrm{N}_1(X), \] which is usually called the {\em{Kleiman--Mori cone}} of $X$. It is known that $\NE(X)$ is a rational polyhedral cone. A face $F\subset \NE(X)$ is called an {\em{extremal face}}, and a one-dimensional extremal face is called an {\em{extremal ray}}. \medskip Next, we introduce a combinatorial description of toric Fano contractions, which are the main objects of this paper. Let $X=X_\Sigma$ be a $\mathbb{Q}$-factorial projective toric $n$-fold and $\varphi_R:X\to W$ be the extremal contraction associated to an extremal ray $R\subset\NE(X)$ of fiber type. Put \[ d:=\dim X-\dim W. \] Up to automorphisms of $N$, $\Sigma$ is constructed as follows: For the standard basis $\{e_1,\ldots,e_{n}\}\subset N=\mathbb{Z}^n$, put $N':=\mathbb{Z}e_1+\cdots+\mathbb{Z}e_d$, while $N'':=\mathbb{Z}e_{d+1}+\cdots+\mathbb{Z}e_{n}$, that is, $N=N'\oplus N''$. Then, there exist $\{v_1,\ldots,v_{d+1}\}\subset\G(\Sigma)\cap N'$ such that $\{v_1,\ldots,v_{d+1}\}\setminus\{v_i\}$ generates a $d$-dimensional cone $\sigma_i\in\Sigma$ for any $1\le i\le d+1$, and $\sigma_1\cup\cdots\cup\sigma_{d+1}=N'\otimes\mathbb{R}$. Namely, we obtain the complete fan $\Sigma_F$ in $N'$ whose maximal cones are $\sigma_1,\ldots,\sigma_{d+1}$. The fan $\Sigma_F$ is associated to a general fiber $F$ of $\varphi_R$, and the Picard number $\rho(F)$ is $1$. Moreover, for any $\{y_1,\ldots,y_{n-d}\}\subset\G(\Sigma) \setminus\{v_1,\ldots,v_{d+1}\}$ which generates an $(n-d)$-dimensional cone in $\Sigma$, $\{v_1,\ldots,v_{d+1},y_1,\ldots,y_{n-d}\}\setminus\{v_i\}$ generates a maximal cone in $\Sigma$ for any $1\le i\le d+1$. Thus, the projection $N=N'\oplus N''\to N''$ induces $\varphi_R$. \begin{rem}\label{fanofiber} This description shows that for a toric Fano contraction $\varphi_R:X\to W$, the dimension of any fiber is constant. As we saw above, the general fiber $F$ of $\varphi_R$ is a projective $\mathbb Q$-factorial toric variety of Picard number $\rho(F)=1$. Moreover, it is known that the fiber $\varphi_R^{-1}(w)_{\mathrm{red}}$ with the reduced structure is isomorphic to $F$ for every closed point $w\in W$ (see \cite[Proposition 15.4.5]{cls} and \cite[Corollary 14-2-2]{matsuki}). \end{rem} \section{Fano contractions}\label{f-sec3} The following result is the main theorem of this paper. \begin{thm}\label{maintheorem} Let $X=X_\Sigma$ be a $\mathbb{Q}$-factorial projective toric $n$-fold. Let $\varphi_R:X\to W$ be a Fano contraction associated to a $K_X$-negative extremal ray $R\subset\NE(X)$, and $d=n-\dim W$ be the dimension of a fiber of $\varphi_R$.
If a general fiber of $\varphi_R$ is isomorphic to $\mathbb{P}^d$ and \[ -K_X\cdot C>\frac{d+1}{2} \] holds for any curve $C$ on $X$ contracted by $\varphi_R$, then $\varphi_R$ is a $\mathbb{P}^d$-bundle over $W$. \end{thm} \begin{proof} We may assume that $\varphi_R:X\to W$ is induced by the following projection: \[ \begin{array}{ccc} N=\mathbb{Z}^n & \stackrel{p}{\longrightarrow} & \mathbb{Z}^{n-d} \\ \rotatebox{90}{$\in$} & & \rotatebox{90}{$\in$} \\ (x_1,\ldots,x_n) & \longmapsto & (x_{d+1},\ldots,x_n). \end{array} \] Let $\{e_1,\ldots,e_n\}$ be the standard basis for $N=\mathbb{Z}^n$. We put $$v_1:=e_1,\quad \ldots,\quad v_d:=e_d,\quad \text{and}\quad v_{d+1}:=-(e_1+\cdots+e_d). $$ Then $\Sigma$ contains the $d$-dimensional subfan $\Sigma_F$ corresponding to a general fiber $F\simeq \mathbb{P}^d$ whose maximal cones are \[ \left\langle\left\{v_1,\ldots,v_{d+1}\right\} \setminus\{v_i\}\right\rangle\quad (1\le i\le d+1). \] Let $V_\sigma\subset N\otimes_\mathbb{Z}\mathbb{R}$ be the linear subspace spanned by $\sigma$ for any $(n-d)$-dimensional cone $\sigma$ in $\Sigma$ such that $\left(\sigma\cap\G(\Sigma)\right)\cap \{v_1,\ldots,v_{d+1}\}=\emptyset$. Then it is sufficient to show that \begin{equation}\label{eq1} V_\sigma\cap\mathbb{Z}^{n}\stackrel{p}{\longrightarrow}\mathbb{Z}^{n-d} \end{equation} is bijective. This is because the restriction of $\varphi_R:X\to W$ to the affine toric open subset $U$ corresponding to the $(n-d)$-dimensional cone $p(\sigma)$ is the second projection $\mathbb P^d\times U\to U$ if $p$ in \eqref{eq1} is bijective. The injectivity of \eqref{eq1} is trivial, so it suffices to show the surjectivity of \eqref{eq1}. Let $y_1,\ldots,y_{n-d}\in\G(\Sigma)\setminus\{v_1,\ldots,v_{d+1}\}$ be the primitive generators of any $(n-d)$-dimensional cone in $\Sigma$ such that $p(\langle y_1,\ldots,y_{n-d}\rangle)$ is also $(n-d)$-dimensional. Put \[ \begin{split} y_1&=(b_{1,1},\ldots,b_{d,1},a_{1,1},\ldots,a_{n-d,1}), \\ &\vdots\\ y_{n-d}&=(b_{1,n-d},\ldots,b_{d,n-d},a_{1,n-d},\ldots,a_{n-d,n-d}). \end{split} \] For any $(z_1,\ldots,z_{n-d})\in\mathbb{Z}^{n-d}$, we can take $(c_1,\ldots,c_{n-d})\in\mathbb{R}^{n-d}$ satisfying \[ p(c_1y_1+\cdots+c_{n-d}y_{n-d})= c_1p(y_1)+\cdots+c_{n-d}p(y_{n-d})= (z_1,\ldots,z_{n-d}). \] We note that the matrix \[ A: = \left( \begin{array}{ccc} a_{1,1} & \ldots & a_{1,n-d} \\ \vdots & \ddots & \vdots \\ a_{n-d,1} & \ldots & a_{n-d,n-d} \end{array} \right) \] is regular as a real matrix because $p(y_1),\ldots,p(y_{n-d})$ generate an $(n-d)$-dimensional cone. Therefore, $(c_1,\ldots,c_{n-d})$ is uniquely determined by \[ \left( \begin{array}{c} c_1 \\ \vdots \\ c_{n-d} \end{array} \right) =A^{-1} \left( \begin{array}{c} z_1 \\ \vdots \\ z_{n-d} \end{array} \right) \in\mathbb{Q}^{n-d}. \] Thus, all we have to do is to show that \[ c_1b_{r,1}+\cdots+c_{n-d}b_{r,n-d}\in\mathbb{Z} \] for any $1\le r\le d$. By considering the principal Cartier divisors of the dual basis of $\{e_1,\ldots,e_n\}$, we obtain the relations \begin{eqnarray}\label{rationalfunc} \left\{ \begin{array}{rcl} D_1-D_{d+1}+b_{1,1}E_1+\cdots+b_{1,n-d}E_{n-d}+H_1 & = & 0, \\ & \vdots & \\ D_d-D_{d+1}+b_{d,1}E_1+\cdots+b_{d,n-d}E_{n-d}+H_d & = & 0, \\ a_{1,1}E_1+\cdots+a_{1,n-d}E_{n-d}+H_{d+1} & = & 0, \\ & \vdots & \\ a_{n-d,1}E_1+\cdots+a_{n-d,n-d}E_{n-d}+H_{n} & = & 0 \end{array} \right.
\end{eqnarray} in $\mathrm{N}^1(X)$, where $D_1,\ldots,D_{d+1},E_1,\ldots,E_{n-d}$ are the torus invariant prime divisors corresponding to $v_1,\ldots,v_{d+1},y_1,\ldots,y_{n-d}$, respectively, and $H_1,\ldots,H_n$ are some linear combinations of torus invariant prime divisors other than $D_1,\ldots,D_{d+1},E_1,\ldots,E_{n-d}$. Let $C=C_{r}$ $(1\le r\le d)$ be the torus invariant curve corresponding to the $(n-1)$-dimensional cone \[ \left\langle \left\{v_1,\ldots,v_{d},y_1,\ldots,y_{n-d}\right\} \setminus\{v_r\}\right\rangle. \] Since $H_i\cdot C=0$ for any $1\le i\le n$, we may ignore $H_1,\ldots,H_n$ in the following calculation. Since the matrix $A$ is regular, we have \[ E_1\cdot C=\cdots=E_{n-d}\cdot C=0, \] and \[ D_1\cdot C=D_2\cdot C=\cdots=D_{d+1}\cdot C \] by the above equalities \eqref{rationalfunc} in $N^1(X)$. Thus, we obtain \[ -K_X\cdot C=(d+1)D_i\cdot C \] for any $1\le i\le d+1$. Put \[ \alpha:=\mult\left(\left\langle\left\{v_1,\dots,v_{d},y_1,\ldots,y_{n-d}\right\} \setminus\{v_r\}\right\rangle\right) \] and \[ \beta:=\mult\left(\left\langle\left\{v_1,\dots,v_{d},y_1,\ldots,y_{n-d}\right\}\right\rangle\right). \] Then we get \[ D_r\cdot C=\frac{\alpha}{\beta} \] by Proposition \ref{kotensu}. We note that $\alpha\mid\beta$ always holds. Obviously, $\beta=|\det A|$. On the other hand, $\alpha$ is the product of the elementary divisors of the $n\times (n-1)$ matrix \[ \left( {}^{\rm t}v_1, \ldots, \stackrel{\vee}{{}^{\rm t}v_r}, \ldots, {}^{\rm t}v_d, {}^{\rm t}y_1, \ldots, {}^{\rm t}y_{n-d} \right) = \left( \begin{array}{ccccccccc} 1 & & & & & & b_{1,1} & \ldots & b_{1,n-d} \\ & \ddots & & & \text{{\huge{0}}} & & \vdots & \ddots & \vdots \\ & & 1 & & & & b_{r-1,1} & \ldots & b_{r-1,n-d} \\ 0 & \cdots & 0 & 0 & \cdots & 0 & b_{r,1} & \ldots & b_{r,n-d} \\ & & & 1 & & & b_{r+1,1} & \ldots & b_{r+1,n-d} \\ & & & & \ddots & & \vdots & \ddots & \vdots \\ & & & & & 1 & b_{d,1} & \ldots & b_{d,n-d} \\ & & & & & & a_{1,1} & \ldots & a_{1,n-d} \\ & \text{{\huge{0}}} & & & & & \vdots & \ddots & \vdots \\ & & & & & & a_{n-d,1} & \ldots & a_{n-d,n-d} \end{array} \right), \] where ${}^{\rm t}v$ stands for the transpose of $v$. By interchanging rows of this matrix, one can easily check that $\alpha$ is also the product of the elementary divisors of the $(n-d+1)\times (n-d)$ matrix \[ \overline{A} = \left( \begin{array}{ccc} b_{r,1} & \ldots & b_{r,n-d} \\ a_{1,1} & \ldots & a_{1,n-d} \\ \vdots & \ddots & \vdots \\ a_{n-d,1} & \ldots & a_{n-d,n-d} \end{array} \right). \] Suppose that $D_r\cdot C<1$ holds. Then, more strongly, we obtain the inequality $D_r\cdot C\le \frac{1}{2}$ by the relation $\alpha\mid\beta$. Thus, the following inequality \[ -K_X\cdot C=(d+1)D_r\cdot C\le \frac{d+1}{2} \] holds. However, this contradicts the assumption that $\frac{d+1}{2}< -K_X\cdot C$. Therefore, the equality $$\frac{\alpha}{\beta}=D_r\cdot C=1$$ must always hold. Since the general theory of elementary divisors says that $\alpha$ is the greatest common divisor of the $(n-d)\times(n-d)$ minor determinants of $\overline{A}$, the $(n-d)\times(n-d)$ determinant \[ \left| \begin{array}{ccc} b_{r,1} & \ldots & b_{r,n-d} \\ a_{1,1} & \ldots & a_{1,n-d} \\ \vdots & \vdots & \vdots \\ a_{i-1,1} & \ldots & a_{i-1,n-d} \\ a_{i+1,1} & \ldots & a_{i+1,n-d} \\ \vdots & \vdots & \vdots \\ a_{n-d,1} & \ldots & a_{n-d,n-d} \end{array} \right| \] is divisible by $\det A$ for any $1\le i\le n-d$. 
Let \[ \widetilde{A}: = \left( \begin{array}{ccc} \widetilde{a}_{1,1} & \ldots & \widetilde{a}_{1,n-d} \\ \vdots & \ddots & \vdots \\ \widetilde{a}_{n-d,1} & \ldots & \widetilde{a}_{n-d,n-d} \end{array} \right) \] be the cofactor matrix of $A$. Then, \[ c_1b_{r,1}+\cdots+c_{n-d}b_{r,n-d} \] \[ = \frac{1}{\det A} \left(\widetilde{a}_{1,1}z_1+\cdots+\widetilde{a}_{1,n-d}z_{n-d}\right)b_{r,1} +\cdots+ \frac{1}{\det A} \left(\widetilde{a}_{n-d,1}z_1+\cdots+\widetilde{a}_{n-d,n-d}z_{n-d}\right)b_{r,n-d} \] \[ = \frac{ \widetilde{a}_{1,1}b_{r,1}+\cdots+\widetilde{a}_{n-d,1}b_{r,n-d}} {\det A}\times z_1 +\cdots+ \frac{ \widetilde{a}_{1, n-d}b_{r,1}+\cdots+\widetilde{a}_{n-d,n-d}b_{r,n-d}} {\det A}\times z_{n-d} \] is an integer. This completes the proof. \end{proof} The following example shows that Theorem \ref{maintheorem} is sharp. \begin{ex}\label{primbutnotbdl} Let $\{e_1,\ldots,e_n\}$ be the standard basis for $N=\mathbb{Z}^n$ and $p:N\to\mathbb{Z}^{n-d}$ be the projection \[ (x_1,\ldots,x_d,x_{d+1},\ldots,x_n)\mapsto (x_{d+1},\ldots,x_n) \] for $1\le d< n$. Put \[ v_1:=e_1,\ \ldots,\ v_d:=e_d,\ v_{d+1}:=-(e_1+\cdots+e_d), \] \[ y_1:=e_{d+1},\ \ldots,\ y_{n-d-1}:=e_{n-1},\ y_{n-d}:=e_1+e_{d+1}+\cdots+e_{n-1}+2e_n. \] Let $\Sigma$ be the fan in $N$ whose maximal cones are generated by $\{v_1,\ldots,v_{d+1},y_1,\ldots,y_{n-d}\}\setminus\{v_i\}$ for $1\le i\le d+1$. In this case, $X=X_\Sigma$ has a Fano contraction whose general fiber is isomorphic to $\mathbb{P}^d$. Moreover, every fiber with the reduced structure is isomorphic to $\mathbb P^d$ (see Remark \ref{fanofiber}). However, $X$ does not decompose into $\mathbb{P}^d$ and a toric affine $(n-d)$-fold, because \[ \frac{p(y_1)+\cdots+p(y_{n-d})}{2}= e_{d+1}+\cdots+e_n\in\mathbb{Z}^{n-d}, \] while \[ \frac{y_1+\cdots+y_{n-d}}{2}= \frac{1}{2}e_1+e_{d+1}+\cdots+e_n\not\in N. \] From this noncomplete variety, one can easily construct a projective toric $n$-fold which has a Fano contraction associated to an extremal ray of length $\frac{d+1}{2}$ (for example, add the generator $y_{n-d+1}:= -(e_{d+1}+\cdots+e_n)$ and compactify $\Sigma$). \end{ex} If we make the inequality in Theorem \ref{maintheorem} stronger, then the assumption that a general fiber of a Fano contraction is isomorphic to the projective space automatically holds as follows. \begin{cor}\label{maincor} Let $X=X_\Sigma$ be a $\mathbb{Q}$-factorial projective toric $n$-fold. Let $\varphi_R:X\to W$ be a Fano contraction associated to a $K_X$-negative extremal ray $R\subset\NE(X)$, and $d=n-\dim W$ be the dimension of a fiber of $\varphi_R$. If $-K_X\cdot C>d$ holds for any curve $C$ on $X$ contracted by $\varphi_R$, then $\varphi_R$ is a $\mathbb{P}^d$-bundle over $W$. \end{cor} \begin{proof} Let $F$ be a general fiber of $\varphi_R$ and let $C$ be any curve on $F$. Then, by adjunction, we have \[ d< -K_X\cdot C= -K_F\cdot C. \] Therefore, by Theorem \ref{rho1} (1), $F\simeq \mathbb{P}^d$ holds. Since $\frac{d+1}{2}\le d$, we can apply Theorem \ref{maintheorem}. \end{proof} As an easy consequence of Corollary \ref{maincor}, we obtain: \begin{cor} Let $X=X_\Sigma$ be a $\mathbb Q$-factorial projective toric $n$-fold and let $\Delta$ be any effective {\em{(}}not necessarily torus invariant{\em{)}} $\mathbb R$-divisor on $X$. Let $\varphi_R: X\to W$ be a Fano contraction associated to a $(K_X+\Delta)$-negative extremal ray $R\subset \NE (X)$ with $d=n-\dim W$. 
If $-(K_X+\Delta)\cdot C>d$ for any curve $C$ on $X$ contracted by $\varphi_R$, then $\varphi_R$ is a $\mathbb P^d$-bundle over $W$. \end{cor} \begin{proof} We can easily see that $D\cdot C\geq 0$ for any effective Weil divisor $D$ on $X$ and any curve $C$ on $X$ contracted by $\varphi_R$ since $\varphi_R: X\to W$ is a toric Fano contraction of a $\mathbb Q$-factorial projective toric variety $X$. Therefore, we get $$ d< -(K_X+\Delta)\cdot C\leq -K_X\cdot C $$ for any curve $C$ on $X$ contracted by $\varphi_R$. Thus, we see that $\varphi_R:X\to W$ is a $\mathbb P^d$-bundle over $W$ by Corollary \ref{maincor}. \end{proof} The following example shows that Corollary \ref{maincor} is sharp. \begin{ex}\label{wtproj} Let $F:=\mathbb{P}(1,1,2,\ldots,2)$ be the $d$-dimensional weighted projective space and $W$ a $\mathbb{Q}$-factorial projective toric $(n-d)$-fold. Then, the length of the extremal ray corresponding to the first projection $\varphi:X=W\times F\to W$ is $d$ (see \cite[Proposition 2.1]{fujino-osaka} and \cite[Proposition 3.1.6]{fujino-sato3}). \end{ex}
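The lattice computations underlying Example \ref{primbutnotbdl} are elementary and can be checked by machine. The following minimal Python sketch, included purely for illustration (the function name is ours), verifies for given $n$ and $d$ with $1\le d\le n-2$ that the half-sum of the projected generators $p(y_1),\ldots,p(y_{n-d})$ lies in $\mathbb{Z}^{n-d}$, while the half-sum of $y_1,\ldots,y_{n-d}$ is not a lattice point of $N$.
\begin{verbatim}
from fractions import Fraction

def check_example(n, d):
    # Generators of Example primbutnotbdl (coordinates 0-indexed, so
    # e_1 corresponds to index 0):
    #   y_1 = e_{d+1}, ..., y_{n-d-1} = e_{n-1},
    #   y_{n-d} = e_1 + e_{d+1} + ... + e_{n-1} + 2 e_n.
    e = lambda i: [1 if j == i else 0 for j in range(n)]
    ys = [e(d + i) for i in range(n - d - 1)]
    y_last = [a + 2 * b for a, b in zip(e(0), e(n - 1))]  # e_1 + 2 e_n
    for i in range(d, n - 1):                             # + e_{d+1..n-1}
        y_last[i] += 1
    ys.append(y_last)
    half_sum = [Fraction(s, 2) for s in map(sum, zip(*ys))]
    proj = half_sum[d:]            # image under the projection p
    assert all(c.denominator == 1 for c in proj)      # integral in Z^{n-d}
    assert any(c.denominator != 1 for c in half_sum)  # not integral in N
    return half_sum

print(check_example(n=5, d=2))     # half-sum is (1/2, 0, 1, 1, 1)
\end{verbatim}
The fractional first coordinate $\tfrac{1}{2}$ is exactly the obstruction exhibited in Example \ref{primbutnotbdl}.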
{ "timestamp": "2018-08-20T02:10:48", "yymm": "1804", "arxiv_id": "1804.05302", "language": "en", "url": "https://arxiv.org/abs/1804.05302" }
\section{Introduction} Let $n \geq 2$. The \emph{twin group on $n$ arcs}, denoted by $TW_n$, is generated by the $(n-1)$ generators $\{ \tau_i \ | \ i=1, 2, \ldots, n-1\}$ satisfying the following set of defining relations: \begin{equation}\tau_i^2=1, \ \hbox{ for all } i, \end{equation} \begin{equation}\tau_i \tau_j=\tau_j \tau_i, \ \hbox{ if } |i-j|>1.\end{equation} \medskip The role of this group in the theory of `doodles' on a closed oriented surface is similar to the role of Artin's braid groups in the theory of knots and links. In \cite{khov}, Khovanov investigated the doodle groups and introduced the twin group on $n$ arcs. Khovanov proved that the closure of a twin is a doodle on the ($2$-dimensional) sphere, see \cite{khov} for details. \medskip The above group presentation is also of importance in Grothendieck's theory of `dessins d'enfants'. For $m \geq 1$, the group $TW_{m+2}$ is isomorphic to Grothendieck's $m$-dimensional cartographical group $\mathcal C_m$. Voevodsky used this group in \cite{vv} as a generalization of the $2$-dimensional cartographical group. It is a standard fact in this theory that the conjugacy classes of the $2$-dimensional cartographical group $\mathcal C_2$ can be identified with combinatorial maps on connected surfaces, not necessarily orientable or without boundary, see \cite{js} for more details. In \cite{vince1, vince2}, Vince looked at the group $\mathcal C_m$ as `combinatorial maps' and investigated certain topological and combinatorial structures associated to this group. \medskip The commutator subgroup or derived subgroup $G'$ of a group $G$ is generated by the elements of the form $x^{-1} y^{-1} x y$. This subgroup is one measure of how far $G$ is from being abelian: it is the smallest normal subgroup $N$ of $G$ such that the quotient $G/N$ is abelian. The quotient $G/G'$ also gives the first homology group of $G$. \medskip The commutator subgroup $B_n'$ of Artin's braid group on $n$ strands $B_n$ is well-studied. Gorin and Lin \cite{gl} obtained a finite presentation for $B_n'$. Several authors have investigated commutator subgroups of larger classes of spherical Artin groups, e.g. \cite{zinde}, \cite{mr}, \cite{orevkov}. \medskip In this paper, we study the commutator subgroup of the group $TW_n$. Note that $TW_2$ is the cyclic group of order two, and hence the commutator subgroup $TW_2'$ is trivial. However, for $n \geq 3$, the structure of the commutator subgroup $TW_n'$ is non-trivial. It is easy to see that $TW_n'$ is a finite index subgroup of the finitely presented group $TW_n$; hence $TW_n'$ is finitely presented. In general, it is a difficult problem to obtain an explicit finite presentation for a group that is known to be finitely presented, and sometimes it is algorithmically impossible as well, see \cite{bw}. So, knowing that $TW_n'$ is finitely presented is not enough to have a clear understanding of the structure of the group. In this paper, we obtain an explicit finite presentation for $TW_n'$. Since $TW_{m+2}$ is isomorphic to $\mathcal C_{m}$ for all $m \geq 1$, this also gives a finite presentation for the group $\mathcal C_m'$. \begin{theorem}\label{mainth} For $m \ge 1$, $TW_{m+2}'$ has the following presentation:\\ Generators: $ \ \ \ \ \beta_{p}(j), \ \ \ \ \ 0 \le p < j \le m.
$\\ Defining relations: \ \ \ For all $~ l \ge 3,~~ 1 \le k \le j,~~ j+2 \le t \le m,$ \begin{equation*} \beta_{j-k}(j) ~ \beta_{t-(j+l)}(t) = \beta_{t-(j+l)}(t) ~ \beta_{j-k}(j), \end{equation*} \begin{equation*} \beta_{t-k}(t) = \beta_{j-k}(j)^{-1} ~ \beta_{t-(j+1)}(t) ~ \beta_{j-k}(j). \end{equation*} \end{theorem} \medskip Even if a group is finitely generated, it is a non-trivial problem to compute its rank, that is, the smallest cardinality of a generating set for the group. In \cite{pv}, Panov and Ver\"evkin constructed classifying spaces for the commutator subgroups of right-angled Coxeter groups and gave a general formula for the rank of such groups, see \cite[Theorem 4.5]{pv}. However, the number of minimal generators given in \cite{pv} is stated in a general form and involves the ranks of the zeroth homology groups of certain subcomplexes of the underlying classifying space. As an immediate application of \thmref{mainth}, we obtain the rank of $TW_n'$ in terms of the `arcs' of the twin group, or the `dimension' of the cartographical group, and thus it is more explicit in our context. We have the following. \begin{theorem}\label{thmrank} For $m \geq 1$, the group $TW_{m+2}'$ has rank $2m-1$. \end{theorem} The following is a consequence of the above two theorems. \begin{corollary}\label{cor1} For $m \ge 1$, the quotient group $~ TW_{m+2}'/TW_{m+2}'' ~$ is isomorphic to the free abelian group of rank $~ 2m-1$, i.e. the group $~ \bigoplus_{i=1}^{2m-1} \mathbb Z.$ In particular, $TW_{m+2}'$ is not perfect for any $m \ge 1$. \end{corollary} We further characterize the freeness of $TW_n'$ in the following corollary. \begin{cor}\label{corfree} $TW_{m+2}'$ is a free group if and only if $m \le 3$. The group $TW_3'$ is infinite cyclic. The groups $TW_4'$ and $TW_5'$ are free groups of ranks $3$ and $5$, respectively. \end{cor} As applications of the above results, we derive geometric properties of the ambient group $TW_{m+2}$. It is clear from the presentation in \thmref{mainth} that for $m \geq 4$, $TW_{m+2}'$ contains free abelian subgroups of rank $\geq 2$. By \cite[Theorem B]{mou}, this shows that $TW_{m+2}$ is not word-hyperbolic for $m \ge 4$. On the other hand, from \corref{corfree} we observe that $TW_{m+2}$ is virtually free for $m \le 3$; so it is clear that $TW_{m+2}$ is word-hyperbolic for $m \le 3$. Hence we have the following characterization of word-hyperbolicity for $TW_{m+2}$. \begin{cor}\label{wh} The group $TW_{m+2}$ is word-hyperbolic if and only if $m \leq 3$. \end{cor} Gordon, Long and Reid proved in \cite{glr} that a Coxeter group $G$ is virtually free if and only if $G$ does not contain a surface group. Since a finitely generated virtually free group is word-hyperbolic, \corref{wh} shows that $TW_{m+2}$ cannot be virtually free for $m \geq 4$. Hence we have the following. \begin{cor} The group $TW_{m+2}$ does not contain a surface group if and only if $m \leq 3$. \end{cor} According to \cite[Theorem B]{kala}, and also \cite[Theorem 1]{krist}, any finite extension of a free group of finite rank has a finitely presented automorphism group. Noting that $TW_n/TW_n'$ is a finite group and using \corref{corfree}, we have the following immediate corollary. \begin{corollary}\label{cor5} The automorphism group of $TW_{m+2}$ is finitely presented for $m \leq 3$. \end{corollary} \medskip We have proved \thmref{mainth} by a systematic use of the Reidemeister-Schreier algorithm. This method is a well-known technique for obtaining presentations of subgroups; for details see \cite{mks}. The sketch below illustrates the generator-enumeration step of this method in our setting.
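The following minimal Python sketch, included purely for illustration (the function names are ours), enumerates the nontrivial Schreier generators $S_{\lambda,\tau_j}$ of $TW_n'$ for small $n$. Words are encoded as tuples of generator indices, the coset representatives are the increasing words $\tau_{i_1}\cdots\tau_{i_k}$, and triviality of a word is decided by the standard deletion criterion for right-angled Coxeter groups (Tits' solution to the word problem).
\begin{verbatim}
from itertools import combinations

def reduce_word(w):
    # Reduce a word in TW_n using tau_i^2 = 1 together with the far
    # commutation relations: two equal letters may be cancelled when
    # every letter strictly between them commutes with them.  For a
    # right-angled Coxeter group, a word represents the identity iff
    # this procedure reduces it to the empty word.
    w = list(w)
    changed = True
    while changed:
        changed = False
        for a in range(len(w)):
            for b in range(a + 1, len(w)):
                if w[b] == w[a] and all(abs(w[c] - w[a]) > 1
                                        for c in range(a + 1, b)):
                    del w[b], w[a]
                    changed = True
                    break
            if changed:
                break
    return tuple(w)

def schreier_generators(n):
    # Enumerate the nontrivial Schreier generators S_{lambda, tau_j}
    # over the coset representatives lambda = tau_{i_1} ... tau_{i_k},
    # i_1 < ... < i_k, returned as reduced words in the tau generators.
    def rep(word):
        # Coset representative of a word: its letters of odd
        # multiplicity, written in increasing order (the image of the
        # word in the abelianisation (Z_2)^(n-1)).
        return tuple(sorted(i for i in set(word) if word.count(i) % 2))
    gens = set()
    for k in range(n):
        for lam in combinations(range(1, n), k):
            for j in range(1, n):
                word = lam + (j,)
                s = reduce_word(word + tuple(reversed(rep(word))))
                if s:
                    gens.add(s)
    return gens

print(schreier_generators(3))   # {(1, 2, 1, 2), (2, 1, 2, 1)}
\end{verbatim}
For $n=3$ this recovers exactly the two words $\tau_1\tau_2\tau_1\tau_2$ and $\tau_2\tau_1\tau_2\tau_1$, in agreement with \lemref{lemma0} below.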
This algorithm has been used to obtain presentations for certain classes of generalized braid groups and Artin groups in \cite{bgn1}, \cite{dg1}, \cite{lo}, \cite{man}. We obtain a presentation for $TW_n'$, $n \geq 3$, using this approach and then remove some of the generators using Tietze transformations. This gives the finite presentation for $TW_n'$. We further reduce the number of generators in this presentation to obtain the rank. \medskip Now we briefly describe the structure of the paper. In \secref{gen}, we compute a generating set for $TW_n'$, $n \geq 3$, using the Reidemeister-Schreier method. In \secref{dr}, a set of defining relations for $TW_n'$ involving these generators is obtained. We then apply Tietze transformations to prove \thmref{mainth} in \secref{simp}. Following this theorem, in \secref{simp}, we also prove \thmref{thmrank}, \corref{cor1} and \corref{corfree}. \section{A Generating Set for $TW_n'$}\label{gen} For $n \geq 3$, define the following map: \begin{equation*}\phi : TW_n \longrightarrow \ \underbrace{\mathbb Z_2 \oplus \mathbb Z_2 \oplus \dots \oplus \mathbb Z_2}_\text{(n -- 1) copies} = \bigoplus_{i=1}^{n-1} \mathbb Z_2 \end{equation*} where, for $ i = 1, \ldots , n-1 $, $\phi$ maps $\tau_i$ to the generator of the $i$-th copy of $\mathbb Z_2$ in the product $ \bigoplus_{i=1}^{n-1} \mathbb Z_2 $. Here, Image($\phi$) is isomorphic to the abelianization of $TW_n$, denoted by $TW_n^{ab}$. To prove this, we abelianize the above presentation for $TW_n$ by inserting the relations $ ~ \tau_i \tau_j=\tau_j \tau_i ~ $ (for all $i,j$) into the presentation. The resulting presentation is the following:\\ $$\langle \tau_1, \dots , \tau_{n-1} ~ | ~ \tau_i \tau_j=\tau_j \tau_i, ~ \tau_i^2=1, ~ i,j \in \{ 1,2, \dots n-1 \} \rangle.$$ Clearly, the above is a presentation for $\bigoplus_{i=1}^{n-1} \mathbb Z_2$. Thus, $TW_n^{ab}$ is isomorphic to $\bigoplus_{i=1}^{n-1} \mathbb Z_2$. Since $\phi$ is onto, Image($\phi$) $= \bigoplus_{i=1}^{n-1} \mathbb Z_2$, i.e. Image($\phi$) is isomorphic to $TW_n^{ab}$. Hence, we get the following short exact sequence: \begin{equation*}\label{se1}1 \xrightarrow {} TW_n' \hookrightarrow{} TW_n \xrightarrow{\phi} \ \bigoplus_{i=1}^{n-1} \mathbb Z_2 \ \xrightarrow{} 1.\end{equation*} \begin{lemma}\label{lemma0} For $n \ge 3$, $TW_n'$ is generated by the conjugates of $ \ \tau_j \tau_{j+1} \tau_j \tau_{j+1} $ and $ \ \tau_{j+1} \tau_j \tau_{j+1} \tau_j $ by the elements $ \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \ $ for all $ \ j \in \{ 1, 2, \dots , n-2 \} $ and $~ 1 \le i_1 < i_2 < \dots < i_s < j$. \end{lemma} \begin{proof} Consider a Schreier set of coset representatives: $$\Lambda=\{ \tau_1^{\epsilon_1} \tau_2^{\epsilon_2} \dots \tau_{n-1}^{\epsilon_{n-1}} \ | \ \epsilon_i \in \{0, 1\}, \ i=1,2, \dots ,n-1 \}.$$ For $a \in TW_n$, we denote by $\overline{a}$ the unique element in $\Lambda$ which belongs to the coset corresponding to $\phi(a)$ in the quotient group $TW_n/TW_n'$.\\ By \cite[Theorem 2.7]{mks}, the group $TW_n'$ is generated by the set $$\{S_{\lambda, a}=(\lambda a) (\overline{\lambda a})^{-1} \ | \ \lambda \in \Lambda, \ a \in \{ \ \tau_i \ | \ i=1, 2, \ldots, n-1\} \ \}.$$ Hence, $TW_n'$ is generated by the elements $S_{\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}, \tau_j }$ for $1 \le i_1 < i_2 < \dots < i_k \le n-1$ and $1 \le j \le n-1$. We calculate these elements below.
\subsection*{Case 1: \ $i_k \le j$ :} In this case, $S_{\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}, \tau_j } \ = \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j \ \overline{ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j )}^{-1} = \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j \ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j )^{-1}=1$.\\ Hence we do not get any nontrivial generator from this case. \subsection*{Case 2: \ $i_k > j$ :} We divide this case into the following 3 subcases. \subsubsection*{Subcase 2A: \ $i_k > j$ and $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $ but $ j \notin \{ i_1, i_2, \dots , i_k \} $: \\\\} Suppose $j+1=i_{s+1}$. Then we have: $S_{\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}, \tau_j } \ = \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j \ \overline{ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j )}^{-1} $ \\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j \ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_j \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} )^{-1}$\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_j \tau_{i_{s+2}} \dots \tau_{i_k} \ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_j \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} )^{-1}$\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_j \tau_{i_{s+2}} \dots \tau_{i_k} \tau_{i_k} \dots \tau_{i_{s+2}} \tau_{j+1} \tau_j \tau_{i_s} \dots \tau_{i_2} \tau_{i_1} $\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{j+1} \tau_j \tau_{j+1} \tau_j \tau_{i_s} \dots \tau_{i_2} \tau_{i_1} $.\\ (Here we assume $i_1 < (j+1) < i_k$. The cases $(j+1)=i_1, i_k$ are similar and give the same form of elements.)\\ So, we get some of the generators for $TW_n'$ as follows:\\ $ \{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} (\tau_{j+1} \tau_j \tau_{j+1} \tau_j) \tau_{i_s} \dots \tau_{i_2} \tau_{i_1} ~ | ~ j \in \{ 1, 2, \dots n-2 \} $ and $i_1 < i_2 < \dots < i_s < j$ where $i_1, i_2, \dots ,i_s,j$ are consecutive integers $ \} $. \subsubsection*{Subcase 2B: \ $i_k > j$ and $ j, (j+1) \in \{ i_1, i_2, \dots , i_k \} $: \\\\ } Suppose $j=i_s, \ j+1=i_{s+1}$. Then we have: $S_{\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}, \tau_j } \ = \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j \ \overline{ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j )}^{-1} $ \\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} \tau_j \ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} )^{-1}$\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_j \tau_{i_{s+2}} \dots \tau_{i_k} \ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_{j+1} \tau_{i_{s+2}} \dots \tau_{i_k} )^{-1}$\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_j \tau_{i_{s+2}} \dots \tau_{i_k} \tau_{i_k} \dots \tau_{i_{s+2}} \tau_{j+1} \tau_{i_{s-1}} \dots \tau_{i_2} \tau_{i_1} $\\ $= \tau_{i_1} \tau_{i_2} \dots \tau_{i_{s-1}} \tau_j \tau_{j+1} \tau_j \tau_{j+1} \tau_{i_{s-1}} \dots \tau_{i_2} \tau_{i_1} $.\\ (Here we assume $i_1 < j < (j+1) < i_k$.
The cases $j=i_1$ and $(j+1)= i_k$ are similar and give the same form of elements.)\\ So, we get some of the generators for $TW_n'$ as follows:\\ $ \{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} ( \tau_j \tau_{j+1} \tau_j \tau_{j+1} ) \tau_{i_s} \dots \tau_{i_2} \tau_{i_1} ~ | ~ j \in \{ 1, 2, \dots n-2 \} $ and $i_1 < i_2 < \dots < i_s < j$ where $i_1, i_2, \dots ,i_s,j$ are consecutive integers $ \}. $ \subsubsection*{Subcase 2C: \ $i_k > j$ and $ (j+1) \notin \{ i_1, i_2, \dots , i_k \} $: \\\\ } There is $i_s \in \{ i_1, i_2, \dots , i_k \} $ such that $i_s \le j < i_{s+1} $. \\ As $ (j+1) \notin \{ i_1, i_2, \dots , i_k \} $, $|i_{s+1} - j |>1$. So we have:\\ $S_{\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}, \tau_j } \ = \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{i_{s+1}} \dots \tau_{i_k} \tau_j \ \overline{ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_{i_{s+1}} \dots \tau_{i_k} \tau_j )}^{-1} $\\ $= \ \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_j \tau_{i_{s+1}} \dots \tau_{i_k} \ \overline{ ( \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} \tau_j \tau_{i_{s+1}} \dots \tau_{i_k} )}^{-1}=1.$\\ So, this case does not give any nontrivial generator for $TW_n'$. \end{proof} \subsection{Notation:} Let us introduce some notation as follows:\\ For $1 \le i_1 < i_2 < \dots < i_s < j \le n-2 ~ $ let us denote $$\alpha(i_1, i_2, \dots , i_s \ ; \ j) := \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} ( \tau_j \tau_{j+1} \tau_j \tau_{j+1} ) \tau_{i_s} \dots \tau_{i_2} \tau_{i_1}, $$ $$\beta(i_1, i_2, \dots , i_s \ ; \ j) := \tau_{i_1} \tau_{i_2} \dots \tau_{i_s} ( \tau_{j+1} \tau_j \tau_{j+1} \tau_j ) \tau_{i_s} \dots \tau_{i_2} \tau_{i_1}, $$ $$\alpha(j) := \tau_j \tau_{j+1} \tau_j \tau_{j+1}, \ \ \ \ \ \beta(j) := \tau_{j+1} \tau_j \tau_{j+1} \tau_j.$$ \medskip \section{Defining Relations for $TW_n'$}\label{dr} To obtain defining relations for $TW_n'$, following the Reidemeister-Schreier algorithm, we define a re-writing process $\eta$ as below. Refer to \cite{mks} for more details. $$\eta(a_{i_1}^{\epsilon_1} \dots a_{i_p}^{\epsilon_p}) := S_{K_{i_1},a_{i_1}}^{\epsilon_1} \dots S_{K_{i_p},a_{i_p}}^{\epsilon_p} \hbox{ with } \epsilon_j = 1 \hbox{ or } -1,$$ where if $\epsilon_j = 1$, $K_{i_1} = 1$ and $K_{i_j}$ = $\overline{a_{i_1}^{\epsilon_1} \dots a_{i_{j-1}}^{\epsilon_{j-1}}}, ~ j \ge 2$, \\ and if $\epsilon_j = -1$, $K_{i_j}$ = $\overline{a_{i_1}^{\epsilon_1} \dots a_{i_j}^{\epsilon_j}}$ . By \cite[Theorem 2.9]{mks}, the group $TW_n'$ is defined by the relations: $$\eta(\lambda r_{\mu} \lambda^{-1})=1, ~ \lambda \in \Lambda,$$ where $r_{\mu}$ are the defining relators of $TW_n$.\\ We have the following lemma. \begin{lemma}\label{lemma1} The generators $ \ \alpha(j), \ \beta(j), \ \alpha(i_1, i_2, \dots , i_s \ ; \ j), \ \beta(i_1, i_2, \dots , i_s \ ; \ j)$ satisfy the following defining relations in $TW_n'$:\\ \begin{equation*} \alpha(j) \ \beta(j) = 1, \ \ \text{ for all \ }j \in \{ 1, 2, \dots , n-2 \}, \end{equation*} \begin{equation*} \alpha(i_1, \dots , i_s \ ; \ j) \ \beta(i_1, \dots , i_s \ ; \ j) = 1, \ \ \text{ when \ } 1 \le i_1 < i_2 < \dots < i_s < j \le n-2. \end{equation*} \end{lemma} \begin{proof} Following the Reidemeister-Schreier algorithm, we will apply the re-writing process $\eta$ to all the conjugates (by the elements $\tau_{i_1} \tau_{i_2} \dots \tau_{i_k}$ of $\Lambda$) of the defining relators in $TW_n$ in order to deduce a set of defining relators for $TW_n'$.\\ For all $j \in \{ 1, 2, \dots, n-1 \} $, we have the relation $\tau_j^2=1$ in $TW_n$.
We apply the re-writing process $\eta$ to the conjugates of this relator as follows.\\ For any element $\tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \in \Lambda$ we have\\ $ \eta \ (\tau_{i_1} \tau_{i_2} \dots \tau_{i_k} ( \tau_j \tau_j ) \ \tau_{i_k} \dots \tau_{i_2} \tau_{i_1}) $ \\ $= S_{1, \tau_{i_1}} S_{ \overline{ \tau_{i_1} } , \tau_{i_2}} \dots S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_j} \ S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j} , \tau_j} \ S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} } , \tau_{i_k} } \dots S_{ \overline{ \tau_{i_1} \tau_{i_2} } , \tau_{i_2}} S_{ \overline{ \tau_{i_1} }, \tau_{i_1} } $\\ $= S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_j} \ S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_j} , \tau_j } .$\\ For $i_k \le j$, the above expression vanishes.\\ If $i_k > j$ and $ (j+1) \notin \{ i_1, i_2, \dots , i_k \} $, the above expression also vanishes.\\ In case $i_k > j$ and $ j, (j+1) \in \{ i_1, i_2, \dots , i_k \} $, assuming $j=i_s$, the above expression equals \begin{equation*} \alpha(i_1, i_2, \dots , i_{s-1} \ ; \ j) \ \beta(i_1, i_2, \dots , i_{s-1} \ ; \ j). \end{equation*} And, if $s=1$, then we have: \begin{equation*} \alpha(j) \ \beta(j). \end{equation*} For $i_k > j$ and $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $ but $ j \notin \{ i_1, i_2, \dots , i_k \} $, assuming $j+1=i_{s+1}$, the above expression equals \begin{equation*} \beta(i_1, i_2, \dots , i_s \ ; \ j) \ \alpha(i_1, i_2, \dots , i_s \ ; \ j). \end{equation*} Hence, corresponding to the relation $\tau_j^2=1$ in $TW_n$ we have the following defining relations for $TW_n'$: \begin{equation} \alpha(j) \ \beta(j) = 1, \text{ for all } 1 \le j \le n-2, \end{equation} \begin{equation} \alpha(i_1, i_2, \dots , i_s \ ; \ j) \ \beta(i_1, i_2, \dots , i_s \ ; \ j) = 1, \end{equation} for all $~ i_1, i_2, \dots, i_s, j ~$ such that $ ~ 1 \le i_1 < i_2 < \dots < i_s < j \le n-2.$ \end{proof} Now, we will find the defining relations in $TW_n'$ corresponding to the defining relations $\tau_t \tau_j \tau_t \tau_j = 1, ~ |t-j|>1$, in $TW_n$.\\ We have the following lemma. \begin{lemma}\label{lemma2} The generators $ \ \alpha(j), \ \beta(j), \ \alpha(i_1, i_2, \dots , i_s \ ; \ j), \ \beta(i_1, i_2, \dots , i_s \ ; \ j)$ satisfy the following defining relations in $TW_n'$:\\ For all $~ i_1, i_2, \dots, i_r, j, t ~$ where $ 1 \le i_1 < i_2 < \dots < i_r < t \le n-2, ~ j \le t-2,$ we have \begin{equation*} \alpha(i_1, \dots, j,~ \widehat{j+1}, \dots, i_r ; t) ~ \beta(i_1, \dots, \widehat{j},~ \widehat{j+1}, \dots, i_r ; t) = 1, \end{equation*} \begin{equation*} \beta(i_1, \dots, j,~ \widehat{j+1}, \dots, i_r ; t) ~ \alpha(i_1, \dots, \widehat{j},~ \widehat{j+1}, \dots, i_r ; t) = 1, \end{equation*} \begin{equation*} \beta(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \beta(i_1, \dots, i_s ; j) ~ \alpha(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t) ~ \alpha(i_1, \dots, i_s ; j) = 1, \end{equation*} \begin{equation*} \alpha(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \beta(i_1, \dots, i_s ; j) ~ \beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t) ~ \alpha(i_1, \dots, i_s ; j) = 1. \end{equation*} \end{lemma} \begin{proof} For $ |t-j|>1 $, in $TW_n$ we have the relation $\tau_t \tau_j \tau_t \tau_j = 1$. We rewrite this relation below.
$ \eta \ (\tau_{i_1} \tau_{i_2} \dots \tau_{i_k} ( \tau_t \tau_j \tau_t \tau_j ) \ \tau_{i_k} \dots \tau_{i_2} \tau_{i_1}) $ \\ $= S_{1, \tau_{i_1}} S_{ \overline{ \tau_{i_1} } , \tau_{i_2}} \dots S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t} S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j} S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t} S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}$\\ $S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t \tau_j} , \tau_{i_k} } \dots S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t \tau_j \tau_{i_k} \dots \tau_{i_2} } , \tau_{i_1} }$\\ $= S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t} S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j} S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t} S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}.$\\ We need to calculate the above expression in all possible cases in order to get all the remaining defining relations for $TW_n'$.\\ Without loss of generality, we may assume that $j < t$.\\ We can only have the following 3 cases: \medskip \noindent Case 1: $i_k \le j < t$; \\ Case 2: $j < i_k \le t$;\\ Case 3: $j < t < i_k $. \subsection*{Case 1: $i_k \le j < t$} In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=1.$$ Hence, this case gives no nontrivial defining relation for $TW_n'$. \subsection*{Case 2: $j < i_k \le t$} We further divide this case into 3 subcases. \subsubsection*{Subcase 2A} $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $ but $ j \notin \{ i_1, i_2, \dots , i_k \} $: \\ Assume, $(j+1) = i_{s+1}$. Then we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}= \beta(i_1, \dots, i_s ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\alpha(i_1, \dots, i_s ; j).$$ Hence, we get the relations: $$\beta(i_1, \dots, i_s ; j) ~ \alpha(i_1, \dots, i_s ; j) = 1.$$ \subsubsection*{Subcase 2B} $ j, (j+1) \in \{ i_1, i_2, \dots , i_k \} $: \\ Assume, $(j+1) = i_{s+1},~ j = i_s$. 
Then we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\alpha(i_1, \dots, i_{s-1} ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\beta(i_1, \dots, i_{s-1} ; j).$$ So, we get the relations: $$\alpha(i_1, \dots, i_{s-1} ; j) ~ \beta(i_1, \dots, i_{s-1} ; j) = 1.$$ \subsubsection*{Subcase 2C} $ (j+1) \notin \{ i_1, i_2, \dots , i_k \} $: In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=1.$$ So, we do not get any nontrivial relation from this subcase. \subsection*{Case 3: $j < t < i_k $} We need to divide this case into 9 subcases. \subsubsection*{Subcase 3A} $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $, $ j \notin \{ i_1, i_2, \dots , i_k \} $, $ (t+1) \in \{ i_1, i_2, \dots , i_k \} $, $ t \notin \{ i_1, i_2, \dots , i_k \} $: \\ Assume, $(j+1)=i_{s+1}, ~ (t+1)=i_{r+1}$. In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=\beta(i_1, \dots, \widehat{j}, \dots, i_r ; t),$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\beta(i_1, \dots, i_s ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=\alpha(i_1, \dots, j, \dots, i_r ; t),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\alpha(i_1, \dots, i_s ; j).$$ ($~ \widehat{j} ~$ denotes absence of $j$)\\ Hence, we get the relations: $$\beta(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \beta(i_1, \dots, i_s ; j) ~ \alpha(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t) ~ \alpha(i_1, \dots, i_s ; j) = 1.$$ \subsubsection*{Subcase 3B} $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $, $ j \notin \{ i_1, i_2, \dots , i_k \} $, and $~ t, (t+1) \in \{ i_1, i_2, \dots , i_k \}: $\\ Assume, $(j+1)=i_{s+1}, ~ (t+1)=i_{r+1}, ~ t=i_r$. In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=\alpha(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t),$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\beta(i_1, \dots, i_s ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=\beta(i_1, \dots, j, \dots, i_{r-1} ; t),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\alpha(i_1, \dots, i_s ; j).$$ So, we get the relations: $$\alpha(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_{r-1} ; t) ~ \beta(i_1, \dots, i_s ; j) ~ \beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_{r-1} ; t) ~ \alpha(i_1, \dots, i_s ; j) = 1.$$ \subsubsection*{Subcase 3C} $ (j+1) \in \{ i_1, i_2, \dots , i_k \} $, $ j \notin \{ i_1, i_2, \dots , i_k \} $, and $~ (t+1) \notin \{ i_1, i_2, \dots , i_k \}: $\\ Assume, $(j+1)=i_{s+1}$. 
In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\beta(i_1, \dots, i_s ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\alpha(i_1, \dots, i_s ; j).$$ Hence, we get the relations: $$\beta(i_1, \dots, i_s ; j) ~ \alpha(i_1, \dots, i_s ; j) = 1.$$ \subsubsection*{Subcase 3D} $j, (j+1) \in \{ i_1, i_2, \dots , i_k \}$, $ (t+1) \in \{ i_1, i_2, \dots , i_k \} $, $ t \notin \{ i_1, i_2, \dots , i_k \}: $\\ Assume, $(j+1)=i_{s+1}, ~ j=i_s, ~ (t+1)=i_{r+1}$. In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=\beta(i_1, \dots, j, \dots, i_r ; t),$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\alpha(i_1, \dots, i_{s-1} ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=\alpha(i_1, \dots, \widehat{j}, \dots, i_r ; t),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\beta(i_1, \dots, i_{s-1} ; j).$$ So, we get the relations: \medskip \noindent $\beta(i_1, \dots, i_{s-1},~ j,~ j+1, \dots, i_r ; t) ~ \alpha(i_1, \dots, i_{s-1} ; j) ~ \alpha(i_1, \dots, i_{s-1},~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \\ \beta(i_1, \dots, i_{s-1} ; j) = 1$. \subsubsection*{Subcase 3E} $j, (j+1) \in \{ i_1, i_2, \dots , i_k \}$, and $~ t, (t+1) \in \{ i_1, i_2, \dots , i_k \} $:\\ Assume, $(j+1)=i_{s+1}, ~ j=i_s, ~ (t+1)=i_{r+1}, ~ t=i_r$. In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=\alpha(i_1, \dots, j, \dots, i_{r-1} ; t),$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\alpha(i_1, \dots, i_{s-1} ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=\beta(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\beta(i_1, \dots, i_{s-1} ; j).$$ Hence, we get the relations: \medskip \noindent $\alpha(i_1, \dots, i_{s-1},~ j, ~j+1, \dots, i_{r-1} ; t) ~\alpha(i_1, \dots, i_{s-1} ; j) ~\beta(i_1, \dots, i_{s-1},~\widehat{j},~j+1, \dots, ~i_{r-1} ; t)~ \\ \beta(i_1, \dots, i_{s-1} ; j) = 1$. \subsubsection*{Subcase 3F} $j, (j+1) \in \{ i_1, i_2, \dots , i_k \}$, and $~ (t+1) \notin \{ i_1, i_2, \dots , i_k \} $:\\ Assume, $(j+1)=i_{s+1}, ~ j=i_s$. 
In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=\alpha(i_1, \dots, i_{s-1} ; j),$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=\beta(i_1, \dots, i_{s-1} ; j).$$ So, we get the relations: $$\alpha(i_1, \dots, i_{s-1} ; j) ~ \beta(i_1, \dots, i_{s-1} ; j) = 1.$$ \subsubsection*{Subcase 3G} $(j+1) \notin \{ i_1, i_2, \dots , i_k \} $, and $ (t+1) \in \{ i_1, i_2, \dots , i_k \} $, $ t \notin \{ i_1, i_2, \dots , i_k \} $:\\ Assume, $(t+1)=i_{r+1}$.\\ In this case we have:\\ $S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t} = \begin{cases} \beta(i_1, \dots, j, \dots, i_r ; t) & \text{ if $j \in \{ i_1, i_2, \dots , i_k \},$ } \\ \beta(i_1, \dots, \widehat{j}, \dots, i_r ; t) & \text{ if $j \notin \{ i_1, i_2, \dots , i_k \},$ } \end{cases}$ \\ $S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}$ $=1$, $S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t} = \begin{cases} \alpha(i_1, \dots, \widehat{j}, \dots, i_r ; t) & \text{ if $j \in \{ i_1, i_2, \dots , i_k \},$ } \\ \alpha(i_1, \dots, j, \dots, i_r ; t) & \text{ if $j \notin \{ i_1, i_2, \dots , i_k \},$ } \end{cases}$ \\ $S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}$ $=1$.\\ So, we get the relations: $$\beta(i_1, \dots, j, \dots, i_r ; t) ~ \alpha(i_1, \dots, \widehat{j}, \dots, i_r ; t) = 1,$$ $$\beta(i_1, \dots, \widehat{j}, \dots, i_r ; t) ~ \alpha(i_1, \dots, j, \dots, i_r ; t) = 1.$$ \subsubsection*{Subcase 3H} $(j+1) \notin \{ i_1, i_2, \dots , i_k \} $, and $~ t, (t+1) \in \{ i_1, i_2, \dots , i_k \} $:\\ Assume, $(t+1)=i_{r+1}, ~ t=i_r$. In this case we have:\\ $S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t} = \begin{cases} \alpha(i_1, \dots, j, \dots, i_{r-1} ; t) & \text{ if $j \in \{ i_1, i_2, \dots , i_k \},$ } \\ \alpha(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t) & \text{ if $j \notin \{ i_1, i_2, \dots , i_k \},$ } \end{cases}$ \\ $S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}$ $=1$, $S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t} = \begin{cases} \beta(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t) & \text{ if $j \in \{ i_1, i_2, \dots , i_k \},$ } \\ \beta(i_1, \dots, j, \dots, i_{r-1} ; t) & \text{ if $j \notin \{ i_1, i_2, \dots , i_k \},$ } \end{cases}$ \\ $S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}$ $=1$.\\ Hence, we get the relations: $$\alpha(i_1, \dots, j, \dots, i_{r-1} ; t) ~ \beta(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t) = 1,$$ $$\alpha(i_1, \dots, \widehat{j}, \dots, i_{r-1} ; t) ~ \beta(i_1, \dots, j, \dots, i_{r-1} ; t) = 1.$$ \subsubsection*{Subcase 3I} $(j+1) \notin \{ i_1, i_2, \dots , i_k \} $, and $(t+1) \notin \{ i_1, i_2, \dots , i_k \} $:\\ In this case we have: $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k}} , \tau_t}=1,$$ $$S_{ \overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t} , \tau_j}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j} , \tau_t}=1,$$ $$S_{\overline{ \tau_{i_1} \tau_{i_2} \dots \tau_{i_k} \tau_t \tau_j \tau_t } , \tau_j}=1.$$ So, we do not get any nontrivial relation from this subcase.\\ Collecting the relations obtained in all the above cases we have the lemma. 
\end{proof} \section{Finite presentation for $TW_n$: Proof of the theorems}\label{simp} In this section, we will simplify the presentation for $TW_n'$ that we deduced in the previous section. We will apply Tietze transformations on the current presentation for $TW_n'$ in order to deduce an equivalent presentation for $TW_n'$ with fewer generators and relations than the previous one. Refer to \cite{mks} for more details on Tietze transformations. We begin with the following lemma. \begin{lemma}\label{lemma3} For $n \ge 3$, $TW_n'$ has the following presentation:\\ Generators: $~ ~ ~ \beta(j),~ \beta(i_1, i_2, \dots , i_s \ ; \ j)$, for $~ 1 \le i_1 < i_2 < \dots < i_s < j \le n-2$, \medskip Defining relations: $$\beta(i_1, \dots, i_s,~ j,~ \widehat{j+1}, \dots, i_r ; t) = \beta(i_1, \dots, i_s,~ \widehat{j},~ \widehat{j+1}, \dots, i_r ; t), $$ $$\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t) = \beta(i_1, \dots, i_s ; j)^{-1} ~ \beta(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \beta(i_1, \dots, i_s ; j), $$ where $ 1 \le i_1 < i_2 < \dots < i_s < j < \dots < i_r < t \le n-2, ~ j \le t-2$. \end{lemma} \begin{proof} From \lemref{lemma1}, we have $\alpha(j)=\beta(j)^{-1}$, $\alpha(i_1, i_2, \dots , i_s \ ; \ j)=\beta(i_1, i_2, \dots , i_s \ ; \ j)^{-1}$.\\ Hence, we replace $\alpha(j)$ by $\beta(j)^{-1}$ and $\alpha(i_1, i_2, \dots , i_s \ ; \ j)$ by $\beta(i_1, i_2, \dots , i_s \ ; \ j)^{-1}$ in all other defining relations for $TW_n'$, and remove all $\alpha(j),~ \alpha(i_1, i_2, \dots , i_s \ ; \ j)$ from the set of generators. This completes the proof of \lemref{lemma3}. \end{proof} \subsection{Observation} Note that we have the defining relations $$\beta(i_1, \dots, i_s,~ j,~ \widehat{j+1}, \dots, i_r ; t) = \beta(i_1, \dots, i_s,~ \widehat{j},~ \widehat{j+1}, \dots, i_r ; t).$$ Note that here we have $j \le t-2$. Let us look at the following example.\\ Consider the generator $\beta(3,4,6,7,9,10,11;12)$ in $TW_{15}'$. From the above set of relations, as `5' does not appear in $\beta(3,4,6,7,9,10,11;12)$, we can conclude that $\beta(3,4,6,7,9,10,11;12) = \beta(3,6,7,9,10,11;12)$. As `4' is missing in $\beta(3,6,7,9,10,11;12)$, we get $\beta(3,6,7,9,10,11;12) = \beta(6,7,9,10,11;12)$. We can go further. Using the same relations we get $\beta(6,7,9,10,11;12) = \beta(6,9,10,11;12) = \beta(9,10,11;12)$.\\ From the above observation it is clear that using the above defining relations finitely many times, any generator $\beta(i_1, i_2, \dots, i_s; j)$ can be shown to be equal to a generator of the form $\beta(j-p, j-p+1, \dots, j-1 \ ; \ j)$ for some $p < j$, or be equal to $\beta(j)$. Let's call these the \textit{normal forms} of the generators.\\ \subsection{Notation} We will use the following notation for the normal forms: $$ \text{For } 1 \le p < j, \ \ \beta_{p}(j) := \beta(j-p, \dots, j-1 ; j), \ \ \text{ and }\ \ \ \beta_{0}(j) := \beta(j). $$ As every generator is equal to its normal form, we replace all the generators with their normal forms in all the defining relations and remove all the generators except the normal forms from the generating set. For clarity of exposition, we define the following. \begin{definition} For a generator $~ \beta(i_1, i_2, \dots, i_s ; j)~$ we define the \textit{highest missing entry in $\beta(i_1, i_2, \dots, i_s ; j)$} to be $k$ for some $~i_1-1 \le k \le j-1~$ if $k \notin \{ i_1, i_2, \dots, i_s, j \} $ but for any $m$ with $k < m \le j$, $m \in \{ i_1, i_2, \dots, i_s, j \}$.
\end{definition} \subsection{Proof of \thmref{mainth}} \begin{proof} We have the following relations in the presentation for $TW_n'$, $n \geq 3$, as in \lemref{lemma3}: $$\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t) = \beta(i_1, \dots, i_s ; j)^{-1} ~ \beta(i_1, \dots, i_s,~ \widehat{j},~ j+1, \dots, i_r ; t) ~ \beta(i_1, \dots, i_s ; j),$$ where $ 1 \le i_1 < i_2 < \dots < i_s < j < \dots < i_r < t \le n-2, ~ j \le t-2$.\\ We replace the generators appearing in these relations by their normal forms $\beta_{p}(j)$'s. Our goal is to find the modified relations after the substitution. Consider the left hand side of the above relations. We have $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$. Note that the highest missing entry in $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$ cannot be $j$ or $j+1$, as both are present as entries. So there are two possibilities, which we examine separately below.\\ $\textbf{Case 1:}$ The highest missing entry in $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$ is greater than $j+1$.\\ Suppose the highest missing entry in $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$ is $j+(l-1)$ for some $l \ge 3$. Also suppose the highest missing entry in $\beta(i_1, \dots, i_s; j)$ is $m-1$ for some $1 \le m \le j$.\\ Then, the relations are equivalent to the following relations:\\ $\text{ for all } l \ge 3,~ 1 \le m \le j,~ j \le t-2,$ \begin{equation*} \beta(j+l, \dots, t-1;t) = \beta(m, \dots, j-1;j)^{-1} \beta(j+l, \dots, t-1;t) \beta(m, \dots, j-1;j). \end{equation*} So, after the substitution by normal forms the relations become: \begin{equation*} \beta_{t-(j+l)}(t) = \beta_{j-m}(j)^{-1} ~ \beta_{t-(j+l)}(t) ~ \beta_{j-m}(j), ~ ~ \text{ for all } l \ge 3,~ 1 \le m \le j,~ j \le t-2. \end{equation*} Equivalently, \begin{equation*} \beta_{j-m}(j) ~ \beta_{t-(j+l)}(t) = \beta_{t-(j+l)}(t) ~ \beta_{j-m}(j), ~ ~ \text{ for all } l \ge 3,~ 1 \le m \le j,~ j \le t-2. \end{equation*} $\textbf{Case 2:}$ The highest missing entry in $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$ is less than $j$.\\ Suppose the highest missing entry in $\beta(i_1, \dots, i_s,~ j,~ j+1, \dots, i_r ; t)$ is $m-1$ for some $1 \le m \le j$. Then clearly the highest missing entry in $\beta(i_1, \dots, i_s; j)$ is also $m-1$.\\ So, after the substitution by normal forms the relations become: \begin{equation*} \beta_{t-m}(t) = \beta_{j-m}(j)^{-1} ~ \beta_{t-(j+1)}(t) ~ \beta_{j-m}(j), ~ ~ \text{ for all } 1 \le m \le j,~ j \le t-2. \end{equation*} This proves the theorem. \end{proof} \subsection{Further elimination:} We shall further reduce the number of generators in the presentation by removing all $\beta_p(j)$ with $p > 1$ by using the defining relations: \begin{equation*} \beta_{t-m}(t) = \beta_{j-m}(j)^{-1} ~ \beta_{t-(j+1)}(t) ~ \beta_{j-m}(j), \end{equation*} for all $~ m,~ j,~ t \in \{ 1,\dots,n-2 \}$ with $~ 1 \le m \le j \le t-2.$\\ Note that if we consider the cases where $j=m$ in the above set of relations, we obtain the following set of relations: \begin{equation*} \beta_{t-m}(t) = \beta_{0}(m)^{-1} ~ \beta_{t-(m+1)}(t) ~ \beta_{0}(m), \end{equation*} for all $~ m,~ t \in \{ 1,\dots,n-2 \} ~$ with $~ 1 \le m \le t-2.$\\ So, if $~ t-m \ge 2, ~$ we can express $~ \beta_{t-m}(t) ~$ as the conjugate of $~ \beta_{t-(m+1)}(t) ~$ by $~ \beta_{0}(m)$.
We do this iteratively to express $~ \beta_{t-m}(t) ~$ as the conjugate of $\beta_1(t)$ by the element $\beta_0(t-2) \dots \beta_0(m)$ and thus remove all $\beta_p(j)$ with $p \ge 2$ from the set of generators after replacing them with the above values in all the remaining relations. \begin{lemma}\label{lemma5} For $n \ge 3$, $TW_n'$ has a finite presentation with $(2n-5) $ generators. \end{lemma} \begin{proof} After performing the above substitution we are left with $\beta_p(j)$ with $p \le 1$ and $1 \le j \le n-2.$ Hence, corresponding to every $~ 2 \le j \le n-2 ~$ we have two generators $\beta_0(j)$ and $\beta_1(j)$. For $j=1$, we have only one generator, namely $\beta_0(1)$. So, we have in total $~ 2 \times (n-3) + 1 = 2n-5 ~$ generators in the final presentation for $TW_n'$ for $n \ge 3.$ Note that the presentation given in \thmref{mainth} has finitely many defining relations. As finitely many $\beta_p(j)$ are being replaced and each $\beta_p(j)$ appears finitely many times in all the defining relations, after the above substitution we will have finitely many defining relations in the final presentation. This proves the lemma. \end{proof} \subsection*{Proof of \thmref{thmrank}} We consider the abelianization of $TW_n'$ for $n \ge 3,$ $~(TW_n')^{ab} = TW_n'/TW_n''$. In order to find a presentation for $(TW_n')^{ab}$ we insert all possible commuting relations $\beta_p(j)~ \beta_q(i) = \beta_q(i)~ \beta_p(j),~$ for all $i,j \in \{1, \dots, n-2\},~ 0 \le p < j,~ 0 \le q < i, ~$ in the presentation for $TW_n'$. This gives the following presentation for $(TW_n')^{ab}$:\\ Generators: $ \ \ \ \ \beta_{p}(j), \ \ \ \ \ 0 \le p < j \le n-2. $\\ Defining relations: $\beta_p(j)~ \beta_q(i) = \beta_q(i)~ \beta_p(j),~~ \forall i,j \in \{1, \dots, n-2\}, $ \begin{equation*} \beta_{t-m}(t) = \beta_{t-(j+1)}(t),~~ 1 \le m \le j,~~ j+2 \le t \le n-2. \end{equation*} Iterating the last set of relations, we deduce that $\beta_p(j) = \beta_1(j)$ for all $p \ge 2$ and for all $j \ge 3$. Hence, we remove all $\beta_p(j)$ with $p \ge 2$ from the set of generators by replacing them with $\beta_1(j)$. After this replacement we get the following presentation for $(TW_n')^{ab}$:\\ Generators: $~ \beta_0(1),~ \beta_0(j),~ \beta_1(j),~ 2 \le j \le n-2.$\\ Defining relations: $\beta_p(j)~ \beta_q(i) = \beta_q(i)~ \beta_p(j),~~ \forall i,j \in \{1, \dots, n-2\},~~ p, q \in \{0,1\}.$\\ Clearly, this is the presentation for the direct sum of $(2n-5)$ copies of $\mathbb Z$, i.e., $\mathbb Z^{2n-5}$. So, $~ (TW_n')^{ab} ~$ is isomorphic to $~ \mathbb Z^{2n-5}$. Hence, the rank of $~ (TW_n')^{ab} ~$ is $~ (2n-5)$.\\ Since $~ (TW_n')^{ab} ~$ is the homomorphic image of $~TW_n'~$ under the quotient homomorphism $~TW_n' \longrightarrow (TW_n')^{ab},~$ the rank of $~ (TW_n')^{ab} ~$ is less than or equal to the rank of $~TW_n'.~$ Thus, rank($~TW_n'~$) $\ge$ rank($~ (TW_n')^{ab} ~$) $= 2n-5.$ From \lemref{lemma5} we get rank($~TW_n'~$) $\le 2n-5$. So, we conclude that rank($~TW_n'~$) $= 2n-5$. \subsection*{Proof of \corref{cor1}:} In the proof of \thmref{thmrank} we observed that for $n \ge 3$, $TW_n'/TW_n''$ is isomorphic to the direct sum of $(2n-5)$ copies of $\mathbb Z$. So, we conclude that $TW_n' \ne TW_n''$, hence $TW_n'$ is not perfect for any $n \ge 3.$\\ For $n \le 5$, the groups $TW_n'$ are well known. We have the following.
\begin{prop}\label{prop7} We have the following:\\ (i) $TW_2'$ is the identity group $\{ 1 \}$.\\ (ii) $TW_3'$ is the infinite cyclic group $\mathbb Z$.\\ (iii) $TW_4'$ and $TW_5'$ are free groups of rank $3$ and $5$, respectively. \end{prop} \begin{proof} Note that $TW_2 ~ = ~ \langle ~ \tau_1 ~|~ \tau_1^2=1 ~ \rangle ~ = ~ \mathbb Z/2\mathbb Z ~$ and $~ \mathbb Z/2\mathbb Z ~$ is an abelian group. Hence, $TW_2'$ is the identity group. From \thmref{mainth} it follows that $~ TW_3' ~ = ~ \langle ~ \beta_0(1) ~ \rangle ~$, which is isomorphic to the infinite cyclic group $\mathbb Z$. From \thmref{mainth} it follows that $~ TW_4' ~ = ~ \langle ~ \beta_0(1),~ \beta_0(2),~ \beta_1(2) ~ \rangle ~$, which is the free group of rank 3. From \thmref{mainth} it follows that: $$~ TW_5' ~ = ~ \langle ~ \beta_0(1),~ \beta_0(2),~ \beta_1(2),~ \beta_0(3),~ \beta_1(3),~ \beta_2(3) ~ | ~ \beta_2(3) ~ = ~ \beta_0(1)^{-1} ~ \beta_1(3) ~ \beta_0(1) ~ \rangle ~$$ $$= ~ \langle ~ \beta_0(1),~ \beta_0(2),~ \beta_1(2),~ \beta_0(3),~ \beta_1(3)~ \rangle .~ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ Hence, $TW_5'$ is free of rank 5. This completes the proof of \propref{prop7}. \end{proof} From \cite{pv} we have a necessary and sufficient condition for the commutator subgroup of a right-angled Coxeter group to be free. Since $TW_n$ is a right-angled Coxeter group, we check the condition for $TW_n$. We note the following definitions. \begin{definition} A graph $\Gamma$ is called \textit{chordal} if for every cycle in $\Gamma$ with at least 4 vertices there is an edge (called a chord) in $\Gamma$ joining two non-adjacent vertices of the cycle. \end{definition} \begin{definition} The \textit{Coxeter graph} $~ \Gamma_{TW_n} ~$ corresponding to $~ TW_n ~$ is defined as follows. Corresponding to each generator $\tau_i$ of $TW_n$ we have a vertex $v_i$ in $~ \Gamma_{TW_n}. ~$ Corresponding to each commuting defining relation $\tau_i \tau_j = \tau_j \tau_i,~ |i-j|>1,~$ we have an edge in $~ \Gamma_{TW_n} ~$ joining $v_i$ and $v_j$. \end{definition} We have the following proposition. \begin{prop}\label{prop8} For $n \ge 6$, $TW_n'$ is not a free group. \end{prop} \begin{proof} As proved in \cite{pv}, for a right-angled Coxeter group $G$, the commutator subgroup $G'$ is a free group if and only if the Coxeter graph of $G$, $~ \Gamma_{G}, ~$ is chordal. \begin{figure}[ht!] \centering \includegraphics[width=35mm]{chordal.jpg} \caption{Cycle with 4 vertices but no chord in $~ \Gamma_{TW_n}, ~$ $n \ge 6$ \label{chordal}} \end{figure} Consider the Coxeter graph $~ \Gamma_{TW_n} $ corresponding to $~ TW_n. ~$ Note that for $n \ge 6$, $~ \Gamma_{TW_n} ~$ contains the cycle $ v_1 v_4 v_2 v_5 v_1 $ joining the vertices $~ v_1, v_4, v_2, v_5 ~$ (as in the figure above). Clearly this cycle does not have any chord, as $\tau_1, \tau_2$ do not commute and $\tau_4, \tau_5$ do not commute. This shows that for $~ n \ge 6 ~$ $\Gamma_{TW_n} ~$ is not chordal. Consequently, $~ TW_n' ~$ is not free for $ n \ge 6$, proving \propref{prop8}. \end{proof} \subsection*{Proof of \corref{corfree}} \corref{corfree} follows from \propref{prop7} and \propref{prop8}. \subsection*{Presentation for $TW_6'$} As follows from the above, $TW_6'$ is the first non-free group in the family of $TW_n'$, $n \geq 3$.
Here, we note down a presentation for $TW_6'$ with a minimal number of generators:\\ Generators: $\beta_0(1),~ \beta_0(2),~ \beta_1(2),~ \beta_0(3),~ \beta_1(3),~ \beta_0(4),~ \beta_1(4).$\\ Defining relations: $$ \beta_0(1) ~ \beta_0(4) ~ = ~ \beta_0(4) ~ \beta_0(1), $$ $$ \beta_1(2)^{-1} ~ \beta_1(4) ~ \beta_1(2) ~ = ~ \beta_0(1)^{-1} ~ \beta_0(2)^{-1} ~ \beta_1(4) ~ \beta_0(2) ~ \beta_0(1). $$ \bigskip \bigskip \begin{ack} Thanks to Andrei Vesnin, Matt Zaremsky and Pranab Sardar for comments on this work. The work was initiated when Soumya Dey was visiting the Sobolev Institute of Mathematics, Novosibirsk, Russia during July 2017, and he is indebted to Mahender Singh for facilitating the visit by DST grant INT/RUS/RSF/P-2. Dey is grateful to Andrei Vesnin for introducing him to the twin groups and suggesting the problem. Dey acknowledges initial discussions with Valeriy Bardakov on $TW_3'$ and $TW_4'$. This problem was a part of the Indo-Russian collaboration, supported by the above DST grant and the grant RSF-16-41-02006. This research was also supported in part by the International Centre for Theoretical Sciences (ICTS) during a visit for participating in the program - Geometry, Groups and Dynamics (Code: ICTS/ggd2017/11). Thanks to ICTS for the hospitality during the work. \end{ack}
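\subsection*{Remark: checking the normal-form reduction computationally} As an independent sanity check on the normal-form reduction used in Section \ref{simp}, the following short Python sketch (ours, purely illustrative, not part of the proofs) repeatedly applies the relation that allows an entry $j \le t-2$ to be dropped whenever $j+1$ is absent:
\begin{verbatim}
def normal_form(entries, t):
    # Reduce beta(i_1,...,i_s; t): drop j whenever j <= t-2 and j+1 is absent.
    s = set(entries)
    while True:
        drop = [j for j in s if j <= t - 2 and j + 1 not in s]
        if not drop:
            return sorted(s), t   # remaining entries form a run (t-p, ..., t-1)
        s -= set(drop)

# Example from the Observation:
# beta(3,4,6,7,9,10,11; 12) reduces to beta(9,10,11; 12) = beta_3(12)
assert normal_form([3, 4, 6, 7, 9, 10, 11], 12) == ([9, 10, 11], 12)
\end{verbatim}
The loop terminates because each pass strictly shrinks the entry set, and the fixed point is exactly a consecutive run ending at $t-1$, i.e., a normal form $\beta_p(t)$.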
\section{Introduction} The shapes and dynamics of polyampholytes (PAs), which are polymers with monomers that carry both positive and negative charges, have been extensively studied \cite{Edwards80Ferroelectrics,Higgs91JCP,Srivastava96Mac,Barrat07ACP,Borukhov98EPJB,Dobrynin95JPF,Dobrynin04JPS,Lee00JCP}. Polyampholytes naturally occur in aqueous solution if the monomers contain acidic and basic groups. In this sense, all proteins are PAs in which charged residues are interspersed between hydrophobic and hydrophilic residues. Because of the simultaneous presence of positive and negative charges, the conformations of the PAs are determined by an interplay of electrostatic interactions, charge fluctuation effects (see below), as well as the stiffness of the backbone. In simple terms, we expect that repulsion between like charges would stretch the chain whereas attraction would tend to make the polymer compact. Of course, in random PAs this balance is determined by an average performed over an ensemble of sequences (see below). If the number, $N$, of monomers is large then the PA is predicted to adopt compact conformations if the polymer is overall neutral (the numbers of positive and negative charges nearly cancel). On the other hand, if there is residual charge on the PA it is likely to be extended. It should be noted that there are differences in the dependence of the radius of gyration, $R_g$, on $N$ depending on whether the chain is globally neutral (plus and minus charges exactly cancel) or statistically neutral \cite{Yamakov00PRL} (residual charge when averaged over a large number of sequences scales as $\sqrt{N}$ with $N \gg 1$). Thanks to several insightful theoretical studies \cite{Higgs91JCP,Barrat07ACP,Srivastava96Mac,Gutin94PRE}, the complex phase behavior of PAs as a function of salt concentration and temperature has been elucidated. More recently, there has been renewed interest in PAs in the biophysics community because many eukaryotic proteins contain an unusually large fraction of charged residues \cite{Wright15NatMolCellBiol,vanderlee14ChemRev,Oldfield14ARBiochem}. As a consequence, the favorable hydrophobic interactions cannot overcome the residual electrostatic interactions. For this reason, this class of proteins does not adopt globular structures unless in complex with a partner protein. Polypeptide sequences with this characteristic are referred to as intrinsically disordered proteins or IDPs because they do not have stable ordered structures under physiological conditions. It is also the case that there are protein sequences in which only certain regions are disordered under nominal conditions. Because of the preponderance of such sequences, their roles in a variety of cellular functions, and the potential role they play in diseases \cite{Dima04Bioinformatics,Oldfield14ARBiochem}, there is heightened interest in understanding their structural and dynamical properties \cite{Das15COSB,Zheng16JACS,Schuler16ARB,Levine17COSB}. The IDPs, whose backbone is relatively flexible (persistence length in the range (0.6 - 1.0) nm), are low complexity sequences containing a large fraction of charged residues and a smaller fraction of hydrophobic residues compared to their counterparts that adopt well-defined structures in isolation. As a consequence, water is likely to be a good or at best a $\Theta$ solvent, which means that $R_g \sim N^{\nu}$, where $\nu$ is approximately 0.6 or 0.5. There are differences between IDPs and random PAs.
(i) The sequences of IDPs are quenched, thus making it necessary to understand the conformations of a specific sequence. In other words, two sequences with identical charge composition could have drastically different structural characteristics. Of course, this could be the case for random PAs as well, although this aspect has not been investigated as much. (ii) Unlike the case of PAs for which $N \gg 1$, which allows one to develop analytical and scaling-type arguments using well-developed methods in polymer physics, typically studied IDPs have finite $N$, at best on the order of a few hundred residues. (iii) IDPs also contain uncharged amino acids, which are not usually considered when treating PAs using theory and simulations. Despite these differences, concepts from polyelectrolytes (PEs) and PAs have been used to envision the conformations of IDPs using the difference between positive and negative charge ($\sigma$) and net charge as appropriate variables \cite{Uversky00ProtSci}. The importance of sequence effects on the $R_g$ of PAs was first illustrated in a key study by Srivastava and Muthukumar \cite{Srivastava96Mac}. Using Monte Carlo simulations, with $N = 50$, they showed that there are substantial variations in $R_g$ in PAs (containing only charged monomers) for a globally neutral chain. This study showed that the location of charges (sequence specificity) plays a crucial role in determining the conformational properties. More recently, Firman and Ghosh \cite{Firman18JCP} used the Edwards model for charged polymers, encoding the precise sequence, in order to calculate $R_g$s for small $N$. Their theory successfully accounted for simulations of synthetic IDPs \cite{Das13PNAS}, containing only a mixture of positively and negatively charged residues. Here, we develop a theory to investigate the effects of charge fluctuations on the shapes of random PAs. In our model there is a probability $p_+$ ($p_-$) that a monomer at location $s$ is positively (negatively) charged. The probabilities, $p_+$ and $p_-$, should be calculated as follows. The number of sequences, $M$, of a PA containing $N$ monomers is $M = 3^N$ because each monomer can either have a $+$ or a $-$ charge or be neutral. We assume that there are no correlations between charges along a given sequence, which implies that the charge of monomer $s$ does not affect that of monomer $s^{\prime}$. Thus, $p_+$ and $p_-$ are independent of $s$. Let $N_{+}(s)$ ($N_{-}(s)$) be the number of sequences with $+$ ($-$) charge at position $s$. Then, $p_+ = \frac{N_{+}(s)}{M}$ and $p_- = \frac{N_{-}(s)}{M}$. Because $N_{+}(s) + N_{-}(s) + N_{0}(s) = M$ where $N_{0}(s)$ is the number of sequences in which the $s^{th}$ monomer is neutral, the probability that the $s^{th}$ monomer in an ensemble of $M$ sequences is neutral is $1 - p_{+} - p_{-}$. The fluctuations in the ensemble of PA sequences arise because the normalized charge distribution is taken to be stochastic, given by, \begin{equation} P[\sigma(s)] = p_+ \delta [\sigma(s) - 1] + p_{-} \delta [\sigma(s) + 1 ] + ( 1 - p_{+} - p_{-}) \delta [\sigma(s)]. \label{Dist} \vspace{.2 in} \end{equation} The charges are measured in units of the elementary charge $e$. Because some monomers do not carry a charge (as in IDPs), $(p_{+} + p_{-}) \ne 1$. The mean $\langle \sigma(s) \rangle$ gives the net charge, $p_{+} - p_{-}$, and the expression for the square of the charge fluctuations is $\langle \delta \sigma^{2}(s) \rangle = p_{+} + p_{-} - (p_{+} - p_{-})^2$.
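These two quantities are trivial to evaluate; the following minimal Python sketch (ours, purely illustrative; the function names are not from any library) computes them and draws one random sequence from the distribution in Eq.~(\ref{Dist}):
\begin{verbatim}
import random

def pa_variables(p_plus, p_minus):
    # sigma = p+ - p-,  <delta sigma^2> = p+ + p- - (p+ - p-)^2
    sigma = p_plus - p_minus
    delta_sigma_sq = p_plus + p_minus - (p_plus - p_minus) ** 2
    return sigma, delta_sigma_sq

def sample_sequence(N, p_plus, p_minus, seed=0):
    # Draw sigma(s) in {+1, -1, 0} independently for each monomer
    rng = random.Random(seed)
    weights = [p_plus, p_minus, 1.0 - p_plus - p_minus]
    return [rng.choices([+1, -1, 0], weights=weights)[0] for _ in range(N)]

# Example: p+ = 0.28, p- = 0.16 gives sigma = 0.12, <delta sigma^2> = 0.4256
\end{verbatim}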
We refer to $\langle \sigma(s) \rangle$ and $\langle \delta \sigma (s)\rangle$, both of which are independent of $s$, as PA variables. We show that $\langle \delta \sigma^{2}(s) \rangle$ alters the $R_g$ substantially, and could even induce a coil-globule transition even when the PA is not globally neutral. Because of the opposing behavior of polyelectrolyte ($\sigma \ne 0$) and PA effects arising from charge fluctuations ($\langle \delta \sigma^{2}(s)\rangle \ne 0$), the dependence of $R_g$ on the Debye screening length could be non-monotonic. The phase diagram in the [$\langle \sigma(s) \rangle$,$\langle \delta \sigma(s) \rangle$] plane is rich. We also apply the theory to calculate $R_g$ of specific IDP sequences. Remarkably, the theory reproduces quantitatively the $R_g$ values for the wild type Tau protein and various fragments obtained from the wild type Tau, which have been measured by Small Angle X-ray Scattering (SAXS) experiments \cite{Mylonas08Biochem}. In Tau, and other IDPs, charge fluctuations arise because of conformational heterogeneity, which we demonstrate explicitly elsewhere \cite{Upayan18JACS} for IDPs, and here for PAs using simulations. From now on we drop the angular brackets in both $\langle \sigma \rangle$ and $\langle \delta \sigma \rangle$. \section{Theory} We begin by considering the Edwards Hamiltonian for a polymer chain: \begin{equation}\label{hamiltonian} \mathcal{H}=\frac{3k_B T}{2 a_0^2} \int\limits_0^N \left(\frac{\partial \vec{r}}{\partial s}\right)^2 ds + k_B T{V}(\vec{r}(s)), \end{equation} where $\vec{r}(s)$ is the position of the monomer $s$, $a_0$ is the monomer size, and $N$ is the number of monomers. The first term in Eq.(\ref{hamiltonian}) accounts for chain connectivity, and the second term represents the sum of excluded volume interactions, electrostatic interactions, and effects of charge fluctuations (see below) due to the random values of charges in different positions in the ensemble of sequences. The expression for ${V}(\vec{r}(s))$ is, \begin{eqnarray}\label{Hapotential} {V}(\vec{r}(s))&=&\frac{v_0}{(2\pi a_0^2)^{3/2}}\sum\limits_{s,s'=0}^{N} \text{exp}[{-\frac{(\vec{r}(s)-\vec{r}(s'))^2}{2a_0^2}}]\\ \nonumber &+& l_B \int_0^N \int_0^N ds~ds'~ \sigma(s) \sigma(s') \frac{e^{-\kappa \mid \vec{r}(s)-\vec{r}(s') \mid}}{\mid \vec{r}(s)-\vec{r}(s')\mid }\\ \nonumber &=&V_0 +V_1(\mid \vec{r}(s)-\vec{r}(s')\mid ). \end{eqnarray} The first term in Eq.(\ref{Hapotential}) accounts for the non-specific two-body excluded volume interactions. It differs insignificantly from the usual $\delta$ function potential used in the standard Edwards model. Of course, when $a_0$ is small compared to $R_g$, the precise form of this term is irrelevant, as long as it is short-ranged. In a good solvent ($v_0>0$), the polymer chain swells with $R_g \sim a_0 N^\nu$ $(\nu\approx 0.6)$, whereas in a poor solvent ($v_0<0$), the size of the polymer is $R_g \sim a_0 N^\nu$ $(\nu\approx 1/3)$. Here, we consider a PA in a good solvent ($v_0>0$). From Eq.(\ref{Hapotential}) one may obtain an effective interaction term between charges on the PA chain. By following the theory developed previously \cite{Ha97JPF}, we use the Hubbard-Stratonovich transformation to decouple the product of charges $\sigma(s)\sigma(s')$ in Eq.\ref{Hapotential}. The partition function may be written as, \begin{eqnarray}\label{pf1} Z=\mathcal{N}^{-1} &&\int d[\psi(\vec{r})] \text{exp}\left[-\frac{1}{2}\int d\vec{r}d\vec{r'} \psi(\vec{r})\right.\\ \nonumber && \left. 
V_1^{-1}(\mid \vec{r}(s)-\vec{r}(s')\mid )\psi(\vec{r'})\right]Z_\psi \end{eqnarray} where $Z_\psi=\int d[\vec{r}]\text{exp}\left[ -V_0 - i\int ds\sigma(s) \psi(\vec{r}(s))\right] $, and $\mathcal{N}=\int d[\psi(\vec{r})] \text{exp}[-\frac{1}{2}\int d\vec{r}d\vec{r'} \psi(\vec{r}) V_1^{-1}(\mid \vec{r}(s)-\vec{r}(s')\mid )\psi(\vec{r'})]$. If we assume that the charge distribution (Eq. \ref{Dist}) is annealed, it suffices to average $Z_\psi$ over the sequence of charges. Under the assumption that the charges $\sigma(s)$ at distant sites are not correlated, the partition function averaged over the sequence of charges to second order in $\psi$ becomes,~\cite{Ha97JPF} \begin{eqnarray}\label{pf2} <Z_\psi>_{seq}&=&\int\mathcal{D}[\vec{r}]\text{exp} \{-i\sigma\int\psi(\vec{r})c(\vec{r})d\vec{r} \\ \nonumber &-& \frac{1}{2} (\delta \sigma)^2 \int[\psi^2(\vec{r})-<\psi^2(\vec{r})>_\psi]c(\vec{r}) d\vec{r} \end{eqnarray} where the average value of the charge on the chain is $\sigma=<\sigma(s)>=p_+ -p_-$, the charge fluctuation is $(\delta \sigma)^2= <\sigma^2(s)>-<\sigma(s)>^2=p_+ + p_- -(p_+ -p_-)^2$, and the local monomer density is $c(\vec{r}) =\int ds \delta(\vec{r}(s)-\vec{r})$. The term involving $(\delta \sigma)^2$, arising from the charge fluctuations, gives rise to the so-called PA effect, which is manifested as an effective attractive interaction of the screened Coulomb potential. Using Eq.(\ref{pf1}) and Eq.(\ref{pf2}), we perform the needed integration over $\psi(\vec{r})$ to obtain the following expression for the effective two-body interaction term between charges on the PA, \begin{eqnarray}\label{potential} \mathcal{V}(\vec{r}(s))&=&\frac{v}{(2\pi a_0^2)^{3/2}}\sum\limits_{s,s'=0}^{N} \text{exp}[{-\frac{(\vec{r}(s)-\vec{r}(s'))^2}{2a_0^2}}]\\ \nonumber &+& \sigma^2 l_B \int \int ds~ds'~ \frac{e^{-\kappa \mid \vec{r}(s)-\vec{r}(s') \mid}}{\mid \vec{r}(s)-\vec{r}(s')\mid }\\ \nonumber &-& \frac{1}{2}(\delta \sigma)^4 l_B^2 \sum_{\{s,s'\}}~ \frac{e^{-2\kappa \mid \vec{r}(s)-\vec{r}(s') \mid}}{\mid \vec{r}(s)-\vec{r}(s')\mid^2 }. \end{eqnarray} We neglect the three-body interactions in the effective Hamiltonian in Eq.(\ref{potential}), which would be important if the PA were in a poor solvent. In the work of Higgs and Joanny~\cite{Higgs91JCP}, the variational-type calculation (see below) was done directly using Eq.~3. In this case, upon expansion to second order in ${V}(\vec{r}(s))$, the electrostatic potential (second term in Eq.~3) generates a term $\propto \sigma(s) \sigma(s^{\prime}) \sigma(s^{\prime\prime})\sigma(s^{\prime\prime\prime})$, which is random. When averaged over the ensemble of sequences, the coefficient of the third term is $\propto (p_+ + p_-)^{2}$ in \cite{Higgs91JCP}. In contrast, we carry out the averaging first as shown in Eq.~(4), and hence obtain a different prefactor for the charge fluctuation ($\delta \sigma$) induced attraction term in Eq.~(6). The screened Coulomb potential, the second term in Eq.(\ref{potential}), accounts for the interactions between charges separated by a distance $\mid \vec{r}(s)-\vec{r}(s')\mid$. The strength of the unscreened electrostatic interactions is characterized by the Bjerrum length $l_B=e^2/\epsilon k_B T$. The Debye screening length, $\kappa^{-1}$, determines the range of the electrostatic interactions. By changing the value of $\kappa$, and hence the range of charge interactions, the PA chain could undergo a coil-to-globule transition.
The value of $\kappa$ may be changed by decreasing or increasing the salt concentration. The dimensionless parameter, $\sigma$, determines the net charge per residue on the polyelectrolyte chain. For a particular sequence, a fraction $p=p_+ + p_-$ of the monomers is charged, with the charge on each monomer being $\pm e$. Therefore, the net charge per monomer is $\sigma=\mid p_+ - p_-\mid$. The third term in Eq.(\ref{potential}) is the attractive interaction term that is proportional to charge fluctuations ($\delta \sigma$). The PA effect arises due to the interaction between charges and dipoles formed by sequences of positive and negative charges. The charge-dipole interaction term decays as $\mid \vec{r}(s)-\vec{r}(s')\mid^{-2}$ and it is effectively screened (with a screening length $1/2\kappa$) due to the presence of other dipoles. In the absence of the third term the Hamiltonian would describe a polyelectrolyte, whose phases as a function of temperature and $\kappa$ have been previously described using the methods used here \cite{Ha92PRA}. In order to obtain $R_g$, we adopt the Edwards-Singh (ES) type variational calculation \cite{Edwards79JCSFT}, which has been extensively used in the polymer literature \cite{Muthukumar82JCP,Muthu87JCP,Higgs91JCP,Ha92PRA,Ha99JCP}. More recently, the method was used to study the sequence dependence of the collapse of polypeptide chains \cite{Himadri17SM} and polyelectrolytes \cite{Firman18JCP} with application to a special class of synthetic IDPs. In developing the theory, we assume that the interactions between charges exist only between specific monomers, described by the second and third terms in Eq.(\ref{potential}). The sum is over the set of specific contacts between pairs $\{s_i, s_j\}$. We use the contact maps of IDPs, generated in coarse-grained (CG) simulations \cite{Upayan18JACS}, in order to assign the specific interactions. The contact map from the simulation is computed by using a cutoff of 8 \AA. The contacts are included between all side chain beads. In the two-bead CG model,\cite{Upayan18JACS} the charges are positioned on the centers of mass of the side chain beads, and therefore the contact map includes charge-charge contacts. The ES method is a variational-type calculation (referred to as the uniform expansion method) that represents the exact Hamiltonian by a Gaussian chain with an effective monomer size, which is determined as follows. Consider a virtual chain without excluded volume interactions, whose radius of gyration $\langle R_{g}^{2} \rangle=N a^{2}/6$ \cite{Edwards79JCSFT}, described by the Hamiltonian, \begin{equation} \mathcal{H}_v=\frac{3k_B T}{2 a^2} \int\limits_0^N \left(\frac{\partial \vec{r}}{\partial s}\right)^2 ds. \end{equation} The monomer size in the trial Hamiltonian is $a$. We split the deviation $\mathcal{W}$ between the virtual chain Hamiltonian and the real Hamiltonian as, \begin{equation} \mathcal{H}-\mathcal{H}_v=k_BT\mathcal{W}=k_BT(\mathcal{W}_1+\mathcal{W}_2), \end{equation} where \begin{equation} \mathcal{W}_1=\frac{3}{2 }\left(\frac{1}{a_0^2}-\frac{1}{a^2}\right) \int\limits_0^N \left(\frac{\partial \vec{r}}{\partial s}\right)^2 ds, ~ \mathcal{W}_2=\mathcal{V}(\vec{r}(s)). 
\end{equation} The radius of gyration is $R_g^2=\frac{1}{N} \int\limits_0^N \langle\vec{r}^2(s)\rangle ds$, with the average being, $\langle\vec{r}^2(s)\rangle=\frac{\int r^2 e^{-\mathcal{H}_v/k_BT}e^{\mathcal{-W}} \delta\vec{r}}{\int e^{-\mathcal{H}_v/k_BT}e^{\mathcal{-W}} \delta\vec{r}}=\frac{\langle\vec{r}^2(s)e^{\mathcal{-W}}\rangle_v}{\langle e^{\mathcal{-W}}\rangle_v}$, where, $\langle \cdots \rangle_v$ denotes the average over $\mathcal{H}_v$. Assuming that the deviation $\mathcal{W}$ is small, we can calculate the average to first order in $\mathcal{W}$. The result is, $ \langle\vec{r}^2(s)\rangle \approx \frac{\langle\vec{r}^2(s)(1-\mathcal{W})\rangle_v}{\langle (1-\mathcal{W})\rangle_v} \approx \langle\vec{r}^2(s)(1-\mathcal{W})\rangle_v\langle (1+\mathcal{W})\rangle_v $ and the radius of gyration becomes, \begin{eqnarray}\label{rg} &&<R_g^2>=\frac{1}{N} \int\limits_0^N \langle\vec{r}^2(s)\rangle ds\\ \nonumber && = \frac{1}{N} \int\limits_0^N [\langle\vec{r}^2(s)\rangle_v + \langle\vec{r}^2(s)\rangle_v \langle\mathcal{W}\rangle_v -\langle\vec{r}^2(s)\mathcal{W}\rangle_v] ds. \end{eqnarray} If we choose the effective monomer size $a$ in $\mathcal{H}_v$, such that the first order correction (second and third terms in the right hand side of Eq.(\ref{rg})) vanishes, then the size of the chain is, $\langle R_{g}^{2} \rangle=N a^{2}/6$. This is an estimate of the exact $\langle R_g^2 \rangle$, and is only an approximation as we have neglected $\mathcal{W}^2$ and higher powers of $\mathcal{W}$. Thus, in the ES theory, we determine $a$ using Eq. (\ref{rg}), \begin{equation}\label{first} \vspace{-.1 in} \frac{1}{N} \int\limits_0^N [ \langle\vec{r}^2(s)\rangle_v \langle\mathcal{W}\rangle_v -\langle\vec{r}^2(s)\mathcal{W}\rangle_v] ds=0. \end{equation} The equation above leads to a self-consistent equation for $a$, and is given by \cite{Edwards79JCSFT}: \begin{equation} \vspace{-.3 in} \frac{1}{a_0^2}-\frac{1}{a^2}= \frac{ \frac{1}{N}\int\limits_0^N [\langle\vec{r}^2(s)\rangle_v \langle\mathcal{V}\rangle_v -\langle\vec{r}^2(s)\mathcal{V}\rangle_v]ds}{ \frac{a^2}{N}\int_{0}^N ds \ \langle\vec{r}^2(s)\rangle_v}. \vspace{.2 in} \end{equation} By calculating the averages in the Fourier space ($\vec{r}_n=\frac{1}{N}\int\limits_1^N \cos\left({ \frac{\pi n s}{N}}\right) \vec{r}(s) ds$; $\vec{r}(s)=2\sum\limits_{n =1}^{N}\cos\left({\frac{\pi n s}{N}}\right)\vec{\tilde{{r}_n}}$; $R_g^2=2\sum\limits_n \langle|{\vec{\tilde{r_n}}}^2|\rangle$), we obtain \begin{widetext} \vspace{-.15 in} \begin{eqnarray}\label{selfconsistent} \frac{1}{a_0^2}-\frac{1}{a^2}&=&\frac{4N a_0^3}{9\pi \sum{\frac{1}{n^2}}}\sum\limits_{s,s'=0}^N \frac{C^{ss'}_{1}}{(\frac{a^2 N}{3\pi^2}C^{ss'}_{2}+a_0^2)^{\frac{5}{2}}} \\ \nonumber && + \frac{4N \sigma^2 l_B}{9\pi^3 \sum{\frac{1}{n^2}}}\sum\limits_{s,s'=0}^N C^{ss'}_{1} \left( \frac{\pi^{1/2}(1-2\frac{2\kappa^2 a^2 N C^{ss'}_2}{3\pi^2})}{4(\frac{2a^2 N}{3\pi^2} C^{ss'}_{2})^{3/2}} +\frac{\pi \kappa^3}{2} e^{(\frac{2a^2 \kappa^2 N C^{ss'}_{2}}{3\pi^2})}\text{erfc}[ \kappa \sqrt{\frac{2a^2N}{3\pi^2} C^{ss'}_{2}}]\right) \\ \nonumber &&- \frac{4N (\delta\sigma)^4 l_B^2}{9\pi^3 \sum{\frac{1}{n^2}}}\sum\limits_{\{s,s'\}} C^{ss'}_{1} \int_0^\infty dq~q^3 \left(\pi-\text{arctan}\left(\frac{2\kappa}{q}\right)\right) \text{exp}\left(-q^2 \frac{2a^2N}{3\pi^2} C^{ss'}_{2}\right) \end{eqnarray} \end{widetext} where, $C^{ss'}_{1}= \sum\limits_{n=1}^{N}\frac{1-\cos[n \pi(s-s')/N]}{n^4}$ and $C^{ss'}_2=\sum\limits_{n=1}^{N}\frac{1-\cos[n \pi(s-s')/N]}{n^2}$. In obtaining Eq. 
\ref{selfconsistent} we have used $v_0 = \frac{4 \pi a_0^3}{3}$ in Eq.~(\ref{Hapotential}). From Eq.(\ref{selfconsistent}), we can calculate the effective monomer size $a$, and hence the chain size $<R_g^2>=\frac{a^2 N}{6}$. However, without having to solve Eq.(\ref{selfconsistent}) numerically, we can define the $\Theta$-like point, which signals the onset of a potential transition from a coil to a globule state in the PA. At the $\Theta$-point, the repulsive terms exactly balance the PA term. Since at the $\Theta$-point the PA behaves as a Gaussian chain, with $a=a_0$, we substitute this value for $a$ in Eq.\ref{selfconsistent} to determine the condition for the $\Theta$-point. Thus, from Eq.(\ref{selfconsistent}), the critical charge fluctuation value at which the PA term equals the excluded volume and PE terms is, \begin{widetext} \begin{eqnarray}\label{deltasigma} (\delta\sigma_{\theta}^2)^2&=&\left[\frac{4N a_0^3}{9\pi \sum{\frac{1}{n^2}}}\sum\limits_{s,s'=0}^N \frac{C^{ss'}_{1}}{(\frac{a^2 N}{3\pi^2}C^{ss'}_{2}+a_0^2)^{\frac{5}{2}}} \right. \\ \nonumber && \left.+ \frac{4N \sigma^2 l_B}{9\pi^3 \sum{\frac{1}{n^2}}}\sum\limits_{s,s'=0}^N C^{ss'}_{1} \left( \frac{\pi^{1/2}(1-2\frac{2\kappa^2 a^2 N C^{ss'}_2}{3\pi^2})}{4(\frac{2a^2 N}{3\pi^2} C^{ss'}_{2})^{3/2}} +\frac{\pi \kappa^3}{2} e^{(\frac{2a^2 \kappa^2 N C^{ss'}_{2}}{3\pi^2})}\text{erfc}[ \kappa \sqrt{\frac{2a^2N}{3\pi^2} C^{ss'}_{2}}]\right)\right]/ \\ \nonumber &&\left[ \frac{4N l_B^2}{9\pi^3 \sum{\frac{1}{n^2}}}\sum\limits_{\{s,s'\}} C^{ss'}_{1} \int_0^\infty dq~q^3 \left(\pi-\text{arctan}\left(\frac{2\kappa}{q}\right)\right) \text{exp}\left(-q^2 \frac{2a^2N}{3\pi^2} C^{ss'}_{2}\right)\right] \end{eqnarray} \end{widetext} The numerator in Eq.(\ref{deltasigma}) is a consequence of the repulsive excluded volume and polyelectrolyte terms. The denominator encodes the PA effect, determining the extent to which the size of the polymer changes due to charge fluctuations. Using Eq.(\ref{deltasigma}), we obtain the dependence of $\delta \sigma_{\theta}$ on $N$. Scaling $n$ by $N$, it can be shown that $C_1^{ss'}\sim \frac{1}{N^2}$ and $C_2^{ss'}\sim 1$. From these results, we obtain $\delta \sigma_\theta \sim \sqrt{N}$. The implication is that for $N \gg 1$, charge fluctuations have to be extremely large to drive a coil-globule transition unless the PA is globally neutral. Because even for a statistically neutral PA the PE term would not be irrelevant, we surmise that a genuine coil-globule transition may not be easily realizable in long PAs, which is in accord with the results in a previous study \cite{Yamakov00PRL}. By implication, our theory suggests that maximally compact IDPs would be difficult to obtain for generic IDP sequences if the fractions of + and - charged residues are on the order of (0.4 - 0.5). Of course, establishing the various conformations IDPs or PAs adopt as the nominal PA parameters and salt concentration ($\sigma$, $\delta \sigma$, and $\kappa$) are varied will require performing detailed calculations, as was previously done for polyelectrolytes \cite{Lee01Macromolecules}. \section{Simulations} \textbf{Model:} To provide further insights into some aspects of our theoretical predictions, and to highlight the nature of the heterogeneous ensembles that are sampled, we carried out simulations of PA chains, with $N=50$. We consider sequences having the same net charge, $\sigma$, but different charge distributions to elucidate the role of sequence in determining the size of PAs.
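Such fixed-composition sequences are straightforward to generate by random permutation; the following minimal Python sketch (ours, purely illustrative) uses the composition $p_+ = 0.28$, $p_- = 0.16$ (net charge $+6$ for $N=50$) adopted in the sequence-effects study described below:
\begin{verbatim}
import random

def random_pa_sequence(n=50, n_plus=14, n_minus=8, seed=None):
    # Fixed composition: p+ = 14/50 = 0.28, p- = 8/50 = 0.16 (net charge +6).
    # Shuffling permutes the charge decoration while keeping the PA
    # variables sigma and delta sigma identical across realizations.
    seq = [+1] * n_plus + [-1] * n_minus + [0] * (n - n_plus - n_minus)
    random.Random(seed).shuffle(seq)
    return seq

# Twenty realizations with identical composition but different decorations
sequences = [random_pa_sequence(seed=s) for s in range(20)]
\end{verbatim}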
The PA chain is modeled using a standard bead-spring model for charged polymers, with the total potential energy, $U_{tot}$, given by: \begin{equation} U_{tot} = U_{ch} + U_{exv} + U_{elec}. \end{equation} \noindent Here, $U_{ch}$ describes the chain connectivity between the beads, and is modeled using the FENE potential: \begin{equation} U_{ch} = \sum_{i}^{N_{bonds}}-0.5 k R_{0}^{2} \ln \left[ 1 - \left( \frac{l_{i}-l_{0}}{R_{0}} \right) ^{2} \right]. \end{equation} \noindent In Eq.~16, $k = 20$\,kcal mol$^{-1}$ \AA$^{-2}$ denotes the spring constant; $l_{0} = 3.8$\,\AA \,\, is the equilibrium bond length between the connected PA beads; and $R_{0} = $\,2 \AA\,\, controls the maximum allowable deformation of the covalent bonds. The excluded volume interactions between pairs of beads are described by a truncated and shifted Lennard-Jones potential: \begin{equation} U_{exv} = \sum _{i,j} ^{N_{pairs}} 4 \epsilon \left[\left( \frac{\sigma}{r_{ij}}\right)^{12} - \left( \frac{\sigma}{r_{ij}} \right) ^ {6} + \frac{1}{4} \right]. \end{equation} Based on previous work,~\cite{Kremer,PA_sim, PA_sim2} we set $\epsilon = k_{B}T$ and $\sigma = l_{0}$. The pairwise interactions between the beads are ignored if the distance is greater than $2^{1/6}\sigma$. This cutoff ensures that the excluded volume term is purely repulsive. The interactions between charged beads are taken into account via the screened Coulomb potential: \begin{equation} U_{elec} = \sum_{i,j}^{N_{charged}} \frac{q_{i}q_{j}}{2 \varepsilon r_{ij}}\exp(-\kappa r_{ij}). \end{equation} In Eq.~18, $\varepsilon$ and $\kappa$ are the dielectric constant and the inverse Debye length, respectively. We consider only unit charges, i.e., $q = \pm e$. \textbf{Simulations:} The conformational space of each PA chain is explored using Langevin dynamics. For each PA bead, the stochastic equation of motion is given by: $m\bm{\ddot r}_{i} = -\gamma \bm{\dot r}_{i} + \bm{F}_{i} + \bm{g}_{i}$, where $m$ is the mass, $\bm{F}_{i}$ is the conservative force acting on each bead, and $\gamma$ is the drag coefficient. The Gaussian random force, $\bm{g}_{i}$, satisfies $\langle \bm{g}_{i}(t) \bm{g}_{j}(t^{\prime})\rangle = 6k_{B}T \gamma \delta_{ij} \delta(t-t^{\prime})$. The drag coefficient $\gamma$ is given by: $\gamma = m/\tau_{eff}$, where $\tau_{eff} = \sigma (m/\epsilon)^{1/2}$ is the effective time scale. We used a variant of the velocity Verlet scheme~\cite{honeycutt_dt} to integrate the equations of motion, using a time step of $\Delta t = 0.01\,\tau_{eff}$. Each simulation was carried out for 1.2 $\times$ 10$^{9}$ steps to ensure proper equilibration, and to obtain meaningful statistics. \textbf{Analysis:} Following Eq.~6, we can estimate the charge fluctuations for each PA chain from simulations using an approximate expression: \begin{equation} \langle \delta^{2}{U_{elec}} \rangle \approx \frac{(k_{B}T)^{2} \delta \sigma ^{4}_{c} l_{B}^{2}}{\langle R_{g} \rangle ^{2}}, \end{equation} \noindent where $\delta U_{elec} = U_{elec} - \langle U_{elec} \rangle$ is the fluctuation in the electrostatic energy (Eq.~18) about its mean, and $\delta \sigma_{c}$ denotes the charge fluctuation computed from the ensemble of sequence-specific conformations generated from simulations (see below).
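The energy terms above, and the inversion of Eq.~19 for $\delta \sigma_{c}$, are easy to express in code. The sketch below is our own illustration, not the production simulation code; the Coulomb prefactor of $\approx 332$~kcal\,\AA\,mol$^{-1}e^{-2}$, $k_BT \approx 0.6$~kcal/mol, and $l_B \approx 7$~\AA\ are assumed reference values:
\begin{verbatim}
import numpy as np

K, L0, R0 = 20.0, 3.8, 2.0      # kcal/mol/A^2, A, A  (Eq. 16)
EPS, SIG = 0.6, 3.8             # eps = k_B T (assumed ~0.6 kcal/mol), sigma = l0

def u_fene(l):                  # l: array of bond lengths (Eq. 16)
    return np.sum(-0.5 * K * R0**2 * np.log(1.0 - ((l - L0) / R0)**2))

def u_exv(r):                   # r: array of pair distances (Eq. 17, WCA form)
    r = r[r < 2**(1.0 / 6.0) * SIG]   # purely repulsive cutoff
    sr6 = (SIG / r)**6
    return np.sum(4.0 * EPS * (sr6**2 - sr6 + 0.25))

def u_elec(r, qq, kappa, eps_r, coul=332.0):   # Eq. 18; qq = q_i q_j in e^2
    return np.sum(coul * qq * np.exp(-kappa * r) / (2.0 * eps_r * r))

def delta_sigma_c(var_uelec, rg_mean, kT=0.6, lB=7.0):
    # Invert Eq. 19: delta_sigma_c^4 = <delta^2 U> <Rg>^2 / (kT^2 lB^2)
    return (var_uelec * rg_mean**2 / (kT**2 * lB**2))**0.25
\end{verbatim}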
To characterize the structural heterogeneity of the PA ensembles, and to identify the most populated equilibrium conformations, we carried out hierarchical clustering of the simulation trajectories using a pairwise distance metric, $D_{ij}$, defined as: \begin{equation} D_{ij} = \frac{1}{N_{p}} \sum _{a,b} \vert \left( r_{a,b} ^{i} - r_{a,b}^{j} \right) \vert, \end{equation} \noindent where $r_{a,b}^{i}$ and $r_{a,b}^{j}$ are the pairwise distances between the PA beads $a$ and $b$, in snapshots $i$ and $j$, respectively. Distinct geometric clusters were identified using the Ward variance minimization algorithm,\cite{ward} as implemented within the \textit{scipy} module. The hierarchical organization of conformations into distinct families was visualized in the form of dendrograms. \section{Results} \textbf{Theoretical Predictions:} From Eq. \ref{potential} it is easy to show that the size of the PA should be determined by ${\frac{\delta \sigma}{\sigma}}$, which can be written as $\sqrt{(1 - \sigma_A)/\sigma}$ where $\sigma_A = \frac{(p_+ - p_-)^2}{\sigma}$, which in the IDP literature is referred to as the charge asymmetry parameter. Fig. (1), displaying the dependence of the radius of gyration for a PA chain, with randomly distributed charges, on the inverse screening length, shows that $R_g$ changes non-monotonically as $\kappa$ increases when $\frac{\delta \sigma}{\sigma} = 10$. In this charge-fluctuation dominated regime, the behavior can be explained by noting that at small values of $\kappa$, $R_g$ increases due to the PA term until $\kappa l_B \approx 0.19$. In this range of $\kappa$, the effective attraction between monomers, the PA effect, decreases upon adding ions to the solvent. As a result, the size of the chain increases. For the PA whose $R_g$ is shown in Fig. (1), at $\kappa l_B=0.19$ the PA and PE effects balance each other, and the chain becomes a random coil. Upon further increase in $\kappa$, the decrease in $R_g$ (but the chain is not a globule) is due to the dominance of the PA term. In the opposite limit, when $\frac{\delta \sigma}{\sigma} = 0.1$, the dimensions of the chain are dominated by the PE term, and $R_g$ decreases with increasing $\kappa$. We expect that at sufficiently large values of $\kappa l_B$ the consequences of PE and PA effects are negligible, and hence, $R_g$ would have the value expected for a Flory random coil ($\nu = 0.6$). Interestingly, these trends are qualitatively similar to experiments on two IDPs (N-terminal domain of HIV-1 integrase, and human prothymosin-$\alpha$)~\cite{Mueller-Spaeth10PNAS}. \begin{figure}[h] \includegraphics[width=0.4\textwidth]{fig1idp.pdf} \caption{Non-monotonic variation of the radius of gyration, $R_g$, with increasing value of the inverse screening length $\kappa$ for a PA chain ($N=150$).} \end{figure} \textbf{Predictions for a Tau-like IDP:} The generality of the theory allows us to predict the dependence of the size of an IDP on $\kappa$. In Fig.(2) we plot the $\kappa$ dependence of the radius of gyration of a Tau protein fragment (K17Tau with $N$=145). To perform the calculations, we used the contact map generated in simulations based on the Self-Organized Polymer (SOP)-IDP model, which captures accurately the measured structure factors for a variety of IDPs \cite{Upayan18JACS}. Using input from simulations,~\cite{Upayan18JACS} which accounts for the heterogeneity of the conformational ensembles of IDPs, we find that $R_g$ of K17Tau changes non-monotonically with increasing value of $\kappa$ (Fig. (2)).
The size of the K17Tau protein increases with $\kappa$ until it reaches a maximum at $\kappa l_B \approx 0.28$ (ion density for a monovalent salt is $\approx$ 30.5 mM), where the protein behaves like a polymer in a $\Theta$-solvent. The peak is broader compared to that of the chain with random sequences. With further increase in $\kappa$, $R_g$ decreases just as for the random PA chain (Fig. 1). From these results, we conclude that charge fluctuations are substantial in K17Tau. \begin{figure}[h] \vspace{.4 in} \includegraphics[width=0.4\textwidth]{fig2idp.pdf} \caption{The size of the chain ($R_g$) changes non-monotonically with increasing value of $\kappa$ for the K17Tau ($N$=145) protein. The parameters used to generate the plot are $\delta \sigma =0.95$ and $\sigma=0.01$.} \end{figure} \textbf{Phase diagram:} The 3D plots in Figs. (3) and (4), for two different values of $\kappa$, show the phase diagram for different values of $\sigma$ and $\delta \sigma$. The plot in Fig. (3) shows that for small values of $\sigma$, the change in $R_g$ is significant at a particular value of $\delta \sigma$. For a large value of net charge, say $\sigma=0.8$, the change in $R_g$ is small over a range of values of $\delta \sigma$, indicating that PE effects dominate. The value of $\delta\sigma_\theta$ increases with $\sigma$ for a PA chain. In Fig. (4), for $\kappa=0.4$\,nm$^{-1}$ and for a high value of net charge $\sigma$, the change in $R_g$ is significant at a particular value of $\delta \sigma$, indicating that PA effects dominate. The phase diagrams show that by changing the salt concentration, the sizes of random PAs can be altered dramatically. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{fig6idp.pdf} \caption{Phase diagram for different values of net charge $\sigma$ and the charge fluctuation parameter $\delta \sigma$. The chain size decreases monotonically as $\delta \sigma$ increases. The parameters for the plot: $N=150$ and $\kappa=0.2$\,nm$^{-1}$.} \end{figure} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{fig3idp.pdf} \caption{Phase diagram for different values of net charge $\sigma$ and the charge fluctuation parameter $\delta \sigma$. The chain size decreases monotonically as $\delta \sigma$ increases. The parameters for the plot: $N=150$ and $\kappa=0.4$\,nm$^{-1}$.} \end{figure} \textbf{Application to IDPs:} In order to calculate $R_g$ for several IDPs (see Fig. 5), with $N$ ranging from 24 (HISTATIN5) to 441 (hTau40), we used the average contact maps from simulations \cite{Upayan18JACS}, which restricts the summation in Eq.\ref{potential} to specific sites on the IDP. The dependence of $R_g$ on the chain length for a set of IDPs (listed in the caption to Fig. 5) is shown in Fig. 5. The theory shows that $R_g \sim N^{0.6}$, implying that these IDPs behave as self-avoiding polymers, similar to the results in the simulations for PAs \cite{Yamakov00PRL}. The scaling in Fig. 5 has a weaker $N$ dependence than predicted by the renormalization group argument ($R_g \sim N$) for long PAs \cite{Kantor91EPL}. The $N$ dependence in Fig. 5 has a higher power than the result in \cite{Higgs91JCP} ($R_g \sim N^{1/3}$). It appears that for the values of $\sigma$ observed in this set of IDPs, the random coil behavior is the apt description. \begin{figure}[h] \includegraphics[width=0.4\textwidth]{fig7idp.pdf} \caption{The size $R_{g}$ increases with $N$ as $R_g \sim 0.26 N^{0.6}$ for specific contacts. 
The symbols denote the $R_g$ values calculated from the theory (using Eq.~6), and the green line is the power law fit. The scaling is unchanged even when the restriction to specific interactions in Eq.~6 is removed. The orange squares correspond to the $R_{g}$ values for the IDP sequences: ACTR (N=65), An16 (N=185), aSyn (N=140), ERMnTAD (N=122), HISTATIN5 (N=24), hNHE1 (N=131), NUP153 (N=81), p53 (N=93), ProtA (N=111), SH4UDsrc (N=85) and Sic1 (N=90). The $R_g$ values for the hTau40 protein and other constructs derived from it are denoted by blue circles. The various Tau protein constructs are: K18Tau ($N$=130), K17Tau ($N$=145), K27Tau ($N$=167), K10Tau ($N$=168), K32Tau ($N$=198), K44Tau ($N$=283), K23Tau ($N$=254), K25Tau ($N$=185), hTau23 ($N$=352), hTau40 ($N$=441) and K16Tau ($N$=176). The parameters used to generate the plot are $\sigma = 0.01$, $\delta \sigma =0.8$ and $\kappa l_B=0.324$. } \end{figure} \textbf{Sequence effects and conformational heterogeneity:} To illustrate the effect of charge fluctuations on the conformational ensemble of PAs, we performed simulations of PA chains using a simple off-lattice model. The PA variables $p_{+}$ and $p_{-}$, as well as the other simulation parameters, are kept fixed. Hence, any variations in the size, or the underlying conformational heterogeneity of the PA sequences, are entirely due to the different charge distributions. In a recent study, Firman and Ghosh~\cite{Firman18JCP} identified combinations of $p_{+}$ and $p_{-}$ for which coil-to-globule transitions are expected to be extremely sensitive to the charge decoration along the PA chain. Taking a cue from their work, we consider PA sequences with a net charge of +6, with $p_{+} = 0.280$ and $p_{-} = 0.160$. Twenty different realizations of charge distributions were generated by randomly permuting the positions of the neutral, positive, and negative beads along the chain. The ensemble-averaged $R_{g}$ values fall in the range from 1.77 to 2.02\,nm. The spread in $R_{g}$s is interesting considering that in all these sequences $N=50$, and the PA variables, $\sigma$ and $\delta \sigma$, which are often used to analyze data in the IDP community, are identical. To explain these differences in terms of conformational fluctuations linked to $\delta \sigma$, we consider three representative examples (\textit{Seq1}, \textit{Seq2}, and \textit{Seq3}, in Fig.~6). The peak of the $R_{g}$ distribution progressively shifts towards lower values in going from \textit{Seq1} to \textit{Seq3} (Fig.~6), clearly indicating that standard PA variables are not sufficient to fully describe the equilibrium properties. Insights into the relative populations of the coil-like and the globule-like states for the three PA sequences can be obtained from the hierarchical arrangement of structural clusters (Fig.~7). The structural ensemble of \textit{Seq1} is clearly dominated by extended conformations, which account for 64.1\% of the equilibrium population. Compact structures, on the other hand, have a lower occupation probability (35.9\%). For \textit{Seq2}, the relative populations of extended and compact structures are approximately the same, being 49.5\% and 50.5\%, respectively. As is evident from the dendrogram, the equilibrium ensemble of \textit{Seq3} is primarily dominated by compact structures (net population of 73.4\%), which is in complete contrast to \textit{Seq1}.
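The cluster populations quoted above follow from the hierarchical analysis described in the Analysis subsection. A minimal sketch of that workflow (ours; it assumes a precomputed array \texttt{snapshot\_distances} of per-snapshot pairwise bead-bead distances) is:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def snapshot_metric(snapshot_distances):
    # D_ij = (1/N_p) * sum_ab |r_ab^i - r_ab^j|   (Eq. 20)
    n = snapshot_distances.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        D[i] = np.abs(snapshot_distances - snapshot_distances[i]).mean(axis=1)
    return D

D = snapshot_metric(snapshot_distances)              # assumed precomputed input
Z = linkage(squareform(D, checks=False), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")      # extended vs. compact
populations = np.bincount(labels)[1:] / labels.size
\end{verbatim}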
The contrasting heterogeneity of the conformational ensembles for the three PA sequences, with identical $N$, $\sigma$, and $\delta \sigma$, readily explains the differences in $R_{g}$. The distributions of $\delta U_{elec}$ (Fig.~6), together with the approximate values of $\delta \sigma_{c}$ computed using Eq.~19 (\mbox{Table 1}), provide clear-cut evidence of the key role of the charge fluctuations in determining chain dimensions. For \textit{Seq1}, which has a high propensity to form extended structures, the $\delta U_{elec}$ distribution is narrow, and the charge fluctuation is minimal. In fact, the variance to mean ratio (VMR) of the electrostatic energy suggests that the charge distribution of \textit{Seq1} would correspond to the theoretical limit $\frac{\delta \sigma}{\sigma} < 1$, where PE effects dominate. For \textit{Seq3}, the $\delta U_{elec}$ distribution is quite broad, and charge fluctuation effects are dominant, in perfect harmony with the clustering analysis, which revealed that \textit{Seq3} is mostly associated with compact structures. Furthermore, the VMR suggests that in contrast to \textit{Seq1}, the appropriate theoretical limit would be $\frac {\delta \sigma}{\sigma} > 1$, the regime where PA effects dominate. \textit{Seq2}, for which the equilibrium populations of compact and extended structures are approximately equal, presents an interesting scenario. As expected, the estimated charge fluctuation falls between the two extremes. The VMR $\approx$\,1 implies that the corresponding theoretical limit would be $\frac{\delta \sigma}{\sigma} \approx 1$. Therefore, the random-coil-like behavior predicted from structural clustering manifests itself due to a balancing act of the PA and PE effects. We draw two important conclusions. (i) The scaling of $R_{g}$ with $N$, which for a certain class of IDPs obeys the Flory scaling law, can be used to accurately determine $R_{g}$ without the need for experiments or simulations. The class of IDPs for which $R_{g}$ can be computed using \mbox{$R_{g} \sim N^{\nu}$} ($\nu \approx 0.6$) can be discerned using $\sigma$ and $\delta \sigma$. (ii) However, a quantitative description of the equilibrium properties of IDPs as a function of salt concentration or denaturants requires a complete characterization of the conformational ensemble, as simulations explicitly demonstrate. The analyses of charge fluctuations show that $\delta \sigma$, which can be readily calculated for a specific sequence (in our simulations they are identical), is substantially modified when weighted by the energies associated with the conformational ensemble (denoted as $\delta \sigma_{c}$). Thus, only by understanding the details of the conformations can the properties of IDPs be correctly described. Alas, the average $R_{g}$ masks such subtle but important effects. \begin{table} \begin{tabular}{| l | l | l | l | l |} \hline Seq & $\langle \delta ^{2} {U_{elec}} \rangle$ (kcal/mol) & $\langle R_{g} \rangle$ (nm) & $\delta \sigma_{c}$ & $ \vert \langle \delta^{2}{U_{elec}} \rangle/\langle U_{elec} \rangle \vert$\\ \hline Seq1 & 0.23 & 2.03 & 1.53 & 0.83 \\ Seq2 & 1.04 & 1.87 & 2.14 & 1.02\\ Seq3 & 4.07 & 1.78 & 2.93 & 9.49\\ \hline \end{tabular} \caption{ The values of the charge fluctuation, $\delta \sigma_{c}$, for the different sequences. Note that $\delta \sigma_{c}$ is computed from the ensemble of sequence-specific conformations using Eq.~19. 
Also shown are the variance to mean ratios (VMR) for the electrostatic energy.} \end{table} \begin{figure}[htbp!] \includegraphics[width=0.44\textwidth]{sequences.pdf} \includegraphics[width=0.45\textwidth]{distrg.pdf} \includegraphics[width=0.45\textwidth]{distelec.pdf} \caption{Top: The cartoon representations of the PA sequences having different charge decorations along the chain with $p_{+}=0.280$ and $p_{-} = 0.160$. The beads are color-coded according to charge: the neutral beads are colored green, positively charged beads are colored red, and negatively charged beads are colored blue. Middle: The distribution of $R_{g}$ for \textit{Seq1} (brown), \textit{Seq2} (cyan), and \textit{Seq3} (orange). Bottom: The distributions of $\delta U_{elec} = U_{elec} - \langle U_{elec} \rangle$ for the PA sequences, shown with the same color coding.} \end{figure} \begin{figure}[htbp!] \includegraphics[width=0.44\textwidth]{dendrogrampattern4.pdf} \includegraphics[width=0.44\textwidth]{dendrogrampattern8.pdf} \includegraphics[width=0.44\textwidth]{dendrogrampattern6.pdf} \caption{The conformational heterogeneity of PA sequences depicted in the form of dendrograms (Top: \textit{Seq1}, Middle: \textit{Seq2}, and Bottom: \textit{Seq3}). The olive branches lead to extended configurations, and magenta branches lead to collapsed structures. Representative snapshots corresponding to the different clusters are also depicted. The relative cluster populations are marked near the appropriate dendrogram branches.} \end{figure} \section{Conclusions} We developed a theory to quantitatively predict the effect of charge fluctuations ($\delta \sigma$) on the size of flexible PAs in a good solvent (excluded volume interactions are positive) as a function of the inverse Debye length and the net charge per monomer on the chain ($\sigma$). Interestingly, when charge fluctuations are non-negligible ($\frac{\delta \sigma}{\sigma}$ is greater than unity), the radius of gyration varies non-monotonically as $\kappa$ increases. When $\frac{\delta \sigma}{\sigma}$ is less than unity, $R_g$ decreases with increasing $\kappa$. The generality of the theory allows us to predict $R_g$ for a number of IDPs. For a certain class of IDPs, we find the usual scaling of $R_g \sim N^{\nu}$ with $\nu = 0.6$, which coincides with the behavior expected for Flory random coils. Remarkably, our theory gives accurate estimates of the size of the Tau protein, and various fragments derived from it. This class of IDPs behaves as an ideal chain. The differences in the scaling behavior between these two classes of IDPs can be rationalized in terms of the interplay between charge fluctuations and net charge per monomer. What could be the origin of charge fluctuations in an IDP in which $\sigma$ (more precisely, the sequence) is fixed? Even with $\sigma$ fixed, the ensemble of conformations that a typical IDP or PA samples is heterogeneous. In sampling a large number of conformations, the spatial distances between charged residues could vary greatly. Therefore, the effective charge of each conformation is different. In some of the conformations, positively and negatively charged residues would be close together, whereas in others they would be spatially well-separated. This gives rise to conformation-dependent effective attraction, which is quantified in our theory in terms of the average quantity $\delta \sigma$. 
Of course, the effective value of $\delta \sigma$ cannot be computed for the quenched charge sequence of a specific IDP without suitable simulations (see Figs.~7 and 8, which illustrate this important point using three specific PA sequences). Therefore, it is difficult to construct phase diagrams of IDPs solely in terms of $\sigma$ or the difference between the number of positively and negatively charged residues. Construction of phase diagrams requires the use of physical order parameters, which necessarily involves quantitatively characterizing the conformational ensembles of IDPs, an exercise requiring simulations using models that reproduce experimental measurements, such as scattering profiles. {\bf Acknowledgments:} We are indebted to Upayan Baul for providing the simulation results and for useful discussions. We thank Prof. D. Svergun for providing us with SAXS data on the Tau protein. This work was supported by the National Science Foundation (CHE 16-36424) and the Collie-Welch Foundation (F-0019). \newpage
\section{Introduction} On the Internet, each message is split into several packets. We regard the packets which are not correctly received as erased. Hence, the Internet is modeled as a packet erasure channel. The sender cannot retransmit packets in the case of the user datagram protocol (UDP). Fountain codes \cite{FountainCode} are erasure correcting codes which realize reliable communication via UDP, in particular, multicasting. We assume that the original message is split into $k$ source packets. In the fountain coding system, the sender generates infinitely many output packets from the $k$ source packets. Each receiver decodes the original message from $k(1+\alpha)$ received packets, where $\alpha$ is referred to as the {\it packet overhead}. Hence, in the fountain coding system, the receiver need not request retransmission. The Raptor code \cite{RaptorCode} is a fountain code which achieves arbitrarily small $\alpha$ as $k\to \infty$ with linear-time encoding and decoding algorithms. Encoding of the Raptor code is divided into two stages. At the first stage, the encoder generates {\it precoded packets} from the source packets by using a precode, which is a high-rate erasure correcting code, e.g., a low-density parity-check (LDPC) code. At the second stage, the encoder generates output packets from the precoded packets by using an LT code \cite{LTCode}. More precisely, each output packet is generated as the bit-wise exclusive OR (XOR) of randomly chosen precoded packets. Decoding of Raptor codes proceeds in a way similar to that of LDPC codes over the binary erasure channel. In other words, the decoder constructs a factor graph from the received packets and the parity check matrix of the precode, and recovers the precoded packets by using the peeling algorithm (PA) \cite{PA}. The zigzag decodable fountain (ZDF) code \cite{ZDF} is a generalization of the Raptor code. Similarly to the Raptor code, encoding of a ZDF code is divided into two stages. At the first stage, the encoder generates precoded packets from the source packets by using a precode. At the second stage, the encoder generates output packets from the precoded packets in the following way: the encoder randomly chooses precoded packets and their shift amounts, applies the bit-level shifts to the chosen precoded packets, and takes the bit-wise XOR of the shifted precoded packets. The decoding algorithm for ZDF codes is also a two-stage algorithm. Similarly to the Raptor code, a factor graph for the ZDF code is constructed before starting the decoding algorithm. At the first stage, a packet-wise PA works on the factor graph and recovers precoded packets packet-wise. If the packet-wise PA does not succeed, the remaining precoded packets are decoded by the bit-wise PA, which is the PA over the bit-wise representation of the factor graph. As shown in \cite{ZDF}, ZDF codes outperform Raptor codes in terms of packet overhead. However, the decoding algorithm for ZDF codes requires a long decoding time. The purpose of this research is to propose a fast decoding algorithm for ZDF codes. As a related work, the method in \cite{ZDF_TEP} slightly reduces the number of decoding iterations. In this paper, we propose a fast decoding algorithm for ZDF codes which reduces the number of decoding processes in the bit-wise PA. By a numerical example shown in Section \ref{sec:con_ori}, we ascertain that only particular edges contribute to recovering the bits of the precoded packets. The main idea of this work is to execute the decoding processes only on such edges. 
Moreover, the algorithm makes a list which records the order of the edges which contribute to the decoding, and executes the decoding processes in the order given by this list. As a result, we significantly reduce the decoding time compared with the existing algorithm. The rest of the paper is organized as follows. Section \ref{sec:pre} briefly reviews ZDF codes and their existing decoding algorithm. Section \ref{sec:new} gives a numerical example which shows that only particular edges contribute to the decoding, and proposes a fast decoding algorithm for ZDF codes via scheduling. The simulation results in Section \ref{sec:kekka} show that the proposed algorithm reduces the number of decoding processes and the decoding time compared with the existing one. Section \ref{sec:5} concludes the paper. \section{Preliminaries}\label{sec:pre} This section gives some notation and introduces the encoding and decoding algorithms of ZDF codes. Section \ref{sec:ZDF} explains the encoding of ZDF codes. Section \ref{sec:ZDF_fac} gives the factor graph representation of ZDF codes. Section \ref{sec:ZDF_Dec} explains the original decoding algorithm \cite{ZDF} of ZDF codes. \subsection{Encoding of the ZDF Codes \label{sec:ZDF} \cite{ZDF}} The polynomial representation of a packet $\bm{a}=(a_1,a_2,\dots,a_\ell)$ is defined as $a(z)=\sum^\ell_{j=1}a_jz^j$. A ZDF code is defined by a precode $\mathcal{C}$, a degree distribution for the inner code $\Omega (x)=\sum_i\Omega_ix^i$, and a shift distribution $\Delta(x)=\sum^D_{i=0}\Delta_ix^i$. Here, $\Delta_i$ represents the probability that the shift amount is $i$. Similarly to Raptor codes, the ZDF code generates the precoded packets $\bm{b}_1,\bm{b}_2,\dots,\bm{b}_n$ from the source packets $\bm{a}_1,\bm{a}_2,\dots,\bm{a}_k$ by the precode $\mathcal{C}$ at the first stage. At the second stage, the ZDF code generates infinitely many output packets by the following procedure for $t = 1,2,\dots$. \begin{enumerate} \item Choose the degree $d$ of the $t$-th output packet according to the degree distribution $\Omega(x)$. In other words, choose $d$ with probability $\Omega_d$. \item Choose a $d$-tuple of shift amounts $(\tilde{\delta}_{t,1},\tilde{\delta}_{t,2},\dots,\tilde{\delta}_{t,d})\in[0,D]^d := \{0,1,\dots,D\}^d$ independently of each other according to the shift distribution $\Delta(x)$, where $[a, b]$ denotes the set of integers between $a$ and $b$. Define $\tilde{\delta}_{t,\min} := \min_i\tilde{\delta}_{t,i}$ and calculate $\delta_{t,i} := \tilde{\delta}_{t,i} - \tilde{\delta}_{t,\min}$. \item Choose $d$ distinct precoded packets uniformly. Let $(j_1,\allowbreak j_2,\allowbreak \dots,\allowbreak j_d)$ denote the $d$-tuple of indexes of the chosen precoded packets. Then the polynomial representation of the $t$-th output packet is given as \begin{eqnarray*} x_t(z)=\sum^d_{i=1}z^{\delta_{t,i}}b_{j_i}(z). \end{eqnarray*} \end{enumerate} A schematic code sketch of this procedure is given below. \subsection{Factor graph generated by the receiver \label{sec:ZDF_fac}} Let $\bm{y}_1,\bm{y}_2,\dots,\bm{y}_{k'}$ be the $k'$ received packets of a receiver, where $k' := k(1+\alpha)$. Similarly to Raptor codes, each receiver constructs a factor graph from the precode $\mathcal{C}$ and the received packets. The generated factor graph depends on the receiver, since the $k'$ received packets differ from receiver to receiver. 
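As a concrete illustration of the encoding procedure of Section \ref{sec:ZDF}, the generation of a single output packet (steps 1)--3) above) can be sketched in Python as follows. This is our schematic rendering, not the implementation of \cite{ZDF}: packets are modeled as bit lists, the function name is ours, and the output is truncated to the bits actually produced rather than padded to length $\ell+D$.
\begin{verbatim}
import random

def encode_output_packet(precoded, Omega, Delta, D):
    # Omega[d-1]: probability of degree d; Delta[s]: probability of
    # shift amount s (len(Delta) == D + 1). Packets are lists of bits.
    d = random.choices(range(1, len(Omega) + 1), weights=Omega)[0]
    shifts = random.choices(range(D + 1), weights=Delta, k=d)
    shifts = [s - min(shifts) for s in shifts]    # delta = ~delta - ~delta_min
    idx = random.sample(range(len(precoded)), d)  # d distinct precoded packets
    out = [0] * max(s + len(precoded[j]) for s, j in zip(shifts, idx))
    for s, j in zip(shifts, idx):                 # x_t(z) = sum_i z^delta_i b_{j_i}(z)
        for pos, bit in enumerate(precoded[j]):
            out[s + pos] ^= bit                   # bit-wise XOR of shifted packets
    return out, idx, shifts                       # idx and shifts go in the header
\end{verbatim}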
The factor graph of a ZDF code is composed of labeled edges and four kinds of nodes: $n$ variable nodes representing the precoded packets $\mathsf{V}_{\mathrm{p}} := \{\mathsf{v}_1,\mathsf{v}_2,\dots,\mathsf{v}_n\}$, $m := n-k$ check nodes for the precode $\mathsf{F}_{\mathrm{p}} := \{\mathsf{f}_1,\dots,\mathsf{f}_m\}$, $k'$ variable nodes representing the received packets $\mathsf{V}_{\mathrm{r}} := \{\mathsf{v}_{1}', \allowbreak \mathsf{v}_{2}', \allowbreak \dots, \allowbreak \mathsf{v}_{k'}'\}$, and $k'$ factor nodes for the inner code $\mathsf{F}_{\mathrm{i}} := \{\mathsf{f}_{m+1},\dots,\mathsf{f}_{m+k'}\}$. The edge connections between $\mathsf{F}_{\mathrm{p}}$ and $\mathsf{V}_{\mathrm{p}}$ are decided by the precode $\mathcal{C}$. More precisely, $\mathsf{f}_{i}$ and $\mathsf{v}_{j}$ are connected by an edge labeled 1 if and only if the $(i, j)$-th entry of the parity check matrix of $\mathcal{C}$ is equal to 1. The edge connections between $\mathsf{F}_{\mathrm{i}}$ and $\mathsf{V}_{\mathrm{p}}$ are decided by the headers of the received packets. If the header of the $t$-th received packet represents $(\delta_{t,1},\delta_{t,2},\dots,\delta_{t,d})$ and $(j_1,j_2,\dots,j_d)$, then an edge labeled $z^{\delta_{t,i}}$ connects $\mathsf{f}_{m+t}$ and $\mathsf{v}_{j_i}$ for $i \in [1,d]$. We denote the label on the edge connecting $\mathsf{f}_{m+i}$ and $\mathsf{v}_{j}$ by $z^{\delta_{i,j}}$. For $i \in [1,k']$, an edge connects $\mathsf{f}_{m+i}$ and $\mathsf{v}_{i}'$. Denote the set of indexes of the variable nodes adjacent to the $i$-th factor node by $\mathcal{N}_{f}(i)$. \subsection{Decoding algorithm for the ZDF codes\label{sec:ZDF_Dec}} The decoding algorithm of ZDF codes is a two-stage algorithm. At the first stage, the packet-wise PA works on the factor graph of the ZDF code. The details of the packet-wise PA are given in \cite{ZDF}. If the packet-wise PA does not succeed, the bit-wise PA works on the residual graph. In this section, we explain the original bit-wise PA \cite{ZDFalgo}. In the decoding of ZDF codes, the $i$-th factor node has a memory of length $\ell+D$, denoted by $\bm{s}_i \in \{0,1\}^{\ell+D}$. Let $\mathcal{E}$ be the set of indexes of the variable nodes which are not recovered by the packet-wise PA. The residual graph is the subgraph composed of the variable nodes in $\mathcal{E}$ and their connecting edges. Now, we explain the factor node processing of the bit-wise PA. Let $(w_{t,1},\dots,w_{t,\ell})$ be the vector representation of a packet $\bm{w}_t$. Denote the mapping of the factor node processing by $\Phi '$. 
The mapping $\Phi'$ updates the value $\bm{b}_j$ of the $j$-th variable node via the $i$-th factor node as follows: \begin{align*} &\Phi ' (i,j,\{\bm{b}_t\}_{t \in \mathcal{N}_{f}(i)})\\ =&\Psi(\bm{b}_j,\mathcal{S}^{+}(\delta_{i,j},\Phi(\bm{s}_i,\{\mathcal{S}(\delta_{i,t},\bm{b}_t)\}_{t\in \mathcal{N}_{f}(i)\setminus\{j\}}))), \end{align*} where the mapping $\mathcal{S}:[0,D]\times\{0,1,*\}^\ell\to \{0,1,*\}^{\ell+D}$ is \[ \mathcal{S}(k,\bm{w}_t) = (\overset{k}{\overbrace{0,\dots,0}},w_{t,1},w_{t,2},\dots,w_{t,\ell},\overset{D-k}{\overbrace{0,\dots,0}}), \] the mapping $\Phi:\{0,1,*\}^{(\ell+D)\times d_v}\to\{0,1,*\}^{\ell+D}$ is defined, for $\tilde{\bm{w}}_0=\Phi(\tilde{\bm{w}}_1,\dots,\tilde{\bm{w}}_{d_v})$ and $k\in[1,\ell+D]$, by \begin{align*} \tilde{w}_{0,k} = \begin{cases} \sum_{i=1}^{d_v}\tilde{w}_{i,k} \bmod 2, & \text{if} \hspace{2mm}\tilde{w}_{1,k},\dots,\tilde{w}_{d_v,k}\in\{0,1\}, \\ *, & \text{otherwise}, \end{cases} \end{align*} the mapping $\mathcal{S}^+(k,\cdot) : \{0,1,* \}^{\ell+D} \to \{0, 1, * \}^\ell$ is \[ \mathcal{S}^+(k,\tilde{\bm{w}}_t) = (\tilde{w}_{t,k+1},\tilde{w}_{t,k+2},\dots,\tilde{w}_{t,k+\ell}), \] and the mapping $\Psi:\{0,1,*\}^{\ell \times 2}\to\{0,1,*\}^{\ell}$ is defined, for $\tilde{\bm{w}}_0=\Psi(\tilde{\bm{w}}_1,\tilde{\bm{w}}_{2})$ and $k\in[1,\ell]$, by \begin{align*} \tilde{w}_{0,k} = \begin{cases} \tilde{w}_{1,k}, & \text{if} \hspace{2mm}\tilde{w}_{1,k}\in\{0,1\}, \\ \tilde{w}_{2,k}, & \text{otherwise}. \end{cases} \end{align*} Bit-wise decoding is done by the procedure of Algorithm \ref{algc:2}. \begin{algorithm}[tb] \begin{footnotesize} \caption{Bit-wise decoding (existing algorithm) \label{algc:2}} \begin{algorithmic}[1] \REQUIRE Residual graph $\mathtt{G}$, values of the memories $\bm{s}_i$ $(i \in [1,m+k'])$, and precoded packets $\bm{b}_1,\dots,\bm{b}_n$ \ENSURE precoded packets $\bm{b}_1,\dots,\bm{b}_n$ \STATE $\tau \gets 1$ \STATE $\forall j \in [1,n]~ \bm{b}_j^{(0)} \gets \bm{b}_j$ \FOR{$i\in[1,m+k^{\prime}]$, $j \in \mathcal{N}_f(i)$}\label{stp:rec2} \STATE $\bm{b}_j \gets \Phi ' (i,j,\{\bm{b}_t\}_{t \in \mathcal{N}_{f}(i)})$\label{stp:PPP} \ENDFOR \STATE $\forall j \in [1,n]~ \bm{b}_j^{(\tau)} \gets \bm{b}_j$ \IF{$\exists j \in [1,n]~ \bm{b}_j^{(\tau)} \neq \bm{b}_j^{(\tau-1)}$} \STATE $\tau \gets \tau +1$ and go to Step \ref{stp:rec2} \ENDIF \end{algorithmic} \end{footnotesize} \end{algorithm} If Algorithm \ref{algc:2} outputs $\bm{b}_j \in \{0,1\}^\ell$ for all $j \in [1,n]$, the decoding succeeds. Otherwise, the decoding fails. \section{Observation of the bit-wise PA and Proposed Decoding Algorithm}\label{sec:new} We refer to the process of Step \ref{stp:PPP} in Algorithm \ref{algc:2} as a {\it decoding process}. We refer to the edges which recover bits of the precoded packets as {\it updating edges}. In other words, the updating edges are the edges that contribute to the decoding at a given decoding round. In the original bit-wise PA, since all the factor nodes are processed, the number of decoding processes per iteration is equal to the total number of edges of the factor graph generated by the receiver. Hence, the decoding process is performed on many edges which do not contribute to the decoding. In this section, we examine the original bit-wise decoding. As a result, we ascertain that only particular edges contribute to the decoding. Roughly speaking, the proposed bit-wise decoding algorithm generates the set of edges used for updating variable nodes, records these edges in a list in a proper order, and afterwards performs the decoding process only on the edges in the list. 
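Before the formal description in the following subsections, the scheduling idea can be summarized by a short Python fragment. This is a rough sketch under our own naming: \texttt{update\_edge} stands for one decoding process, i.e., one application of $\Phi'$ along an edge, returning whether the corresponding variable node changed.
\begin{verbatim}
def scheduled_sweeps(edges, b, update_edge, max_iters):
    # One full sweep over all edges harvests the contributing
    # ('active') edges; later sweeps touch only those edges.
    active = []
    for e in edges:
        if update_edge(e, b):      # True iff b changed (updating edge)
            active.append(e)
    for _ in range(max_iters):     # cheap sweeps over active edges only
        changed = False
        for e in active:
            changed |= update_edge(e, b)
        if not changed:
            break
    return b
\end{verbatim}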
Section \ref{sec:con_ori} shows, by an evaluation of the original bit-wise PA, that only particular edges contribute to the decoding. Section \ref{sec:new_dec} gives the proposed decoding algorithm, which reduces the number of decoding processes per iteration. \subsection{Decoding Process of the original bit-wise PA \label{sec:con_ori}} In this section, we use the shift distribution $\Delta(x) = \frac{1}{2}+\frac{1}{2}x$. As a precode, we employ (3,30)-regular LDPC codes. The degree distribution for the inner code is $\Omega(x) = 0.007969x + 0.493570x^2 + 0.166220x^3 + 0.072646x^4+0.082558x^5 + 0.056058x^8+0.037229x^9+0.055590x^{19}+0.025023x^{65}+0.003135x^{66}$. An edge is called {\it active} if the edge has become an updating edge at some point. Figure \ref{fig:num_proc} displays the number of decoding processes, the number of updating edges, and the number of active edges under the original bit-wise PA with $\ell=100$ and $\alpha=0.10$. The horizontal axis of Fig.~\ref{fig:num_proc} represents the number of iterations. The symbol \# in Fig.~\ref{fig:num_proc} stands for ``the number of''. From the curve of active edges in Fig.~\ref{fig:num_proc}, we see that only particular edges contribute to the decoding. \begin{figure}[t] \centering \includegraphics[width=.835\linewidth]{a.eps} \caption{Number of decoding processes, updating edges, and active edges per iteration (existing algorithm) \label{fig:num_proc}} \end{figure} \subsection{Proposed Algorithm \label{sec:new_dec}} Let $\mathcal{L}^{\mathsf{A}}$ (resp.\ $\mathcal{L}^{\mathsf{B}}$) be the set (resp.\ list) of edges which contribute to the decoding under the original bit-wise PA. In other words, $\mathcal{L}^{\mathsf{A}}$ represents the set of active edges. The proposed decoding algorithm is divided into three stages. At the first stage, the decoder executes the decoding process for all the edges and builds $\mathcal{L}^{\mathsf{A}}$ by adding the edges which contribute to the decoding of the precoded packets. At the second stage, the decoder executes the decoding process for the edges in $\mathcal{L}^{\mathsf{A}}$ and builds $\mathcal{L}^{\mathsf{B}}$ by recording the order of the edges which contribute to the decoding. At the final stage, the decoding process is performed on the edges in the order given by the list $\mathcal{L}^{\mathsf{B}}$. Edges that no longer contribute to the decoding are deleted from the list $\mathcal{L}^{\mathsf{B}}$. \begin{remark} \upshape \label{rem:0} At the second stage, the decoder makes only one list $\mathcal{L}^{\mathsf{B}}$. The $j$-th element $\mathcal{L}^{\mathsf{B}}_j$ of the list $\mathcal{L}^{\mathsf{B}}$ stores the $j$-th contributing edge over the whole second stage. Hence, an edge may be recorded in $\mathcal{L}^{\mathsf{B}}$ several times. \end{remark} Next, we explain the parameters used in the proposed algorithm. Let $l_{\mathsf{A}}$ and $l_{\mathsf{B}}$ represent the size of $\mathcal{L}^{\mathsf{A}}$ and the length of $\mathcal{L}^{\mathsf{B}}$, respectively. The number of iterations in the second stage is denoted by $t_{\mathsf{B}}$. We use a time $t_{\mathsf{A}}$ and a vector $\bm{V} = (V_1,V_2,\dots,V_{n})$ to reduce the time needed to build $\mathcal{L}^{\mathsf{A}}$. The element $V_j$ of the vector $\bm{V}$ indicates whether the $j$-th variable node is connected to an edge in $\mathcal{L}^{\mathsf{A}}$. More precisely, $V_j = 0$ if the $j$-th variable node is already recovered or can be updated by an edge in $\mathcal{L}^{\mathsf{A}}$; otherwise, $V_j = 1$. 
The function $T$ determines whether a variable node has been recovered, namely, \begin{equation*} T(\bm{b}_j) = \begin{cases} 0, & \text{if} \hspace{2mm}\bm{b}_j\in\{0,1\}^{\ell}, \\ 1, & \text{otherwise}. \end{cases} \end{equation*} We denote the maximum number of decoding iterations at the first stage by $t_{\mathsf{A}}$. At the first stage, if there exists $j$ such that $V_j = 1$ up to the $t_{\mathsf{A}}$-th iteration, then we expect that the $j$-th precoded packet will not be recovered. Hence, in such a case, the decoder halts and outputs a decoding failure. \begin{remark} \upshape \label{rem:1} The time $t_{\mathsf{A}}$ acts as a timeout for the decoding. A small $t_{\mathsf{A}}$ reduces the average decoding time but slightly degrades the decoding performance. Conversely, a large $t_{\mathsf{A}}$ leads to a large average decoding time but does not degrade the decoding performance. We confirm this in Section \ref{sec:hyo_er}. \end{remark} The details of the proposed algorithm are given in Algorithm \ref{algc:6}. Steps \ref{stp:LAcreate}-\ref{stp:end_LAcreate} give the first stage of decoding. Step \ref{stp:LAbranch1} decides whether to stop decoding. Step \ref{stp:LAbranch2} decides whether the construction of $\mathcal{L}^{\mathsf{A}}$ is sufficient. Steps \ref{stp:LBcreate}-\ref{stp:end_LBcreate} give the second stage of decoding. Steps \ref{stp:LBuse}-\ref{stp:end_LBuse} give the final stage of decoding. At the final stage, the edges that do not contribute to the decoding are deleted. In Step \ref{stp:end_end}, if there exists an unrecovered precoded packet, then decoding restarts from the first stage; this process improves the decoding performance. \begin{algorithm}[t] \begin{footnotesize} \caption{Scheduled bit-wise decoding \label{algc:6}} \begin{algorithmic}[1] \REQUIRE Residual graph $\mathtt{G}$, values of the memories $\bm{s}_i$ $(i \in [1,m+k'])$, precoded packets $\bm{b}_1,\dots,\bm{b}_n$, time $t_{\mathsf{A}}$, and time $t_{\mathsf{B}}$ \ENSURE precoded packets $\bm{b}_1,\dots,\bm{b}_n$ \setlength{\columnsep}{4pt} \STATE $\tau \gets 0$, $l_{\mathsf{A}} \gets 0$, $l_{\mathsf{B}} \gets 0$, $\forall j \in [1,n]~ \bm{b}_j^{(\tau)} \gets \bm{b}_j$ \STATE $\tau_M \gets \tau + t_{\mathsf{A}}$, $\forall j~V_j \gets T(\bm{b}_j)$ \label{stp:start} \STATE $\tau \gets \tau + 1$\label{stp:LAcreate} \FOR{$i\in[1,m+k^{\prime}]$, $j \in \mathcal{N}_f(i)$} \STATE $\bm{d} \gets \Phi' (i,j,\{\bm{b}_t\}_{t \in \mathcal{N}_{f}(i)})$ \IF{$\bm{d} \neq \bm{b}_j$} \STATE $V_j \gets 0$ \IF{$\forall q \in [0,l_{\mathsf{A}}-1]~ \mathcal{L}^{\mathsf{A}}_{q} \neq (\mathsf{v}_j,\mathsf{f}_i)$} \STATE $\mathcal{L}^{\mathsf{A}}_{l_{\mathsf{A}}} \gets (\mathsf{v}_j,\mathsf{f}_i)$ \STATE $l_{\mathsf{A}} \gets l_{\mathsf{A}}+1$ \ENDIF \ENDIF \STATE $\bm{b}_j \gets \bm{d}$ \ENDFOR \STATE $\forall j \in [1,n]~ \bm{b}_j^{(\tau)} \gets \bm{b}_j$ \IF{$\forall j\in[1,n]~ \bm{b}_j^{(\tau)} = \bm{b}_j^{(\tau-1)} $ or $\tau \geq \tau_M$}\label{stp:LAbranch1} \STATE Decoding halts. \ELSIF{$\exists j~ V_j=1$}\label{stp:LAbranch2} \STATE Go to Step \ref{stp:LAcreate}. 
\ENDIF \label{stp:end_LAcreate} \STATE $\tau_M \gets \tau +t_{\mathsf{B}}$, $\forall j~V_j \gets T(\bm{b}_j)$ \STATE $\tau \gets \tau + 1$\label{stp:LBcreate} \FOR{$q\in[0,l_{\mathsf{A}}-1]$} \STATE Set $j,i$ s.t.~$\mathcal{L}^{\mathsf{A}}_q = (\mathsf{v}_j,\mathsf{f}_i)$ \STATE $\bm{d} \gets \Phi' (i,j,\{\bm{b}_t\}_{t \in \mathcal{N}_{f}(i)})$ \IF{$\bm{d} \neq \bm{b}_j$} \STATE $V_j \gets 0$, $\mathcal{L}^{\mathsf{B}}_{l_{\mathsf{B}}} \gets \mathcal{L}^{\mathsf{A}}_q$ \STATE $l_{\mathsf{B}} \gets l_{\mathsf{B}}+1$ \ENDIF \STATE $\bm{b}_j \gets \bm{d}$ \ENDFOR \STATE $\forall j \in [1,n]~ \bm{b}_j^{(\tau)} \gets \bm{b}_j$ \IF{$\forall j \in [1,n]~ \bm{b}_j^{(\tau)} = \bm{b}_j^{(\tau-1)}$ or ($\tau \geq \tau_M$ and $\exists j~V_j=1$)} \STATE $l_{\mathsf{B}} \gets 0$ and go to Step \ref{stp:start}. \ELSIF{$\tau < \tau_M$} \STATE Go to Step \ref{stp:LBcreate}. \ENDIF \label{stp:end_LBcreate} \STATE $\tau \gets \tau +1$\label{stp:LBuse} \FOR{$q\in[0,l_{\mathsf{B}}-1]$} \STATE Set $j,i$ s.t.~$\mathcal{L}^{\mathsf{B}}_q = (\mathsf{v}_j,\mathsf{f}_i)$ \STATE $\bm{d} \gets \Phi' (i,j,\{\bm{b}_t\}_{t \in \mathcal{N}_{f}(i)})$ \IF {$\bm{d} = \bm{b}_j$ } \STATE Delete $\mathcal{L}^{\mathsf{B}}_q$ from $\mathcal{L}^{\mathsf{B}}$. \ENDIF \STATE $\bm{b}_j \gets \bm{d}$ \ENDFOR\label{stp:end3} \STATE $\forall j \in [1,n]~ \bm{b}_j^{(\tau)} \gets \bm{b}_j$ \IF{$\exists j \in [1,n]~ \bm{b}_j^{(\tau)} \neq \bm{b}_j^{(\tau-1)}$} \STATE Go to Step \ref{stp:LBuse}\label{stp:end_LBuse} \ELSIF{$\exists j \in [1,n]~ \bm{b}_j \notin \{0,1\}^\ell$} \STATE Go to Step \ref{stp:start}\label{stp:end_end} \ENDIF \end{algorithmic} \end{footnotesize} \end{algorithm} If Algorithm \ref{algc:6} outputs $\bm{b}_j \in \{0,1\}^\ell$ for all $j \in [1,n]$, the decoding succeeds. Otherwise, the decoding fails. \section{Simulation Results}\label{sec:kekka} In this section, we evaluate the performance of the proposed algorithm. Section \ref{sec:hyo_er} evaluates the decoding erasure rate. Section \ref{sec:hyo_con} compares the number of decoding processes. Section \ref{sec:hyo_time} gives the decoding time. The parameters (i.e., $\mathcal{C}$, $\Omega(x)$, and $\Delta(x)$) used in this section are the same as in Section \ref{sec:con_ori}. The times are set to $t_{\mathsf{A}} = 6/\alpha$ and $t_{\mathsf{B}} = 20$. \subsection{Decoding Erasure Rate \label{sec:hyo_er}} \begin{figure}[t] \centering \includegraphics[width=.785\linewidth]{DecER6.eps} \caption{Comparison of the decoding erasure rate of the proposed algorithm with the existing one (Original) \label{fig:dec_er}} \end{figure} The decoding erasure rate (DER) is the fraction of trials in which some bits in the precoded packets are not recovered. Figure \ref{fig:dec_er} displays the DERs of the proposed algorithm and the existing algorithm with $\ell=1000$. The horizontal axis of Fig.~\ref{fig:dec_er} represents the packet overhead $\alpha$. The curve labeled ``Original'' in Fig.~\ref{fig:dec_er} gives the DER of the existing algorithm, and the curves labeled ``Proposed Small $t_{\mathsf{A}}$'' and ``Proposed Large $t_{\mathsf{A}}$'' give the DERs of the proposed method in the cases $t_{\mathsf{A}}=6/\alpha$ and $t_{\mathsf{A}} = \infty$, respectively. As shown in Fig.~\ref{fig:dec_er}, all the DERs are nearly equal. As noted in Remark \ref{rem:1}, $t_{\mathsf{A}}$ is a timeout parameter, and in the case $t_{\mathsf{A}} = 6/\alpha$, the decoding performance is slightly degraded compared with the existing one. 
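For completeness, the DER measurement used in this section amounts to the following schematic Monte Carlo loop (our sketch; \texttt{run\_trial}, standing for one encode/erase/decode experiment that returns \texttt{True} when all precoded packets are recovered, is an assumed helper):
\begin{verbatim}
def estimate_der(run_trial, num_trials=10_000):
    # DER = fraction of trials in which some bits of the
    # precoded packets remain unrecovered after decoding.
    failures = sum(1 for _ in range(num_trials) if not run_trial())
    return failures / num_trials
\end{verbatim}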
\subsection{Number of Decoding Processes \label{sec:hyo_con}} In this section, we first evaluate the number of decoding processes and the number of updating edges at each iteration for the proposed algorithm. Next, we compare the number of decoding processes of bit-wise decoding for each overhead. \begin{figure}[tb] \centering \includegraphics[width=.785\linewidth]{CountLDec7.eps} \caption{Number of decoding processes at each iteration for the proposed algorithm ($\alpha=0.10, \ell=100$) \label{fig:countL}} \end{figure} Figure \ref{fig:countL} displays an example of the number of decoding processes and the number of updating edges at each iteration for the proposed algorithm. The horizontal axis of Fig.~\ref{fig:countL} represents the decoding iteration. As shown in Fig.~\ref{fig:countL}, from the 12th iteration to the 31st iteration, the number of decoding processes at each iteration is significantly reduced, because the decoding processes are executed only on the edges in $\mathcal{L}^{\mathsf{A}}$. After the 32nd iteration, the proposed algorithm executes the decoding process on the edges in $\mathcal{L}^{\mathsf{B}}$. From Fig.~\ref{fig:countL}, we see that the number of updating edges is almost equal to the number of decoding processes after the 32nd iteration. Hence, we conclude that the proposed algorithm constructs the edge list $\mathcal{L}^{\mathsf{B}}$ well. Moreover, the number of iterations required for decoding is smaller than that of the existing algorithm. Therefore, the proposed algorithm requires fewer decoding processes than the existing algorithm. Next, we evaluate the number of decoding processes of the existing algorithm and the proposed algorithm for each overhead $\alpha$. Figure \ref{fig:countT} compares the number of decoding processes of the existing algorithm with that of the proposed algorithm under $\ell=1000$. The horizontal axis of Fig.~\ref{fig:countT} represents the overhead $\alpha$. From Fig.~\ref{fig:countT}, the number of decoding processes of the proposed algorithm is significantly smaller than that of the existing algorithm. \begin{figure}[tb] \centering \includegraphics[width=.785\linewidth]{DecCount4.eps} \caption{Comparison of the number of decoding processes of the proposed algorithm with the existing one (Original) \label{fig:countT}} \end{figure} \subsection{Decoding Time\label{sec:hyo_time}} In the evaluation of this section, we run 10000 trials for each overhead $\alpha$. In this simulation, we use Ubuntu 16.04 as the OS, an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz, and 4GB of DDR3 memory. Figure \ref{fig:Dec_time} displays the decoding times of the existing algorithm and the proposed algorithm with $\ell=1000$. The horizontal axis of Fig.~\ref{fig:Dec_time} represents the overhead $\alpha$. As shown in Fig.~\ref{fig:Dec_time}, the decoding time of the proposed algorithm is much shorter than that of the existing one. \begin{figure}[t] \centering \includegraphics[width=.785\linewidth]{DecTime3.eps} \caption{Comparison of the decoding time of the proposed algorithm with the existing one (Original) \label{fig:Dec_time}} \end{figure} \section{Conclusion \label{sec:5}} In this paper, we have proposed an efficient bit-wise decoding algorithm for ZDF codes. Simulation results show that the proposed algorithm drastically reduces the decoding time compared with the existing algorithm. \section*{Acknowledgment} This work was supported by JSPS KAKENHI Grant Number 16K16007.
\section{Introduction} Flow control is one of the central topics in fluid mechanics, with an enormous impact on other fields of engineering and applied science. Its wide range of applications includes, to name a few, reduction of aerodynamic drag on vehicles and aircraft, mixing enhancement in combustion and chemical processes, suppression of instabilities to avoid structural fatigue, lift increase for wind turbines, and design of biomedical devices. To emphasize the impact of flow control, it is worth noting that the discovery of efficient flow control techniques for reduction of drag on ships and cars could mitigate yearly CO$_2$ emissions by millions of tons and yield annual savings of billions of dollars in the global shipping industry \cite{kim2011physics,brunton2015closed}. Despite all the interest and continuous effort, flow control still poses a daunting challenge to our theoretical understanding and computational resources. The main source of difficulty is the combination of high dimensionality and nonlinearity of fluid phenomena, which results in computational or experimental models that are too complex and costly to control using well-developed strategies of modern control theory. Recent advances in numerical computation have led to partial success with active control of flows using models based on the Navier-Stokes equations \cite{bewley2001flow,bewley2001dns,kim2007linear,kim2011physics}; however, these methods suffer from two major shortcomings: first, nonlinear models obtained from the Navier-Stokes equations are still high-dimensional and computationally costly, thereby not allowing for fast implementation of nonlinear and computationally complex control techniques such as nonlinear model predictive control; and second, linear models used with LQR/LQG or adjoint-based controllers often rely on local linearization around equilibria or a trajectory of the flow, which makes them valid only locally and may result in suboptimal or even unstable control performance. An alternative approach that has gained traction in the last two decades is the identification of relatively low-dimensional flow models from data provided by numerical simulations or experiments. Some of the data-driven methods combine the measurement data with the underlying physical model to identify models of the system. Major examples include the construction of (usually autonomous) state space models via Galerkin projection of the Navier-Stokes equations onto the modes obtained by proper orthogonal decomposition (POD) of data \cite{holmes2012turbulence,noack2003hierarchy,balajewicz2013low}, and the identification of linear input-output systems using balanced POD \cite{willcox2002balanced,rowley2005model}. There are also a few applications of system identification methods that construct linear input-output models purely from data, including the eigensystem realization algorithm \cite{cabell2006experimental,brunton2013reduced}, as well as subspace identification and autoregressive models \cite{huang2008control,herve2012physics}. Utilization of the above techniques in a variety of problems has shown great promise for low-dimensional modeling and control of complex flows from data. In this paper, we present a general and fully data-driven framework for control of nonlinear flows based on the Koopman operator theory \cite{koopman1931,mezic2013analysis}. 
This theory is an operator-theoretic formalism of classical dynamical systems theory with two key features: first, it allows a scalable reconstruction of the underlying dynamical system from measurement data, and second, the models obtained are linear (but possibly high-dimensional), owing to the fact that the Koopman operator is a linear operator whether the dynamical system is linear or not. The linearity of the Koopman models is especially advantageous since it makes them amenable to the plethora of mature control strategies developed for linear systems. This framework for controller design is called \emph{Koopman-MPC} and follows the work in \cite{korda2018linear}. In the first step of our approach, we build a finite-dimensional approximation of the controlled Koopman operator from the data, using a variation of the extended dynamic mode decomposition algorithm (EDMD) \cite{williams2015data}, with a particular choice of observables ensuring linearity of the resulting approximation. The ideal data would include measurements on a number of system trajectories with various input sequences. In the second step, we apply model predictive control (MPC) to these linear models to achieve the desired objectives. The distinguishing feature is that this framework leads to a \emph{linear} MPC, solving a convex quadratic programming problem, and thereby enables a rapid solution of the underlying optimization problem, which is necessary for real-time deployment. This methodology can also be applied to problems with sparse measurements, i.e., problems with a limited number of instantaneous measurements (e.g., point measurements of the velocity field at several different locations). In that case, delay-embedding of the available measurements (and nonlinear functions thereof) is used to construct the Koopman-linear model; the MPC is then applied to the linear system whose state variable is the delay-embedded vector of measurements. The outline of this paper is as follows: a brief review of related work is given in \cref{sec:lit}. \Cref{sec:KoopmanTheory} gives a review of the Koopman operator theory for dynamical systems with input. In \cref{sec:EDMD}, we describe the EDMD algorithm for construction of the Koopman-linear model from measurement data. In \cref{sec:DelayEmbed}, we discuss using delay-embedding to construct and control Koopman-linear models from sparse measurements. An overview of the MPC framework is given in \cref{sec:MPC}. In \cref{sec:Examples}, we present two numerical examples: the Burgers system on a periodic domain and the 2D lid-driven cavity flow. We formulate the control problem for these cases using various objectives and demonstrate our approach for both full-state and sparse measurements. We summarize the results and discuss the outlook in \cref{sec:conclusion}. \subsection{Review of related work}\label{sec:lit} The Koopman operator formalism of dynamical systems is rooted in the seminal works of Koopman and von Neumann in the early 1930s \cite{koopman1931,koopmanandvonneumann:1932}. This formalism appeared mostly in the context of ergodic theory for much of the last century, until the mid-2000s, when the works in \cite{mezic2004comparison,mezic2005} pointed out its potential for rigorous analysis of dynamical systems from data. The notion of \emph{Koopman mode decomposition (KMD)}, which is based on the expansion of observable fields in terms of Koopman operator eigenvalues and eigenfunctions, was also introduced in \cite{mezic2005}. 
KMD was first applied to a complex flow in \cite{rowley2009}, where its connection with the DMD numerical algorithm \cite{schmid2010} was pointed out. The work in \cite{rowley2009} showed the promise of this viewpoint in extracting the physically relevant flow structures and time scales from data. Following the success of this work, KMD and its numerical implementation through DMD have become a popular decomposition for dynamic analysis of nonlinear flows \cite{schmid2011applications,hua2016dynamic,bagheri2013koopman,sayadi2014reduced,subbareddy2014direct}. The application of the Koopman operator to data-driven control of high-dimensional systems is much less developed. The earliest works on generalizing the Koopman operator approach to control systems were presented in \cite{Proctor2016generalize,koopman_cont_extend}, accompanied by a numerical variation of the DMD algorithm \cite{proctor2016dynamic}. To the best of our knowledge, however, the only applications to feedback control of fluid flows are the works in \cite{peitz2017koopman,peitz2018controlling}. The work in \cite{peitz2017koopman} considered the problem of flow control using a finite set of input values. For each value of the input, a Koopman-linear model was constructed from the data, and the control problem was formulated as a switched optimal control problem and implemented in a receding horizon fashion. This methodology was successfully used for tracking reference output signals in the Burgers equation and the incompressible flow past a cylinder. The work in \cite{peitz2018controlling} proposed to remove the restriction of the input to a finite set by interpolating between the Koopman-linear systems for each input value, which led to an improvement of the control performance. In \cite{korda2018linear} (on which this work is based), a more general extension of the Koopman operator theory to systems with input was presented and used to construct linear predictors especially suitable for model predictive control, demonstrating the effectiveness of the approach (among other examples) on the control of the Korteweg-de Vries PDE. The results in \cite{korda2018linear} showed the superiority of the controlled Koopman-linear predictors constructed from data over models obtained by local linearization and Carleman's representation, both for prediction and for feedback control. Let us also mention the earlier work in \cite{Glaz2017quasi} that utilized KMD to construct normal forms for the dynamics of the flow past an oscillating cylinder; the input forcing appeared as a bilinear term in the normal forms for this flow. See \cite{sootla2017optimal} for an application to pulse-based control of monotone systems, as well as \cite{mauroy2017koopman} for system identification and \cite{surana_estim} for state estimation. The numerical engine behind the system identification part of the framework presented in this paper is the extended dynamic mode decomposition (EDMD) algorithm proposed in \cite{williams2015data}. Although the original DMD algorithm was invented independently of the Koopman operator theory \cite{schmid2010}, the connection between the two was known from early on \cite{rowley2009}, and DMD-type algorithms have become the popular methods for computation of the spectral properties of the Koopman operator. Nevertheless, the convergence of DMD-type algorithms for approximation of the Koopman operator was established only recently, in \cite{arbabi2017ergodic,korda2017convergence}. 
For the application of Koopman-MPC to systems with sparse measurements, the EDMD is modified to include delay embeddings of the instantaneous measurements. Delay embedding is a classic technique in the system identification literature (see, e.g.,~\cite{ljung1998system} for a comprehensive reference) and control, as well as in linear and nonlinear time-series analysis (e.g., \cite{tjostheim1994nonparametric}). In the field of dynamical systems, the classical reference is the work of Takens~\cite{takens1981detecting} on geometric reconstruction of nonlinear attractors. The work in \cite{tu2014dynamic} suggested the combination of this technique with the DMD algorithm for identification of nonlinear systems, and its role in approximation of the Koopman operator was studied in \cite{arbabi2017ergodic,korda2017data}; its use for control, in the Koopman operator context, was described in~\cite{korda2018linear}. \section{Koopman operator theory}\label{sec:KoopmanTheory} In this section, we first review the basics of the Koopman operator formalism for autonomous dynamical systems and then discuss its extension to systems with input and output. We focus on \textit{discrete-time} dynamical systems to be consistent with the discrete-time nature of the measurement data, but most of the analysis carries over easily to continuous-time systems. We refer the reader to \cite{mezic2013analysis,budisic2012applied} for a more detailed discussion of the Koopman operator basics. Consider the dynamical system \begin{align} x^+=T(x),\quad x\in M \end{align} defined on a state space $M$. We call any function $g:M\rightarrow \mathbb{R}$ an \textit{observable} of the system, and we note that the set of all observables forms a (typically infinite-dimensional) vector space. The Koopman operator, denoted by $\mathcal{K}$, is a linear transformation on this vector space given by \begin{align}\label{eq:KoopmanDef} \mathcal{K} g = g\circ T, \end{align} where $\circ$ denotes function composition, i.e., $(\mathcal{K} g)(x)=g(T(x))$. Informally speaking, the Koopman operator updates the observable $g$ based on the evolution of the trajectories in the state space. The key property of the Koopman operator that we exploit in this work is its linearity: for any two observables $g$ and $h$, and scalar values $\alpha$ and $\beta$, we have \begin{align} \mathcal{K}(\alpha g + \beta h)=\alpha \mathcal{K} g + \beta \mathcal{K} h, \end{align} which follows from the definition in \cref{eq:KoopmanDef}. We call an observable $\phi$ a Koopman eigenfunction associated with the Koopman eigenvalue $\lambda \in \mathbb{C}$ if it satisfies \begin{align} \mathcal{K}\phi=\lambda \phi. \end{align} The spectral properties of the Koopman operator can be used to characterize the state space dynamics; for example, the Koopman eigenvalues determine the stability of the system, and the level sets of certain Koopman eigenfunctions carve out the invariant manifolds and isochrons \cite{mauroy2016global,Mezic:2015,mauroy2012use}. Moreover, for smooth dynamical systems with simple nonlinear dynamics, e.g., systems that possess hyperbolic fixed points, limit cycles and tori, the evolution of observables can be described as a linear expansion in Koopman eigenfunctions \cite{mezic2017koopman}. In these systems, the spectrum of the Koopman operator consists only of the point spectrum (i.e., eigenvalues), which fully describes the evolution of observables: \begin{align}\label{eq:KoopmanExpansion} \mathcal{K}^ng=\sum_{j=0}^{\infty}v_j\phi_j \lambda_j^n. 
\end{align} where $v_j$ is called the Koopman mode associated with the Koopman eigenvalue-eigenfunction pair $(\lambda_j,\phi_j)$; it is given by the projection of the observable $g$ onto $\phi_j$. See \cite{rowley2009} for more detail on Koopman modes, and \cite{mezic2017koopman} for the expansion in (\ref{eq:KoopmanExpansion}). The extension of the Koopman operator theory to a controlled system, denoted by \begin{align}\label{eq:sysCont} x^+=T(x,u),\quad x\in M,~u\in \mathcal{U}, \end{align} requires one to work on the \textit{extended state space}, which is the Cartesian product of the state space $M$ and the space of all input sequences $\ell(\mathcal{U}) = \{(u_i)_{i=0}^\infty \mid u_i \in \mathcal{U}\}$. We denote the extended state space by $S=M\times \ell(\mathcal{U})$. Now, given an observable $g:S\rightarrow\mathbb{R}$, we can define the non-autonomous Koopman operator \begin{align}\label{eq:KoopmanDefInput} (\mathcal{K} g)(x,(u_i)_{i=0}^\infty)=g(T(x,u_0),(u_i)_{i=1}^\infty). \end{align} See~\cite{korda2018linear} for more details on this extension. We emphasize that the linear representation of the nonlinear system by the Koopman operator is globally valid and generalizes the local linearization around equilibria \cite{mezic2017koopman}. \section{Construction of Koopman-linear system} \label{sec:EDMD} In this section, we review the construction of the Koopman-linear system as proposed in \cite{korda2018linear}, using the EDMD algorithm \cite{williams2015data}. We seek to approximate the dynamics of the nonlinear flow via a linear time-invariant system of the form \begin{align}\label{eq:predictor} z^+&=Az+Bu\qquad z\in \mathbb{R}^{n},~u\in \mathbb{R}^k, \notag \\ \hat{x} &=Cz. \end{align} Consider the set of states and inputs of the nonlinear system in the form \begin{equation}\label{eq:StateInput} X=[x_1, \ldots, x_K],\quad X^+=[x_1^+,x_2^+,\ldots,x_K^+], \qquad U=[u_1,\ldots,u_K], \end{equation} where $x_j^+=T(x_j,u_j)$. Let \begin{align} \boldsymbol{g}(x)= \left[ \begin{matrix} g_1(x)& \ldots & g_{m}(x) \end{matrix} \right]^\top \end{align} be a given vector of possibly nonlinear observables. These functions may represent user-specified nonlinear functions of the state as well as physical measurements (i.e., outputs) taken on the dynamical system (or nonlinear functions of such outputs). We assume that we only have access to the values of the observables; therefore, explicit knowledge of the state variable in \eqref{eq:StateInput} is not required. By collecting data on the dynamical system, we can form the lifted snapshot data matrices \begin{align}\label{eq:snapshotmatrices} X_\text{lift}&=[\boldsymbol{g}(x_1), \ldots, \boldsymbol{g}(x_K)],\quad X^+_\text{lift}=[\boldsymbol{g}(x^+_1), \ldots, \boldsymbol{g}(x^+_K)], \qquad U=[u_1,\ldots,u_K]. \end{align} These data matrices are the lifted coordinates of the system in the space of observables. Note that, as in~\cite{korda2018linear}, we have not lifted the $U$ coordinates, in order to preserve the linear dependence of the predictor on the original input. The matrices $A$, $B$ and $C$ are then given by the solution to the linear least-squares problems \begin{align}\label{eq:Min} \min_{A,B}\|X^+_\text{lift}-AX_\text{lift}-BU\|_F,\quad \min_{C}\|X-CX_\text{lift}\|_F, \end{align} where $\|\cdot\|_F$ denotes the Frobenius norm. 
The analytical solution to these two problems can be compactly written as \begin{align} \begin{bmatrix}A & B \\ C & 0 \end{bmatrix}= \begin{bmatrix}X^+_{\text{lift}} \\ X\end{bmatrix} \begin{bmatrix} X_\text{lift} \\ U \end{bmatrix}^\dagger. \end{align} When the snapshot matrix $X_\text{lift}$ is fat (i.e., the number of columns exceeds the number of rows), it is more efficient to compute the matrices by solving the normal equations \begin{align}\label{eq:normEq} V= \mathcal{M}G, \end{align} with the unknown matrix variable $\mathcal{M}$ and the given matrices \begin{align*} V=\begin{bmatrix}X^+_{\text{lift}} \\ X\end{bmatrix} \begin{bmatrix} X_\text{lift} \\ U \end{bmatrix}^\top ,\quad G=\begin{bmatrix} X_\text{lift} \\ U \end{bmatrix} \begin{bmatrix} X_\text{lift} \\ U \end{bmatrix}^\top. \end{align*} The solution $\mathcal{M}$ to~(\ref{eq:normEq}) provides the matrices $A$, $B$, $C$ through \[ \mathcal{M} = \begin{bmatrix}A & B \\ C & 0 \end{bmatrix}. \] The matrices $A$ and $B$ describe the linear dynamics of the Koopman-linear state $z=\boldsymbol g(x)$. The prediction of the original state $x$ is obtained simply as $\hat{x} = Cz$. See \cite{korda2017convergence} for a convergence analysis of EDMD for approximation of the Koopman operator. \subsection{Sparse measurements and delay embedding} \label{sec:DelayEmbed} When the number of observables measured on a dynamical system is insufficient for the construction of an accurate model, we can use delay embedding of the observables. Delay embedding (i.e., embedding several consecutive output measurements into a single data point) is a classical technique ubiquitous in the system identification literature (e.g.,~\cite{ljung1998system}), but also in the theory of dynamical systems for geometric reconstruction of nonlinear attractors \cite{takens1981detecting}. It has also been utilized in the context of the Koopman framework in \cite{mezic2004comparison,giannakis2017data,arbabi2017ergodic}. The key feature of delay embedding here is that it provides samplings of extra observables that help realize the Koopman operator. To be more precise, if we have a sequence of measurements of a single observable $h$ at the $n_d$ time instants $t_i,~t_{i+1},\ldots,t_{i+n_d-1}$, we can think of them as samplings of the $n_d$ observables $[h,~\mathcal{K} h,\ldots,~\mathcal{K} ^{n_d-1}h]$ at the single time instant $t_i$. Here, we describe how to incorporate delay embedding into the identification and control of Koopman-linear models. The only requirement for identification is that we have access to at least $n_d+1$ \emph{sequential} time samples on the trajectories, where $n_d$ is the chosen number of delays. Let $\boldsymbol{h} $ be the vector of instantaneously measured observables on the dynamical system (e.g., point measurements of the velocity field), and let $n_d$ be the delay embedding dimension. Consider the state and input matrices described in \eqref{eq:StateInput}, but now assume that they contain a string of sequential samples of length $n_d+1$, i.e., for some $j$, we have \begin{align} x_{i+1}=T(x_i,u_i),\quad i=j,\ldots,j+n_d-1. 
\end{align} We can delay embed the measurements on this string to construct a pair of lifted coordinates in the space of observables, \begin{align}\label{eq:temp2} \zeta_j = \begin{bmatrix} \boldsymbol{h}(x_j) \\ \vdots \\ \boldsymbol{h}(x_{j+n_d-1}) \\ u_j \\ \vdots\\ u_{j+n_d-1} \end{bmatrix},\quad \zeta_j^+ = \begin{bmatrix} \boldsymbol{h}(x_{j+1}) \\ \vdots \\ \boldsymbol{h}(x_{j+n_d}) \\ u_{j+1} \\ \vdots\\ u_{j+n_d} \end{bmatrix}. \end{align} It is easy to check that $\zeta^+_j=\mathcal{K} \zeta_j$. By delay-embedding the observations on all sequential strings of data, we can form the new matrices \begin{align}\label{eq:temp3} \tilde{X}=[\zeta_1,~\zeta_2,\ldots,~\zeta_L],\quad \tilde{Y}=[\zeta_1^+,~\zeta_2^+,\ldots,~\zeta_L^+]. \end{align} We can once again lift the data using a vector of nonlinear user-specified functions $ {\boldsymbol {g}}$ to form the new lifted matrices, \begin{align}\label{eq:temp4} {X_\text{lift}}&=[ {\boldsymbol {g}}(\zeta_1),~ {\boldsymbol {g}}(\zeta_2),\ldots,~ {\boldsymbol {g}}(\zeta_L)], \notag \\ \quad Y_\text{lift}&=[ {\boldsymbol {g}}(\zeta_1^+),~ {\boldsymbol {g}}(\zeta_2^+),\ldots,~ {\boldsymbol {g}}(\zeta_L^+)]. \end{align} With $X_\text{lift}$, $Y_\text{lift}$ and the input matrix $U$ defined, we solve the least-squares problems~(\ref{eq:Min}) to find the linear system matrices. It is very important for the lifting function $ {\boldsymbol g}$ to have a meaningful dependence on the embedded inputs $u_j,\ldots,u_{j+n_d-1}$. This allows EDMD to approximate the dynamics on the \emph{extended} state space and discern the effect of previous inputs on the evolution of the state. The linear predictor in this case would be \begin{align}\label{eq:predictorEmbed} z^+&=Az+Bu \notag \\ \hat{\zeta} &=Cz, \end{align} where $\hat\zeta$ denotes the prediction of the ``embedded'' state $\zeta$ (note that when $\hat\zeta$ is employed for controller design, typically only the part of $\hat \zeta$ corresponding to the most recent output prediction is used). \section{Model predictive control}\label{sec:MPC} The methodology presented in the last section allows us to construct a model of the flow in the form of a linear dynamical system (\ref{eq:predictor}). In this work, we apply MPC to this linear model to control the original nonlinear flow, but other techniques from modern control theory could be applied as well; see the survey~\cite{mayne2000constrained} or the book \cite{grune2011nonlinear} for an overview of MPC. In the context of MPC, we formulate the control objective as minimization of a cost function over a finite-time horizon. The general strategy is to use the model in \cref{eq:predictor} to predict the system evolution over the horizon, and to use these predictions to compute the optimal input sequence minimizing the given cost function along this horizon. Then we \emph{apply only the first element} of the computed input sequence to the real system, thereby producing a new value of the output, and repeat the whole process. This technique is sometimes called \emph{receding horizon control}. In the following, we describe the notation and some mathematical aspects of this technique. The distinguishing feature when using the lifted linear predictor (\ref{eq:predictor}) is that the resulting MPC problem is a convex quadratic program (QP) despite the original dynamics being nonlinear. 
In addition, the complexity of solving the quadratic program can be shown to be independent of the size of the lift if the so-called \emph{dense form} is used \cite{korda2018linear}, thereby allowing for a rapid solution using highly efficient QP solvers tailored to linear MPC applications (in our case qpOASES~\cite{ferreau2014qpoases}). Let $N$ be the length of the prediction horizon, and let $\{u_i\}_{i=0}^{N-1}$ and $\{y_i\}_{i=1}^{N}$ denote the sequences of input and output values over that horizon. A very common choice of cost function is the convex quadratic form \begin{align}\label{eq:MPCcost} J\big(\{u_i\}_{i=0}^{N-1},\{y_i\}_{i=1}^{N}\big) = &~~y_{N}^{\top}Q_Ny_N +q^{\top} y_N \\ &+ \sum_{i=1}^{N-1} y_i^{\top}Q_i y_i + u_i^{\top}R_iu_i+q_i^{\top} y_i+r_i^{\top}u_i \notag\\ &+ u_0^{\top}R_0u_0+r_0^{\top}u_0, \notag \end{align} where $Q_i$, $i=1,\ldots,N$, and $R_i$, $i=0,\ldots,N-1$, are real symmetric positive-definite matrices. The above cost function can be used to formulate many of the common control objectives, including the tracking of a reference signal. For example, assume that we want to control the flow such that its output measurements follow an arbitrary time-dependent output sequence denoted by $\{y^{\mr{ref}}_i\}$. We can formulate this objective as minimization of the distance between $\{y_i\}$ and $\{y^{\mr{ref}}_i\}$, and the corresponding cost function over the finite horizon would be \begin{align}\label{eq:MPCcost1} J_1\big(\{u_i\}_{i=0}^{N-1},\{y_i\}_{i=1}^{N}\big) &= \sum_{i=1}^{N} \big(y_i-y^{\mr{ref}}_i\big)^{\top}Q\big(y_i-y^{\mr{ref}}_i\big),\\ & = \sum_{i=1}^{N} y_i^{\top} Q y_i - 2\big(y^{\mr{ref}}_i\big)^{\top}Q y_i +\big(y^{\mr{ref}}_i\big)^{\top}Q y^{\mr{ref}}_i, \notag \end{align} where $Q$ is the weight matrix that determines the relative importance of the measurements in $y$. Note that the last term in the above equation does not depend on the input or output, and therefore it does not affect the optimal solution. By dropping this term and letting $q_i= - 2Q^{\top}y^{\mr{ref}}_i$, we obtain \begin{align}\label{eq:MPCcost2} J_1\big(\{u_i\}_{i=0}^{N-1},\{y_i\}_{i=1}^{N}\big) & = \sum_{i=1}^{N} y_i^{\top} Q y_i +q_i^{\top}y_i, \end{align} which is a special form of \cref{eq:MPCcost}. In the numerical examples presented in this paper, we use this type of cost function. The MPC controller solves the following optimization problem at \emph{each time step} of the closed-loop operation \begin{align}\label{eq:MPC} \big(\{u_i^\star\}_{i=0}^{N-1},\{y_i^\star\}_{i=1}^{N}\big)&=\arg\min J\big(\{u_i\}_{i=0}^{N-1},\{y_i\}_{i=1}^{N}\big) \nonumber \\ \text{ {s.t.}} & \qquad \quad z_{i+1}=Az_i+Bu_i,\quad i=0,\ldots,N-1 \nonumber \\ & \qquad \quad y_i = Cz_i \\ & \qquad \quad E_i^y y_i + E_i^u u_i \leq b_i,~i=0,\ldots,N-1, \nonumber \\ & \qquad \quad E_N y_N \leq b_N \nonumber \\ & \qquad \quad z_0 = { {\boldsymbol g}}(\zeta_c) \nonumber, \end{align} where $\zeta_c$ is the delay-embedded vector of measurements \begin{align}\label{eq:temp5} \zeta_c = [ \boldsymbol{h}(x_{k-n_d+1}),\ldots, \boldsymbol{h}(x_{k}), u_{k-n_d} , \ldots, u_{k-1} ]^\top. \end{align} The matrices $E_{i=0,\ldots,N-1}^y$, $E_{i=0,\ldots,N-1}^u$ and $E_N$ define polyhedral output and input constraints. This is a standard form of a convex quadratic programming problem which can be solved efficiently by many available QP solvers, in our case qpOASES~\cite{ferreau2014qpoases}. 
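To illustrate how problem~(\ref{eq:MPC}) can be assembled and solved, the following Python sketch uses the generic convex-optimization package \texttt{cvxpy} as a stand-in for a dedicated QP solver such as qpOASES. This is our schematic rendering for a box input constraint and constant weights, not the implementation used in the experiments.
\begin{verbatim}
import cvxpy as cp

def koopman_mpc_step(A, B, C, Q, R, y_ref, z0, u_max, N):
    # One receding-horizon step: minimize the quadratic tracking
    # cost over the Koopman-linear predictor; return first input.
    n, m = A.shape[0], B.shape[1]
    z = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    cost, constr = 0, [z[:, 0] == z0]
    for i in range(N):
        constr += [z[:, i + 1] == A @ z[:, i] + B @ u[:, i],
                   cp.abs(u[:, i]) <= u_max]
        y = C @ z[:, i + 1]
        cost += cp.quad_form(y - y_ref[:, i], Q) + cp.quad_form(u[:, i], R)
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value   # apply only the first optimal input
\end{verbatim}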
The computational complexity can be further reduced by expressing the lifted state variables $z$ in terms of the control inputs $u$, thereby eliminating the dependence on the possibly very large dimension of $z$; see~\cite{korda2018linear} for details. Once the optimal input sequence $\{u_i^\star\}_{i=0}^{N-1}$ is computed, we apply its first element $u_0^\star$ to the system to obtain a new output measurement, which updates the current state $\zeta_c$; the whole process is then repeated in a receding horizon fashion. \Cref{algo:MPC} summarizes the closed-loop control operation, and the entire algorithm for implementation of the Koopman-MPC is illustrated in \Cref{fig:BigPic}. \begin{algorithm} \caption{Koopman MPC -- closed-loop operation}\label{algo:MPC} \begin{algorithmic}[1] \Require $\boldsymbol{h}(x_{-n_d}),\ldots,\boldsymbol{h}(x_{-1}), u_{-n_d},\ldots,u_{-1}$ \For{$k=0,1,\ldots$} \State Measure $\boldsymbol{h}(x_{k})$. \State Set $\zeta_c = [ \boldsymbol{h}(x_{k-n_d+1}),\ldots, \boldsymbol{h}(x_{k}), u_{k-n_d} , \ldots, u_{k-1} ]^\top$. \State Set $z_0 := {\boldsymbol g}(\zeta_c)$ \State Solve~(\ref{eq:MPC}) to get an optimal solution $(u_i^\star)_{i=0}^{N-1}$ \State Apply $u_0^\star$ to the nonlinear system \EndFor \end{algorithmic} \end{algorithm} \begin{figure}[t!] \begin{picture}(300,230) \put(0,0){\centerline{\includegraphics[width=.9\textwidth]{BigPic.pdf}}} \end{picture} \caption{\footnotesize \textbf{Schematic representation of the Koopman-MPC framework for identification and closed-loop control of nonlinear flows.}} \label{fig:BigPic} \end{figure} \section{Numerical Examples \protect\footnote{The MATLAB implementation of the examples is available at \protect\url{https://github.com/arbabiha/KoopmanMPC_for_flowcontrol}.} }\label{sec:Examples} \subsection{Burgers equation} As the first example, we consider the Burgers equation with periodic boundary conditions, \begin{align}\label{eq:Burgers} \frac{\partial v}{\partial t} + v\frac{\partial v}{\partial z}&= \nu \frac{\partial^2 v}{\partial z^2}+ f(z,t),\quad z\in[0,1],~t\in[0,\infty). \\ v(0,t)&=v(1,t) \end{align} where $\nu$ is the kinematic viscosity. Note that we have used $z$ to denote the spatial coordinate in the flow examples, hoping that it will not be confused with the Koopman-linear state in \eqref{eq:predictor}. Similarly to \cite{peitz2017koopman}, we assume the forcing $f(z,t)$ is given by \begin{align}\label{eq:BurgersForcing} f(z,t) &= u_1(t) f_1(z)+u_2(t)f_2(z),\\ &= u_1(t) e^{-\big(15(z-0.25) \big)^2}+u_2(t) e^{-\big(15(z-0.75) \big)^2} \end{align} with the control input $u=(u_1,u_2)\in\mathbb{R}^2$. The control objective is to follow the reference state \begin{align}\label{eq:BurgersRef} v_{\mr{ref}}(z,0\leq t<2)&= \frac{1}{2}, \nonumber\\ v_{\mr{ref}}(z,2\leq t<4)&= 1, \nonumber\\ v_{\mr{ref}}(z,4\leq t<6)& = \frac{1}{2}, \end{align} starting from the initial condition \begin{align}\label{eq:BurgersIC} v(z,0)= a e^{-\big(5(z-0.5) \big)^2}+(1-a) \sin(4\pi z), \end{align} with $a\in [0,1]$ chosen randomly, and with the input signals constrained as $|u_1|,|u_2|<0.1$. To construct the Koopman-linear system, we have used 50 two-second-long trajectories with $\nu =0.01$. Each trajectory starts from a random initial condition of the form \eqref{eq:BurgersIC}. The control input at each time instant is randomly drawn from the uniform distribution on $(u_1,u_2)\in[-0.1,0.1]^2$. 
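Under these choices, the identification step reduces to the least-squares fit~(\ref{eq:Min}). A minimal \texttt{numpy} sketch (our illustration; the lifting used here, state values plus kinetic energy plus a constant, mirrors the full-state observables described below) reads:
\begin{verbatim}
import numpy as np

def lift(X):
    # Columns of X are state snapshots; append the kinetic energy
    # and the constant observable to each column.
    energy = np.sum(X**2, axis=0, keepdims=True)
    return np.vstack([X, energy, np.ones((1, X.shape[1]))])

def edmd(X, Xp, U):
    # Least-squares fit of A, B, C in z+ = A z + B u, x_hat = C z.
    Zl, Zp = lift(X), lift(Xp)
    G = np.vstack([Zl, U])              # regressors [X_lift; U]
    AB = Zp @ np.linalg.pinv(G)         # min || Zp - [A B] G ||_F
    A, B = AB[:, :Zl.shape[0]], AB[:, Zl.shape[0]:]
    C = X @ np.linalg.pinv(Zl)          # min || X - C X_lift ||_F
    return A, B, C
\end{verbatim}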
The Burgers equation is discretized using an upwind finite-difference scheme for the advection term and a central difference for the diffusion term, with 4th-order Runge--Kutta time stepping, on 150 spatial grid points with a time step of 0.01 seconds. In the case of full-state measurements, we use the vector of values of $v$ at the computational grid points, the kinetic energy of $v$, and the constant observable ($\psi(v)=1$). The cost function to be minimized is the kinetic energy ($L^2$-norm) of the state tracking error, \begin{align}\label{eq:BurgersError} e(t) =\int_{0}^{1}|v(z,t)-v_{\mathrm{ref}}(z,t)|^2dz. \end{align} For the sparse measurement scenario, we assume that we only have access to the vector of sparse measurements $v^s=\boldsymbol{h}(v)$ which consists of the values of $v$ at $10$ random grid points. We form the Koopman-linear state vector by including delay embedding of $v^s$ with embedding dimension $n_d=5$, the kinetic energy of the instantaneous measurements $\|v^s\|^2$, and the constant observable. That is, \begin{align} \zeta_c&=[v^s(t_{i-4}),~\ldots,~v^s(t_{i}),~u(t_{i-5}),\ldots,~u(t_{i-1})]^\top,\\ \boldsymbol{g}(\zeta_c)&=[\zeta_c^\top,~\|v^s(t_i)\|^2,~1]^\top \in \mathbb{R}^{62}. \end{align} We define the tracking error as \begin{align}\label{eq:BurgersErrorS} e(t) = \frac{1}{m}\|v^s(t)-v^s_{\mathrm{ref}}(t)\|^2, \end{align} and the predicted objective function used within the MPC is then \[ J = \int_0^T e(t)\, dt, \] where the prediction horizon is set to $T = 0.1$. After spatio-temporal discretization, this objective readily translates to the form~(\ref{eq:MPCcost}) with $N = 10$. The results of the controlled simulation for both scenarios are depicted in \cref{fig:Burgers1}. In both cases, the state successfully tracks the reference signal; however, the controller built via sparse measurements is slightly delayed compared to the case of full-state measurement, which can be attributed to the construction of the Koopman-linear state using delay-embedding. We note that the tracking error during the transient phases is caused by input saturation, as documented by the plot of the control input signal. \begin{figure}[!h] \centerline{\includegraphics[width=1 \textwidth]{Burgers.pdf}} \caption{\footnotesize \textbf{Koopman-MPC for control of the Burgers system.} Input signals, tracking error and state evolution for the closed-loop simulation using Koopman-linear models constructed by full-state measurements (150 observables) and sparse measurements (10 observables).} \label{fig:Burgers1} \end{figure} \paragraph{Robustness with respect to the parameter $\boldsymbol\nu$.} One question that arises in the context of low-dimensional modeling is whether the models constructed at some parameter value would be robust enough for prediction at other values. In order to test the robustness of the Koopman-linear model in the case of the Burgers system, we use the model constructed above using sparse measurements to control the Burgers system at various values of $\nu \in [10^{-4},~0.1]$. The results in \cref{fig:Burgers2} indicate that the Koopman-linear model constructed at the parameter regime $\nu=0.01$ is remarkably effective over a wide parameter range, and the control performance is very robust. As expected, however, the input signal and tracking error in the diffusion-dominated regime (large $\nu$) fluctuate less, as the diffusion helps the controller to stabilize the state around the spatially-uniform reference state in (\ref{eq:BurgersRef}).
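For reference, a minimal Python sketch of such a discretization (first-order upwind advection, central diffusion, classical RK4, periodic boundary) is given below; it is an illustrative reimplementation under the stated assumptions, not the exact code used for the experiments.
\begin{verbatim}
import numpy as np

def burgers_rhs(v, u, nu, dz, z):
    """Semi-discrete Burgers RHS: first-order upwind advection,
    second-order central diffusion, periodic boundary, and the
    two-Gaussian forcing of the example."""
    vp = np.roll(v, -1)          # v_{i+1}
    vm = np.roll(v, 1)           # v_{i-1}
    # upwind: backward difference where v > 0, forward where v < 0
    dvdz = np.where(v > 0, (v - vm) / dz, (vp - v) / dz)
    d2vdz2 = (vp - 2 * v + vm) / dz**2
    f = (u[0] * np.exp(-(15 * (z - 0.25))**2)
         + u[1] * np.exp(-(15 * (z - 0.75))**2))
    return -v * dvdz + nu * d2vdz2 + f

def rk4_step(v, u, nu, dz, z, dt=0.01):
    """Classical 4th-order Runge-Kutta step, input held constant."""
    k1 = burgers_rhs(v, u, nu, dz, z)
    k2 = burgers_rhs(v + 0.5 * dt * k1, u, nu, dz, z)
    k3 = burgers_rhs(v + 0.5 * dt * k2, u, nu, dz, z)
    k4 = burgers_rhs(v + dt * k3, u, nu, dz, z)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
\end{verbatim}
With \texttt{z\_grid = np.linspace(0, 1, 150, endpoint=False)} and \texttt{dz = 1/150}, the stepper \texttt{lambda v, u: rk4\_step(v, u, 0.01, dz, z\_grid)} can serve as the \texttt{step} callable in the data-collection sketch above.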
\begin{figure}[!h] \centerline{\includegraphics[width=1 \textwidth]{BurgersRobust.pdf}} \caption{\footnotesize \textbf{Robustness of Koopman-MPC with respect to the parameter $\boldsymbol\nu$.} The tracking error and state evolution for controlling the flow at various values of $\nu$ using a controller constructed at $\nu=0.01$.} \label{fig:Burgers2} \end{figure} \subsection{2D lid-driven cavity flow} In the second example, we consider an incompressible viscous flow in a square cavity which is driven by the motion of the top lid. The dynamics of the cavity flow is governed by the Navier-Stokes equation, which, in terms of the stream function variable, reads \begin{align} \label{eq:cavityNS} &\frac{\partial }{\partial t} \nabla^2 \psi+ \frac{\partial \psi}{\partial z_2}\frac{\partial}{\partial z_1}\nabla^2 \psi - \frac{\partial \psi}{\partial z_1}\frac{\partial}{\partial z_2}\nabla^2 \psi = \frac{1}{Re}\nabla^4 \psi,\quad (z_1,z_2)\in[-1,1]^2,~t\in[0,\infty), \\ &\psi\bigg|_{z_1=\pm1} = \psi\bigg|_{z_2=-1}= 0 \quad \mr{and} \quad \frac{\partial \psi}{\partial z_2}\bigg|_{z_2=1}= f(z_1,t), \end{align} where $\psi(z_1,z_2,t)$ is the stream function, $Re$ is the Reynolds number, and $f(z_1,t)$ is the velocity of the top lid which acts as the forcing on the system. We assume that we can control the top lid velocity, \begin{align}\label{eq:CavityForcing} f(z_1,t)&= (1+u(t))(1-z_1^2)^2, \end{align} with the control input $u\in\mathbb{R}$. The autonomous cavity flow with $Re\leq 10000$ converges to a steady velocity profile (i.e., a fixed point in the state space) which consists of a large central vortex with downstream corner eddies. At around $Re=10500$, a Hopf bifurcation makes the fixed point unstable and the solutions up to $Re=15000$ converge to a limit cycle. In this regime, the boundary of the central vortex oscillates due to the periodic shedding of vortices from the downstream corners. At higher Reynolds numbers, the flow dynamics grows more complicated and ultimately becomes chaotic. More details on the dynamics and the numerical scheme used to solve \eqref{eq:cavityNS} can be found in \cite{arbabi2017study}. We consider two control problems for the lid-driven cavity flow: \begin{enumerate}[Problem 1), leftmargin=3\parindent] \item We aim to stabilize the limit cycling flow at $Re=13000$ around the fixed point solution at $Re=10000$. This problem has the trivial solution $u_0=-3/13$, since the effective $Re$ is proportional to the top velocity and $u=u_0$ would set back the flow to the fixed point at $Re=10000$. To avoid the trivial solution, we use the input constraints $-2/13<u<2/13$. \item Using the same input constraints, we try to stabilize the limit cycling flow at $Re=13000$ around the unstable fixed point solution at the same $Re$. This problem is especially challenging since the linearization around this fixed point has eigenvalues with positive real part that are not controllable. This implies that the nonlinear system is not stabilizable and there is no linear or nonlinear (regular) feedback control that could achieve full stabilization \cite{sontag2013mathematical}. \end{enumerate} The construction of the Koopman-linear system is similar to the Burgers system; we have used 300 two-second long trajectories of the system with control inputs that are randomly drawn from $[-3/13,3/13]$. The initial condition for each trajectory is a random convex combination of the stable fixed point at $Re=10000$, a point on the limit cycle and the unstable fixed point at $Re=13000$.
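In both examples, turning such snapshot data into the predictor matrices $(A,B,C)$ reduces to two linear least-squares problems, following the EDMD-based procedure of \cite{korda2018linear}; a minimal sketch, assuming column-stacked snapshot matrices as produced earlier:
\begin{verbatim}
import numpy as np

def fit_koopman_predictor(Z, U, Zp, Y):
    """Least-squares fit of the lifted predictor z+ = A z + B u,
    y = C z. Z, Zp: lifted states and successors (columns are
    snapshots); U: inputs; Y: outputs paired with the columns of Z."""
    AB = Zp @ np.linalg.pinv(np.vstack([Z, U]))  # [A B] = Zp pinv([Z;U])
    n = Z.shape[0]
    A, B = AB[:, :n], AB[:, n:]
    C = Y @ np.linalg.pinv(Z)                    # output map
    return A, B, C
\end{verbatim}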
The unstable fixed point is computed via the method proposed in \cite{jordi2014encapsulated}. For the full-state observation, we use the values of the stream function on the $50\times 50$ computational grid, the kinetic energy, and the constant observable. For the case of sparse measurement, we use the values of the stream function at $k=2,5,50,100$ random points inside the flow domain, the $l^2$ vector norm of the observed stream function values, and the constant observable. According to \cref{sec:DelayEmbed}, the dimension of the state space for the Koopman-linear system built from sparse measurements will be $n = 16,31,256,506$, respectively, which is considerably smaller than the Koopman-linear system with full-state observation ($n=2502$). Let $\psi_{\mr{ref}}$ denote the stream function at the target fixed point. In the case of full-state measurements, we define the tracking error to be the kinetic energy of the deviation of the flow from the reference state, i.e., \begin{align}\label{eq:MPCerror} e(t)=e_k(t):= \int_\Omega |\mathbf{v}(t)-\mathbf{v}_{\mr{ref}}|^2 d z_1 d z_2, \end{align} where $\mathbf{v}=(\partial \psi/\partial z_2,-\partial \psi/\partial z_1)$ is the velocity field, and $\Omega$ is the flow domain. In the case of sparse measurements, let $\psi_s$ be the vector of stream function measurements. Then the tracking error will be the $l^2$-norm of the distance from the reference measurements, that is, \begin{align} e(t)=\|\psi_s(t)-\psi_{s,\mr{ref}}\|^2. \end{align} The objective function of the MPC is then given by \[ J = \int_0^T e(t)\,dt, \] where the prediction horizon is set to $T = 0.2$. After spatio-temporal discretization, this objective function readily translates to the form~(\ref{eq:MPCcost}) with $N = 20$. \Cref{fig:CavityError} shows the kinetic energy of the state discrepancy ($e_k$ defined in \eqref{eq:MPCerror}) obtained by applying the Koopman-MPC to problem 1. All the closed-loop simulations start from the same initial condition on the limit cycle. Except for $k=1,2$, the Koopman-MPC is successful in considerably reducing the flow's distance from the desired state over finite time. The control inputs and the flow evolution for some values of $k$ and for full-state observation are shown in \cref{fig:CavityVorticity}. In the case of full-state observation, the input signal is mostly saturated at the lower bound, which results in a lower effective $Re$ for the flow, and hence brings the flow closer to the fixed point at $Re=10000$. However, the controller occasionally uses bursts to speed up the stabilization. The effect of these intermittent bursts on the control can be deduced by comparison with the control input for $k=5$, which is saturated at the lower bound at all times. \Cref{fig:CavityError} also suggests that the control performance of the Koopman-linear systems generally scales with the number of measured observables, i.e., a larger number of observables results in better control performance. This indicates that there is a reasonable tradeoff between the sparsity of measurements and the control performance. Moreover, the full-state observation offers less than 10 percent improvement over $k = 50$ in the terminal discrepancy, which indicates that the cavity flow dynamics is approximately low-dimensional and can be effectively captured using low-dimensional models from data. We have observed that the choice of measurement location in the flow domain may significantly affect the control performance for very small $k$, e.g.
$k=1,2,5$, and the results reported in the figures represent only the typical behavior of controllers built on sparse measurements. \begin{figure}[h!] \centerline{\includegraphics[width=.55 \textwidth]{Cavity_error1.pdf}} \caption{\footnotesize \textbf{Control performance for stabilization around the steady solution at $\boldsymbol{Re=10000}$}. Normalized kinetic energy of the flow distance for cavity flow controllers built by Koopman-MPC with various numbers of measurements ($k$), as well as LQR based on local linearization of Navier-Stokes. } \label{fig:CavityError} \end{figure} A standard technique for flow stabilization is to use the linearized Navier-Stokes equations with linear control strategies. \Cref{fig:CavityError} shows the performance of such a technique (labeled bounded NS-LQR) in achieving the control objective of the first problem. In this method, the Navier-Stokes equation is linearized around the reference fixed point, and an optimal state feedback gain is computed that minimizes the cost function in \eqref{eq:MPCerror} over an infinite-time horizon for the linearized system (see the Appendix for details). At each time step, the computed optimal input is bounded by constraints identical to those of the MPC setting and then applied to the nonlinear system. This method results in an input signal which is saturated at the lower bound, and therefore its performance is identical to the case of Koopman-MPC with $k=5$ measurements. This method is successful in substantially reducing the tracking error, but unlike the Koopman-MPC framework, it is not capable of exploiting the nonlinearities far from the fixed point to speed up the stabilization. Moreover, this method is model-based and its performance is likely to degrade when uncertainties in estimating fluid properties or input modeling errors are present. \begin{figure} \begin{picture}(300,500) \put(0,-50){\centerline{\includegraphics[width=1.2\textwidth]{Cavity_w1.pdf}}} \end{picture} \caption{\footnotesize \textbf{Closed-loop control of cavity flow evolution with Koopman-MPC.} Discrepancy in the vorticity of the controlled state, and input signal, for full-state and sparse measurements. The measurement locations are marked via crosses in the leftmost column. The performance of bounded NS-LQR is identical to the Koopman-MPC with $k=5$.} \label{fig:CavityVorticity} \end{figure} The performance of the Koopman-MPC framework for stabilization around the fixed point at $Re=13000$ (problem 2) is shown in \cref{fig:CavityError2}. Recall that the target fixed point is not stabilizable and no feedback solutions exist that could asymptotically bring the state to the fixed point. Nevertheless, the controllers based on Koopman-linear systems are capable of substantially reducing the tracking error (e.g., down to 40\,\% with $k=100$). The behavior of the controllers is similar to that in the previous problem, i.e., they tend to decrease the effective $Re$ and use occasional bursts to accelerate the stabilization. An interesting observation is that the controllers built on delay-embedding of measurements perform better than the one with full-state measurements. This shows the effectiveness of using nonlinear observables such as delay-embedded measurements to predict the nonlinear evolution. Note that for this problem, the linearized system around the fixed point is not stabilizable and there is no clear starting point for designing feedback control based on the linearization techniques commonly used in flow control. \begin{figure}[h!]
\centerline{\includegraphics[width=1.1 \textwidth]{Cavity_error2.pdf}} \caption{\footnotesize \textbf{Control performance for stabilization around the steady solution at $\boldsymbol{Re=13000}$}: Normalized kinetic energy of the state discrepancy and the input signals for cavity flow controllers built by Koopman-MPC with various numbers of measurements ($k$). The steady solution is not stabilizable and there is no optimal feedback solution for LQR.} \label{fig:CavityError2} \end{figure} \paragraph{Computation time} \Cref{tab:ControlTime} summarizes the average computational time\footnote{The computations were carried out in MATLAB running on a 3.40 GHz Intel Xeon CPU and 64 GB RAM.} required to evaluate the control input at each time step of the closed-loop operation. We report separately the computation time $t_\mr{embed}$ required to build the state of the Koopman linear system by embedding the available measurements and the time $t_{\mr{MPC}}$ required to solve the optimization problem~(\ref{eq:MPC}) in the \emph{dense form}\footnote{The conversion of the optimization problem~(\ref{eq:MPC}) to the dense form consists in solving for the state variables $(z_1,\ldots,z_N)$ in terms of the control inputs $(u_0,\ldots,u_{N-1})$ and the initial state $z_0$ using the linear recursion $z^+ = Az+Bu$; the result of this straightforward linear algebra exercise can be found in the appendix of~\cite{korda2018linear}.} using the active set qpOASES solver~\cite{ferreau2014qpoases}. As evident from the table, the combination of the Koopman linear representation and the convex quadratic programming of the MPC framework leads to the computation of the control input in a fraction of a millisecond. Note that in both examples the bulk of the computation time is spent on embedding the sparse measurements to build the Koopman linear state; this step requires data manipulation carried out purely in MATLAB and could be sped up by a tailored implementation (e.g., in~C). Of course, in a real-world implementation of this framework on nonlinear flows, other factors including the time to record and process the physical measurements should also be considered. \begin{table}[h!] \centering \caption{\small \rm Computation time for Koopman MPC of the cavity flow and Burgers equations and NS-LQR control for the cavity flow. The symbol ``---'' signifies a negligible embedding time in the case of full-state measurement.}\label{tab:ControlTime}\vspace{2mm} \begin{tabular}{ccccc} \toprule & $\#$ of measurements $k$ & embedding dimension $n$ & $t_{\mr{embed}}~[\mr{sec}]$ & $t_{\mr{MPC}}~[\mr{sec}]$\\\midrule Burgers &10 & 62 & 4.71$\,\cdot\,10^{-5}$ & 2.28$\,\cdot\,10^{-7}$ \\ &150 & 152 & --- &2.95$\,\cdot\,10^{-7}$ \\\midrule Cavity &1 & 11 & 1.51$\,\cdot\,10^{-4}$ &8.96$\,\cdot\,10^{-6}$ \\ &2 & 16 & 1.54$\,\cdot\,10^{-4}$ &5.05$\,\cdot\,10^{-5}$ \\ &5 & 31 & 1.52$\,\cdot\,10^{-4}$ &4.19$\,\cdot\,10^{-5}$ \\ &50 & 256 & 1.55$\,\cdot\,10^{-5}$ &3.07$\,\cdot\,10^{-5}$ \\ &100 & 506 & 1.66$\,\cdot\,10^{-4}$ &7.59$\,\cdot\,10^{-5}$ \\ &2500 & 2502 & --- &4.44$\,\cdot\,10^{-6}$ \\ NS-LQR &2500 & 2501 & --- &$t_{\mr{LQR}}=6.55\,\cdot\,10^{-5}$ \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and outlook}\label{sec:conclusion} In this work, we discussed the application of the Koopman-linear MPC framework, first proposed in \cite{korda2018linear}, for data-driven control of nonlinear flows.
The key idea is to approximate the Koopman operator from data to obtain finite-dimensional linear systems that capture the global nonlinear evolution of the system, and to use these systems as the predictor in the model predictive control framework. The combination of the Koopman-linear representation of the dynamics and MPC leads to a convex quadratic programming problem that is solved at each time step; this is accomplished using highly efficient and tailored solvers for linear MPC. Moreover, the proposed framework is based solely on data and is therefore robust to uncertainties and errors in available models of the nonlinear system. In the problems considered in this work, the Koopman MPC framework showed superior performance compared to feedback strategies based on local linearization, with sub-millisecond computation time. An important direction for future work would be to optimize the data collection process to obtain more accurate and efficient Koopman linear models. This requires addressing two problems: first, finding efficient methods for sampling the extended state space of the nonlinear system; in this work we used random initial conditions and random input sequences in the domain of interest to generate data for the EDMD algorithm. The second problem is identifying observables that provide the best finite-dimensional approximation of the Koopman operator in the space of observables. Using machine learning techniques (e.g., \cite{yeung2017learning,takeishi2017learning}) combined with sampling approaches (e.g., \cite{MohrandMezic:2014}) within the Koopman-MPC framework could automate the choice of observables as well as improve control performance. \section*{Acknowledgements} The authors would like to thank Dr. Sebastian Peitz for a constructive exchange of ideas on the subject. This research was supported in part by the ARO-MURI grant W911NF-17-1-0306, with program managers Dr. Matthew Munson and Dr. Samuel Stanton. The research of M. Korda was partly supported by the Swiss National Science Foundation under grant P2ELP2165166. \section*{Appendix: Model-based optimal control of cavity flow} \label{app:LQR} In this section, we describe the design of the LQR controller for the lid-driven cavity flow based on linearization around the steady solutions. Consider the Navier-Stokes equation in \eqref{eq:cavityNS} written as \begin{equation} \frac{\partial}{\partial t}E \psi = f( \psi,u) \end{equation} and let $\psi_0$ be the fixed-point solution corresponding to the input $u_0$, i.e., $f(\psi_0,u_0)=0$. The linearized Navier-Stokes equations around this equilibrium are given by \begin{align} \frac{\partial}{\partial t}E \tilde \psi = A\tilde{\psi} \label{eq:lin_si_descriptor} \end{align} with \begin{align} E:&=\nabla^2 (\cdot),\notag\\ A:=\frac{1}{Re}\nabla^4(\cdot)-\frac{\partial}{\partial z_2}(\cdot)\frac{\partial}{\partial z_1}\nabla^2 \psi_0 + \frac{\partial }{\partial z_1}(\cdot) &\frac{\partial}{\partial z_2}\nabla^2 \psi_0 -\frac{\partial \psi_0}{\partial z_2}\frac{\partial}{\partial z_1}\nabla^2 (\cdot) + \frac{\partial \psi_0}{\partial z_1}\frac{\partial}{\partial z_2}\nabla^2 (\cdot), \notag \end{align} where $\tilde{\psi}$ is the stream function of the linearized equations.
Similar to \eqref{eq:CavityForcing}, the control input to the system is the amplitude of the top lid velocity, which results in the following boundary conditions, \begin{align} \tilde\psi\bigg|_{\partial\Omega}=0,\quad \frac{\partial\tilde\psi}{\partial n} \bigg|_{z_1=\pm1 \text{~or~} z_2=-1}=0,\quad \text{and}\quad \frac{\partial\tilde\psi}{\partial z_2} \bigg|_{z_2=+1}= \tilde u(t)(1-z_1^2)^2, \label{eq:lin_bc} \end{align} where $\tilde{u}$ denotes the deviation from the base input $u_0$. In order to transform the boundary control problem into the standard linear time-invariant (LTI) format, we introduce the extension function \begin{align} H(z_1,z_2)=\frac{1}{4}(1-z_1^2)^2(1+z_2)^2(z_2-1), \end{align} and use the change of variables \begin{align} \eta (z_1,z_2) = \tilde \psi(z_1,z_2) - H(z_1,z_2)\tilde{u}(t). \end{align} The linear system in the new variable reads \begin{align} \frac{\partial}{\partial t} E \eta = A\eta + AH \tilde{u} - EH\frac{d\tilde{u}}{dt}, \end{align} with homogeneous boundary conditions, \begin{align} \eta \bigg|_{\partial\Omega}=\frac{\partial\eta}{\partial n}\bigg|_{\partial\Omega}=0. \label{eq:etaBC} \end{align} The above is in fact an LTI \emph{descriptor} system which can be written as \begin{align*} \frac{\partial}{\partial t} \begin{bmatrix} E && 0 \\ 0 && 1 \end{bmatrix} \begin{bmatrix} \eta \\ \tilde{u} \end{bmatrix} = \begin{bmatrix} A && AH \\ 0 && 0 \end{bmatrix} \begin{bmatrix} \eta \\ \tilde{u} \end{bmatrix} + \begin{bmatrix} -EH \\ 1 \end{bmatrix} \frac{d\tilde{u}}{d t}, \end{align*} or, in more compact form, \begin{align} \mathbf{E}\dot{\mathbf{x}}=\mathbf{A}\mathbf{x} + \mathbf{B}\dot{\tilde{u}},\label{eq:linsys1D} \end{align} where $\mathbf{x}=[\eta,\tilde{u}]^\top$ is the embedded state. We are interested in finding the optimal input $d\tilde{u}/dt$ (and $\tilde{u}$) for the above system that minimizes the cost function, \begin{align}\label{eq:LQRcost} J(\mathbf{x},\dot{\tilde u}) &=\int_0^\infty\bigg[ \int_\Omega |\mathbf{v}|^2 dA + \alpha_1\tilde{u}^2+\alpha_2(\dot{\tilde{u}})^2\bigg] dt \end{align} where $\mathbf{v}=(\partial\tilde{\psi}/\partial z_2,-\partial \tilde{\psi}/\partial z_1)$ is the velocity field of the linearized system. We have used the Chebyshev collocation scheme \cite{trefethen2000spectral} to spatially discretize the linear system in \eqref{eq:linsys1D} and the cost function in \eqref{eq:LQRcost}. We have chosen $\alpha_1=\alpha_2=10^{-6}$ to minimally penalize the input and avoid infinitely large solutions. If the linear system is stabilizable (for example in the case of the steady solution at $Re=10000$), solving the continuous-time algebraic Riccati equation (ARE) (see, e.g., \cite{arnold1984generalized} for the descriptor formulation of the ARE) gives the optimal feedback gain $k=[k_\eta~~k_u]$. The LQR optimal input $\tilde{u}$ is then computed by time stepping the following ordinary differential equation \begin{align*} \dot{\tilde u}= -k_u\tilde u-k_\eta\eta= -k_u\tilde u-k_\eta(\tilde\psi-H\tilde u), \end{align*} and the input $u=u_0+\tilde{u}$ is applied to the nonlinear system. If the fixed point is not linearly stabilizable, such as the steady solution at $Re=13000$, then the ARE does not have a solution and there is no stabilizing input.
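For completeness, the gain computation sketched above can be reproduced with standard numerical tools; the following illustrative Python fragment assumes the spatially discretized matrices $\mathbf E$, $\mathbf A$, $\mathbf B$ and the discretized weights $Q$, $R$ are available, and uses SciPy's generalized continuous-time ARE solver, in which the descriptor matrix enters through the \texttt{e} argument.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def descriptor_lqr_gain(E, A, B, Q, R):
    """Optimal feedback for the descriptor system E x' = A x + B w
    minimizing the integral of x'Qx + w'Rw, via the generalized
    continuous-time algebraic Riccati equation."""
    X = solve_continuous_are(A, B, Q, R, e=E)
    # optimal feedback w = -K x with K = R^{-1} B' X E
    return np.linalg.solve(R, B.T @ X @ E)
\end{verbatim}
Here $\mathbf x=[\eta,\tilde u]^\top$ and the input is $w=\dot{\tilde u}$, so the returned gain corresponds to $k=[k_\eta~~k_u]$; consistent with the $Re=13000$ case discussed above, the solver fails when no stabilizing solution exists.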
{ "timestamp": "2018-06-08T02:04:46", "yymm": "1804", "arxiv_id": "1804.05291", "language": "en", "url": "https://arxiv.org/abs/1804.05291" }
\section{Introduction} Molecular communication (MC) systems encode information into the characteristics of signaling molecules. This is very different from conventional electromagnetic- (EM-) based communication systems that embed data into the properties of EM waves \cite{Nakano_Molecular_2013,Farsad_comprehensive_2016}. MC systems are suitable for communication at small scale and in fluids, where EM-based communication is inefficient or even infeasible. Functioning MC systems are envisioned to enable revolutionary applications, e.g., sensing of a target substance in biotechnology, targeted drug delivery in medicine, and monitoring of oil pipelines or chemical reactors in industrial applications. An important step towards realizing the aforementioned applications is to build testbeds that allow the verification of the theoretical channel models and the transmission strategies proposed in the MC literature. To this end, MC testbeds based on spraying alcohol into open space and using acids and bases within closed vessels have been proposed in \cite{Farsad_Tabletop_2013} and \cite{Farsad_Novel_2017}, respectively. These testbeds have been extended to multiple-input multiple-output (MIMO) systems, and improved channel models have been proposed to account for discrepancies between theory and experimental results \cite{Koo_Molecular_2016,Farsad_Channel_2014}. Recently, an in-vessel MC testbed was proposed in \cite{TestBed_Harold} that uses specifically designed magnetic nanoparticles as information carriers, which are biocompatible and clinically safe, and do not interfere with chemical processes as alcohol \cite{Farsad_Tabletop_2013} or acids and bases \cite{Farsad_Novel_2017} may do. Nevertheless, the aforementioned MC testbeds are all at macroscale, i.e., with dimensions on the order of several tens of centimeters, whereas many prospective applications of MC systems are envisioned to be at microscale. Biologically inspired experimental studies have been conducted in \cite{Krishnaswamy_Time_2013,Nakano_Microplatform_2008,Felicetti_Modeling_2014,Akyildiz_testbed_2015,Nakano_Interface_2014}. In particular, in \cite{Krishnaswamy_Time_2013}, bacterial populations were used as transceivers connected through a microfluidic pathway. In \cite{Felicetti_Modeling_2014}, soluble CD40L molecules were released from platelets (as transmitter) into a fluid medium that, upon contact, triggered the activation of endothelial cells (as receiver). Moreover, in \cite{Nakano_Microplatform_2008}, a microplatform was designed to demonstrate the propagation of molecular signals through a line of patterned HeLa cells (human cervical cancer cells) expressing gap junction channels. In \cite{Nakano_Interface_2014}, artificially synthesized materials were embedded into the cytosol of living cells and, in response to stimuli induced in the cell, emitted fluorescence that could be externally detected by fluorescence microscopy. Similarly, in \cite{Akyildiz_testbed_2015}, the response of genetically engineered \textit{Escherichia coli} (\textit{E. coli}) bacteria to the surrounding molecules was used as the basis for the design of a biological receiver. One particular challenge in designing microscale MC testbeds is that an MC system is difficult to control at microscale. To address this issue, in this paper, we propose a biological signal conversion interface which converts an optical signal, which can be easily controlled using a light-emitting diode (LED), into a chemical signal by changing the pH of the environment.
This setup can be seen as a microscale modulator that can be embedded in future MC systems\footnote{Throughout the paper, we use the terms ``optical-to-chemical signal converter'' and ``modulator'' interchangeably.}. The modulator is realized using \textit{E. coli} bacteria that express the light-driven proton pump gloeorhodopsin (GR), a bacterial type~I rhodopsin. Upon inducing external light stimuli, these bacteria can change their surrounding pH level by exporting protons into the environment. The authors of \cite{Choi_Cyanobacterial_2014} examined the proton flux due to illumination of \textit{E. coli} bacteria expressing GR (but not for applications in an MC system). In particular, one proton can be transferred to the periplasmic space in less than $1$~ms from an almost inexhaustible pool inside the cell arising from the cell's energy metabolism \cite{Lanyi_Proton_2006}. As a result, in a bacterial suspension, the change of proton concentration in the surrounding medium can be detected within a few seconds as a change of pH. Therefore, we expect a relatively fast signal conversion with this setup in comparison with the setup in \cite{Krishnaswamy_Time_2013}, where a chemical signal was generated based on gene expression. Using experimentally derived data from our testbed, we develop an analytical model for the induced chemical signal as a function of the applied optical signal. Finally, using a pH sensor as detector, we show for an example scenario that the proposed setup is able to successfully convert an optical signal representing a sequence of binary symbols into a chemical signal with a bit rate of $1$~bit/min using on-off keying (OOK) modulation and differential detection. The proposed setup can serve as the basis for the development of testbeds using other light-driven pumps that generate other chemical signals, e.g., Na$^+$ and K$^+$ ions~\cite{LightPump_2013,LightPump_2017}. We note that the systems in \cite{Nakano_Microplatform_2008,Felicetti_Modeling_2014,Nakano_Interface_2014,Akyildiz_testbed_2015} were demonstrated for a single shot transmission. Furthermore, the setup with continuous transmission in \cite{Krishnaswamy_Time_2013} achieves low data rates on the order of one bit/h. In contrast, the testbed in this paper achieves significantly higher data rates on the order of one bit/min. \section{System Setup and Preliminaries} In this section, first, an overview of the experimental system is provided. Subsequently, the photocycle of bacteriorhodopsin, the main biological mechanism that is exploited for the proposed microscale modulator, is discussed in detail. \begin{figure}[t] \includegraphics[width=0.82\columnwidth]{transmitterModel} \caption{% Biological modulator model. (a) Benchtop experimental setup; (b) Schematic illustration. } \label{Fig:SysMod} \end{figure} \subsection{System Overview} The developed testbed is shown in Fig.~\ref{Fig:SysMod}a and schematically illustrated in Fig.~\ref{Fig:SysMod}b. The proposed modulator is based on \textit{E. coli} bacteria expressing bacteriorhodopsin in their cell membrane, enabling easy-to-control optical signal conversion. A glass tube containing the bacterial suspension is installed in a light-isolated incubator in order to keep environmental conditions, such as temperature, constant. An LED is focused on the bacterial suspension and is controlled using an Arduino microcontroller and a personal computer (PC).
Thereby, the information generated by the transmitter PC is first encoded into an optical signal which is then converted to a chemical signal, i.e., a pH change, by the bacteria. In fact, upon illumination, the bacteriorhodopsin in the bacterial plasma membrane pumps protons out into the channel, see Fig.~\ref{Fig:Bacteria}. Assuming a diluted solution, the proton pumping reduces the pH according to $\text{pH}=-\log_{10}(c_{\text{H}^+})$, where $c_{\text{H}^+}$ is the concentration of protons in mol/l \cite{Farsad_Novel_2017}. In order to evaluate the efficiency of the proposed signal converter, we deploy a pH sensor in the bacterial suspension which tracks the pH variations over time. This sensor reports the pH values to a receiver PC for signal processing. The technical details of the components of the testbed and the cultivation of the bacteria are provided in Section~3, and the modulation and detection schemes used to collect and process the measurement data are presented in Section~5. \begin{figure}[t] \includegraphics[width=0.73\columnwidth]{functionBacteriorhodopsin} \caption{% The light-driven proton pump bacteriorhodopsin. (a) Biological function of bacteriorhodopsin in a native cell; (b) Schematic transmission model. } \label{Fig:Bacteria} \end{figure} \subsection{Bacteriorhodopsin Photocycle} The modulator in this testbed consists of engineered bacterial cells expressing GR inserted into the plasma membrane of the cell\footnote{GR is a specific bacteriorhodopsin belonging to the family of bacterial type~I rhodopsins. Throughout this paper, we use GR and bacteriorhodopsin interchangeably.}. The GR protein provides a gate through the membrane via seven transmembrane domains formed by amino acid helices. Due to a hydrophobic barrier on the cytoplasmic side, the protein does not provide any transport of molecules in the ground state. To perform proton transfer, a chromophore group, the all-trans retinal, is needed. The retinal is integrated into the protein and acts as a biochemical pumping lever. The photocycle of bacteriorhodopsin was investigated intensively over the past decades in the biology community \cite{Lanyi_Bacteriorhodopsin_2004}. In the ground state, retinal is in the all-trans configuration and a proton is bound to the residue of amino acid Asp96 inside the cell on the cytoplasmic side. By the energy of one photon, the retinal is subject to a trans$\rightarrow$cis transition at carbon atom C14 and thereby performs the lever action. As a result, one proton is transferred from the Schiff base to the residue of amino acid Asp85 on the periplasmic side of the protein. Investigations of the bacteriorhodopsin photocycle strongly suggest that the protonation of Asp85 causes the passage of a proton through a water network embedded in the amino acid residues of the protein on the extracellular side \cite{Patterson_Ultrafast_2010}. Hence, the proton can move through the plasma membrane against the electrochemical potential along the amino acid residues inside the protein. Furthermore, Asp85 reprotonates the Schiff base and the retinal regenerates to the ground state, ready for a new cycle. The photo-isomerization of retinal and the release of one proton to the extracellular side is the fastest known bacterial photoreaction and is performed in less than \SI{1}{\micro\second} \cite{Patterson_Ultrafast_2010}. However, the regeneration from the excited to the ground state takes 15~ms, which makes it the time-limiting factor in the photocycle.
In the natural host, the resulting increase of the proton gradient polarizes the membrane, which is used to drive, e.g., an ATPase to convert light energy to chemical energy, see Fig.~\ref{Fig:Bacteria}a, or to drive the flagellar apparatus. Light is one of the most important external signals used to convey information from the external world to biological systems. In fact, in addition to the ion transporting rhodopsins, there are also sensory rhodopsins functioning as light-signal transducers in nature \cite{Kaneko_Conversion_2017}. Therefore, organisms make use of light not only as an energy source but also as an information signal. In this paper, we exploit bacteriorhodopsin to realize a biological modulator. \section{Experimental Setup} In this section, we first describe the procedure for cultivation of the bacteria, the formation of the spheroplasts that are needed for efficient proton pumping, and the measurement procedure. Subsequently, we provide a brief discussion of the variability that is expected to occur in MC systems that employ biological~components. \subsection{Bacterial Cultivation} In this paper, we use genetically modified \textit{E. coli} bacteria, namely the strain \textit{E. coli} $\textrm{DH5}\alpha\textrm{Mcr}$, carrying the vector DNA pKJ900 with the gene encoding GR from \textit{Gloeobacter violaceus} under control of the chemically induced \textit{ptac} promoter, as proposed in \cite{Choi_Cyanobacterial_2014}. Bacteria from a dry agar culture were pre-cultured for 6~h at \SI{37}{\celsius} and shaken at 175 revolutions per minute (rpm) in 20~ml lysogeny broth (LB) medium (i.e., 10~g/l tryptone, 10~g/l NaCl, 5~g/l yeast extract) with \SI{25}{\micro\gram/\milli l} chloramphenicol, to select bacteria with the antibiotic resistance genetically encoded in the vector DNA. Subsequently, for the main culture, 400~ml LB with chloramphenicol was inoculated to a final optical density at 600~nm of OD\textsubscript{600~nm}$=0.02$ (approximately $0.02\times (8\times 10^{8})=1.6\times 10^{7}$ cells/ml), and incubated for 1~h at \SI{37}{\celsius} at 175~rpm constant shaking, to adapt to the fresh medium conditions. Thereafter, \SI{100}{\micro M} isopropyl-$\beta$-D-thiogalactopyranosid (IPTG) for chemical induction of the transcription, and \SI{10}{\micro M} retinal were added. Since \textit{E. coli} cells do not produce retinal, it has to be supplied by the medium in which the cells grow. Afterwards, the main culture was incubated at \SI{35}{\celsius} and 75~rpm in the dark, since lower temperature supports IPTG induction and retinal incorporation seems to be more successful in rather anaerobic conditions \cite{Gopal_Strategies_2013,Hartmann_Anaerobic_1980}. \subsection{Spheroplast Formation} Since GR is located in the plasma membrane, protons are pumped into the periplasmic space between the cytosolic membrane and the outer membrane (OM). Therefore, most released protons are trapped and cannot easily reach the extracellular environment. To address this issue, we standardized a protocol based on sonication with the aim of removing the OM so that the protons are released directly into the surrounding medium. Among the many protocols already described in the literature, using lysozyme resulted in a high and pure yield of spheroplasts but in a lower final volume and concentration of cells compared to sonication \cite{Hobb_Evaluation_2009,Liu_effect_2006}. Thus, OM removal by sonication is the method that best fitted the requirements of the system.
The IPTG-induced cells were harvested by centrifugation (4000~xg, 5~min). Afterwards, the cells were resuspended in 320~ml 0.9\% NaCl in total and exposed 6 times to a 20~s sonication bath on ice (10\% power in a Bandelin Sonorex Digital 10P), each followed by 20~s of regeneration. After centrifugation with 8000~xg for 10~min, the cell pellet was resuspended again in 320~ml 0.9\% NaCl. The proportion of OM removal is strongly dependent on the dilution during sonication. In total, 6 cycles of sonication were performed. After the last centrifugation, the spheroplasts were resuspended in an unbuffered, osmotically balancing solution (120~mM NaCl, 10~mM MgCl$_2$, 10~mM KCl, 10~mM MgSO$_4$, \SI{100}{\micro M} CaCl$_2$) of pH=5.5 and adjusted to an OD\textsubscript{600~nm} of 15 (approximately $15\times (8\times 10^{8})=1.2\times 10^{10}$ cells/ml). The resulting solution was a mixture of spheroplasts (50-60\%, optically estimated), cells with partly removed OM, and intact cells. In a reaction tube, 6~ml of the cell suspension was incubated for 5~h at \SI{35}{\celsius} in the testbed setup, see Fig.~\ref{Fig:SysMod}, stirred at level 6 of an IKA RCT basic magnetic stirrer, and finally dark adapted to the ionic conditions before it was used for signal transmission. \subsection{Measurement} The bacteria, constantly incubated in a dark environment, were illuminated by an LED with an optical power of 1~W, which operated at a wavelength of 550~nm, matching the absorption maximum of GR \cite{Choi_Cyanobacterial_2014}. The LED was controlled by a custom Matlab\textsuperscript{\textregistered} (MathWorks\textsuperscript{\textregistered}, Natick, MA, United States) graphical user interface (GUI), which allowed mapping a user-defined bit sequence to an appropriate sequence of light stimuli. The transmitter PC was connected to an Arduino Mega 2560 (Rev. 3) microcontroller via serial connection. The GUI controlled one of the digital output pins of the microcontroller, which in turn provided the control signal for the custom LED driver circuit PT4115 (CR Powtech, Shanghai, China). The measurement was performed when the temperature was stable at $35\pm\SI{0.2}{\celsius}$ and the pH was adapted to between 5.6 and 5.8, since this was the most effective operating range to generate a strong signal from the bacteria \cite{Wang_Spectroscopic_2003}. In general, the pH signal was documented for at least 30 min to ensure stability. The absolute pH level was detected with a SenTix 950 (Xylem Analytics, WTW, Weilheim, Germany) microelectrode using the potentiometric pH meter inoLab\textsuperscript{\textregistered} Multi 9310 IDS (Xylem Analytics, WTW, Weilheim, Germany). Since our main objective was to characterize the optical-to-chemical signal conversion, the pH microelectrode was inserted directly into the bacterial solution. The measured real-time data were continuously streamed via serial connection to the receiver PC, where they were analyzed, displayed, and stored by a custom Matlab\textsuperscript{\textregistered} GUI. \subsection{Variability in Biological Systems} The proposed testbed is based on living biological organisms. Hence, we expect unique characteristics that are usually not observed in synthetic non-living systems. In particular, in the following, we highlight the factors that may cause variations in the overall system response, which may help in interpreting the experimental data.
These factors include, but are not limited to, the portion of spheroplasts in the cell mixture and the number of GR molecules with integrated retinal in each cell. Moreover, as the bacteria age, changes in the system response could also arise from degenerative processes in the cell. These factors may lead to a baseline drift in the chemical signal over time, cf. Section~5. To minimize these effects, we followed a careful protocol for the preparation of the bacteria and the measurement procedure, as discussed in the previous subsections. Nevertheless, residual variations still exist that will be studied and modeled in the next section. Determining, for example, the exact numbers of spheroplasts and functional GR molecules, and developing efficient protocols to reduce or eliminate variations in these numbers, are important topics for future research. \section{System Characterization} In this section, we develop an analytical model to characterize the chemical signal induced by the bacteria as a function of the applied optical signal. \subsection{System Step Response to Illumination and Darkness} Let $T^{\mathrm{symb}}$ denote the length of a symbol interval. In this paper, we assume a rectangular pulse for the optical signal that spans a fraction $\alpha$ of the symbol interval. In other words, for this pulse shape, the LED is turned on from the beginning of the symbol interval until time $\alpha T^{\mathrm{symb}}$ and is turned off for the remaining time $(1-\alpha) T^{\mathrm{symb}}$ of the symbol interval. Moreover, before transmission starts, the bacteria are in a dark adaptation state and an equilibrium in pH is established. Our motivation for adopting this specific pulse shape is to allow the system to partially return to the equilibrium state after illumination. This is particularly important if pulses are transmitted consecutively, e.g., corresponding to consecutive binary ones for OOK signaling. To characterize this system, in the following, we develop an analytical model for the step response of the system to illumination and darkness. \subsection{Analytical Model} Analytical models to describe the proton release rate or proton concentration (or equivalently pH) as a function of a given induced optical intensity have been developed in \cite{Hamid_Modulator,zifarelli2008buffered}. In particular, in \cite{Hamid_Modulator}, the photocycle of the bacteriorhodopsin was modeled as a Markov chain and the corresponding proton release rate was derived. Moreover, in \cite{zifarelli2008buffered}, the expected pH change in the proximity of a proton pumping cell was derived as a function of time. In this paper, we do not aim to develop such models as the considered system is much more complex than those investigated in \cite{Hamid_Modulator,zifarelli2008buffered}. Nevertheless, we exploit a simple insight that these analytical models provide. Specifically, the model in \cite{zifarelli2008buffered} reveals an exponential change in proton concentration over time in response to a change of the optical intensity, and convergence of the proton concentration to an equilibrium level after sufficient time. We take this insight into account to develop a parametric model. Moreover, from our measurement data, we observed that the proton concentration may exhibit a certain drift that was not predicted in \cite{Hamid_Modulator,zifarelli2008buffered} but is included in our model. This effect can be attributed to a slow variation in the behavior of the bacteria over time, e.g., due to the aging of the bacteria.
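To preview how these ingredients combine, the following minimal Python sketch simulates a proton concentration that relaxes exponentially toward a light-dependent equilibrium, superimposed with a linear drift and additive Gaussian noise; the model is formalized below, and all numerical values here are illustrative placeholders rather than fitted parameters.
\begin{verbatim}
import numpy as np

def simulate_proton_conc(led_on, dt, c0, c_eq, tau, m_d, sigma, rng=None):
    """First-order relaxation of the proton concentration toward a
    light-dependent equilibrium, plus a linear drift and Gaussian
    noise. led_on: boolean array (one entry per sample); c_eq, tau:
    (dark, illuminated) pairs. All parameter values are placeholders."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.empty(len(led_on))
    c_det = c0                                    # deterministic part
    for k, on in enumerate(led_on):
        i = 1 if on else 0
        c_det += dt * (-(c_det - c_eq[i]) / tau[i] + m_d)  # Euler step
        c[k] = c_det + sigma * rng.standard_normal()       # noise
    return c

# Example: 20 one-minute OOK symbols, alpha = 0.25, sampled at 1 Hz.
bits = np.array([1,0,0,1,1,0,0,0,1,0,1,0,1,1,1,0,1,1,0,1])
led = np.concatenate([b * np.r_[np.ones(15), np.zeros(45)]
                      for b in bits]) > 0
c = simulate_proton_conc(led, dt=1.0, c0=2e-6, c_eq=(2e-6, 4e-6),
                         tau=(200.0, 150.0), m_d=-1e-10, sigma=2e-9)
pH = -np.log10(c)                                 # pH = -log10(c_H+)
\end{verbatim}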
Let $c_{\text{H}^+}(t)$ denote the proton concentration as a function of time. Motivated by the measurement data, we model $c_{\text{H}^+}(t)$ as the sum of the following three different components: \textit{i)} \textbf{Slow drift component:} As mentioned before, our measurements exhibit a slow drift over relatively long time intervals (e.g., on the order of $20$ min) compared to the considered symbol interval duration, which is on the order of $1$ min. The concentration change due to this drift is denoted by $d(t)$ and is modeled by the linear deterministic function $d(t)=m^{\mathrm{d}}t$, where $m^{\mathrm{d}}$ is the slope of the drift and the measurement starts at $t=0$. \textit{ii)} \textbf{Signal-dependent component:} A variation in the optical signal causes a change in the proton concentration within each symbol interval. Let $c_{\text{H}^+}^{\mathrm{eq},0}$ and $c_{\text{H}^+}^{\mathrm{eq},1}$ denote the equilibrium proton concentrations, i.e., $\underset{t\to\infty}{\lim}\,\,c_{\text{H}^+}(t)$, under darkness and illumination, respectively, when the drift and noise components of $c_{\text{H}^+}(t)$ are absent. Assuming that the rate of change of the proton concentration, or equivalently of the pH level, in the bacterial suspension is proportional to the deviation from the equilibrium level, the concentration change $x(t)$ can be modeled by the following ordinary differential equation (ODE): \begin{equation} \frac{\mathrm{d}x(t)}{\mathrm{d}t} = -\frac{1}{\tau_i} \left(x(t) - \left(c_{\text{H}^+}^{\mathrm{eq},i} - c_{\text{H}^+}(t_0)\right) \right),\, \label{eq:ODE} \end{equation} where $\tau_0$ and $\tau_1$ are the time constants for the darkness and illumination states, respectively. Considering $x(t_0)=0$ at initial time $t_0$, the ODE in (\ref{eq:ODE}) has the following exponential solution \begin{equation} x(t) = \left(c_{\text{H}^+}^{\mathrm{eq},i} - c_{\text{H}^+}(t_0)\right) \left(1-\exp\left(-(t-t_0)/\tau_i\right)\right). \label{eq:ODE_sol} \end{equation} For the pulse shape introduced in Section~4.1 and assuming that the start of the symbol interval is at time $t_0$, the proton concentration change $x(t)$ within one symbol interval is obtained as \begin{IEEEeqnarray}{ll} x(t) = \begin{cases} \big(c_{\text{H}^+}^{\mathrm{eq},1} - c_{\text{H}^+}(t_0)\big) \big(1-\exp\left(-(t-t_0)/\tau_1\right)\big), \,\, t_0\leq t \leq t_0+\alpha T^{\mathrm{symb}} \\ x(t_0+\alpha T^{\mathrm{symb}})+\big(c_{\text{H}^+}^{\mathrm{eq},0} - c_{\text{H}^+}(t_0+\alpha T^{\mathrm{symb}})\big) \big(1-\exp\left(-(t-t_0-\alpha T^{\mathrm{symb}})/\tau_0\right)\big), \\ \hspace{3.4cm} t_0+\alpha T^{\mathrm{symb}} < t \leq t_0+T^{\mathrm{symb}} \end{cases} \hspace{-0.2cm} \label{eq:overall_sol} \end{IEEEeqnarray} \textit{iii)} \textbf{Random fluctuation component:} There are additional fluctuations in $c_{\text{H}^+}(t)$ which are much faster than the above two components. We model these fluctuations as noise denoted by $e(t)$. This noise may include diffusion (counting) noise, pH sensor circuitry noise, and the noise inherent to the biological machinery of the bacteria. A careful modeling of these noise sources is out of the scope of this paper. One analytical approximation that is often accurate when there are several independent noise sources is to model the overall noise as Gaussian noise. The validity of the Gaussian noise model will be investigated in future work. To summarize, the proton concentration is given by \begin{equation}\label{Eq:Model} c_{\text{H}^+}(t) = c_{\text{H}^+}(t_0) + x(t) + d(t) + e(t).
\end{equation} Note that conditioned on the transmitted symbols, the components $x(t)$ and $d(t)$ are deterministic, whereas $e(t)$ is random. The motivation for adopting the above model comes from both the analysis in \cite{zifarelli2008buffered} and our measurement data, cf. Section~5.2. We emphasize that the above model is not directly derived based on physical laws and is in fact a parametric model whose parameters can be adjusted to fit the measurement data. \section{Experimental Verification} In this section, we present and analyze experimental data obtained by the proposed optical-to-chemical signal conversion interface. To this end, we first present the considered transmission scheme. \subsection{Transmission Scheme} Data transmission is preceded by a period of dark adaptation of \SI{30}{\minute}. Afterwards, the following modulation and detection schemes are adopted. \subsubsection{Modulation} We employ OOK modulation with the pulse shape introduced in Section~4.1. Thereby, for a binary one, the LED illuminates the bacterial suspension for the first fraction $\alpha$ of the symbol interval and is turned off for the remaining fraction, whereas for a binary zero, the LED is turned off during the whole symbol interval. We note that other modulation schemes such as general concentration shift keying and pulse position modulation can be easily realized with our testbed, and the analytical model proposed in Section~4 can also be straightforwardly generalized to these modulation schemes. \subsubsection{Detection} Due to the random fluctuations and the drift observed in the measurement data, a simple threshold detector performs poorly for the original pH signal. To overcome these effects, we employ a smoothing filter to mitigate the random fluctuations and a differential detector to eliminate the drift. The sampling rate of the pH signal is $1$ Hz, i.e., one sample/second. The measured pH signal is smoothed by a moving average filter with a length of \SI{30}{samples} and then differentiated, where the differences are computed between signal values with a distance of \SI{20}{samples}. Then, a threshold detector is applied to recover the data from the differentiated pH signal denoted by $\Delta \text{pH}$. The peak associated with the first binary one bit(s) can be used as a synchronization signal to determine the start of transmission. The value of this peak (these peaks) can be used as a reference to determine the decision threshold. For the example shown in this section, the threshold is set as $\eta=\beta\Delta \text{pH}^{\mathrm{p}}+(1-\beta)\Delta \text{pH}^{\mathrm{d}}$, where $\Delta \text{pH}^{\mathrm{p}}$ is the peak value, $\Delta \text{pH}^{\mathrm{d}}$ is the average $\Delta \text{pH}$ before transmission starts during the dark adaptation, and $\beta\in[0,1]$ is a design parameter that determines how close the threshold is to the peak value $\Delta \text{pH}^{\mathrm{p}}$. \begin{figure} \includegraphics[scale=0.55]{analytic_single_shot} \caption{% a) Optical signal; b) Measured pH and analytical $\text{pH}^{\mathrm{mdl}}$ vs. time. }\label{Fig:SingleShot} \end{figure} \begin{figure} \includegraphics[scale=0.55]{analytic_multiple_shot} \caption{% a) Optical signal corresponding to symbol sequence $[10011000101011101101]$; b) Measured pH and analytical $\text{pH}^{\mathrm{mdl}}$ vs. time.
}\label{Fig:MultipleShot} \end{figure} \subsection{Model Verification} In the following, the accuracy of the model proposed in (\ref{Eq:Model}) is investigated by comparing $\text{pH}^{\mathrm{mdl}} = -\log_{10}( c_{\text{H}^+}(0) + x(t) + d(t))$ with the measurement data. We employ the least-squares error criterion of the Matlab\textsuperscript{\textregistered} Curve Fitting Toolbox\texttrademark~ to obtain the model parameters $c_{\text{H}^+}^{\mathrm{eq},0}$, $c_{\text{H}^+}^{\mathrm{eq},1}$, $\tau_0$, $\tau_1$, and $m^{\mathrm{d}}$. Note that the values of these parameters may be different for different cell cultures. In order to study the effect of the drift under both the illumination and dark states, we consider a long symbol duration of $T^{\mathrm{symb}}=110$ min with $\alpha=0.5$. In Fig.~\ref{Fig:SingleShot}a, we show the optical signal and in Fig.~\ref{Fig:SingleShot}b, we show the corresponding measured pH and the analytical $\text{pH}^{\mathrm{mdl}}$ for one cell culture vs. time. The parameters of the proposed model are found as $c_{\text{H}^+}^{\mathrm{eq},0}=1.53\times10^{-6}$ mol/l (pH of $5.81$), $c_{\text{H}^+}^{\mathrm{eq},1}=1.65\times10^{-6}$ mol/l (pH of $5.78$), $\tau_0=3.18$ min, $\tau_1=1.84$ min, and $m^{\mathrm{d}}=-3.12\times 10^{-5}$ mol/l/s. As expected, the pH level decreases after illumination and increases during darkness; hence, the optical signal is successfully converted to a chemical signal. From the measured pH shown in Fig.~\ref{Fig:SingleShot}b, we observe a baseline drift during both the illumination and darkness intervals. Overall, we observe that the proposed analytical model is in very good agreement with the measurement~data. In Fig.~\ref{Fig:MultipleShot}a, we show the optical signal corresponding to the $20$-symbol sequence $[10011000101011101101]$ with $T^{\mathrm{symb}}=1$ min and $\alpha=0.25$, and in Fig.~\ref{Fig:MultipleShot}b, we show the corresponding measured pH and the analytical $\text{pH}^{\mathrm{mdl}}$ vs. time. The parameters of the proposed model are found as $c_{\text{H}^+}^{\mathrm{eq},0}=2.82\times10^{-6}$ mol/l (pH of $5.54$), $c_{\text{H}^+}^{\mathrm{eq},1}=5.79\times10^{-6}$ mol/l (pH of $5.23$), $\tau_0=6.39$ min, $\tau_1=8.48$ min, and $m^{\mathrm{d}}=-6.43\times 10^{-5}$ mol/l/s. The values of the model parameters are different from those obtained for Fig.~\ref{Fig:SingleShot} since the measurements were gathered from different bacterial cultures. Again, we observe from Fig.~\ref{Fig:MultipleShot}b that the proposed analytical model explains the measurement data well even if multiple symbols are transmitted. \subsection{Signal Conversion} Finally, we show the successful recovery of the following randomly chosen $80$-symbol sequence \begin{IEEEeqnarray}{ll}\label{Eq:Sequence} [10011000&10101110110101111010011001010010 \nonumber\\ &0100111011011101110001001011010010000000] \end{IEEEeqnarray} that is converted from an optical signal to a chemical signal using the proposed experimental setup and the modulation and detection schemes introduced in Section~5.1 with $T^{\mathrm{symb}}=1$ min and $\alpha=0.25$. In Fig.~\ref{Fig:Diff_detection}a, we show the measured pH and the smoothed signal vs. time. As can be observed from this figure, the random noise in the measured pH is efficiently mitigated by smoothing. Nevertheless, different signal drifts can be observed in the intervals $[0,25]$, $[26,44]$, $[45,69]$, and $[70,80]$ min, which are caused by inter-symbol interference (ISI) as well as the baseline drift.
Moreover, these drifts are not present in Fig.~\ref{Fig:Diff_detection}b, which depicts the differentiated signal vs. time. The negative peaks in the differentiated signal, which result from illumination when transmitting a binary ``1'', are very pronounced and substantially exceed the noise level. Thereby, a simple threshold detector using a detection threshold $\eta$ with $\beta=0.25$ can successfully recover all $80$ symbols. \begin{figure} \includegraphics[width=\columnwidth]{diff_detection} \caption{a) Measured pH and smoothed signal vs. time for the symbol sequence in (\ref{Eq:Sequence}); b) Differentiated signal used for detection vs. time and the adopted detection threshold. The symbol intervals are represented by vertical dotted lines. For illustration, time intervals where binary symbol ``1'' is transmitted (detected) are highlighted in yellow (blue) in Fig.~\ref{Fig:Diff_detection}a (Fig.~\ref{Fig:Diff_detection}b).}% \label{Fig:Diff_detection}% \end{figure} \section{Conclusions and Future Work} In this paper, we introduced a biological microscale modulator based on \textit{E.~coli} bacteria that express the light-driven proton pump gloeorhodopsin and, in response to external light stimuli, can locally change their surrounding pH level by pumping protons into the channel. We provided an analytical model to characterize the induced chemical signal as a function of the applied optical signal. We further showed that the results from the proposed analytical model are in very good agreement with the measurement data for a sequence of transmitted symbols. Furthermore, using a pH sensor as detector, employing OOK modulation, and detection based on the differential signal, a sample sequence of $80$ consecutive bits was converted to a chemical signal and successfully recovered. We note that the high data rate of at least $1$ bit/min achieved by our testbed is a significant step forward compared to existing organic testbeds (e.g., the data rate of the system in \cite{Krishnaswamy_Time_2013} is approximately $1$ bit/h). In future work, we plan to replace the pH sensor by a bacterial receiver, e.g., one based on a pH-sensitive green fluorescent protein (GFP). Having both an optical-to-chemical transmitter and a chemical-to-optical receiver, we can set up a full MC system at microscale that can be easily controlled and read out at macroscale. \bibliographystyle{myACM}
{ "timestamp": "2018-04-17T02:14:36", "yymm": "1804", "arxiv_id": "1804.05555", "language": "en", "url": "https://arxiv.org/abs/1804.05555" }
\section{Introduction} In many scientific investigations, effect modification discovery is a major goal. For example, in precision medicine research, recommending an appropriate treatment among many existing choices is a central question. Based on a patient's characteristics, such a recommendation amounts to estimating treatment effect modification \cite{Kraemer2013}. Another example is health disparity research, which focuses on discovering modification of the association between disparity categories (e.g., race and socioeconomic status) and health outcomes. Potential modifiers are usually individual risk variables and social factors such as health system access variables \cite{Braveman2006}. In the classical regression modeling framework, this amounts to discovering interactions between covariates and a certain variable of interest. Take the precision medicine example: the goal is to find patient characteristics that interact with the treatment indicator. If the interest focuses on treatment recommendation, then the main effects of these characteristics are irrelevant because they are the same for all treatment choices. Similarly, for the health disparity example, the goal is to find modifiers that interact with the disparity categories. If the interest focuses on elimination of disparity, then the main effects of modifiers are irrelevant because they are the same for all disparity categories. Traditionally, effect modification or statistical interaction discovery is conducted mainly by testing or estimating product terms in outcome models. Such discovery is hard, as it usually requires large sample sizes \cite{Greenland1993}, especially when many covariates are present. Recent works in the area of precision medicine illustrate that when the goal is treatment recommendation, investigation of the product term in an outcome model may not be ideal because the outcome is also affected by covariate main effects \cite{Zhao2012, Tian2014, Xu2015, Chen2017, zhang2012class, lu2013}. As discussed above, these main effects are not relevant for treatment recommendation. Therefore these works focus on learning contrast functions, which are differences of conditional expectations of the outcome under two treatment choices. Most of the existing works use either nonparametric \cite{Zhao2012, zhang2012class} or parametric approaches \cite{Kraemer2013, lu2013, Xu2015}. The nonparametric approach is flexible but may not be ideal when faced with a large number of covariates. Parametric approaches are obviously quite susceptible to model misspecification. \citet{song2017} considered single index models for the contrast function. This relaxation from parametric to semiparametric fills an important middle ground. Obviously, a single index can still be too rigid, and allowing more than one index may provide more flexibility to capture the heterogeneity in effect modification. More importantly, whereas the choice of the estimating equation in \cite{song2017} was intuitive, no systematic investigation was given to explore other possible estimating equations. Therefore issues such as efficiency were left largely untackled. For parametric and nonparametric models, efficiency improvement has been considered, but not in a systematic fashion \citep{Tian2014, Zhou2017, Fu2016, Chen2017}. We consider a more general semiparametric approach which is essentially a multiple index model. Under our framework, we make the following new contributions.
First, based on the well-established semiparametric estimation theory \citep{BKRWbook, Tsiatisbook}, we characterize all valid estimating equations, including the efficient scores under our framework. This leads to many possible choices of estimating equations, and efficiency consideration becomes very natural in our approach. Second, because the multiple index model is intrinsically related to dimension reduction \citep{Xia2002, Xia2007}, our method can also be used as a dimension reduction tool for interaction discovery with a specific variable. Third, we do not restrict the treatment or exposure variable to be binary. The literature on more than two treatment choices seems to be very sparse \citep{Lou2017}. Fourth, we also study the asymptotic and non-asymptotic properties of the resulting estimators based on a careful analysis of the computing algorithm. This enables inference and provides useful insights for using our approach in practice. \section{A semiparametric framework for modeling contrast functions} \label{s:method} Suppose $\boldsymbol{X}\in \mathcal{X}$ is a $p$-dimensional vector of covariates, $Y$ is an outcome, and $T$ is a discrete variable whose effect on $Y$ and modification of this effect by ${\boldsymbol X}$ are of interest. We first consider the case when $T$ has only two levels and denote $\pi_t(\boldsymbol{X}) \equiv P(T=t|\boldsymbol{X})$ for $t\in\{-1, 1\}$. We could also use $\{1, 2\}$, instead of $\{-1, 1\}$, to denote the levels of $T$ and to conform with our notation below for the more general case. However, we keep $\{-1, 1\}$ as it leads to simpler notation in our presentation. The main goal is to learn the following contrast function, \begin{equation}\label{eq:model_contrast} \Delta(\boldsymbol{X}) \equiv E[Y|T=1,\boldsymbol{X}]-E[Y|T=-1,\boldsymbol{X}], \end{equation} based on observed data. When $\Delta(\boldsymbol{X})>0$, $T=1$ rather than $T=-1$ leads to a better clinical outcome for given $\boldsymbol{X}$, and vice versa. We specify $\Delta(\boldsymbol{X})=g(\boldsymbol{B}_0^\top \boldsymbol{X})$, where $g$ is an unknown function and $\boldsymbol{B}_0$ is a $p\times d$ matrix. {To resolve the identifiability issue between $g$ and $\boldsymbol{B}_0$, we assume that the column space of $\boldsymbol{B}_0$ is represented as a point on a Grassmann manifold \citep{cook2007}}. This is a single index ($d=1$) or multiple index ($d>1$) model. In Appendix \ref{appA} we show that our model for \eqref{eq:model_contrast} is equivalent to the following model for the outcome $Y$: \begin{equation}\label{eq:model2} Y=\frac{1}{2}Tg(\boldsymbol{B}_0^\top \boldsymbol{X})+\epsilon(\boldsymbol{X}), \end{equation} where $\epsilon(\boldsymbol{X})$ is some random variable depending on $\boldsymbol{X}$ and satisfying the following conditional mean condition: \begin{equation}\label{eq:condition_error} E\left [\frac{T}{\pi_T(\boldsymbol{X}) }\epsilon(\boldsymbol{X})\middle\vert \boldsymbol{X}\right ]=0. \end{equation} One can further denote $h(T, \boldsymbol{B}_0^\top \boldsymbol{X})\equiv \frac{1}{2}Tg(\boldsymbol{B}_0^\top \boldsymbol{X})$ and write Model~(\ref{eq:model2}) as \begin{equation}\label{eq:model1} Y=h(T, \boldsymbol{B}_0^\top \boldsymbol{X})+\epsilon(\boldsymbol{X}), \end{equation} where $h$ is an unknown function that satisfies $h(1, \cdot)+h(-1, \cdot)=0$, and $\epsilon(\boldsymbol{X})$ satisfies (\ref{eq:condition_error}). If one directly works with Model~(\ref{eq:model1}), the restriction on $h$ above is needed to guarantee the identifiability of $h(T, \boldsymbol{B}_0^\top \boldsymbol{X})$.
Because $T$ is binary, Model~(\ref{eq:model1}) with the restriction on $h$ is equivalent to Model~(\ref{eq:model2}) by some straightforward algebra. Model~(\ref{eq:model2}) or (\ref{eq:model1}) is very flexible. Because (\ref{eq:condition_error}) is specified for the conditional expectation, instead of the distribution, the outcome $Y$ can be of many types. In addition, many forms of $\epsilon(\boldsymbol{X})$ may satisfy (\ref{eq:condition_error}). One example is $\epsilon(\boldsymbol{X}) = f_1(\boldsymbol{X})+ e$ for any $f_1$ and for $e$ that is independent of $T$ given $\boldsymbol{X}$. Clearly, Condition (\ref{eq:condition_error}) is satisfied because $E\left [{T}/{\pi_T(\boldsymbol{X})} \middle\vert \boldsymbol{X} \right ]=0.$ In this particular example, $f_1(\boldsymbol{X})$ can be viewed as the main effect of $\boldsymbol{X}$. Now consider the case when $T$ has $K$ levels and denote $\pi_t(\boldsymbol X) \equiv P(T=t|\boldsymbol X)$ for $t\in\{1,\cdots,K\}$. To fully represent effect modification, we need to use $K-1$ contrasts. For example, when $K=3$, we can use contrasts such as $ E[Y|T=1,\boldsymbol{X}]-E[Y|T=2,\boldsymbol{X}]$ and $E[Y|T=3,\boldsymbol{X}]-\frac{1}{2}(E[Y|T=1,\boldsymbol{X}]+E[Y|T=2,\boldsymbol{X}])$. In general, we extend the concept of the contrast function in (\ref{eq:model_contrast}) to a contrast vector function of length $K-1$ as follows: \begin{equation}\label{eq:trteffect_multi} {\boldsymbol \Delta}(\boldsymbol{X}) \equiv {\boldsymbol\Omega}\left( \begin{matrix} E[Y|T=1,\boldsymbol{X}]\\ \vdots\\ E[Y|T=K,\boldsymbol{X}] \end{matrix} \right), \end{equation} where ${\boldsymbol\Omega}$ is a pre-specified $(K-1)\times K$ matrix. The $K-1$ rows of ${\boldsymbol\Omega}$ represent the contrasts of interest. For $K=2$, ${\boldsymbol\Omega}=(1, -1)$. For the above example of $K=3$, we have \begin{eqnarray*} {\boldsymbol\Omega}= \begin{pmatrix} 1 & -1 & 0 \\ -\frac{1}{2} & -\frac{1}{2} &1 \end{pmatrix}. \end{eqnarray*} For the contrasts to be interpretable, we require the sum of the $i$th row of ${\boldsymbol\Omega}$ to be 0, that is, $\sum_{j=1}^K{\boldsymbol\Omega}_{ij}=0$ for $i=1, \dots, K-1$. We also require ${\boldsymbol\Omega}{\boldsymbol\Omega}^\top$ to be invertible. In this setup, the corresponding model is \begin{equation}\label{eq:model_contrast_multi} {\boldsymbol \Delta}(\boldsymbol{X}) = {\boldsymbol g} (\boldsymbol{B}_0^\top\boldsymbol{X}). \end{equation} Denote ${\boldsymbol\Omega}_{\cdot j}$ as the $j$th column of ${\boldsymbol\Omega}$. Then, similar to the binary setting, the equivalent model for the outcome $Y$ is \begin{equation}\label{eq:model2_multi} Y={\boldsymbol\Omega}_{\cdot T}^\top \left ({\boldsymbol\Omega}{\boldsymbol\Omega}^\top\right)^{-1} {\boldsymbol g}(\boldsymbol{B}_0^\top\boldsymbol{X})+\epsilon(\boldsymbol{X}), \end{equation} where $\epsilon(\boldsymbol{X})$ is some random variable depending on $\boldsymbol{X}$ and satisfying \begin{equation}\label{eq:condition_error_multi} E\left [\frac{{\boldsymbol\Omega}_{\cdot T}}{\pi_T(\boldsymbol{X})}\epsilon(\boldsymbol{X})|\boldsymbol{X}\right ]={\boldsymbol 0}_{K-1}. \end{equation} Here ${\boldsymbol 0}_{K-1}$ is a vector of 0's with length $K-1$. The equivalence of these two models is easily derived by noting that $\sum_{j=1}^K{\boldsymbol\Omega}_{\cdot j}={\boldsymbol 0}_{K-1}$.
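As a quick numeric sanity check of these requirements, the following minimal Python sketch (with an arbitrary illustrative propensity vector) verifies, for the $K=3$ example above, the zero row sums, the invertibility of ${\boldsymbol\Omega}{\boldsymbol\Omega}^\top$, and the identity $E[{\boldsymbol\Omega}_{\cdot T}/\pi_T(\boldsymbol{X})\,|\,\boldsymbol{X}]={\boldsymbol 0}_{K-1}$ that underlies the example $\epsilon(\boldsymbol{X})=f_1(\boldsymbol{X})+e$.
\begin{verbatim}
import numpy as np

# Contrast matrix Omega for the K = 3 example in the text.
Omega = np.array([[1.0, -1.0, 0.0],
                  [-0.5, -0.5, 1.0]])
K = Omega.shape[1]

# Interpretability: every row of Omega sums to zero; equivalently, the K
# columns Omega_{.j} sum to the zero vector of length K - 1.
assert np.allclose(Omega.sum(axis=1), 0.0)

# Omega Omega^T must be invertible (full rank K - 1).
assert np.linalg.matrix_rank(Omega @ Omega.T) == K - 1

# For any propensities pi_t(X) > 0 summing to one,
#   E[Omega_{.T} / pi_T(X) | X] = sum_t pi_t * Omega_{.t} / pi_t = 0,
# which is why epsilon(X) = f_1(X) + e satisfies the conditional mean
# condition whenever e is independent of T given X.
pi = np.array([0.2, 0.5, 0.3])   # illustrative propensity vector
cond_mean = sum(pi[t] * Omega[:, t] / pi[t] for t in range(K))
assert np.allclose(cond_mean, 0.0)
print("all checks passed")
\end{verbatim}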
With some further simple algebra, Model (\ref{eq:model2_multi}) can be shown to be equivalent to \begin{equation}\label{eq:model1_multi} Y=h(T, \boldsymbol{B}_0^\top\boldsymbol{X})+\epsilon(\boldsymbol{X}), \end{equation} where $\epsilon(\boldsymbol{X})$ satisfies Condition (\ref{eq:condition_error_multi}), and $h$ is an unknown function such that $\sum_{t=1}^K h(t, \cdot)=0$. It is easy to see that Models (\ref{eq:model2_multi}) and (\ref{eq:model1_multi}) contain Models (\ref{eq:model2}) and (\ref{eq:model1}) with $K=2$ as special cases, respectively. \section{Tangent spaces and semiparametric efficient scores} For notational simplicity, we denote $\epsilon(\boldsymbol{X})$ as $\epsilon$ in the rest of the article. We present our results under Model \eqref{eq:model2_multi} with the constraint \eqref{eq:condition_error_multi}. We characterize the nuisance tangent space and its orthogonal complement. The efficient score is also derived. We follow closely the notation of \cite{Tsiatisbook} and assume that the function class of interest is the mean-zero Hilbert space $\mathcal{H}=\{f(\epsilon, \boldsymbol{X}, T):E(f)=0\}$. The likelihood of $(\boldsymbol{X}, T, Y)$ is \begin{equation} \eta_{\boldsymbol{X}}(\boldsymbol{X})\pi_T(\boldsymbol{X})\eta_{\epsilon}\left (Y-{\boldsymbol\Omega}_{\cdot T}^\top \left ({\boldsymbol\Omega}{\boldsymbol\Omega}^\top\right)^{-1} {\boldsymbol g}(\boldsymbol{B}_0^\top\boldsymbol{X}),\boldsymbol{X}, T\right ), \end{equation} where $\eta_{\boldsymbol{X}}$ is the density of $\boldsymbol{X}$, and $\eta_{\epsilon}$ is the density of $\epsilon$ conditional on $\boldsymbol X$ and $T$, with respect to some dominating measure. Note that $\eta_{\boldsymbol{X}}$, $\pi_T$, $\eta_{\epsilon}$, and $g$ are infinite-dimensional nuisance parameters. Let $$\boldsymbol{w}_T=\frac{{\boldsymbol\Omega}_{\cdot T}}{\pi_T(\boldsymbol{X})}.$$ The tangent spaces corresponding to $\eta_{\boldsymbol{X}}$, $\eta_{\epsilon}$, and $\pi_T$ are \begin{eqnarray*} \Lambda_{\boldsymbol{X}}&=&\{f(\boldsymbol{X})\in \mathcal{H}: E[f]=0\}, \label{eq:Lambda2s_multi}\\% \Lambda_{\epsilon} &=& \left \{f(\epsilon,\boldsymbol{X}, T)\in \mathcal{H}: E(f|\boldsymbol{X}, T)=0 \text{ and } E\Big[\frac{{\boldsymbol\Omega}_{\cdot T}}{\pi_T(\boldsymbol{X})}f\epsilon|\boldsymbol{X}\Big]=0\right \},\label{eq:Lambda1s_multi} \\% \Lambda_{\pi} &=& \{f(\boldsymbol{X}, T)\in \mathcal{H}: E[f\,|\,\boldsymbol{X}]=0\}. \end{eqnarray*} Through some algebra, we can rewrite $\Lambda_{\pi}$ as \begin{equation*}\label{eq:Lambdapi_multi} \Lambda_{\pi} = \left \{\boldsymbol{w}_T^\top \left ({\boldsymbol\Omega}{\boldsymbol\Omega}^\top\right)^{-1}\boldsymbol h_{\pi}(\boldsymbol{X}), \forall \boldsymbol h_{\pi}(\boldsymbol{X}):\mathcal{X}\mapsto R^{K-1}\right \}. \end{equation*} The tangent space of $\boldsymbol g$ is \begin{equation*}\label{eq:Lambdag_multi} \Lambda_{\boldsymbol g} = \left \{\frac{\eta^{'}_{\epsilon, 1}(\epsilon,\boldsymbol{X}, T)}{\eta_{\epsilon}(\epsilon,\boldsymbol{X}, T)}{\boldsymbol\Omega}_{\cdot T}^\top \left ({\boldsymbol\Omega}{\boldsymbol\Omega}^\top\right)^{-1}\boldsymbol h_{\boldsymbol g}(\boldsymbol{B}_0^\top \boldsymbol{X}),\forall \boldsymbol h_{\boldsymbol g}(\boldsymbol{B}_0^\top \boldsymbol{X}):\mathcal{X}\mapsto R^{K-1}\right \}, \end{equation*} where $\eta^{'}_{\epsilon, 1}(\cdot)$ is the derivative of $\eta_{\epsilon}(\epsilon, \boldsymbol{X}, T)$ with respect to $\epsilon$. For a subspace of $\mathcal{H}$, let ${\perp}$ denote its orthogonal complement.
Denote $\Lambda\equiv \Lambda_{\boldsymbol{X}}+\Lambda_{\epsilon}+\Lambda_{\pi}+\Lambda_{\boldsymbol g}$. Then we have \begin{theorem}\label{thm:nuissance_multi} The orthogonal complement of the nuisance tangent space, $\Lambda^{\perp}$, is the subspace characterized by all functions of the form \begin{equation*} \boldsymbol{w}_T^\top\left [\epsilon- E(\epsilon|\boldsymbol{X})\right ]\left [\boldsymbol{\alpha}(\boldsymbol{X})-E\{\boldsymbol \alpha(\boldsymbol{X})|\boldsymbol{B}_0^\top \boldsymbol{X}\}\right ], \end{equation*} for any function $\boldsymbol{\alpha}(\boldsymbol{X}):\mathcal{X}\mapsto R^{K-1}$. \end{theorem} Detailed proofs of this theorem and the other theorems and corollaries are given in Appendix \ref{appB}. To obtain the efficient score, we need to project the score function onto $\Lambda^{\perp}$. The following theorem provides a formula to project any function onto $\Lambda^{\perp}$ and thus contains the efficient score as a special case. \begin{theorem}\label{lemma:projection_multi} For any function $h(\epsilon, \boldsymbol{X}, T)\in \mathcal{H}$, its projection onto $\Lambda^{\perp}$ is given by \begin{equation*} \boldsymbol{w}_T^\top \left \{\epsilon-E(\epsilon|\boldsymbol{X}) \right \}\boldsymbol C(\boldsymbol{B}_0^\top\boldsymbol{X}), \end{equation*} where \begin{eqnarray*} {\boldsymbol C}(\boldsymbol{B}_0^\top\boldsymbol{X})&=&{\boldsymbol V}({\boldsymbol X})\big \{\boldsymbol D(\boldsymbol{X})-E[{\boldsymbol V}({\boldsymbol X})|\boldsymbol{B}_0^\top \boldsymbol{X}]^{-1}E[{\boldsymbol V}({\boldsymbol X})\boldsymbol D(\boldsymbol{X})|\boldsymbol{B}_0^\top \boldsymbol{X}]\big\},\\ {\boldsymbol V}({\boldsymbol X})^{-1} &=&E(\boldsymbol{w}_T\boldsymbol{w}_T^\top \epsilon^2|\boldsymbol{X})-E(\boldsymbol{w}_T\boldsymbol{w}_T^\top|\boldsymbol{X})E(\epsilon|\boldsymbol{X})^2, \\% \boldsymbol D(\boldsymbol{X})&=&E(\boldsymbol{w}_Th\epsilon|\boldsymbol{X})-E(\epsilon|\boldsymbol{X})E(\boldsymbol{w}_Th|\boldsymbol{X}). \end{eqnarray*} \end{theorem} Note that $\boldsymbol C(\boldsymbol{B}_0^\top\boldsymbol{X})$ depends on ${\boldsymbol X}$ in addition to $\boldsymbol{B}_0^\top\boldsymbol{X}$, but we have suppressed this dependence for notational simplicity. After setting $h$ to be the score function in Theorem \ref{lemma:projection_multi}, we obtain the efficient score in the following corollary. \begin{corollary}\label{thm:effcient_score_multi} The efficient score of $\boldsymbol{B}$ is given by the vectorization of a $d\times p$ matrix whose $(i,j)$ coordinate is given by $$\boldsymbol{w}_T^\top \left \{\epsilon-E(\epsilon|\boldsymbol{X}) \right \}\boldsymbol C_{i,j}^*(\boldsymbol{B}_0^\top\boldsymbol{X}),$$ where \begin{align*} \boldsymbol C_{i,j}^*(\boldsymbol{B}_0^\top\boldsymbol{X})=&{\boldsymbol V}({\boldsymbol X})\left \{X_{j}-E[{\boldsymbol V}({\boldsymbol X})|\boldsymbol{B}_0^\top \boldsymbol{X}]^{-1}E[{\boldsymbol V}({\boldsymbol X})X_{j}\, |\,\boldsymbol{B}_0^\top \boldsymbol{X}] \right \}\\ & \times \partial_i{\boldsymbol g} (\boldsymbol{B}_0^\top \boldsymbol{X}), \end{align*} $X_{j}$ is the $j$th component of ${\boldsymbol X}$, and $\partial_i{\boldsymbol g} $ is the derivative of ${\boldsymbol g}$ with respect to its $i$th index. \end{corollary} As a special case when $K=2$, we have the following corollaries.
\begin{corollary}\label{thm:ortho_space} For $K=2$ and $T\in \{-1, 1\}$, \begin{equation*} \Lambda^{\perp}=\left \{w_T \big[\alpha(\boldsymbol{X})-E\{\alpha(\boldsymbol{X})|\boldsymbol{B}_0^\top \boldsymbol{X}\} \big ] \left [\epsilon-E(\epsilon|\boldsymbol{X})\right ], \forall \alpha(\boldsymbol{X}):\mathcal{X}\mapsto R\right \}, \end{equation*} where $w_T =T/\pi_T({\boldsymbol X})$. \end{corollary} \begin{corollary}\label{lemma:projection} For $K=2$ and $T\in \{-1, 1\}$, the projection of any function $h(\epsilon, \boldsymbol{X}, T)\in \mathcal{H} $ onto $\Lambda^{\perp}$ is given by $$ w_T\, C(\boldsymbol{B}_0^\top\boldsymbol{X})\left \{\epsilon-E[\epsilon|\boldsymbol{X}]\right \}, $$ where \begin{eqnarray*} C(\boldsymbol{B}_0^\top\boldsymbol{X})&=&V({\boldsymbol X})\left \{D(\boldsymbol{X})-\frac{E[V({\boldsymbol X})D(\boldsymbol{X})|\boldsymbol{B}_0^\top \boldsymbol{X}]}{E[V({\boldsymbol X})|\boldsymbol{B}_0^\top \boldsymbol{X}]}\right \} \\%, V(\boldsymbol{X})&=&\left \{E(w_T^2\epsilon^2|\boldsymbol{X})-E(\epsilon|\boldsymbol{X})^2E(w_T^2|\boldsymbol{X})\right \}^{-1} \\%, D(\boldsymbol{X})&=&E(w_Th\epsilon|\boldsymbol{X})-E(\epsilon|\boldsymbol{X})E(w_Th|\boldsymbol{X}). \end{eqnarray*} Therefore, the efficient score is \begin{equation*} w_T\, \boldsymbol C^*(\boldsymbol{B}_0^\top\boldsymbol{X})\left \{\epsilon-E(\epsilon|\boldsymbol{X})\right \}, \end{equation*} where \begin{equation*} \boldsymbol C^*(\boldsymbol{B}_0^\top\boldsymbol{X})=V({\boldsymbol X})\, \nabla g(\boldsymbol{B}_0^\top \boldsymbol{X})\otimes \left \{\boldsymbol{X}-\frac{E[V({\boldsymbol X})\boldsymbol{X}|\boldsymbol{B}_0^\top \boldsymbol{X}]}{E[V({\boldsymbol X})|\boldsymbol{B}_0^\top \boldsymbol{X}]}\right \}, \end{equation*} and $\otimes$ is the Kronecker product. \end{corollary} In some cases, such as clinical trials, $\pi_T({\boldsymbol X})$ may be {\bf known}. In this case, there is no corresponding tangent space $\Lambda_{\pi}$, and the nuisance tangent space becomes $\tilde{\Lambda}\equiv \Lambda_{\boldsymbol{X}}+\Lambda_{\epsilon}+\Lambda_{\boldsymbol g}$. Its orthogonal complement $\tilde{\Lambda}^{\perp}$ is then larger and can be shown to be the sum of $\Lambda^{\perp}$ and $\mathcal{S}_2$ defined in (\ref{def_S2}). For any function $h(\epsilon, \boldsymbol{X}, T)$, its projection onto $\tilde{\Lambda}^{\perp}$ is its projection onto ${\Lambda}^{\perp}$ plus an additional term $\boldsymbol{w}_T^\top E(\boldsymbol{w}_T\boldsymbol{w}_T^\top|\boldsymbol{X})^{-1}E(\boldsymbol{w}_Th|\boldsymbol{X})$. However, the efficient score is unchanged since $E(\boldsymbol{w}_Th|\boldsymbol{X})=0$ when $h$ is chosen as the score function. \section{Estimation and algorithm} We first consider estimation of $\boldsymbol{B}_0$ with fixed $d$. Then we propose a method for determining $d$ similar to \citet{Xia2002}. For simplicity, we present our method with $K=2$; generalization to $K>2$ is straightforward. From Corollary \ref{lemma:projection}, we can see that the efficient score is hard to estimate directly due to the many conditional expectations involved. It can be simplified in some special cases, which leads to a locally semiparametric efficient estimator. For example, when $\epsilon$ has constant variance conditional on $\boldsymbol{X}$, ${\boldsymbol V}(\boldsymbol{X})$ becomes a non-zero constant. The efficient score is then $w_T\nabla g(\boldsymbol{B}_0^\top \boldsymbol{X})\otimes (\boldsymbol{X}-E[\boldsymbol{X}|\boldsymbol{B}_0^\top \boldsymbol{X}])(\epsilon-E[\epsilon|\boldsymbol{X}])$.
In general, without the above simplifying assumption on $\epsilon$, the estimating equations in the following class are all unbiased for estimating $\boldsymbol{B}_0$ under Model (\ref{eq:model2_multi}): \begin{equation*} \tilde{S}=\big\{w_T\nabla g(\boldsymbol{B}_0^\top \boldsymbol{X})\otimes \boldsymbol{X} (\epsilon-\eta(\boldsymbol{X})), \forall \eta(\boldsymbol{X}):\mathcal{X}\mapsto R\big\}. \end{equation*} This will be our choice of estimating equations. The obvious benefit of using this function class $\tilde{S}$ is that solving the estimating equations is equivalent to minimizing the loss function $\pi_T({\boldsymbol X})^{-1}{\{Y-\frac{1}{2}Tg(\boldsymbol{B}_0^\top \boldsymbol{X})-\eta(\boldsymbol{X})\}^2}$. The corresponding sample version is \begin{equation}\label{eq:loss} L_g(\boldsymbol{B})=\frac{1}{n}\sum_{i=1}^n\frac{\{Y_i-\frac{1}{2}T_ig(\boldsymbol{B}^{\top} \boldsymbol{X}_i)-\eta(\boldsymbol{X}_i)\}^2}{\pi_{T_i}(\boldsymbol{X}_i)}. \end{equation} For notational simplicity, $\pi_T(\boldsymbol{X})$ is assumed known. However, our method works just as well with a consistently estimated $\pi_T(\boldsymbol{X})$, as we demonstrate in our simulation studies. Note that one still needs to choose $\eta(\boldsymbol{X})$, which can play an important role for the efficiency of the resulting estimator. A convenient choice is $\eta(\boldsymbol{X})=0$, adopted in \cite{Chen2017} and \cite{Tian2014}. Another choice is $\eta(\boldsymbol{X})=\{1-2\pi(\boldsymbol{X})\}g(\boldsymbol{B}_0^\top \boldsymbol{X})$, used by \citet{song2017}. However, we can show that $$\eta^*(\boldsymbol{X})={E[\epsilon|\boldsymbol{X}]}$$ leads to the most efficient estimator. We consider all these choices of $\eta(\boldsymbol{X})$ in our method. Now, to estimate $\boldsymbol{B}_0$ through minimizing $L_g(\boldsymbol{B})$, because $g$ is unknown, we employ a minimum average variance estimation (MAVE) type of method as advocated in \cite{Xia2002}. In particular, minimization is based on the following approximating loss function: \begin{align}\label{eq:tMAVE} &L( \boldsymbol{B}, \{a_j, \boldsymbol{b}_j\}_{j=1}^n)\\ \nonumber =&\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n\frac{\{Y_i-\frac{1}{2}T_i[a_j+ \boldsymbol{b}_j^\top ( \boldsymbol{B}^\top \boldsymbol{X}_i- \boldsymbol{B}^\top \boldsymbol{X}_j)]-\eta(\boldsymbol{X}_i)\}^2}{\pi_{T_i}(\boldsymbol{X}_i)}w_{ij}, \end{align} where $w_{ij}=K_h( \boldsymbol{B}^\top \boldsymbol{X}_j- \boldsymbol{B}^\top \boldsymbol{X}_i)$ and $K_h(\cdot)=\frac{1}{h^d}K(\cdot/h)$ is a kernel function with bandwidth $h$. The minimizer above is expected to recover $span\{\boldsymbol{B}_0\}$, the column space of $\boldsymbol{B}_0$. The extra parameters $a_j \in R$ and ${\boldsymbol b}_j \in R^d$ can be thought of as approximations to $g$ and its gradient at each point $\boldsymbol{B}^\top \boldsymbol{X}_j$, and the kernel weight $w_{ij}$ ensures the adequacy of the local linear approximation of $g$ in its neighborhood. We call our method interaction MAVE (iMAVE). \subsection{The iMAVE method with fixed $\eta$} In this section, the algorithm to minimize (\ref{eq:tMAVE}) is introduced. The procedure is an alternating weighted least-squares algorithm and can be implemented using the following steps. \begin{enumerate} \item An initial estimator, $\boldsymbol{B}_{(1)}$, is obtained. Please see our comments after the algorithm on how to obtain $\boldsymbol{B}_{(1)}$. \item Let $\boldsymbol{B}_{(t)}$ be the estimator at the $t$th iteration.
Calculate $$w_{ij}^{(t)}=K_h( \boldsymbol{B}_{(t)}^\top \boldsymbol{X}_i- \boldsymbol{B}_{(t)}^\top \boldsymbol{X}_j).$$ \item Solve the following weighted least-squares problem to obtain $$(a_j^{(t)}, \boldsymbol{b}_j^{(t)})=\arg \min_{a_j, \boldsymbol{b}_j}L_1(a_j, \boldsymbol{b}_j),$$ for $j=1,\cdots, n$, where \begin{equation*} L_1(a_j, \boldsymbol{b}_j)=\frac{1}{n}\sum_{i=1}^n\frac{\{Y_i-\eta(\boldsymbol{X}_i)-\frac{1}{2}T_i[a_j+ \boldsymbol{b}_j^\top ( \boldsymbol{B}_{(t)}^\top \boldsymbol{X}_i- \boldsymbol{B}_{(t)}^\top \boldsymbol{X}_j)]\}^2}{\pi_{T_i}(\boldsymbol{X}_i)}w_{ij}^{(t)}. \end{equation*} \item Solve the following weighted least-squares problem to obtain $$\tilde{\boldsymbol{B}}_{(t+1)}=\arg\min_{\boldsymbol{B}} L_2(\boldsymbol{B}),$$ where \begin{align*} &L_2(\boldsymbol{B})\\ \nonumber =&\frac{1}{n^2}\sum_{j=1}^n\sum_{i=1}^n\frac{\{Y_i-\eta(\boldsymbol{X}_i)-\frac{1}{2}T_i[a_j^{(t)}+ {\boldsymbol{b}_j^{(t)}}^\top ( \boldsymbol{B}^\top \boldsymbol{X}_i- \boldsymbol{B}^\top \boldsymbol{X}_j)]\}^2}{\pi_{T_i}(\boldsymbol{X}_i)}w_{ij}^{(t)}. \end{align*} \item Normalize to obtain $\boldsymbol{B}_{(t+1)}=\tilde{\boldsymbol{B}}_{(t+1)}\{[\tilde{\boldsymbol{B}}_{(t+1)}]^\top \tilde{\boldsymbol{B}}_{(t+1)}\}^{-1/2}$. \item If the discrepancy $|\boldsymbol{B}_{(t+1)}-\boldsymbol{B}_{(t)}|$ is smaller than a pre-specified tolerance, or a maximum number of iterations has been reached, then output $\boldsymbol{B}_{(t+1)}$. Otherwise, go back to Step (2) and start a new iteration. \end{enumerate} The initial estimator $\boldsymbol{B}_{(1)}$ needs to be consistent for our theoretical analysis. To obtain a consistent $\boldsymbol{B}_{(1)}$, one choice is to solve a simplified version of \eqref{eq:tMAVE} obtained by expanding $g$ only at $\boldsymbol 0$: \begin{equation*} L(\boldsymbol{B})=\frac{1}{n}\sum_{i=1}^n\frac{\{Y_i-\frac{1}{2}T_i \boldsymbol{B}^\top \boldsymbol{X}_i\}^2}{\pi_{T_i}(\boldsymbol{X}_i)}\tilde{w}_{i0}, \end{equation*} where $\tilde{w}_{i0}=K_h(\boldsymbol{B}^\top \boldsymbol{X}_i)$. Alternatively, one can utilize the method of \citet{song2017} when $d=1$. In our simulation studies, we find that the convergence of the algorithm is quite insensitive to the choice of $\boldsymbol{B}_{(1)}$. \subsection{The iMAVE2 method with estimated $\eta^*$} \label{subsect:eff aug} The following two-step procedure is proposed to estimate $\eta^*({\boldsymbol X})={E[\epsilon|\boldsymbol{X}]}$ for a given ${\boldsymbol X}$. First, we obtain an estimate $\hat{\boldsymbol B}$ of $\boldsymbol{B}_0$ with a pre-fixed $\eta$. Then $g$ is estimated by \begin{equation}\label{eq:predict_g} \hat{g}(\boldsymbol{X})=\frac{\sum_{i=1}^{n} w_{T_i}Y_iK_{h}(\hat{ \boldsymbol{B}}^\top ( \boldsymbol{X}_i- \boldsymbol{X}))}{\sum_{i=1}^{n} K_{h}(\hat{ \boldsymbol{B}}^\top ( \boldsymbol{X}_i- \boldsymbol{X}))}, \end{equation} where $K_{h}$ is a kernel function with $K_{h}(\boldsymbol{X})=h^{-d}K(\boldsymbol{X}/h)$; the kernel $K$ and bandwidth $h$ can be different from those used before. The estimated residual is $\hat{\epsilon}_i=Y_i-\frac{1}{2}T_i\hat{g}(\hat{\boldsymbol{B}}^\top \boldsymbol X_i)$. We can then estimate $E[\epsilon|\boldsymbol{X}]$ by \begin{equation}\label{eq:estimation_mainEffect} \frac{\sum_{i=1}^n\hat{\epsilon}_i{K}_{h}(\boldsymbol{X}_i-\boldsymbol{X})}{\sum_{i=1}^n{K}_{h}(\boldsymbol{X}_i-\boldsymbol{X})}, \end{equation} where $K_{h}$ is another kernel function with $K_{h}(\boldsymbol{X})=h^{-p}K(\boldsymbol{X}/h)$. Again, the kernel $K$ and bandwidth $h$ can be different from those used before.
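For concreteness, the following minimal Python sketch implements the two kernel estimates \eqref{eq:predict_g} and \eqref{eq:estimation_mainEffect} on synthetic data. The Gaussian kernel, the untuned bandwidths, and a first-stage $\hat{\boldsymbol B}$ taken equal to the truth are illustrative assumptions, not the choices analyzed in our theory.
\begin{verbatim}
import numpy as np

def gauss_kernel(u, h):
    # Product Gaussian kernel K_h(u) = h^{-d} K(u / h); an illustrative
    # choice (our theory uses compactly supported kernels).
    u = np.atleast_2d(u)
    d = u.shape[1]
    return np.exp(-0.5 * np.sum((u / h) ** 2, axis=1)) / (
        h ** d * (2.0 * np.pi) ** (d / 2.0))

def estimate_g(x, X, Y, T, pi, B_hat, h):
    # Nadaraya-Watson estimate of g at x with weights w_T = T / pi_T(X).
    k = gauss_kernel((X - x) @ B_hat, h)
    return np.sum((T / pi) * Y * k) / np.sum(k)

def estimate_eta_star(x, X, eps_hat, h):
    # Kernel estimate of eta*(x) = E[eps | X = x] from fitted residuals.
    k = gauss_kernel(X - x, h)
    return np.sum(eps_hat * k) / np.sum(k)

# Illustration on synthetic data with d = 1.
rng = np.random.default_rng(1)
n, p = 500, 4
X = rng.normal(size=(n, p))
beta0 = np.full(p, 0.5)
T = rng.choice([-1, 1], size=n)        # completely randomized
pi = np.full(n, 0.5)
g_true = X @ beta0                     # a linear g, for illustration
Y = (X @ beta0) ** 2 + 0.5 * T * g_true + rng.normal(0.0, 0.3, n)

B_hat = beta0.reshape(p, 1)            # pretend the first stage is exact
g_hat = np.array([estimate_g(X[i], X, Y, T, pi, B_hat, h=0.3)
                  for i in range(n)])
eps_hat = Y - 0.5 * T * g_hat
eta_hat = np.array([estimate_eta_star(X[i], X, eps_hat, h=0.8)
                    for i in range(n)])
\end{verbatim}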
With an estimated $\hat{\eta}^*$, a possibly improved estimator $\hat{ \boldsymbol{B}}^*$ of ${\boldsymbol B}_0$ can be obtained. We call this efficiency-improved estimation method iMAVE2. Other approaches to obtaining $\eta^*$ can also be considered. For example, it may be estimated based on an external independent dataset or given directly through prior knowledge. When $\eta^*$ cannot be estimated reliably, especially when the dimensionality of $\boldsymbol{X}$ is high or when the sample size $n$ is small, the resulting $\hat{\boldsymbol B}^*$ is still unbiased in principle as long as the estimator is a function of ${\boldsymbol X}$. Therefore, instead of nonparametric estimators, parametric models may also be used to estimate $\eta^*$. \subsection{Dimension determination} \label{subsect:dimension determine} There is a need to determine the dimension $d$, especially when $p$ is large. Many methods proposed in the dimension reduction literature are also applicable in our setting \citep{Koch2007, Schott1994, Cook1998}. In this paper, we adopt the same procedure as \citet{Xia2002}, a consistent procedure based on cross-validation. Given a dimension $d\in \{0, 1, \cdots, p\}$, the procedure goes through the following steps. \begin{enumerate} \item Randomly split the dataset into five folds, with $\mathcal{I}_1,\cdots, \mathcal{I}_5$ the index sets corresponding to these folds. \item Choose four folds as a training data set, and the rest as a testing data set. Our model is fitted on the training data set to obtain an estimate $\hat{\boldsymbol{B}}$ based on iMAVE. We predict the contrast function $g$ on the testing data set using \eqref{eq:predict_g}. \item Calculate the following score: \begin{equation*} CV(d, m)=\frac{1}{|\mathcal{I}_m|}\sum_{i\in \mathcal{I}_m}\left(\frac{1}{2}\frac{T_iY_i}{\pi_{T_i}(\boldsymbol{X}_i)}-\hat{g}_{(-m)}(\hat{\boldsymbol{B}}^\top \boldsymbol{X}_i)\right)^2, \end{equation*} where $\hat{g}_{(-m)}(\cdot)$ is estimated using all folds except the $m$th fold. \item Repeat Steps (2) and (3) for different selections of folds as training and testing data sets, until each fold has been chosen once as a testing data set. Then average $CV(d, m)$, for $m=1,\cdots, 5$, to obtain $CV(d)$. \end{enumerate} The estimated dimension is $ \hat{d}=\arg\min_{0\leq d\leq p} CV(d). $ It is intuitively clear that over-estimating the true dimension $d$ by a small amount is much less of a concern than under-estimating it. \section{Theoretical results} \label{s:theory} In this section, we analyze our estimator in a unified framework of statistical and algorithmic properties, assuming binary $T$ for notational simplicity. We study both iMAVE and iMAVE2. Consistency of the dimension determination procedure can be established in exactly the same way as in \cite{Xia2002}, hence we omit it here. The non-convexity of (\ref{eq:tMAVE}) makes it intractable to obtain theoretical results for prediction or classification error by simply mimicking the usual analysis of empirical risk minimization \citep{Vapnikbook}. It is also hard to analyze the convergence rate or asymptotic distribution of the proposed estimators due to a lack of characterization of the minimizers. On the other hand, because we carry out our optimization by iteratively solving a weighted least-squares problem, we can track the change at each iteration, similar to \cite{Xia2002} and \cite{Xia2007}. This leads us to propose a unified framework of joint statistical and algorithmic analysis.
For any matrix $\boldsymbol{A}$, $|\boldsymbol{A}|$ represents the Frobenius norm of $\boldsymbol{A}$. For any random matrix $\boldsymbol{A}_n$, we say $\boldsymbol{A}_n=O_p(a_n)$ if each entry of $\boldsymbol{A}_n$ is $O_p(a_n)$. Let $\boldsymbol{B}_{(t)}$ be the estimator used in the $t$th iteration, and $\hat{\boldsymbol{B}}$ be the limit of $\boldsymbol{B}_{(t)}$ as $t\to +\infty$. The existence of the limit of $\boldsymbol{B}_{(t)}$, as well as the convergence of the algorithm, can be concluded from the proof, similar to \cite{Xia2007}. Denote $\delta_{\boldsymbol{B}}^{(t)}=|\boldsymbol{B}_{(t)}-\boldsymbol{B}_0|$. Our goal is to answer the following questions for both iMAVE and iMAVE2: \begin{enumerate} \item Suppose that $\delta_{\boldsymbol{B}}^{(1)}$ converges to $0$ at some rate. After $t$ iterations, what is the convergence rate of $\delta_{\boldsymbol{B}}^{(t)}$? \label{Q1} \item What is the convergence rate of $\delta_{\hat{\boldsymbol{B}}}\equiv |\hat{ \boldsymbol{B}}-\boldsymbol{B}_0|$? \label{Q2} \item Is there an asymptotic efficiency gain of iMAVE2 over iMAVE? \label{Q3} \end{enumerate} Questions \ref{Q1} and \ref{Q2} are answered by Theorems \ref{thm2} and \ref{thm1}, respectively. Question \ref{Q3} is answered by Theorems~\ref{thm3} and \ref{thm:efficient}. Theorem~\ref{thm2} is a new result beyond \citet{Xia2002} and \citet{Xia2007}; it essentially quantifies the non-asymptotic properties of our estimators. It implies that under certain conditions, $\delta_{\boldsymbol{B}}^{(t)}$ converges to $0$ at a rate of at least $(n/\log n)^{-1/2}$ almost surely when $t$ is large enough and $d\leq 5$. When $d>5$, the convergence rate is bounded by a quantity related to the bandwidth and $d$, and is slower than $(n/\log n)^{-1/2}$. Theorem~\ref{thm1} implies that under certain conditions, $\delta_{\hat{\boldsymbol{B}}}$ converges to $0$ in probability at the order of $n^{-1/2}$ when $d\leq 5$. When $d>5$, the convergence rate is slower than $n^{-1/2}$. The convergence rate in Theorem~\ref{thm1} differs from that in Theorem~\ref{thm2} by a factor of $\log n$ due to the different modes of convergence. Theorem~\ref{thm2} provides a deeper result with both statistical and algorithmic properties. Theorems~\ref{thm3} and \ref{thm:efficient} provide the asymptotic distributions of the iMAVE and iMAVE2 estimators, respectively. Theorem \ref{thm6} provides the accuracy of estimating $g$ based on $\hat{\boldsymbol{B}}$. Combining these with the previous results in Section \ref{s:method}, we will see that the difference of the asymptotic covariance matrices of iMAVE and iMAVE2 is always positive semi-definite. Thus, iMAVE2 is more efficient than iMAVE. The conditions needed for our theorems are as follows. Let $\xi_{\boldsymbol{B}}(\boldsymbol{u})=E (\boldsymbol{X}\boldsymbol{X}^\top| \boldsymbol{B}^\top \boldsymbol{X}=\boldsymbol{u})$ and $\mu_{\boldsymbol{B}}(\boldsymbol{u})\equiv E(\boldsymbol{X}| \boldsymbol{B}^\top \boldsymbol{X}=\boldsymbol{u})$. We denote the density of $ \boldsymbol{B}^\top \boldsymbol{X}$ as $f_{\boldsymbol{B}}( \boldsymbol{B}^\top \boldsymbol{x})$. \begin{enumerate} \item[(C.1)] The density of $\boldsymbol{X}$, $f_{\boldsymbol{X}}(\boldsymbol{x})$, has bounded 4th order derivatives and compact support.
$\mu_{ \boldsymbol{B}}(\boldsymbol{u})$ and $\xi_{ \boldsymbol{B}}(\boldsymbol{u})$ have bounded derivatives with respect to $ \boldsymbol{u}$ and $ \boldsymbol{B}$, for $ \boldsymbol{B}$ in a small neighborhood of $ \boldsymbol{B}_0$: $| \boldsymbol{B}- \boldsymbol{B}_0|\leq \delta$ for some $\delta>0$. \item[(C.2)] The matrix $\boldsymbol{M}_0=\int \nabla g(\boldsymbol{B}_0^\top \boldsymbol{x})\nabla^\top g(\boldsymbol{B}_0^\top \boldsymbol{x})\times f_{\boldsymbol{B}_0}(\boldsymbol{B}_0^\top \boldsymbol{x})f_{\boldsymbol{X}}(\boldsymbol{x})d\boldsymbol{x}$ has full rank $d$. \item[(C.3)] $K(\cdot)$ is a spherically symmetric univariate density function with bounded 2nd order derivative and compact support. \item[(C.4)] $g$ has a bounded derivative. The error $\epsilon$ is bounded, or unbounded but such that there exist some $M$ and $\nu_0\in [0,+\infty)$ with \begin{equation*}\label{cond:epsilon} E\left \{\exp[w_T\epsilon/M]-1-|w_T\epsilon|/M \, \big| \, \boldsymbol{X}\right \} M^2\leq \nu_0/2. \end{equation*} \item[(C.5)] The bandwidth $h_1=c_1n^{-r_h}$, where $0<r_h\leq 1/(p_0+6)$ and $p_0=\max\{p,3\}$. For $t\geq 2$, $h_t=\max\{r_nh_{t-1},\hbar\}$, where $r_n=n^{-r_h/2}$ and $\hbar=c_3n^{-r'_h}$ with $0<r'_h\leq 1/(d+3)$; here $c_1$--$c_4$ are constants. \item[(C.6)] $f_{\boldsymbol{B}}( \boldsymbol{B}^\top \boldsymbol x)$ is bounded away from 0. In addition, $E[w_TY|\boldsymbol{B}^\top \boldsymbol{X}=\boldsymbol{u}]$ is Lipschitz continuous. \end{enumerate} Condition (C.6) is only needed for Theorem \ref{thm6}. Conditions (C.1)--(C.5) are similar to those of \citet{Xia2007}, except for the requirement of compact support of the covariates. This requirement is needed for iMAVE2 because $g$ needs to be estimated at a certain rate for the asymptotic property of iMAVE2. For iMAVE, this requirement can be replaced by a finite moment condition. The Epanechnikov and quadratic kernels satisfy Condition (C.3). The Gaussian kernel can also be used to guarantee our theoretical results with some modification to the proofs. According to \citet{Xia2007}, Condition (C.2) suggests that the dimension $d$ cannot be further reduced. Condition (C.4) indicates that $\epsilon$ has to be conditionally subgaussian. The bandwidth requirement in Condition (C.5) can easily be met. Condition (C.6) characterizes the smoothness of $g$, as typically required for conditional expectation estimation. \begin{theorem}\label{thm2} Under Conditions (C.1)--(C.5), suppose that the initial estimator $\boldsymbol{B}_{(1)}$ satisfies $\delta_{\boldsymbol{B}}^{(1)}/h_1\to 0$. If $n$ is large enough, then there exists a constant $C_1$ such that when the number of iterations $$t\geq 1+\log \min\left \{\frac{3C_1\{\delta_n+\delta_{d\hbar}^2\hbar+\hbar^4\}}{\delta_{\boldsymbol{B}}^{(1)}+2C_1h_1^4},1\right \} \Big /\log \frac{2}{3},$$ we have $$ \delta_{\boldsymbol{B}}^{(t)}\leq (3C_1+1)\{\delta_n+\delta_{d\hbar}^2\hbar+\hbar^4\} \quad \text{almost surely}, $$ where $\delta_n=(n/\log n)^{-1/2}$ and $\delta_{d\hbar}=(n\hbar^d/\log n)^{-1/2}$. \end{theorem} A simple observation from Theorem \ref{thm2} is that, to reach the same accuracy as $d$ increases, the number of iterations required increases linearly in $d$. This provides useful guidance on the maximum number of iterations for the algorithm.
\begin{theorem}\label{thm1} Under the same conditions as Theorem~\ref{thm2}, there exists a matrix $\boldsymbol{B}_0^{\perp}$, whose column space is the orthogonal complement of the column space of $\boldsymbol{B}_0$, such that $$ \hat{\boldsymbol{B}}=\boldsymbol{B}_0(\boldsymbol I_d+O_p(\hbar^4+\delta_{d\hbar}^2+n^{-1/2}))+\boldsymbol{B}_0^{\perp}O_p(\hbar^4+\delta_{d\hbar}^2+n^{-1/2}). $$ \end{theorem} Theorem~\ref{thm1} implies that when $\hat{\boldsymbol{B}}$ is decomposed with respect to the column space of $\boldsymbol{B}_0$ and its orthogonal complement, the component in the column space of $\boldsymbol{B}_0^{\perp}$ converges to $0$, and the projection of $\hat{\boldsymbol{B}}$ onto the column space of $\boldsymbol{B}_0$ converges to $\boldsymbol{B}_0$. To obtain the $n^{-1/2}$ convergence rate, we need $\hbar^4+\delta_{d\hbar}^2=O(n^{-1/2})$; in this case, $d$ has to be smaller than $5$. \begin{theorem}\label{thm3} Assume the same conditions as Theorem \ref{thm2} and that $\hbar^4+\delta_{d\hbar}^2=o_p(n^{-1/2})$. Denote $\nu_{\boldsymbol{B}}(\boldsymbol{x}) \equiv \mu_{\boldsymbol{B}}(\boldsymbol{B}^\top \boldsymbol{x})-\boldsymbol{x}$. Let $l(\hat{\boldsymbol{B}})$ and $l(\boldsymbol{B}_0)$ be the vectorizations of the matrices $\hat{\boldsymbol{B}}$ and $\boldsymbol{B}_0$, respectively. Then \begin{equation*} \sqrt{n}\{l(\hat{\boldsymbol{B}})-l(\boldsymbol{B}_0)\}\to N(0,\boldsymbol{D}_0^+\boldsymbol{\Sigma}_0\boldsymbol{D}_0^+), \end{equation*} where $\boldsymbol{\Sigma}_0=Var\left \{w_{T_i}\nabla g(\boldsymbol{B}_0^\top \boldsymbol{X}_i)\otimes \nu_{\boldsymbol{B}_0}(\boldsymbol{X}_i)\{\epsilon_i-\eta(\boldsymbol{X}_i)\}\right \}$. The expression for $\boldsymbol{D}_0^+$ can be found in our proof of this theorem in the supplemental document. \end{theorem} \begin{theorem} \label{thm6} Suppose that Conditions (C.1)--(C.6) are satisfied and $g$ is estimated with a kernel $K_{h}$ of some order $m$. Then $h$ can be selected such that, when $n$ is large enough, \begin{equation*} \|\hat{ g}-g\|_{\infty}\leq O\Big\{ {(n/\log n)}^{-\frac{m}{2m+d}} \Big \} \quad \text{almost surely}, \end{equation*} where $m$ can be any integer when $d\leq 5$, but $m \le {4d}/{(d-5)}$ when $d> 5$. \end{theorem} \begin{theorem}\label{thm:efficient} Denote $\delta_{ph}\equiv (nh^p/\log n)^{-1/2}$ and $\delta_{dh_1}\equiv (nh_1^d/\log n)^{-1/2}.$ In iMAVE2, suppose $\|\hat{g}-g\|_{\infty}=O(\delta_{dh_1})$ almost surely, and $\delta_{ph}^2+h^{2m}=o(n^{-1/2})$ when estimating $\eta^*$ by $\hat{\eta}^*$ using \eqref{eq:estimation_mainEffect}. Then, under Conditions (C.1)--(C.5), Theorems \ref{thm2} and \ref{thm1} still hold for iMAVE2, and Theorem \ref{thm3} holds with the asymptotic variance $\boldsymbol{D}_0^+\boldsymbol{\Sigma}_0^*\boldsymbol{D}_0^+$, where $\boldsymbol{\Sigma}_0^*=Var\Big[w_{T_i}\nabla g(\boldsymbol{B}_0^\top \boldsymbol{X}_i)\otimes \nu_{\boldsymbol{B}_0}(\boldsymbol{X}_i)\{\epsilon_i-\eta^*(\boldsymbol{X}_i)\}\Big]$, and $\boldsymbol{\Sigma}_0-\boldsymbol{\Sigma}_0^*$ is positive semi-definite. \end{theorem} Detailed proofs for all theorems are given in the supplemental document. \section{Simulation} \label{s:simulation} Here our method is evaluated and compared with existing methods. In particular, we compare with the outcome weighted learning method based on a logistic loss in \citet{Xu2015}, the modified covariate method under the squared loss proposed in \citet{Tian2014}, and the residual weighted learning method of \citet{Zhou2017} based on a logistic loss.
We first evaluate estimation results assuming $d$ is known and then investigate dimension determination. When $d$ is fixed at 1, our iMAVE method should perform similarly to that of \cite{song2017}, which uses B-splines to estimate $g$. \subsection{Estimation evaluation with known $d$} \label{subsect:simulation_estimation} Data are generated by the following model, \begin{eqnarray} y=(\gamma{\boldsymbol \beta}^\top \boldsymbol X)^2+\frac{1}{2}Tg({\boldsymbol \beta}^\top \boldsymbol X)+\epsilon, \label{simulation model} \end{eqnarray} where $\epsilon\sim N(0,\sigma^2)$ and $g$ is chosen as \begin{enumerate} \item Linear: $g({\boldsymbol {\boldsymbol \beta}}^\top \boldsymbol X)=\tau{\boldsymbol \beta}^\top \boldsymbol X$; \item Logistic: $g({\boldsymbol \beta}^\top \boldsymbol X)=\tau\{ (1+e^{-{\boldsymbol \beta}^\top \boldsymbol X})^{-1}-0.5\}$; \item Gaussian: $g({\boldsymbol \beta}^\top \boldsymbol X)=\tau\{\Phi({\boldsymbol \beta}^\top \boldsymbol X)-0.5\}$, where $\Phi(\cdot)$ is the Gaussian distribution function. \end{enumerate} We set $\gamma=1$, $\sigma=0.6$, and $\tau=7$, and $T$ is generated to be $-1$ or $1$ with equal probability, independently of all other variables. The true ${\boldsymbol \beta}_0$ is chosen to be $(1,1,1,1)^\top$. $\boldsymbol X$ is generated from $N(0,\boldsymbol I_{4\times4})$. The sample size $n$ varies from $200$ and $500$ to $1000$. Results are summarized from 1000 simulated data sets. Table~\ref{tab:coef_low} investigates the asymptotic bias of iMAVE and iMAVE2 and the possible gain in efficiency from the latter. The ratios $\hat{\beta}_j/\hat{\beta}_1$, $j=2,3,4$, are reported due to the Grassmann manifold assumption for identifiability. We can see that both methods are consistent: as the sample size increases, the biases become negligible. There is a noticeable improvement of iMAVE2 over iMAVE in terms of MSE. We further consider prediction results under a known propensity score and under a propensity score estimated by logistic regression. In particular, we investigate the estimated effect modification in terms of rank correlation and classification rate over test data sets generated independently according to the true simulation model above but with sample sizes of $10000$. The classification rate is determined by the sign of the fitted classifier and that of the true $g({\boldsymbol \beta}_0^\top \boldsymbol X)$: for example, for iMAVE and iMAVE2, we evaluate the concordance between $\hat{g}(\hat{\boldsymbol \beta}^\top\! \boldsymbol X)>0$ and $g({\boldsymbol \beta}_0^\top \boldsymbol X)>0$. The rank correlation is determined by the fitted classifier and the true $g({\boldsymbol \beta}_0^\top \boldsymbol X)$: for example, for iMAVE and iMAVE2, we evaluate the rank correlation between $\hat{g}(\hat{\boldsymbol \beta}^\top\! \boldsymbol X)$ and $g({\boldsymbol \beta}_0^\top \boldsymbol X)$. Because the resulting estimators of \cite{Xu2015}, \cite{Tian2014}, and \cite{Zhou2017} are parametric, we also include results for iMAVE(index) and iMAVE2(index), which compare the concordance between $\hat{\boldsymbol \beta}^\top\! \boldsymbol X>0$ and ${\boldsymbol \beta}_0^\top \boldsymbol X>0$ and the rank correlation between $\hat{\boldsymbol \beta}^\top\! \boldsymbol X$ and ${\boldsymbol \beta}_0^\top \boldsymbol X$. This represents a fairer comparison with the parametric methods. The index comparison only makes sense when $g$ is monotone, which is the case in our simulation setting.
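For reproducibility, a minimal Python sketch of this data-generating mechanism and of the two evaluation metrics is given below; the use of Kendall's $\tau$ is our assumption, since the specific rank-correlation variant is not stated, and the perturbed direction stands in for a fitted $\hat{\boldsymbol\beta}$.
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau, norm

def simulate(n, g_name="logistic", gamma=1.0, sigma=0.6, tau=7.0, seed=0):
    # One data set from the simulation model of this subsection.
    rng = np.random.default_rng(seed)
    beta0 = np.ones(4)
    X = rng.normal(size=(n, 4))
    u = X @ beta0
    g = {"linear": lambda u: tau * u,
         "logistic": lambda u: tau * (1.0 / (1.0 + np.exp(-u)) - 0.5),
         "gaussian": lambda u: tau * (norm.cdf(u) - 0.5)}[g_name]
    T = rng.choice([-1, 1], size=n)   # equal probability, independent of X
    Y = (gamma * u) ** 2 + 0.5 * T * g(u) + rng.normal(0.0, sigma, size=n)
    return X, T, Y, g(u)

X, T, Y, g_true = simulate(n=10000)

# Evaluation of a fitted index (a deliberately perturbed direction here,
# standing in for an estimated direction).
score = X @ np.array([1.0, 1.0, 1.0, 0.8])
class_rate = np.mean((score > 0) == (g_true > 0))
rank_corr = kendalltau(score, g_true)[0]
print(class_rate, rank_corr)
\end{verbatim}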
From Figure~\ref{fig:plot1}, our methods achieve the best correct classification rates on the test datasets in all settings with a known propensity score. In terms of rank correlation, iMAVE2(index) is the best, followed by iMAVE(index). The performances of iMAVE and iMAVE2 suffer slightly due to the estimation of $g$. The method of \citet{Tian2014} is slightly better in terms of rank correlation, but if $g$ were not monotone, one could expect its performance to deteriorate. {We further investigate the setting where $\pi_T({\boldsymbol X})$ needs to be estimated. In this case, we generate $T$ from a logistic model with coefficients $\tilde{\boldsymbol \beta}=(0.2,-0.2,0.2,-0.2)^\top$ and then fit a logistic regression for $\pi_T({\boldsymbol X})$. After estimating $\pi_T({\boldsymbol X})$, all methods are implemented with the estimated $\pi_T({\boldsymbol X})$.} From Figure~\ref{fig:plot2}, our methods achieve better correct classification rates and rank correlations than all other methods in all settings with an estimated propensity score. \begin{table} \caption{Simulation results for coefficient estimation.} \label{tab:coef_low} \begin{center} \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{Size}& & & \multicolumn{2}{c}{Linear} & \multicolumn{2}{c}{Gaussian} & \multicolumn{2}{c}{Logistic}\\ \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \\[-1.5ex] & & & iMAVE & iMAVE2 & iMAVE & iMAVE2 & iMAVE & iMAVE2\\[.5ex] \hline\\[-1ex] \multirow{6}{*}{200}& \multirow{3}{*}{mean} & $\hat{\beta}_2/\hat{\beta}_1$ & 0.9995 & 0.9986 & 0.8630 & 0.9161 & 0.7797 & 0.8611 \\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 1.0021 & 1.0021 & 0.8960 & 0.9410 & 0.8192 & 0.8884 \\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 1.0042 & 1.0035 & 0.8891 & 0.9408 & 0.8013 & 0.8802 \\[1ex] \cline{3-9} \\[-1ex] & \multirow{3}{*}{$\sqrt{mse}$} & $\hat{\beta}_2/\hat{\beta}_1$ & 0.0563 & 0.0378 & 0.3122 & 0.2044 & 0.4106 & 0.2890 \\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 0.0586 & 0.0386 & 0.2971 & 0.1977 & 0.4056 & 0.2837\\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 0.0540 & 0.0361 & 0.3075 & 0.2055 & 0.4191 & 0.2847\\[0.5ex] \hline\\[-1ex] \multirow{6}{*}{500}& \multirow{3}{*}{mean} & $\hat{\beta}_2/\hat{\beta}_1$ & 0.9978 & 0.9994 & 0.9526 & 0.9759 & 0.8995 & 0.9484\\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 1.0010 & 1.0004 & 0.9701 & 0.9854 & 0.9193 & 0.9625 \\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 1.0020 & 1.0004 & 0.9452 & 0.9798 & 0.8994 & 0.9477\\[1ex] \cline{3-9} \\[-1ex] & \multirow{3}{*}{$\sqrt{mse}$} & $\hat{\beta}_2/\hat{\beta}_1$ & 0.0372 & 0.0207 & 0.1676 & 0.0975 & 0.2539 & 0.1558 \\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 0.0329 & 0.0188 & 0.1663 & 0.0935 & 0.2587 & 0.1507 \\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 0.0326 & 0.0184 & 0.1675 & 0.0925 & 0.2531 & 0.1505 \\[0.5ex] \hline\\[-1ex] \multirow{6}{*}{1000}& \multirow{3}{*}{mean} & $\hat{\beta}_2/\hat{\beta}_1$ & 1.0015 & 1.0006 & 0.9994 & 1.0032 & 0.9728 & 0.9913 \\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 1.0009 & 1.0007 & 1.0020 & 1.0026 & 0.9794 & 0.9946\\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 0.9993 & 1.0006 & 0.9980 & 1.0018 & 0.9756 & 0.9897 \\[1ex] \cline{3-9} \\[-1ex] & \multirow{3}{*}{$\sqrt{mse}$} & $\hat{\beta}_2/\hat{\beta}_1$ & 0.0233 & 0.0124 & 0.1014 & 0.0515 & 0.1656 & 0.0905 \\ & & $\hat{\beta}_3/\hat{\beta}_1$ & 0.0247 & 0.0125 & 0.1017 & 0.0533 & 0.1672 & 0.0894 \\ & & $\hat{\beta}_4/\hat{\beta}_1$ & 0.0236 & 0.0123 & 0.1033 & 0.0520 & 0.1627 & 0.0885 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{figure} \vspace{6pc} \includegraphics[scale=0.6]{plot1.eps} \caption{Simulation results for rank correlation and
classification rate with {\bf known} $\pi_T({\boldsymbol X})$. The point represents the median, and the vertical line represents the range from the $0.25$ to the $0.75$ quantiles of the results from 1000 simulations. \label{fig:plot1}} \end{figure} \begin{figure} \vspace{6pc} \includegraphics[scale=0.6]{plot2.eps} \caption{Simulation results for rank correlation and classification rate with {\bf estimated} $\pi_T({\boldsymbol X})$. The point represents the median, and the vertical line represents the range from the $0.25$ to the $0.75$ quantiles of the results from 1000 simulations. \label{fig:plot2}} \end{figure} \subsection{Dimension determination} Here we evaluate our dimension determination procedure through simulation. Our data are generated according to the same model \eqref{simulation model} and parameter choices as in Section \ref{subsect:simulation_estimation}, but with the following differences. First, we set $p=10$ and the true $d=2$. Consequently, the function $g$ is \begin{equation*} g(\boldsymbol B^\top \boldsymbol X)=\tau\{\Phi(\boldsymbol \beta_1^\top \boldsymbol X)-0.5\}+\tau\{\Phi(\boldsymbol \beta_2^\top \boldsymbol X)-0.5\}, \end{equation*} where $\boldsymbol \beta_1$ is set as $$(1,1,1,1,1,1,1,1,1,1)^\top$$ and $\boldsymbol \beta_2$ as $$(1,-1,1,-1,1,-1,1,-1,1,-1)^\top.$$ We set $\gamma=0.1$, and the sample size $n$ is fixed at $500$. Over $100$ simulated data sets, our procedure chose the correct dimension $2$ in $72$ data sets, $3$ in $26$, and $4$ in $2$. As mentioned before, slightly over-estimating the dimension is not a big issue, and there was no under-estimation of $d$. \section{Application to a mammography screening study} This is a randomized study that included female subjects who were non-adherent to mammography screening guidelines at baseline (i.e., no mammogram in the year prior to baseline) \citep{champion2007effect}. One primary interest of the study was to compare the intervention effect of phone counseling on mammography screening (phone intervention) versus usual care at 21 months post-baseline. The outcome is whether a subject underwent mammography screening during this time period. There are 530 subjects, with 259 in the phone intervention group and 271 in the usual care group. Baseline covariates include socio-demographics, health belief variables, stage of readiness to undertake mammography screening, and the number of years with a mammogram in the past 2 to 5 years of the study. In total, there are $211$ covariates, including second-order interactions among the covariates. Our methods are compared with \cite{Xu2015}, \cite{Tian2014}, and \cite{Zhou2017}. To evaluate the performances on the real data, we proceed as follows. For a fitted assignment rule, say $\hat{r}(\boldsymbol X)$, denote the treatment decision rule by $\hat{T}(\boldsymbol X)=sign\{\hat{r}(\boldsymbol X)\}$. The following two quantities are then used to evaluate the performances: \begin{equation*} E[\Delta_1]=E[Y|\hat{T}(\boldsymbol X) = 1, T = 1]-E[Y|\hat{T}(\boldsymbol X) = 1, T = -1], \end{equation*} and \begin{equation*} E[\Delta_{-1}]=E[Y|\hat{T}(\boldsymbol X) = -1, T = -1]-E[Y|\hat{T}(\boldsymbol X) = -1, T = 1]. \end{equation*} They represent, within the subgroups recommended $T=1$ and $T=-1$ respectively, the gain in expected outcome of the subjects whose actual treatment agrees with the recommendation over those whose treatment disagrees. If both $E[\Delta_{-1}]$ and $E[\Delta_1]$ are positive, then the estimated treatment decision rule can improve the outcome.
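A minimal sketch of the empirical versions of these two quantities, using simple subgroup means on a test set, is given below; the plain-mean computation is our assumption, and a propensity-weighted version would be natural if the treatment assignment were not balanced.
\begin{verbatim}
import numpy as np

def benefit_estimates(Y, T, r_hat):
    # Empirical E[Delta_1] and E[Delta_{-1}] on a test set.
    # Y: outcomes; T: actual assignments in {-1, 1}; r_hat: fitted rule.
    T_hat = np.where(r_hat > 0, 1, -1)
    d1 = (Y[(T_hat == 1) & (T == 1)].mean()
          - Y[(T_hat == 1) & (T == -1)].mean())
    dm1 = (Y[(T_hat == -1) & (T == -1)].mean()
           - Y[(T_hat == -1) & (T == 1)].mean())
    return d1, dm1

# Hypothetical usage with a fitted rule evaluated on held-out data:
# d1, dm1 = benefit_estimates(Y_test, T_test, r_hat_test)
\end{verbatim}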
The actual evaluation was based on cross-validation. First, $80\%$ of the subjects were randomly selected into a training set and the rest into a testing set. Due to this further reduction of the sample size, we had to reduce the number of covariates for fitting, so we performed screening procedures for all methods in a uniform fashion. In particular, the method of \cite{Tian2014} with a lasso penalty was fitted on the training sets for variable selection. After variable selection, each method was fitted on the selected covariates. For iMAVE and iMAVE2, dimension selection from $d=1,2,3$ was also implemented. Then, the benefit quantities defined above were calculated on the testing set. The cross-validation was based on 100 splits. In Table \ref{tab:compare_realdata}, our methods seem to have advantages, as they lead to larger $\hat{E}[\Delta_1]$ and $\hat{E}[\Delta_{-1}]$. The average percentages of subjects assigned to $T=1$ and $-1$ in the test sets are also given in the table. \begin{table} \caption{Results for the mammography screening study from 100 cross-validations. } \label{tab:compare_realdata} \begin{center} \begin{tabular}{ccccc} \toprule & \multicolumn{2}{c}{$\hat{E}[\Delta_1]$} & \multicolumn{2}{c}{$\hat{E}[\Delta_{-1}]$}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \\[-1.75ex] & & Avg \% of & & Avg \% of \\ Method & Mean(SE)& subj in $T=1$ & Mean(SE)& subj in $T=-1$ \\ \hline \\[-1.5ex] iMAVE & 0.032(0.014) & 42\% & 0.052(0.012) & 58\% \\ iMAVE2 & 0.036(0.014) & 42\% & 0.054(0.012) & 58\% \\ Tian & 0.022(0.013) & 44\% & 0.043(0.011) & 56\%\\ Xu & 0.026(0.012) & 43\% & 0.044(0.012) & 57\%\\ Zhou& 0.020(0.013) &41\% &0.041(0.011) &59\% \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Discussion} In this article, we have proposed a very general semiparametric modeling framework for effect modification estimation. Whereas our main motivating setting is precision medicine, the framework is generally applicable to statistical interaction discovery with a variable of interest in many other settings. For example, in health disparities research, a complex and interrelated set of individual, provider, health system, societal, and environmental factors contributes to disparities in health and health care. Federal efforts to reduce disparities often include a focus on designated priority populations who are particularly vulnerable to health and health care disparities. Our approach seems ideal for data analysis in this setting. When there are many covariates, we have focused on dimension reduction, but one could also easily incorporate variable selection into our framework when the dimension $d$ is fixed. In particular, lasso-type regularization can be used together with our estimating equations. This can be a fruitful path for future work, as identification of important variables is an important practical issue. \section*{Acknowledgements} Research reported in this article was partially funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (ME-1409-21219). The views in this publication are solely the responsibility of the authors and do not necessarily represent the views of PCORI, its Board of Governors or Methodology Committee. \begin{supplement} \sname{Supplemental Materials}\label{suppA} \stitle{Proof of Theorems} \slink[url]{Supplemental Material} \sdescription{Proofs of Theorems \ref{thm2}-\ref{thm:efficient} are contained in \ref{suppA}.
These theorems provide both the asymptotic and non-asymptotic properties of our methods.} \end{supplement} \bibliographystyle{imsart-nameyear}
{ "timestamp": "2018-04-17T02:11:02", "yymm": "1804", "arxiv_id": "1804.05373", "language": "en", "url": "https://arxiv.org/abs/1804.05373" }
\section{Introduction} \label{sec:intro} So far, photon structure has been most efficiently studied with accelerator instruments, where available energies reach the TeV scale. There is, however, some potential in exploring cosmogenic photons as well. Despite cosmic photon fluxes being extremely low in comparison to accelerator data, different physics mechanisms are involved at the production sites and during the subsequent propagation to the Earth. This gives prospects for a complementary study of photons. Also, the energies of cosmogenic photons can be higher than those available in accelerators. Gamma astronomers report power-law spectra of $\gamma$-rays extending without a cut-off or a spectral break to tens of TeV \cite{hess-pevatrons-nature2016} and plan to build instruments capable of detecting photons of energies up to 300~TeV \cite{cta}. But these are not the largest photon energies expected on Earth. Within the research concerning ultra-high energy cosmic rays (UHECR), i.e. particles with energies exceeding 10$^{18}$ eV, all the prominent scenarios predict photon fluxes reaching the Earth. One distinguishes two major classes of UHECR models: ``bottom-up'', based on the acceleration and subsequent interaction of nuclei; and ``top-down'', based on the decay or annihilation of hypothetical supermassive particles (see \cite{Bhattacharjee:1998qc} for a review). The two classes differ significantly in the predicted fractions of photons among UHECR: below 1\% for the former and up to ca.~50\% for the latter. Hence, determining the UHECR mass composition, including the identification of photons or the setting of upper limits on their fluxes, is an effort towards distinguishing between the two major classes, which should give a hint about photon production and properties at the highest energies. Moreover, it is worth emphasizing that any result on UHE photons, including non-observation, is meaningful for the foundations of physics at the highest energies, allowing constraints on e.g. Lorentz invariance violation (LIV) \cite{Galaverni:2007tq}, QED nonlinearities \cite{maccione08}, space-time structure \cite{maccione-liv-spacetime-foam-2010} or the already mentioned ``top-down'' scenarios. The searches performed so far have not confirmed the existence of UHE photons, resulting in upper limits on photon fluxes and fractions \cite{Aglietta:2007yx,Abraham:2009qb,auger-photons-bleve2015,ta-photons-zayyad-2013,ta-photons-rubtsov2015}. In this paper we summarize the recent advances in the UHE photon search based on the data collected by the largest cosmic-ray instrument, the Pierre Auger Observatory~\cite{auger-2015,auger-upgrade}, which have resulted in the most stringent upper limits. The summary is based on Refs.~\cite{auger-photon-pointsources-2014},\cite{auger-diffuse-photon-2017}, and \cite{auger-targeted-photon-2017}. We also briefly sketch an outlook on further possible efforts, complementary to the current searches, based on the cascade approach. \section{Photon signatures} \label{sec:signatures} Investigations of cosmic rays with energies above 10$^{15}$~eV can be done only indirectly, through the interpretation of the properties of extensive air showers (EAS) induced in the atmosphere by primary cosmic particles. The interpretation is made based on interaction models which incorporate cross section data from accelerators within the available energy range and assume extrapolations beyond this range.
Naturally, the higher the primary energy under investigation, the higher the risk that the extrapolations used to interpret the data are mistaken. Another disadvantage and potential source of uncertainty is the progressive degeneracy of the information about the primary particles and their first interactions in the atmosphere as subsequent generations of secondary particles build up. Thus, if there is some mistreatment in the models at the very beginning of air shower creation and propagation, it might significantly affect the final outcome of the analysis. Keeping in mind all the theoretical assumptions implying the existence of uncertainties, an effort is being made to identify the ultra-high energy primary particles that initiate the largest air showers observed. The effort is based on using simulations to predict air shower properties characteristic of primaries of different types and confronting these predictions with observations. EAS initiated by UHE photons should possess two independent properties: significantly delayed development and reduced muon content \cite{Risse:2007sd}. The former signature is based on the expectation that at the highest energies the pair production formation length gets elongated so much that the destructive interference of the fields associated with the atoms and particles near the primary suppresses the pair production process in the upper atmosphere, thus delaying the first interaction and the consequent air shower development. This is the so-called LPM effect \cite{Landau:1953um,Landau:1953gr,Migdal:1956tc}, a standard ingredient in air shower modeling \cite{corsika}. Here it is important to note that the LPM suppression is sensitive to possible LIV effects, which include both an increase and a reduction of the pair production formation length~\cite{Vankov:2002gt}. While the former would strengthen the UHE photon discrimination power based on delayed air shower development, the latter would do the opposite: reduce or even invert the LPM effect, and hence make photon-induced air showers develop more similarly to those initiated by nuclei. The present state-of-the-art photon identification methods involving the expectation of a delayed development of an air shower induced by a photon assume the standard LPM effect, without any LIV effects. The other expected property of photon-induced air showers, the low muon content N$_{\mu}$, is also founded on conventional physics assumptions, here concerning the photonuclear cross section $\sigma_{\gamma-air}$. Air shower muons are thought to originate mostly from charged pion decays, and charged pions originate from hadronic interactions. Therefore the observed N$_{\mu}$ should correspond to the size of the hadronic component of an air shower. If the interaction initiating an air shower is not hadronic, one would consequently expect the hadronic component to be initiated only by one of the secondary particles, so that its size and the corresponding N$_{\mu}$ are smaller compared to an air shower induced by a hadronic interaction. According to the standard extrapolations, for a photon of energy $E_{\gamma}=10^{19}$~eV, $\sigma_{\gamma-air}$ should be ca.~30 times smaller than the electron pair production cross section, although some existing models allow $\sigma_{\gamma-air}$ to be only ca.~3 times smaller at the highest energies \cite{Risse:2007sd}. The energies of the secondary and virtual photons inside air showers are naturally lower than the primary energy and thus the $\sigma_{\gamma-air}$ uncertainty is correspondingly lower.
Since in any case one expects that primary photons at the highest energies would pair produce more readily than interact with air molecules, the hadronic component and N$_{\mu}$ in the corresponding air showers will in general be smaller than in the case of hadron-induced showers. The data collected so far by the Pierre Auger Observatory do not reveal muon-poor showers; just the opposite -- the muon content seems to exceed the simulated values at the highest energies \cite{auger-muon-excess16}, which might point to a yet unconsidered physical source of uncertainty related to N$_{\mu}$. \section{Diffuse photon flux and hybrid UHE photon limits with multivariate analysis} In this section we follow Ref.~\cite{auger-diffuse-photon-2017} to present the upper limits to diffuse UHE photon fluxes obtained using the hybrid detector of the Pierre Auger Observatory. The observables used for this analysis are tightly connected with the key photon signatures introduced in the previous section: $X_{max}$ [g/cm$^2$] -- the depth in the atmosphere where an EAS reaches the maximum of its development, $N_{stat}$ -- the number of triggered stations in the effective EAS footprint, and $S_b = \sum_{i=1}^N S_i (R_i/R_0)^b$, where $S_i$ and $R_i$ are the signal and the distance from the shower axis of the $i$-th station, $R_0$~=~1000~m is a reference distance and $b=4$ is a constant optimized to give the best separation power between photon and nuclear primaries in the energy region above 10$^{18}$~eV. $S_i$ are measured in units of VEM (Vertical Equivalent Muon, i.e. the signal produced by a muon traversing the station vertically). Due to the standard LPM effect, $X_{max}$ in an EAS induced by a UHE photon is expected to occur deeper in the atmosphere than in the case of showers initiated by nuclei. On the other hand, the low muon content expected in showers initiated by UHE photons results in a lower trigger capability at larger distances, and consequently smaller $N_{stat}$ and $S_b$ compared to the footprints of hadron-induced EAS. The primary type discrimination power of the above three observables is illustrated in Fig.~\ref{fig:signatures}. \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{signatures.png} \caption{Photon-proton discrimination power of the shower observables used for the UHE photon search in the hybrid detector of the Pierre Auger Observatory~\cite{auger-diffuse-photon-2017}.} \label{fig:signatures} \end{center} \end{figure} The Boosted Decision Tree (BDT) multivariate analysis method has been applied to the available data set using $X_{max}$, $N_{stat}$ and $S_b$. The BDT method has been chosen after examining several variants, as illustrated in the left panel of Fig.~\ref{fig:signatures-bdt}, leading to the optimal selection cut suitable for $E_\gamma>10^{18}$~eV (Fig.~\ref{fig:signatures-bdt}, right panel). \begin{figure}[ht] \begin{center} \includegraphics[width=\linewidth]{signatures-bdt.png} \caption{The variants of the multivariate analysis method considered for the diffuse photon flux studies (left) and the selection cut on the response of the optimal variant (right). \cite{auger-diffuse-photon-2017}} \label{fig:signatures-bdt} \end{center} \end{figure} The few photon candidates found in this way were checked to be consistent with the proton background, which led to the new hybrid upper limits to photon fluxes, as seen in Fig.~\ref{fig:limits}.
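To make the $S_b$ observable concrete, the following minimal Python sketch (our own illustration; the function and variable names are not part of the Auger analysis software) computes $S_b$ from the station signals and axis distances defined above:
\begin{verbatim}
# Illustrative sketch of the S_b observable (not official Auger code).
# signals: station signals S_i in VEM; distances: axis distances R_i in m.
def s_b(signals, distances, b=4.0, r0=1000.0):
    return sum(s * (r / r0) ** b for s, r in zip(signals, distances))

# Toy example with three triggered stations:
print(s_b([12.5, 4.0, 1.2], [450.0, 900.0, 1600.0]))
\end{verbatim}
The weight $(R_i/R_0)^b$ with $b=4$ strongly enhances the contribution of signals recorded far from the shower axis, which is where the muon-poor footprints of photon-induced showers differ most from those of hadron-induced ones.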
\begin{figure}[ht] \begin{center} \includegraphics[width=0.6\linewidth]{limits.png} \caption{The newest hybrid upper limits to the diffuse photon fluxes based on the Pierre Auger Observatory data (blue arrows) and the corresponding uncertainties (dashed regions around the arrows) compared to the theoretical predictions (see Ref.~\cite{auger-diffuse-photon-2017} for references) and to the limits provided by other experiments.~\cite{auger-diffuse-photon-2017}} \label{fig:limits} \end{center} \end{figure} The new limits are more stringent than the previous ones (see Ref.~\cite{auger-photons-bleve2015}), and the uncertainties of the limits are specified for the first time. Strong constraints on the ``top-down'' models can be concluded under the assumptions explained in Sections~\ref{sec:intro} and \ref{sec:signatures}. Also for the first time, the photon flux predictions from one of the ``bottom-up'' scenarios (``GZK proton I'') are constrained below $E_\gamma$=10$^{19}$~eV. The reader is referred to Ref.~\cite{auger-diffuse-photon-2017} for details and references. \section{Directional blind search} Complementing the diffuse photon flux study, the Pierre Auger Observatory also performed a blind search for arrival directions where photon excesses could be observed \cite{auger-photon-pointsources-2014}. While the diffuse flux study aims at identifying all the photon candidates regardless of their arrival directions, the directional blind search tries to identify photon candidates grouping directionally, and thus pointing to possible photon sources. The sensitive search for point sources was performed within a declination band from $-$85$^{\circ}$ to +20$^{\circ}$, and in an energy range from 10$^{17.3}$~eV to 10$^{18.5}$~eV, where the photon fluxes are not precluded by the diffuse photon search. No photon point source has been detected and an upper limit on the photon flux has been derived for every direction. The study also specified the method to derive a sky map of upper limits to the photon fluxes of point sources (Fig.~\ref{fig:directional}). \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\linewidth]{directional-map.png} \caption{Celestial map of photon flux upper limits in photons km$^{-2}$\,yr$^{-1}$ illustrated in Galactic coordinates.~\cite{auger-photon-pointsources-2014}} \label{fig:directional} \end{center} \end{figure} \section{Targeted search} To reduce the statistical penalty of the many trials inherent in a blind directional search, in Ref.~\cite{auger-targeted-photon-2017} several Galactic and extragalactic candidate objects were grouped in classes and analyzed for a significant photon excess above the background expectation. No evidence for photon emission from candidate sources was found and the corresponding particle and energy flux upper limits were given. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region (Fig.~\ref{fig:targeted}). \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\linewidth]{targeted.png} \caption{Photon flux as a function of energy from the Galactic center region. Measured data by H.E.S.S. are indicated, as well as the extrapolated photon flux at Earth in the EeV range, given the quoted spectral indices (\cite{hess-pevatrons-nature2016}; conservatively, the extrapolation does not take into account the increase of the $p$-$p$ cross section toward higher energies).
The Auger limit is indicated by a green line. A variation of the assumed spectral index by $\pm$0.11 according to the systematics of the H.E.S.S. measurement is denoted by the light green and blue band. A spectrum with cutoff energy E$_{cut}$=2.0$\times$10$^6$~TeV is indicated as well.~\cite{auger-targeted-photon-2017}} \label{fig:targeted} \end{center} \end{figure} \section{Summary and Outlook} \label{sec:sumary} In this report the status of the ultra-high energy photon search at the Pierre Auger Observatory is summarized with emphasis on the recent advances, including the searches for the diffuse UHE photon flux, for emission from discrete sources, and the targeted photon search. Although none of the searches performed so far has proved the existence of ultra-high energy photons, the diffuse, directional and targeted upper limits to photon fluxes provide valuable astrophysical constraints. The end of the road to discovering UHE photons at the Pierre Auger Observatory has not yet been reached. A substantial amount of new data with increased sensitivity to the primary mass will be acquired within the next several years with the upgraded AugerPrime detectors~\cite{auger-upgrade}, which could lead either to the identification of photons or to further constraints on the ``bottom-up'' predictions. The UHE photon search can also be continued with alternative methods. UHE photon sensitivity comparable to that of the Pierre Auger Observatory is possible e.g. with the Cherenkov Telescope Array (CTA) \cite{uhe-photon-cta-exposure-neronov16}. Using the gamma-ray detection technique for the UHE photon search implies an alternative set of observables, unreachable by standard cosmic-ray arrays. The first studies in this direction point to a chance for a very precise identification of a photon primary or a photon pre-shower, even event-by-event \cite{credo-gamma-rays-icrc2017,credo-general-icrc2017}. Another interesting scenario involving UHE photons and their interactions is based on questioning some of the assumptions underlying the physical interpretation of the presently reported non-observation of UHE photons. In these scenarios (e.g. LIV with rapid photon decay) UHE photons produced at astrophysical distances have very short lifetimes and a negligible chance to reach the Earth~\cite{Klinkhamer:2008ky,chadha83-phot-decay,kostelecky2002-phot-decay,jacobson-2005-liv-rev}. On the other hand, the experimental verification of such models would be possible only if there is a chance to observe at least part of the products of UHE photon interactions: extensive cascades of cosmic rays. Such a chance should then be determined for the scenarios leading to UHE photon cascading in order to proceed with the experimental effort. Alternatively, one could also think of a global cosmic-ray analysis strategy dedicated to hunting for large-scale time correlations independently of the expectations from the existing theoretical models. \section*{Acknowledgements} PH thanks the Pierre Auger Collaboration, in particular Daniel Kuempel and Marcus Niechciol for help in summarizing the status of the UHE photon search, and Mikhail V. Medvedev, \L{}ukasz Bratek, and David d'Enterria for inspiring discussions about the perspectives of further studies in this direction.
{ "timestamp": "2018-05-08T02:15:13", "yymm": "1804", "arxiv_id": "1804.05613", "language": "en", "url": "https://arxiv.org/abs/1804.05613" }
\section{Introduction} Conformal predictors are confidence predictors that produce prediction sets at all confidence levels. Thus, Conformal Prediction (CP) is a framework that complements the predictions of machine learning algorithms with reliable measures of confidence. Transductive Conformal Prediction (TCP) works in an on-line transductive setting, such that learning and prediction occur simultaneously. In this sense confidence in a prediction is tailored both to the previously seen objects (whose features and labels are known) and to the features of the new object, whose label is to be predicted. By conditioning on the new objects, conformal predictors take account of how difficult a particular object is to label and adjust their confidence in the prediction accordingly, as opposed to having an overall error rate for labelling all new objects \cite{vovk2005algorithmic}. The output from a TCP algorithm is thus a point prediction and a region prediction, such as a 95\% prediction region which, under minimal assumptions, contains the true label with a probability of at least 0.95 \cite{shafer2008tutorial}. The method for point prediction embedded within the CP framework can be almost any machine learning algorithm, such as random forests, support vector machines or neural networks. Based on the chosen learning algorithm, a nonconformity measure is created which evaluates the ``strangeness'' of the new object relative to those previously seen. The TCP algorithm utilizes this nonconformity score to define the appropriate prediction region \cite{shafer2008tutorial}. The fully on-line mode of TCP can be very computationally demanding (with the learning algorithm updated for each new data point). The theory, however, extends easily to the off-line inductive (batch) mode, giving rise to what we refer to here as Inductive Conformal Prediction (ICP). CP has been used in moderately sized problems, e.g. to predict quantitative structure-activity relationships of molecules \cite{norinder2014introducing}, to assess complication risks following coronary procedures \cite{balasubramanian2014conformal} and to detect anomalies in fishing vessel trajectories \cite{smith2015conformal}. It has also been shown to scale up well, on a distributed computing implementation, to very large datasets, such as the Higgs boson dataset of 11 million data points \cite{capuccini2015conformal}, the largest binary classification dataset in the UCI machine learning repository \cite{bache2013uci}. In the release version of conformalClassification we use random forests as the underlying machine learning method, where the vote for each class (the number of trees in the forest voting for a given class divided by the total number of trees) gives the conformity score for each data point. \section{Background and Notations} This section gives a brief background on CP and fixes the notation and definitions used throughout the article. The object space is denoted by $\mathcal{X} \subseteq \mathbb{R}^p$, where $p$ is the number of features, and the label space is denoted by $\mathcal{Y} = \{1,2,\ldots,l\}$, where $l$ is the number of class labels. We assume that each observation consists of an object and its label, and the observation space is given as $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$.
The typical classification problem is: given a training dataset $Z = \{ z_1 , \ldots, z_n \}$, where $n$ is the number of observations in the training set and each observation $z_i = (x_i, y_i)$ is a labeled object, predict the label of a new object $x_{new}$ whose label is unknown. The exchangeability \citep{shafer2008tutorial} of observations is assumed throughout the paper. The nonconformity measure is a function that measures the disagreement of possible labels of a test object with respect to an observed distribution. \begin{definition} [Nonconformity Measure] A nonconformity measure is a measurable function $\mathcal{A} : \mathcal{Z}^{(*)} \times \mathcal{Z} \rightarrow \mathbb{R}$, where $\mathcal{Z}^{(*)}$ denotes the set of finite sequences of observations, such that $\mathcal{A}(Z_1 , z_2 )$ does not depend on the ordering of the observations in $Z_1$. \end{definition} The nonconformity scores are most often derived from the underlying algorithms used for point prediction. For classification problems the error rate may be higher in some classes than in others; to overcome this issue the nonconformity scores are computed on a per-class basis, which is referred to as Mondrian CP \cite{norinder2014introducing}. Alternatively, a conformity measure can be defined as $1 - \mathcal{A}$, i.e., one minus the nonconformity measure. A natural conformity measure for classification problems using the random forests method \cite{Breiman} is the proportion of votes for each class, i.e., the number of trees in the forest voting for a given class divided by the total number of trees: \begin{align} \label{eq:def_nonconformity} \begin{split} \alpha_i(y) &= \frac{\#\,\text{trees voting for class } y}{\#\,\text{trees}} \end{split} \end{align} We denote by $\alpha_i(y)$ the conformity score of the $i^{th}$ observation for class $y$. Each component $\alpha_i(y)$ that corresponds to the sample $(x_i,y_i)$ is computed by equation (\ref{eq:def_nonconformity}) based on the augmented sample $\{ z_1 , \ldots , z_n, z_{n+1}=(x_{new},y) \}$. The p-value, as defined below \cite{vovk2005algorithmic}, describes the lack of conformity of the new observation $x_{new}$ to the training set $Z$: \begin{align*} p_y &= \frac{| \{ z_i \in Z : y_i=y, \alpha_i(y) < \alpha_{new}(y) \} | + u_i* | \{ z_i \in Z : y_i=y, \alpha_i(y) = \alpha_{new}(y)\} |}{n_y+1} \\ \end{align*} where $u_i \sim U[0,1]$ and $n_y$ denotes the number of observations in the training set whose true label is class $y$. The p-value $p(y)=p_y, \ y \in \mathcal{Y}$, lies in $ \left( \frac{1}{n_y+1},1 \right)$. The smaller $p(y)$ is, the less likely it is that the true pair is $(x_{new},y)$. Multiplying the borderline cases by $u_i$ results in what are known as smoothed conformal predictors \citep{vovk2005algorithmic}. \begin{definition}[Transductive Conformal Prediction (TCP)] Given a training dataset $Z$ and a new observation $x_{new}$, the transductive conformal predictor (TCP), corresponding to a nonconformity measure $\mathcal{A}$, checks each of a set of hypotheses (one for each possible label) for the new observation $x_{new}$, assigns to it a p-value, and finds the prediction region for $x_{new}$ at a significance level $\epsilon \in (0, 1)$. \end{definition} The predicted region of a test observation is a subset of $\mathcal{Y}$ , denoted as $\Gamma^{\epsilon} = \{ y \mid p_y > \epsilon \}$, at a significance level $\epsilon \in (0, 1)$. A prediction region $\Gamma^{\epsilon} = \{ y \mid p_y > \epsilon \}$ contains the true value of a test observation with probability at least $1 -\epsilon$.
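To make the above construction concrete, the following minimal Python sketch (our own illustration; it is not part of the conformalClassification package) computes smoothed p-values from per-class conformity scores, such as the random forest vote fractions of equation (\ref{eq:def_nonconformity}), and forms the corresponding prediction region:
\begin{verbatim}
import numpy as np

def smoothed_p_value(scores_y, score_new, rng):
    # scores_y: conformity scores alpha_i(y) of the n_y observations
    # with true label y; score_new: alpha_new(y) of the new object.
    scores_y = np.asarray(scores_y)
    n_smaller = np.sum(scores_y < score_new)
    n_equal = np.sum(scores_y == score_new)
    u = rng.uniform()  # u ~ U[0,1] smooths the borderline cases
    return (n_smaller + u * n_equal) / (len(scores_y) + 1)

def prediction_region(p_values, epsilon):
    # Gamma^epsilon: labels whose p-value exceeds epsilon.
    return {y for y, p in p_values.items() if p > epsilon}

rng = np.random.default_rng(0)
p_vals = {1: smoothed_p_value([0.9, 0.8, 0.7], 0.85, rng),
          2: smoothed_p_value([0.2, 0.4], 0.10, rng)}
print(prediction_region(p_vals, epsilon=0.05))
\end{verbatim}
A large p-value for a label $y$ indicates that the new object conforms to the class-$y$ observations at least as well as most of them do.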
The prediction region $\Gamma^{\epsilon}$ can be any one of the following: \begin{enumerate} \item Empty, when $|\Gamma^{\epsilon}| = 0$. \item Singleton, when $|\Gamma^{\epsilon}| = 1$. \item Multiple, when $|\Gamma^{\epsilon}| >1$. \end{enumerate} \begin{algorithm}[H] \caption{\textbf{TCP}} \label{algo:TCP} \textbf{Input:}{ (training dataset:$Z$, test data:$x_{new}$, label set:$\mathcal{Y}$, a nonconformity measure:$\mathcal{A})$}\\ \textbf{Output:}{\textbf{ p-values} }\\ \For{each $y \in \mathcal{Y}$ }{ $z_{n+1} = (x_{new},y) $;\\ $Z^* = (Z,z_{n+1})$ ;\\ Compute the transductive nonconformity scores:\\ $\alpha_i = \mathcal{A}(Z^*, z_i)$ for each $z_i \in Z^*$;\\ \textbf{\\} Compute p-value: $ p(y) = \frac{| \{ i \in \{1,..,n+1\} : y_i=y, \alpha_i(y) < \alpha_{new}(y) \} | + u_i*| \{ i \in \{1,..,n+1\} : y_i=y, \alpha_i(y) = \alpha_{new}(y) \} |}{n_y + 1}$;\\ } $\textbf{p-values} = \{ p(y)| y \in \mathcal{Y}\}$;\\ \textbf{return \textbf{p-values}};\\ \end{algorithm} For further details on TCP, we refer to \cite{vapnik1998statistical}, \cite{shafer2008tutorial}, \cite{vovk2005algorithmic} and \cite{balasubramanian2014conformal}. The computational expense of TCP, whereby the prediction rule is updated for each new example and each class label, may be prohibitive for large datasets. To address this issue the batch-mode ICP method was introduced. For ICP, the training set $Z$ is partitioned into two disjoint sets: the proper training set $Z_p = \{ z_1 , \ldots , z_q \}$ of size $q$, and the calibration set $Z_c = \{ z_{q+1} , \ldots , z_n \}$ of size $n-q$. ICP relies on measuring how well the calibration set conforms to the proper training set. The ICP p-value is then computed as \begin{align*} p_y &= \frac{| \{ z_i \in Z_c : y_i=y, \alpha_i(y) < \alpha_{new}(y) \} | + u_i* | \{ z_i \in Z_c : y_i=y, \alpha_i(y) = \alpha_{new}(y)\} |}{n_y+1}, \\ \end{align*} where $n_y$ denotes the number of observations in the calibration set whose true label is class $y$. \begin{algorithm}[H] \caption{\textbf{ICP}} \label{algo:ICP} \textbf{Input:}{ (training dataset:$Z$, test data:$x_{new}$, label set:$\mathcal{Y}$, a nonconformity measure:$\mathcal{A})$}\\ \textbf{Output:}{\textbf{ p-values} }\\ Partition $Z$ into the proper training set $Z_p$ and the calibration set $Z_c$; \\ Compute nonconformity scores:\\ $\alpha_i = \mathcal{A}(Z_p, z_i)$ for each $z_i \in Z_c$;\\ \textbf{\\} Compute the nonconformity score for the test observation: $\alpha_{new} = \mathcal{A}(Z_p, (x_{new}, y) )$ for each $y \in \mathcal{Y}$; \\ Compute p-values: \\ $ p(y) = \frac{| \{ z_i \in Z_c : y_i=y, \alpha_i(y) < \alpha_{new}(y) \} | + u_i* | \{ z_i \in Z_c : y_i=y, \alpha_i(y) = \alpha_{new}(y)\} |}{n_y+1}$;\\ $\textbf{p-values} = \{ p(y)| y \in \mathcal{Y} \}$;\\ \textbf{return \textbf{p-values}};\\ \end{algorithm} To evaluate the performance of conformal predictors, we consider the following criteria: error rate, validity, efficiency and observed fuzziness. A predictor makes an error when the predicted region does not contain the true label, that is, when $ y \not\in \Gamma^{\epsilon}$. Given a training dataset $Z$ and an external test set $Z_T$ with $|Z_T| = m$, suppose that a conformal predictor gives prediction regions $\Gamma_1^{\epsilon}, \ldots, \Gamma_m^{\epsilon}$; then the error rate is defined as follows.
\begin{definition}[Error rate] \begin{align} \label{eq:errorRate} ER^{\epsilon} &= \frac{ 1}{m} \sum\limits_{i=1}^{m} \textbf{I}_{ \{y_i \not\in \Gamma_i^{\epsilon} \} }, \end{align} where $y_i$ is the true class label of the $i^{th}$ test case and $\textbf{I}$ is an indicator function. \end{definition} The efficiency can be computed as the fraction of predictions with more than one class among all observations in the test set. \begin{definition}[Efficiency] \begin{align} \label{eq:efficiency} EFF^{\epsilon} = \frac{ 1}{m} { \sum\limits_{i=1}^{m} \textbf{I}_{\{|\Gamma_i^{\epsilon}| >1 \}}} \end{align} \end{definition} The deviation from exact validity can be computed \citep{carlsson2017comparing} as the Euclidean norm of the difference between the observed error rates and the expected errors for a given set of predefined significance levels. Let us assume a set of significance levels $\epsilon = \{ \epsilon_1, \ldots, \epsilon_k \}$; then the deviation from validity can be computed as follows. \begin{definition}[Deviation from Validity] \begin{align} \label{eq:validity} VAL = \sqrt{ \sum\limits_{i=1}^{k} (ER^{\epsilon_i} -\epsilon_i)^2 } \end{align} \end{definition} The observed fuzziness is defined as the average, over the test set, of the sum of the p-values of the incorrect class labels. \begin{definition}[Observed Fuzziness] \begin{align} \label{eq:ObsFuzz} ObsFuzz =\frac{ 1}{m} \sum\limits_{i=1}^{m} \sum\limits_{y \neq y_i } p_i^y, \end{align} \end{definition} We note that for the above performance measures, smaller values are preferable. \section{Conclusions} The conformalClassification package implements Transductive Conformal Prediction and Inductive Conformal Prediction for classification problems using Random Forests as the underlying machine learning algorithm. \section{Future Development} In future releases, we plan to extend the package to use other machine learning algorithms (e.g. support vector machines) for model fitting. \section{Acknowledgements} The authors acknowledge UPPMAX, the Uppsala Multidisciplinary Centre for Advanced Computational Science, for providing computational resources. The authors would also like to thank Philip J. Harrison for comments and recommendations during the preparation of this manuscript and the R package. \bibliographystyle{plain}
{ "timestamp": "2018-04-17T02:13:22", "yymm": "1804", "arxiv_id": "1804.05494", "language": "en", "url": "https://arxiv.org/abs/1804.05494" }
\section{Introduction} We consider the stochastic convex optimization problem \begin{align}\label{main problem} \min_{x\in \mathbb{R}^n} \ f(x)\triangleq \mathbb{E}[\aj{F(x,\xi(\omega))}], \end{align} where $ \aj{\xi}: \Omega \rightarrow \mathbb{R}^o$, ${F}: \mathbb{R}^n \times \mathbb{R}^o \rightarrow \mathbb{R}$, {and} $(\Omega,\mathcal{F},\mathbb{P})$ denotes the associated probability space. Such problems have broad applicability in engineering, economics, statistics, and machine learning. Over the last two decades, two avenues for solving such problems have emerged via sample-average approximation~(SAA)~\cite{kleywegt2002sample} and stochastic approximation (SA)~\cite{robbins51sa}. In this paper, we focus on quasi-Newton variants of the latter. Traditionally, SA schemes have been afflicted by a key shortcoming: such schemes display a markedly poorer convergence rate than their deterministic variants. For instance, in standard stochastic gradient schemes for strongly convex smooth problems {with Lipschitz continuous gradients}, the mean-squared error diminishes at a rate of $\mathcal{O}(1/k)$ while deterministic schemes display a geometric rate of convergence. This gap can be reduced by utilizing an increasing sample-size of gradients, an approach first considered in~\cite{FriedlanderSchmidt2012,byrd12}, and subsequently refined for gradient-based methods for strongly convex~\cite{shanbhag15budget,jofre2017variance,jalilzadeh2018optimal}, convex~\cite{jalilzadeh16egvssa,ghadimi2016accelerated,jofre2017variance,jalilzadeh2018optimal}, and nonsmooth convex regimes~\cite{jalilzadeh2018optimal}. Variance-reduced techniques have also been considered for stochastic quasi-Newton (SQN) techniques~\cite{lucchi2015variance,zhou2017stochastic,bollapragada2018progressive} under twice differentiability and strong convexity requirements. To the best of our knowledge, the only available SQN scheme for merely convex but smooth problems is the regularized SQN scheme presented in our prior work~\cite{yousefian2017stochastic} where an iterative regularization of the form ${1 \over 2} \mu_k \|x_k - x_0\|^2$ is employed to address the lack of strong convexity while $\mu_k$ is driven to zero at a suitable rate. Furthermore, a sequence of matrices $\{H_k\}$ is generated using a regularized L-BFGS ({\bf rL-BFGS}) \us{update}. However, most of the extant schemes in this regime either have gaps in the rates (compared to their deterministic counterparts) or cannot contend with nonsmoothness. \\ \begin{wrapfigure}[12]{r}{0.35\textwidth} \vspace*{-1cm} {\includegraphics[scale=0.23]{lewis_different.pdf}}\caption{Lewis-Overton example \label{fig:nssqn}}{} \end{wrapfigure} \noindent {\bf Quasi-Newton schemes for nonsmooth convex problems.} { There have been some attempts to apply (L-)BFGS directly to deterministic nonsmooth convex problems, but the method may fail, as shown in~\cite{lukvsan1999globally, haarala2004large, lewis2008behavior}; e.g., in~\cite{lewis2008behavior}, the authors consider minimizing ${1\over 2}\|x\|^2+\max\{2|x_1|+x_2,3x_2\}$ in $\mathbb{R}^2$, for which BFGS takes a null step (the steplength is zero) for different starting points and fails to converge to the optimal solution $(0,-1)$, except when initiated from $(2,2)$ (see Fig.~\ref{fig:nssqn}). Contending with nonsmoothness has been considered via a subgradient quasi-Newton method \cite{yu2010quasi} for which global convergence can be recovered by identifying a descent direction and utilizing a line search.
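To make the Lewis-Overton example above concrete, the following minimal Python sketch (our own illustration, not code from~\cite{lewis2008behavior}) evaluates the objective and one valid subgradient; note that $0$ belongs to the subdifferential at the minimizer $(0,-1)$:
\begin{verbatim}
import numpy as np

def f(x):
    # 0.5*||x||^2 + max(2|x_1| + x_2, 3*x_2)
    return 0.5 * np.dot(x, x) + max(2 * abs(x[0]) + x[1], 3 * x[1])

def subgrad(x):
    # One valid subgradient; ties are broken toward the first max term.
    g = np.array(x, dtype=float)  # gradient of the quadratic part
    if 2 * abs(x[0]) + x[1] >= 3 * x[1]:
        g += np.array([2 * np.sign(x[0]), 1.0])
    else:
        g += np.array([0.0, 3.0])
    return g

x_star = np.array([0.0, -1.0])
print(f(x_star), subgrad(x_star))  # -0.5 and the zero vector
\end{verbatim}
The nonsmoothness along $x_1=0$ (and wherever the two max terms tie) is exactly the feature that the smooth quadratic model underlying BFGS cannot capture, consistent with the failure reported above.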
An alternate approach~\cite{yuan2013gradient} develops a globally convergent trust region quasi-Newton method in which Moreau smoothing is employed. Yet there appear to be neither non-asymptotic rate statements nor considerations of stochasticity in nonsmooth regimes.\\ \noindent {\bf Gaps.} Our research is motivated by several gaps. (i) First, can we develop smoothed generalizations of ({\bf rL-BFGS}) that can contend with nonsmooth problems in a seamless fashion? (ii) Second, can one recover deterministic convergence rates (to the extent possible) by leveraging variance reduction techniques? (iii) Third, can one address nonsmoothness in stochastic convex optimization, which would allow for addressing more general problems as well as accounting for the presence of constraints? (iv) Finally, many of the prior results rely on strong moment assumptions on the noise, which require weakening to allow for wider applicability of the schemes. \subsection{{A survey of literature}} Before proceeding, we review some relevant prior research in stochastic quasi-Newton methods and variable sample-size schemes for stochastic optimization. {In Table~\ref{table results}, we summarize the key advances in SQN methods, where much of the prior work focuses on strongly convex problems (with a few exceptions). Furthermore, from Table~\ref{table assumption}, it can be seen that an assumption of twice continuous differentiability and boundedness of the eigenvalues of the true Hessian is often made. In addition, almost all results rely on having a uniform bound on the conditional second moment of the stochastic gradient error. \begin{table}[htb] \scriptsize \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline &Convexity&Smooth&$N_k$&$\gamma_k$&Conver. rate&Iter. complex.&Oracle complex.\\ \hline\hline RES \cite{mokhtari2014res}&SC&\cmark&N&$1/k$&$\mathcal O(1/k)$&-&-\\ \hline Block BFGS \cite{gower2016stochastic}&\multirow{3}{*}{SC}&\multirow{3}{*}{\cmark}& \multirow{2}{*}{N (full grad}&\multirow{3}{*}{$\gamma$}&\multirow{3}{*}{$\mathcal O(\rho^k)$}&\multirow{3}{*}{-}&\multirow{3}{*}{-}\\ Stoch.
L-BFGS \cite{moritz2016linearly}&&&&&&&\\ &&&periodically) &&&&\\ \hline SQN \cite{wang2017stochastic}&NC&\cmark&$N$&$k^{-0.5}$& $\mathcal O(1/\sqrt k)$& $\mathcal O(1/\epsilon^2)$&- \\ \hline \multirow{2}{*}{SdLBFGS-VR \cite{wang2017stochastic}}&\multirow{2}{*}{NC}&\multirow{2}{*}{\cmark}&$N$(full grad&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{ $\mathcal O(1/k)$}&\multirow{2}{*}{ $\mathcal O(1/\epsilon)$}&\multirow{2}{*}{-} \\ &&&periodically)&&&&\\ \hline r-SQN \cite{yousefian2017stochastic}&C&\cmark&$1$&$k^{-2/3+\varepsilon}$&$\mathcal O(1/k^{1/3-\varepsilon})$&-&-\\ \hline SA-BFGS \cite{zhou2017stochastic}&SC&\cmark&$N$&$\gamma_k$&$\mathcal O(\rho^k)$&$\mathcal O(\ln(1/\epsilon))$&$\mathcal O({1/ \epsilon^2}(\ln({1/\epsilon}))^4)$\\ \hline Progressive&\multirow{2}{*}{NC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{-}&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{$\mathcal O(1/k)$}&\multirow{2}{*}{-}&\multirow{2}{*}{-}\\ Batching \cite{bollapragada2018progressive}&&&&&&&\\ \hline Progressive &\multirow{2}{*}{SC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{-}&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{$\mathcal O(\rho^k)$}&\multirow{2}{*}{-}&\multirow{2}{*}{-}\\ Batching \cite{bollapragada2018progressive}&&&&&&&\\ \hline\hline \eqref{VS-SQN}&SC&\cmark&$\lceil \rho^{-k}\rceil$&$\gamma$&$\mathcal O(\rho^k)$&$\mathcal O({\kappa}\ln(1/\epsilon))$&$\mathcal O(\kappa/\epsilon)$\\ \hline \eqref{sVS-SQN}&SC&\xmark&$\lceil {\rho^{-k}}\rceil$&$ {\gamma}$&$\mathcal O({\rho^k})$&$\mathcal O({\ln(1/\epsilon)})$&$\mathcal O({1/\epsilon})$\\ \hline \eqref{rVS-SQN}&C&\cmark&$\lceil k^a\rceil$&$k^{-\varepsilon}$&$\mathcal O(1/k^{1-\varepsilon})$&$\mathcal O(1/\epsilon^{1\over 1-\varepsilon})$&$\mathcal{O}(1/\epsilon^{(3+\varepsilon)/(1-\varepsilon)})$\\ \hline \eqref{rsVS-SQN}& C&\xmark&$\lceil k^a\rceil$&$k^{-1/3+\varepsilon}$&$\mathcal O(1/k^{1/3})$&$\mathcal O(1/\epsilon^{{3}})$& $\mathcal O\left({1/ {\epsilon}^{{(2+\varepsilon)/( 1/3)}}}\right)$\\ \hline \end{tabular} \caption{Comparing convergence rate of related schemes (note that $a>1$)} \label{table results} \end{table} \begin{table}[htb] \scriptsize \begin{tabular}{|c|c|c|c|p{3in}|} \hline &Convexity&Smooth&state-dep. noise&Assumptions\\ \hline\hline RES \cite{mokhtari2014res}&SC&\cmark&\xmark&${\underline{\lambda}}\mathbf{I} \preceq H_k\preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, $f$ is twice differentiable \\ \hline Stoch. block BFGS \cite{gower2016stochastic}&\multirow{3}{*}{SC}&\multirow{3}{*}{\cmark}&\multirow{3}{*}{\xmark}&\multirow{2}{*}{${\underline{\lambda}}\mathbf{I} \preceq \nabla^2 f(x) \preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, $f$ is twice differentiable}\\ Stoch. 
L-BFGS \cite{moritz2016linearly}&&&&\\ \hline SQN for nonconvex \cite{wang2017stochastic}& NC &\cmark& \xmark& ${\underline{\lambda}}\mathbf{I} \preceq \nabla^2 f(x) \preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, $f$ is differentiable\\ \hline SdLBFGS-VR \cite{wang2017stochastic}&NC&\cmark&\xmark&$\nabla^2 f(x) \preceq {\overline{\lambda}} \mathbf{I},\quad {\overline{\lambda}}\geq 0$, $f$ is twice differentiable \\ \hline r-SQN \cite{yousefian2017stochastic}&C&\cmark&\xmark&${\underline{\lambda}}\mathbf{I} \preceq H_k\preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, $f$ is differentiable\\ \hline \multirow{2}{*}{SA-BFGS \cite{zhou2017stochastic}}&\multirow{2}{*}{SC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{\xmark}&$f_k(x)$ is standard self-concordant for every possible sampling, the Hessian is Lipschitz continuous, \\ &&&&${\underline{\lambda}}\mathbf{I} \preceq \nabla^2 f(x) \preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, $f$ is C$^2$\\ \hline Progressive Batching \cite{bollapragada2018progressive}&NC&\cmark&\xmark& $\nabla^2f(x) \preceq {\overline{\lambda}} \mathbf{I}, \quad {\overline{\lambda}}\geq 0$, sample size is controlled by the exact inner product quasi-Newton test, $f$ is C$^2$\\ \hline Progressive Batching \cite{bollapragada2018progressive}&SC&\cmark&\xmark&${\underline{\lambda}}\mathbf{I} \preceq \nabla^2 f(x) \preceq {\overline{\lambda}} \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}$, sample size controlled by exact inner product quasi-Newton test, $f$ is C$^2$\\ \hline \hline \eqref{VS-SQN} &SC&\cmark&\cmark&${\underline{\lambda}}\mathbf{I} \preceq H_k\preceq {\overline{\lambda}}_k \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}_k$, $f$ is differentiable\\ \hline \eqref{sVS-SQN} &SC&\xmark&\cmark&${\underline{\lambda}}_k\mathbf{I} \preceq H_k\preceq {\overline{\lambda}}_k \mathbf{I}, \quad 0<{\underline{\lambda}}_k\leq {\overline{\lambda}}_k$\\ \hline \multirow{2}{*}{\eqref{rVS-SQN} }&\multirow{2}{*}{C}&\multirow{2}{*}{\cmark}&\cmark&${\underline{\lambda}}\mathbf{I} \preceq H_k\preceq {\overline{\lambda}}_k \mathbf{I}, \quad 0<{\underline{\lambda}}\leq {\overline{\lambda}}_k$, $f(x)$ has quadratic growth property\\ &&&\xmark&${\underline{\lambda}}\mathbf{I} \preceq H_k\preceq {\overline{\lambda}} \mathbf{I}$, $f$ is differentiable\\ \hline \multirow{2}{*}{\eqref{rsVS-SQN} }& \multirow{2}{*}{C}&\multirow{2}{*}{\xmark}&\cmark&${\underline{\lambda}}_k\mathbf{I} \preceq H_k\preceq {\overline{\lambda}}_k \mathbf{I}, \quad 0<{\underline{\lambda}}_k\leq {\overline{\lambda}}_k$, $f(x)$ has quadratic growth property\\ \hline \end{tabular} \caption{Comparing assumptions of related schemes } \label{table assumption} \end{table} \noindent {\bf (i) Stochastic quasi-Newton~(SQN) methods.} QN schemes~\cite{liu1989limited,nocedal99numerical} have proved enormously influential in solving nonlinear programs, motivating the use of stochastic Hessian information~\cite{byrd12}.
\aj{In 2014, Mokhtari and Ribeiro~\cite{mokhtari2014res} introduced a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method \cite{Fletcher} by updating the matrix $H_k$ using a modified BFGS update rule} to ensure convergence, while limited-memory variants~\cite{byrd2016stochastic, mokhtari2015global} and nonconvex generalizations~\cite{wang2017stochastic} were subsequently introduced. In our prior work~\cite{yousefian2017stochastic}, an SQN method was presented for merely convex smooth problems, characterized by rates of $\mathcal O(1/k^{{1\over 3}-\varepsilon})$ and $\mathcal O(1/k^{1-\varepsilon})$ for the stochastic and deterministic case, respectively. In~\cite{yousefian17smoothing}, we utilize convolution-based smoothing to address nonsmoothness and provide a.s. convergence guarantees and rate statements. \noindent {\bf (ii) Variance reduction schemes for stochastic optimization.} Increasing sample-size schemes for finite-sum machine learning problems~\cite{FriedlanderSchmidt2012,byrd12} have provided the basis for a range of variance reduction schemes in machine learning~\cite{roux2012stochastic,xiao2014proximal}, amongst others. By utilizing variable sample-size (VS) stochastic gradient schemes, linear convergence rates were obtained for strongly convex problems~\cite{shanbhag15budget,jofre2017variance} and these rates were subsequently improved (in a constant factor sense) through a VS-{\em accelerated} proximal method developed by Jalilzadeh et al.~\cite{jalilzadeh2018optimal} (called ({\bf VS-APM})). In convex regimes, Ghadimi and Lan~\cite{ghadimi2016accelerated} developed an accelerated framework that admits the optimal rate of $\mathcal{O}(1/k^2)$ and the optimal oracle complexity (also see~\cite{jofre2017variance}), improving the rate statement presented in~\cite{jalilzadeh16egvssa}. More recently, in~\cite{jalilzadeh2018optimal}, Jalilzadeh et al. present a smoothed accelerated scheme that admits the optimal rate of $\mathcal{O}(1/k)$ and optimal oracle complexity for nonsmooth problems, recovering the findings in~\cite{ghadimi2016accelerated} in the smooth regime. Finally, more intricate sampling rules are developed in~\cite{bollapragada2017adaptive,pasupathy2018sampling}. \noindent {\bf (iii) Variance reduced SQN schemes.} Linear~\cite{lucchi2015variance} and superlinear~\cite{zhou2017stochastic} convergence statements for variance reduced SQN schemes were provided in twice differentiable regimes under suitable assumptions on the Hessian. A ({\bf VS-SQN}) scheme with L-BFGS~\cite{bollapragada2018progressive} was presented in strongly convex regimes under suitable bounds on the Hessian. } \subsection{Novelty and contributions} {In this paper, we consider four variants of our proposed variable sample-size stochastic quasi-Newton method, \us{distinguished by whether the function $F(x,\omega)$ is strongly convex/convex and smooth/nonsmooth.
The vanilla scheme is given by \begin{align} x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} u_k(x_k,\omega_{j,k})}{N_k}}, \end{align} where $H_k$ denotes an approximation of the inverse of the Hessian, \af{$\omega_{j,k}$ denotes the $j^{th}$ realization of $\omega$ at the $k^{th}$ iteration}, $N_k$ denotes the sample-size at iteration $k$, and $u_k(x_k,\omega_{j,k})$ is given by one of the following: (i) ({\bf VS-SQN}) where $F(.,\omega)$ is strongly convex and smooth, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x F(x_k,\omega_{j,k})$; (ii) Smoothed ({\bf VS-SQN}) or ({\bf sVS-SQN}) where $F(.,\omega)$ is strongly convex and nonsmooth and $F_{\eta_k}(x,\omega)$ is a smooth approximation of $F(x,\omega)$, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x F_{\eta_k}(x_k,\omega_{j,k})$; (iii) Regularized ({\bf VS-SQN}) or ({\bf rVS-SQN}) where $F(.,\omega)$ is convex and smooth {and $F_{\mu_k}(.,\omega)$ is a regularization of $F(.,\omega)$}, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x F_{\mu_k}(x_k,\omega_{j,k})$; (iv) regularized and smoothed ({\bf VS-SQN}) or ({\bf rsVS-SQN}) where $F(.,\omega)$ is convex and possibly nonsmooth and $F_{\eta_k,\mu_k}(.,\omega)$ denotes a regularized smoothed approximation, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x F_{\eta_k,\mu_k}(x_k,\omega_{j,k})$. We recap these definitions in the relevant sections. We briefly discuss our contributions and accentuate the novelty of our work.\\ \noindent (I) {\em A regularized smoothed L-BFGS update.} A regularized smoothed L-BFGS update ({\bf rsL-BFGS}) is developed in Section~\ref{sec:rslbfgs}, extending the realm of the L-BFGS scheme to merely convex and possibly nonsmooth regimes by integrating both regularization and smoothing. As a consequence, SQN techniques can now contend with merely convex {and nonsmooth} problems with convex constraints.\\ \noindent (II) {\em Strongly convex problems.} (II.i) ({\bf VS-SQN}). In Section~\ref{sec:3}, we present a variable sample-size SQN scheme and prove that the convergence rate is $\mathcal{O}(\rho^k)$ (where $\rho < 1$) while the iteration and oracle complexity are proven to be $\mathcal{O}({\kappa^{m+1}} \ln(1/\epsilon))$ and $\mathcal{O}(1/\epsilon)$, respectively. Notably, our findings hold under the weaker assumption of state-dependent noise (thereby extending the result from~\cite{bollapragada2018progressive}) and do not necessitate assumptions of twice continuous differentiability~\cite{moritz2016linearly,gower2016stochastic} or Lipschitz continuity of the Hessian~\cite{zhou2017stochastic}. (II.ii) ({\bf sVS-SQN}). By integrating a smoothing parameter, we extend ({\bf VS-SQN}) to contend with nonsmooth but smoothable objectives. Via Moreau smoothing, we show that ({\bf sVS-SQN}) retains the optimal rate and complexity statements of ({\bf VS-SQN}) while {sublinear} rate statements for {$(\alpha,\beta)$ smoothable functions} are also provided.} \\ \noindent (III) {\em Convex problems.} (III.i) ({\bf rVS-SQN}). A {\em regularized ({\bf VS-SQN}) } scheme is presented in Section~\ref{sec:4} based on {the} ({\bf rL-BFGS}) update and admits a rate of {$\mathcal{O}(1/k^{1-2\varepsilon})$} with an oracle complexity of $\mathcal O\left({ {\epsilon}^{-{3+\varepsilon\over 1-\varepsilon}}}\right)$, improving prior rate statements for SQN schemes for smooth convex problems and obviating prior inductive arguments. In addition, we show that ({\bf rVS-SQN}) produces sequences that converge to the solution in an a.s. sense.
Under a suitable growth property, these statements can be extended to the state-dependent noise regime. (III.ii) ({\bf rsVS-SQN}). {\em A regularized smoothed $(${\bf VS-SQN}$)$} is presented that leverages the ({\bf rsL-BFGS}) update and admits a rate of $\mathcal O(k^{-{1\over 3}})$, amongst the first known rates for SQN schemes for nonsmooth convex programs. Again, imposing a growth assumption allows for weakening the requirements to state-dependent noise.\\ \noindent (IV) {\em Numerics.} Finally, in Section~\ref{sec:5}, we apply the ({\bf VS-SQN}) schemes to strongly convex/convex and smooth/nonsmooth stochastic optimization problems. In comparison with variable sample-size accelerated proximal gradient schemes, we observe that ({\bf VS-SQN}) schemes compete well and outperform gradient schemes for ill-conditioned problems when the number of QN updates increases. {In addition, SQN schemes do far better in computing sparse solutions, in contrast with standard subgradient and variance-reduced accelerated gradient techniques.} {Finally}, via smoothing, ({\bf VS-SQN}) schemes can be seen to resolve both nonsmooth and constrained problems.\\ {\bf Notation.} $\mathbb{E}[\bullet]$ denotes the expectation with respect to the probability measure $\mathbb{P}$ and we denote \aj{${\nabla_x} {F}(x,\xi(\omega))$} by ${\nabla_x} {F}(x,\omega)$. We denote the optimal objective value (or solution) of \eqref{main problem} by $f^*$ (or $x^*$) and the set of the optimal solutions by $X^*$, {which is assumed to be nonempty}. {For a vector $x\in \mathbb R^n$ and a {nonempty} set $X \subseteq\mathbb R^n$, the Euclidean distance of $x$ from $X$ is denoted by ${\rm dist}(x,X)$.} \section{{Background and Assumptions}} In Section~\ref{sec:smooth}, we provide some background on smoothing techniques {and} then proceed to define the {\em regularized and smoothed L-BFGS method} or {\bf(rsL-BFGS)} update rule {employed for generating} the sequence of Hessian approximations $H_k$ in Section~\ref{sec:rslbfgs}. We conclude this section with a summary of the main assumptions in Section~\ref{sec:assump}. \subsection{Smoothing of nonsmooth convex functions} \label{sec:smooth} We begin by defining {$L$-smoothness} and $(\alpha,\beta)$-{\em smoothability}~\cite{beck17fom}. \begin{definition} A function $f:\mathbb R^n\to \mathbb R$ is said to be $L$-smooth if it {is} differentiable and {there exists an $L > 0$ such that} $\|\nabla f(x)-\nabla f(y) \|\leq L\|x-y\|$ for all $x,y\in \mathbb R^n$.
\end{definition} \begin{definition}{\bf [($\alpha,\beta$)-smoothable~\cite{beck17fom}]} {A convex function $f: \mathbb{R}^n \to \mathbb{R}$ is $(\alpha,\beta)$-smoothable if there exists a convex C$^1$ function $f_{\eta}: \mathbb{R}^n \to \mathbb{R}$ satisfying the following: (i) $f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta \beta$ for all $x$; and (ii) $f_{\eta}(x)$ is $\alpha/\eta$-smooth.} \end{definition} {Some instances of smoothing~\cite{beck17fom} include the following}: \noindent (i) If $f(x) \triangleq \|x\|_2$ and $f_\eta(x) { \triangleq } \sqrt{\|x\|_2^2 + \eta^2} - \eta$, then $f$ is a $(1,1)$-smoothable function; \noindent (ii) If $f(x) \triangleq \max\{x_1,x_2, \hdots, x_n\}$ and $f_{\eta}(x) { \triangleq } \eta \ln(\sum_{i=1}^n e^{x_i/\eta})-\eta \ln(n)$, then $f$ is $(1,\ln(n))$-smoothable; (iii) If $f$ is a proper, closed, and convex function and \begin{align} \label{moreau} f_\eta(x) \triangleq \min_{u} \ \left\{f(u)+{1\over 2\eta}\|u-x\|^2 \right\}, \end{align} (referred to as Moreau proximal smoothing)~\cite{moreau1965proximite}, {then} $f$ is $(1,B^2)$-smoothable, where $B$ denotes a uniform bound on $\|s\|$ for $s \in \partial f(x)$. It may be recalled that Newton's method is the de facto standard for computing a zero of a nonlinear equation~\cite{nocedal99numerical}, while variants such as semismooth Newton methods have been employed for addressing nonsmooth equations~\cite{facchinei1996inexact, facchinelt1997semismooth}. More generally, in constrained regimes, such techniques take the form of interior point schemes, which can be viewed as the application of Newton's method to the KKT system. Quasi-Newton variants of such techniques can then be applied when second derivatives are either unavailable or challenging to compute. However, in {constrained} stochastic regimes, there has been far less available via {a direct application of} {quasi-Newton} schemes. We consider a smoothing approach that leverages the unconstrained reformulation of a constrained convex program where $X$ is a closed and convex set and {${\bf 1}_{X}(x)$ is an indicator function}: \begin{align} \tag{P} \min_x f(x) + {\bf 1}_X(x). \end{align} Then the smoothed problem can be represented as follows: \begin{align} \tag{P$_{\eta}$} \min_x f(x) + {\bf 1}_{X,\eta}(x), \end{align} where ${\bf 1}_{X,\eta}(\cdot)$ denotes the Moreau-smoothed variant of ${\bf 1}_X(\cdot)$~\cite{moreau1965proximite} defined as follows. \begin{align} {\bf 1}_{X,\eta}(x) \triangleq \min_{u \in \mathbb{R}^n} \left\{ {\bf 1}_X(u) + {1\over 2\eta} \|x-u\|^2\right\} = {1\over 2\eta} d_X^2(x), \end{align} where the second equality follows from~\cite[Ex.~6.53]{beck17fom}. Note that ${\bf 1}_{X,\eta}(x)$ is continuously differentiable with gradient given by $\tfrac{1}{2\eta} \nabla_x d_X^2(x) = {1\over \eta} (x - \mbox{prox}_{{\bf 1}_X}(x)) = {1\over \eta} (x - \Pi_X(x))$, where $\Pi_{X}(x)\triangleq \mbox{argmin}_{y\in X}\{\|x-y\|^2\}$. Our interest lies in reducing the smoothing parameter $\eta$ after every iteration, a class of techniques {(called {\em iterative smoothing schemes}) that have been applied for solving } stochastic optimization~\cite{yousefian17smoothing,jalilzadeh2018optimal} and stochastic variational inequality problems~\cite{yousefian17smoothing}.
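As a simple illustration of the smoothed indicator above, the following minimal Python sketch (our own illustration, with $X$ taken to be a Euclidean ball so that the projection is available in closed form) evaluates ${\bf 1}_{X,\eta}(x) = d_X^2(x)/(2\eta)$ and its gradient $(x-\Pi_X(x))/\eta$:
\begin{verbatim}
import numpy as np

def proj_ball(x, radius=1.0):
    # Euclidean projection onto X = {x : ||x||_2 <= radius}.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def smoothed_indicator(x, eta, radius=1.0):
    # Moreau smoothing of the indicator of X: d_X(x)^2 / (2*eta).
    d = x - proj_ball(x, radius)
    return np.dot(d, d) / (2.0 * eta)

def smoothed_indicator_grad(x, eta, radius=1.0):
    # Gradient (x - Pi_X(x))/eta; it is (1/eta)-Lipschitz.
    return (x - proj_ball(x, radius)) / eta

x = np.array([2.0, 0.0])
print(smoothed_indicator(x, eta=0.1), smoothed_indicator_grad(x, eta=0.1))
\end{verbatim}
Driving $\eta_k$ to zero sharpens the penalty $d_X^2(x)/(2\eta_k)$, which is how the iterative smoothing schemes referenced above recover the original constrained problem in the limit.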
{Motivated by our recent work~\cite{jalilzadeh2018optimal} in which a smoothed variable sample-size accelerated proximal gradient scheme is proposed for nonsmooth stochastic convex optimization,} we consider a framework where at iteration $k$, an $\eta_k$-smoothed function $f_{\eta_k}$ is utilized where the Lipschitz constant of ${\nabla} f_{\eta_k}(x)$ is $1/\eta_k$. \subsection{Regularized and Smoothed L-BFGS Update }\label{sec:rslbfgs} {When \vvs{the} function $f$ is strongly convex \vvs{but possibly nonsmooth}, we \vvs{adapt} the standard L-BFGS \vvs{scheme} (\vvs{by replacing the true gradient by a sample average}) where the approximation of the inverse Hessian $H_k$ is \vvs{defined} as follows using pairs $(s_i,y_i)$ and $\eta_i$ \vvs{denotes} a smoothing parameter: \begin{align}\label{lbfgs} s_i &:= x_{i}-x_{i-1},\\ \notag \tag{\bf Strongly Convex (SC)} \ {y_i} & := { \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{i}}}({x_{i}},{\omega}_{{j,i-1}})\over {N_{i-1}}} -{ \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}}}({x_{i-1}},{\omega}_{{j,i-1}})\over {N_{i-1}}},\\ \notag H_{k,j}&:=\left(\mathbf{I}-\frac{y_is_i^T}{{y_i^Ts_i}}\right)^TH_{k,j-1}\left(\mathbf{I}-\frac{y_is_i^T}{y_i^Ts_i}\right)+\frac{s_is_i^T}{y_i^Ts_i},\quad i :=k-2(m-j), \ 1 \leq j\leq m, \ \forall i, \end{align} where $H_{k,0}=\frac{s_k^Ty_k}{y_k^Ty_k}\mathbf{I}$.} We note that at iteration $i$, we generate $\nabla_x F_{\eta_i}(x_i,\omega_{j,i-1})$ and $\nabla_x F_{\eta_i}(x_{i-1},\omega_{j,i-1})$, implying there are twice as many sampled gradients generated. \vvs{Next,} we discuss how the sequence of approximations $H_k$ is generated {when} $f$ is merely convex {and} not necessarily smooth. We overlay the regularized \vvs{L-BFGS}~\cite{mokhtari2014res,yousefian2017stochastic} scheme with a smoothing and refer to the proposed scheme as the ({\bf rsL-BFGS}) update. As in {({\bf rL-BFGS})}~\cite{yousefian2017stochastic}, {we update the regularization and smoothing parameters $\{\eta_k ,\mu_k\}$ and matrix $H_k$ at alternate {iterations} to keep the secant condition satisfied.} We update the regularization parameter $\mu_k$ and smoothing parameter $\eta_k$ as follows. \begin{align}\label{eqn:mu-k} \begin{cases} \mu_{k}:=\mu_{k-1}, \quad \us{\eta_k := \eta_{k-1}}, & \text{if } k \text{ is odd}\\ \mu_{k}<\mu_{k-1}, \quad \us{\eta_k < \eta_{k-1}}, & {\text{otherwise}.} \end{cases} \end{align} We construct the update in terms of $s_i$ and $y_i$ {for convex problems}, \begin{align}\label{equ:siyi-LBFGS} s_i &:= x_{i}-x_{i-1},\\\notag \tag{\bf Convex (C)}\ {y_i} & := { \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}^{\delta}}}({x_{i}},{\omega}_{{j,i-1}})\over {N_{i-1}}} -{ \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}^\delta}}({x_{i-1}},{\omega}_{{j,i-1}})\over {N_{i-1}}}+{\mu_i^{\bar \delta}}s_i,\end{align} where $i$ is odd and $0 < \delta,\bar \delta \leq 1$ are scalars controlling the level of smoothing and regularization in updating matrix $H_k$, respectively. The update policy for $H_k$ is given as follows: \begin{align}\label{eqn:H-k}H_{k}:= \begin{cases} H_{k,m}, & \text{if } k \text{ is odd} \\ H_{k-1}, & \text{otherwise} \end{cases} \end{align} where $m<n$ (in large scale settings, $m<<n$) is a fixed integer that determines the number of pairs $(x_i,y_i)$ to be used to estimate $H_k$. 
The matrix $H_{k,m}$, for any $k\geq 2m-1$, is updated using the following recursive formula: \begin{align}\label{eqn:H-k-m} H_{k,j}&:=\left(\mathbf{I}-\frac{y_is_i^T}{{y_i^Ts_i}}\right)^TH_{k,j-1}\left(\mathbf{I}-\frac{y_is_i^T}{y_i^Ts_i}\right)+\frac{s_is_i^T}{y_i^Ts_i},\quad i :=k-2(m-j), \quad 1 \leq j\leq m, \quad \forall i, \end{align} {where $H_{k,0}=\frac{s_k^Ty_k}{y_k^Ty_k}\mathbf{I}$. It is important to note that our regularized method inherits the computational efficiency from ({\bf L-BFGS}). Note that {Assumption \ref{assum:convex2}} holds for our choice of smoothing. \subsection{Main assumptions}\label{sec:assump} \vvs{A subset of our results require smoothness of $F(x,\omega)$ as formalized by the next assumption.} \begin{assumption}\label{assum:convex-smooth} $($a$)$ The function ${F}(x,\omega)$ is {convex and continuously differentiable} over $\mathbb R^n$ for any $\omega \in \Omega$. $($b$)$ The function $f$ is C$^1$ with $L$-Lipschitz continuous gradients over $\mathbb R^n$. \end{assumption} \noindent {In Sections 3.2 (II) and 4.2,} we assume the following on the smoothed functions ${F}_{\eta}(x,\omega)$. \begin{assumption}\label{assum:convex2} For any $\omega \in \Omega$, ${F}(x,\omega)$ is $(1, \beta)$ {smoothable}, i.e. $F_{\eta}(x,\omega)$ is $C^1$, convex, and ${1\over \eta}$-smooth. \end{assumption} \noindent {We now assume the following on the conditional second moment on the sampled gradient (in either the smooth or the smoothed regime) produced by the stochastic first-order oracle.} \begin{assumption}[{\bf Moment requirements for state-dependent noise}]\label{state noise} Smooth: Suppose $\bar{w}_{k,N_k} \triangleq \nabla_x f(x_k) -\tfrac{\sum_{j=1}^{N_k}\nabla_x {F}(x_k,\omega_{j,k})}{N_k}$ and $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$. (S-M) There exist $\nu_1, \nu_2>0$ such that $\mathbb E[\|{\bar{w}}_{k,N_k}\|^2\mid \mathcal F_k]\leq {\tfrac{\nu_1^2\|x_k\|^2+\nu_2^2}{N_k}}$ a.s. for $k \geq 0$. (S-B) For $k \geq 0$, $\mathbb E[{\bar{w}}_{k,N_k}\mid \mathcal F_k] = 0$, a.s. . Nonsmooth: Suppose $\bar{w}_{k,N_k} \triangleq {\nabla} f_{\eta_k}(x_k) - {\tfrac{\sum_{j=1}^{N_k}\nabla_x {F}_{\eta_k}(x_k,\omega_{j,k})}{N_k}}$, $\eta_k > 0$, and $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$. (NS-M) There exist $\nu_1,\nu_2>0$ such that $\mathbb E[\|{\bar{w}}_{k,N_k}\|^2\mid \mathcal F_k]\leq {\tfrac{\nu_1^2\|x_k\|^2+\nu_2^2}{N_k}}$ a.s. for $k \geq 0$. (NS-B) For $k \geq 0$, $\mathbb E[{\bar{w}}_{k,N_k}\mid \mathcal F_k] = 0$, a.s. . \end{assumption} Finally, we make the following assumption on the sequence of Hessian approximations $\{H_k\}$. Note that these properties follow when either the regularized update ({\bf rL-BFGS}) or the regularized smoothed update ({\bf rsL-BFGS}) \vvs{is} employed (see Lemmas \ref{H_k sc}, \ref{H_k ns sc}, \ref{rLBFGS-matrix}, and \ref{rsLBFGS-matrix}). \begin{assumption}[{\bf Properties of $H_k$}]\label{assump:Hk} \begin{enumerate} \item[] \item[{(S)}] The following hold for every matrix in the sequence $\{H_k\}_{k \in\mathbb{Z}_+}$ where $H_k \in \mathbb{R}^{n \times n}$. 
(i) \ $H_k$ is $\mathcal{F}_{k}$-measurable; (ii) \ $H_k$ is symmetric and positive definite and there exist $\aj{{\underline{\lambda}}_k},{{\overline{\lambda}}_k}>0$ such that $\aj{{\underline{\lambda}}_k}\mathbf{I} \preceq H_k \preceq {{\overline{\lambda}}_k} \mathbf{I}$ {a.s.} for all $k\geq 0.$ \item[{(NS)}] The following hold for every matrix in the sequence $\{H_k\}_{k \in\mathbb{Z}_+}$ where $H_k \in \mathbb{R}^{n \times n}$. (i) $H_k$ is $\mathcal{F}_{k}$-measurable; (ii) \ $H_k$ is symmetric and positive definite and there exist positive scalars ${\underline{\lambda}}_{k},{\overline{\lambda}}_k$ such that ${\underline{\lambda}}_{k}\mathbf{I} \preceq H_k \preceq {{\overline{\lambda}}_k} \mathbf{I}$ {a.s.} for all $k\geq 0.$ \end{enumerate} \end{assumption} \section{Smooth and nonsmooth strongly convex problems}\label{sec:3} In this section, we {derive the} rate and oracle complexity statements for smooth {and nonsmooth} strongly convex problems by considering the \eqref{VS-SQN} and \eqref{sVS-SQN} schemes, respectively. \subsection{Smooth strongly convex optimization} We begin by considering~\eqref{main problem} when $f$ is {$\tau-$}strongly convex and $L-$smooth {and we define $\kappa \triangleq L/{\tau}$.} {Throughout this subsection, we consider the ({\bf VS-SQN})} scheme, {defined next, where $H_k$ is generated by the ({\bf L-BFGS}) scheme. } \begin{align}\tag{\bf VS-SQN}\label{VS-SQN} x_{k+1}:=x_k-\gamma_kH_k\frac{\sum_{j=1}^{N_k} \nabla_x F(x_k,{\omega}_{j,k})}{N_k}. \end{align} {Next, we derive bounds on the eigenvalues of $H_k$ under strong convexity (see \cite{berahas2016multi} for a proof)}. \begin{lemma}[{\bf Properties of {Hessian approx. produced by} (L-BFGS)}]\label{H_k sc} Let {the function $f$} be $\tau$-strongly convex. Consider the \eqref{VS-SQN} method. Let $s_i$, $y_i$ and $H_k$ be given by \eqref{lbfgs}, \aj{where $F_\eta(.)=F(.)$}. Then $H_k$ satisfies Assumption \ref{assump:Hk}{(S)}, with ${\underline{\lambda}}={1\over L(m+n)}$ and ${\overline{\lambda}}=\left({L(n+m)\over \tau}\right)^{m}$. \end{lemma}} \begin{proposition}[{\bf Convergence in mean}]\label{thm:mean:smooth:strong} Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is {$\tau$-}strongly convex. Suppose Assumptions~\ref{assum:convex-smooth}, \vvs{\ref{state noise} (S-M), \ref{state noise} (S-B)}, and \ref{assump:Hk} (S) hold {and $\{N_k\}$ {is} an increasing sequence}. Then the following inequality holds for all $k\geq 1$, {where} $N_0> {2\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$ and $\gamma_k \triangleq {1\over L{\overline{\lambda}}}$ {for all $k$}. \begin{align*} \mathbb E\left[f(x_{k+1})-f(x^*)\right]& \leq\left(1-{\tau {\underline{\lambda}}\over L{\overline{\lambda}}}\red{+{2\nu_1^2\over L\tau N_0}}\right)\mathbb E\left[f(x_{k})-f(x^*)\right]+\red{ 2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}. \end{align*} \end{proposition} \begin{proof} From the Lipschitz continuity of $\nabla f(x)$ and the update rule \eqref{VS-SQN}, we have the following: \begin{align*} f(x_{k+1})&\leq f(x_k)+\nabla f(x_k)^T(x_{k+1}-x_k)+{L\over 2 }\|x_{k+1}-x_k\|^2\\& =f(x_k)+\nabla f(x_k)^T\left(-\gamma_kH_k(\nabla f(x_k)+\bar w_{k,N_k})\right)+{L\over 2 }\gamma_k^2\left\|H_k(\nabla f(x_k)+\bar w_{k,N_k})\right\|\uvs{^2}, \end{align*} {where $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left({\nabla_{x}} {F}(x_k,\omega_{j,k})-\nabla f(x_k)\right)}{N_k}$}.
By taking expectations conditioned on $\mathcal F_k$ and using Lemma \ref{H_k sc} and Assumptions \ref{state noise} (S-M) and (S-B), we obtain the following:
\begin{align*} & \quad \mathbb E\left[f(x_{k+1})-f(x_k)\mid \mathcal F_k\right]\leq -\gamma_k \nabla f(x_k)^TH_k\nabla f(x_k)+{L\over 2 }\gamma_k^2\|H_k\nabla f(x_k)\|^2+{\gamma_k^2{\overline{\lambda}}^2L\over 2 }\mathbb E[\|\bar w_{k,N_k}\|^2\mid \mathcal F_k]\\& ={\gamma_k}\nabla f(x_k)^TH_k^{1/2}\left(-I+{L\over 2 }\gamma_kH_k\right)H_k^{1/2}\nabla f(x_k)+{\gamma_k^2{\overline{\lambda}}^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2 N_k}\\& \leq -\gamma_k \left(1-{L\over 2 }\gamma_k{\overline{\lambda}}\right)\|H_k^{1/2}\nabla f(x_k)\|^2+{\gamma_k^2{\overline{\lambda}}^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2 N_k} = {-\gamma_k\over 2}\|H_k^{1/2}\nabla f(x_k)\|^2+{ \nu_1^2\|x_k\|^2+\nu_2^2\over 2LN_k}, \end{align*}
where the last equality follows from the choice $\gamma_k= \tfrac{1}{L{\overline{\lambda}}}$ for all $k$. Since $f$ is strongly convex with modulus $\tau$, $\|\nabla f(x_k)\|^2\geq 2\tau \left(f(x_k)-f(x^*)\right)$. Therefore, by subtracting $f(x^*)$ from both sides, we obtain:
\begin{align}\label{strong:smooth} \mathbb E\left[f(x_{k+1})-f(x^*)\mid \mathcal F_k\right]\nonumber&\leq f(x_{k})-f(x^*)-{\gamma_k{\underline{\lambda}}\over 2}\|\nabla f(x_k)\|^2+{ \nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2\over 2LN_k}\\& \leq\left(1-\tau\gamma_k{\underline{\lambda}}+{2\nu_1^2\over L\tau N_k}\right) (f(x_{k})-f(x^*))+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}, \end{align}
where the last inequality is a consequence of $f(x_k)\geq f(x^*)+{\tau\over 2}\|x_k-x^*\|^2$ and $\|x_k-x^*+x^*\|^2\leq 2\|x_k-x^*\|^2+2\|x^*\|^2$. Taking unconditional expectations on both sides of \eqref{strong:smooth}, choosing $\gamma_k={ 1\over L{\overline{\lambda}}}$ for all $k$, and invoking the assumption that $\{N_k\}$ is an increasing sequence, we obtain the following:
\begin{align*} \mathbb E\left[f(x_{k+1})-f(x^*)\right]& \leq\left(1-{\tau {\underline{\lambda}}\over L{\overline{\lambda}}}+{2\nu_1^2\over L\tau N_0}\right)\mathbb E\left[f(x_{k})-f(x^*)\right]+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}. \end{align*} \end{proof}
We now leverage this result in deriving a rate and oracle complexity statement.
\begin{theorem}[{\bf Optimal rate and oracle complexity}] Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is $\tau$-strongly convex and Assumptions~\ref{assum:convex-smooth}, \ref{state noise} (S-M), \ref{state noise} (S-B), and \ref{assump:Hk} (S) hold. In addition, suppose $\gamma_k={1\over L{\overline{\lambda}}}$ for all $k$. (i) Let $a\triangleq \left(1-{\tau {\underline{\lambda}}\over L{\overline{\lambda}}}+{2\nu_1^2\over L\tau N_0}\right)$ and $N_k \triangleq \lceil N_0\rho^{-k}\rceil$, where $\rho<1$ and $N_0 \geq {2\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$. Then, for every $k \geq 1$ and some scalar $C$, the following holds: $\mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq C(\max\{a,\rho\})^{{k}}.$ (ii) Suppose $x_{K+1}$ is an $\epsilon$-solution such that $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$.
Then the iteration and oracle complexity of \eqref{VS-SQN} are $\mathcal{O}({\kappa^{m+1}} \ln (1/\epsilon))$ and $\mathcal{O}({\kappa^{m+1} \over\epsilon})$, respectively, implying that $\sum_{k=1}^K N_k \leq \mathcal O\left({ \kappa^{m+1}\over \epsilon}\right).$ \end{theorem}
\begin{proof} {\bf (i)} Let $a \triangleq \left(1-{\tau {\underline{\lambda}}\over L{\overline{\lambda}}}+{2\nu_1^2\over L\tau N_0}\right)$, $b_k \triangleq { 2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}$, and $N_k \triangleq \lceil N_0\rho^{-k}\rceil\geq N_0\rho^{-k}$. Note that choosing $N_0\geq {2\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$ leads to $a<1$. Consider $C \triangleq \mathbb{E}[f(x_0)-f(x^*)]+\left({2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2N_0L}\right){1\over 1-( \min\{a,\rho\}/\max\{a,\rho\})}$. Then, by Prop.~\ref{thm:mean:smooth:strong}, we obtain the following for every $K \geq 1$:
\begin{align*} \mathbb E&\left[f(x_{K+1})-f(x^*)\right] \leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+\sum_{i=0}^{K}a^{K-i}b_{i}\\ &\leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+{(\max\{a,\rho\})^K(2 \nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_0L}\sum_{i=0}^{K}\left({\min\{a,\rho\}\over \max\{a,\rho\}}\right)^{K-i}\\ &\leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+\left({(2 \nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_0L}\right){{(\max\{a,\rho\})^K}\over 1-( \min\{a,\rho\}/\max\{a,\rho\})}\leq C(\max\{a,\rho\})^{K}. \end{align*}
Furthermore, we may derive the number of steps $K$ needed to obtain an $\epsilon$-solution. Without loss of generality, suppose $\max\{a,\rho\}=a$. Choose $N_0 = {4\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$, so that $a=1-{\tau {\underline{\lambda}}\over 2L{\overline{\lambda}}}=1-{1\over \alpha\kappa}$, where $\alpha={2{\overline{\lambda}}\over {\underline{\lambda}}}$. Therefore, since $\frac{1}{a} = \frac{1}{(1-\frac{1}{\alpha {\kappa}})}$, by using the definitions of ${\underline{\lambda}}$ and ${\overline{\lambda}}$ in Lemma \ref{H_k sc} to get $\alpha= {2{\overline{\lambda}}\over {\underline{\lambda}}}=\mathcal O(\kappa^m)$, we obtain that
\begin{align} \left(\frac{ \ln(C) - \ln(\epsilon)} {\ln(1/a)}\right) = \left(\frac{\ln (C/\epsilon)}{\ln(1/(1-{1\over \alpha \kappa}))}\right) = \left(\frac{\ln (C/\epsilon)}{-\ln((1-{1\over \alpha \kappa}))}\right) \leq \left(\frac{\ln (C/\epsilon)}{{1\over \alpha \kappa}}\right) \notag = \mathcal{O} ({\kappa^{m+1}} \ln({C}/\epsilon)), \end{align}
where the bound holds when $\alpha \kappa > 1$. It follows that the iteration complexity of computing an $\epsilon$-solution is $\mathcal{O}(\kappa^{m+1} \ln(\tfrac{C}{\epsilon}))$. {\bf(ii)} To compute a vector $x_{K+1}$ satisfying $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$, we consider the case where $a > \rho$; the other case follows similarly. Then we have that $C{a}^{K}\leq \epsilon$, implying that $K = \lceil \ln_{(1/ {a})}(C/\epsilon)\rceil.$ To obtain the oracle complexity, we require $\sum_{k=1}^{K} N_k$ gradients. If $N_k=\lceil N_0a^{-k}\rceil\leq 2N_0a^{-k}$, we obtain the following, since $(1-a) = 1 \slash (\alpha {\kappa})$:
\begin{align*} & \quad \sum_{k=1}^{\ln_{(1/{a})}\left(C/\epsilon\right)+1} 2N_0a^{-k} \leq \frac{2N_0}{\left(\frac{1}{{a}} -1\right)}\left({1\over a}\right)^{3+\ln_{(1/ {a})}\left(C/\epsilon\right)} \leq \left( C \over \epsilon\right)\frac{2N_0}{a^2(1-{a})} = \frac{ 2N_0\alpha {\kappa} C}{a^2\epsilon}.
\end{align*} Note that $a=1-{1\over \alpha\kappa}$ and $\alpha=\mathcal O(\kappa^m)$, implying that
\begin{align*} a^2 & = 1-{2\over \alpha\kappa}+{1\over \alpha^2\kappa^2}= {\alpha^2\kappa^2-2\alpha\kappa+1\over \alpha^2\kappa^2}\geq{ \alpha^2\kappa^2-2\alpha\kappa^2\over \alpha^2\kappa^2}={(\alpha^2-2\alpha)\over \alpha^2}\\ \implies & {\kappa\over a^2}\leq {\alpha^2 \kappa\over (\alpha^2-2\alpha)}=\left(\alpha\over \alpha-2\right)\kappa \implies \sum_{k=1}^{\ln_{(1/{a})}\left(C/\epsilon\right)+1} 2N_0a^{-k} \leq {2N_0\alpha^2\kappa C\over (\alpha-2)\epsilon}=\mathcal O\left({{ \kappa^{m+1}}\over \epsilon}\right), \end{align*}
where the first inequality uses $\kappa\geq 1$. \end{proof}
We prove a.s. convergence of the iterates by using the super-martingale convergence lemma from~\cite{polyak1987introduction}.
\begin{lemma}[{\bf Super-martingale convergence}]\label{almost sure} Let $\{v_k\}$ be a sequence of nonnegative random variables with $\mathbb E{[v_0]}<\infty$, and let $\{{\chi_k}\}$ and $\{\beta_k\}$ be deterministic scalar sequences such that $0\leq {\chi_k} \leq 1$ and $\beta_k\geq 0$ for all $k\geq 0$, $\sum_{k=0}^\infty{\chi_k}=\infty$, $\sum_{k=0}^\infty \beta_k<\infty $, $\lim_{k\rightarrow \infty}{\beta_k\over {\chi_k}}=0$, and $\mathbb E{[v_{k+1}\mid \mathcal F_k]\leq (1-{\chi_k})v_k+\beta_k}$ a.s. for all $k\geq 0$. Then $v_k\rightarrow 0$ almost surely as $k\rightarrow \infty$. \end{lemma}
\begin{theorem}[{\bf a.s. convergence under strong convexity}] Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is $\tau$-strongly convex and Assumptions~\ref{assum:convex-smooth}, \ref{state noise} (S-M), \ref{state noise} (S-B), and \ref{assump:Hk} (S) hold. In addition, suppose $\gamma_k={1\over L{\overline{\lambda}}}$ for all $k \geq 0$. Let $\{N_k\}_{k\geq 0}$ be an increasing sequence such that $\sum_{k=0}^\infty {1\over N_k}<\infty$ and $N_0> {2\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$. Then $\lim_{k\rightarrow \infty}f(x_k)=f(x^*)$ almost surely. \end{theorem}
\begin{proof} Recall that in \eqref{strong:smooth}, we derived the following for $k \geq 0$:
\begin{align*} \mathbb E\left[f(x_{k+1})-f(x^*)\mid \mathcal F_k\right]\nonumber&\leq\left(1-\tau\gamma_k{\underline{\lambda}}+{2\nu_1^2\over L\tau N_k}\right) (f(x_{k})-f(x^*))+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}. \end{align*}
If $v_k \triangleq f(x_k)-f(x^*)$, $\chi_k \triangleq \tau\gamma_k{\underline{\lambda}}-{2\nu_1^2\over L\tau N_k}$, $\beta_k \triangleq { 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}$, $\gamma_k={1\over L{{\overline{\lambda}}}}$, and $\{N_k\}_{k\geq 0}$ is an increasing sequence such that $\sum_{k=0}^\infty {1\over N_k}<\infty$ and $N_0> {2\nu_1^2{\overline{\lambda}}\over \tau^2{\underline{\lambda}}}$ (e.g., $N_k={\lceil N_0k^{1+\epsilon}\rceil}$ for some $\epsilon>0$), the requirements of Lemma~\ref{almost sure} are seen to be satisfied. Hence, $f(x_k)-f(x^*)\rightarrow 0$ a.s. as $k\rightarrow \infty$, and by the strong convexity of $f$, it follows that $\|x_k-x^*\|^2\to 0$ a.s. \end{proof}
Having studied the variable sample-size SQN method, we now consider the special case where $N_k=1$.
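Before doing so, we record a minimal sketch of the \eqref{VS-SQN} update for a user-specified batch-size schedule; the geometric schedule $N_k=\lceil N_0\rho^{-k}\rceil$ analyzed above and the choice $N_k\equiv 1$ considered next are both special cases. The oracles \texttt{grad\_F} (returning a single sampled gradient) and \texttt{apply\_H} (returning $H_kv$, e.g., via the two-loop recursion) are assumed to be supplied by the user; all names are illustrative.
\begin{verbatim}
import math

def vs_sqn(x0, grad_F, apply_H, L, lam_bar, batch, K):
    """Sketch of (VS-SQN): x_{k+1} = x_k - gamma_k H_k g_k, where g_k
    averages N_k = batch(k) sampled gradients and gamma_k = 1/(L*lam_bar).
    x0 is a NumPy array; grad_F(x) returns one sampled gradient."""
    x = x0.copy()
    gamma = 1.0 / (L * lam_bar)                 # constant steplength
    for k in range(K):
        N_k = batch(k)
        g = sum(grad_F(x) for _ in range(N_k)) / N_k
        x = x - gamma * apply_H(k, g)
    return x

# e.g., geometric batch growth: batch = lambda k: math.ceil(N0 * rho ** (-k))
\end{verbatim}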
Similar to Proposition~\ref{thm:mean:smooth:strong}, the following inequality holds for $N_k=1$:
\begin{align}\label{bound sqn} &\nonumber \ \mathbb E\left[f(x_{k+1})-f(x^*)\mid \mathcal F_k\right]\leq f(x_k)-f(x^*)-\gamma_k \left(1-{L\over 2 }\gamma_k{\overline{\lambda}}\right)\|H_k^{1/2}\nabla f(x_k)\|^2+{\gamma_k^2{\overline{\lambda}}^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2}\\&\nonumber\leq \left(1-2\gamma_k{L^2\over \tau}{\overline{\lambda}}(1-{L\over 2}\gamma_k{\overline{\lambda}})\right)\left(f(x_k)-f(x^*)\right)+{\gamma_k^2{\overline{\lambda}}^2L(\nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2)\over 2}\\& \leq \left(1-2\gamma_k{\overline{\lambda}}{L^2\over \tau}+\gamma_k^2{\overline{\lambda}}^2{L^3\over\tau}+{2\nu_1^2\gamma_k^2{\overline{\lambda}}^2L\over \tau}\right)\left(f(x_k)-f(x^*)\right)+{\gamma_k^2{\overline{\lambda}}^2L(2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2}, \end{align}
where the second inequality is obtained by using the Lipschitz continuity of $\nabla f(x)$ and the strong convexity of $f(x)$. Next, to obtain the convergence rate of SQN, we use the following lemma from~\cite{xie2016si}.
\begin{lemma}\label{induction} Suppose $e_{k+1}\leq (1-2a\gamma_k+\gamma_k^2b)e_k+\gamma_k^2c$ for all $k\geq 1$. Let $\gamma_k=\gamma/k$, $\gamma>1/(2a)$, $K\triangleq\lceil {\gamma^2b\over 2a\gamma-1} \rceil+1$, and $Q(\gamma,K)\triangleq \max \left\{{\gamma^2c\over 2a\gamma-\gamma^2b/K-1},Ke_K\right\}$. Then $e_k\leq {Q(\gamma,K)\over k}$ for all $k\geq K$. \end{lemma}
Now, from inequality \eqref{bound sqn} and Lemma \ref{induction}, the following proposition follows.
\begin{proposition}[{\bf Rate of convergence of SQN with $N_k = 1$}] Suppose Assumptions~\ref{assum:convex-smooth}, \ref{state noise} (S-M), \ref{state noise} (S-B), and \ref{assump:Hk} (S) hold. Let $a={L^2{\overline{\lambda}}\over \tau}$, $b={{\overline{\lambda}}^2L^3+2\nu_1^2{\overline{\lambda}}^2L\over \tau}$, and $c={{\overline{\lambda}}^2L(2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2}$. Then, for $\gamma_k={\gamma\over k}$ with $\gamma>{1\over L {\overline{\lambda}}}$ and $N_k=1$, the following holds for all $k\geq K$: $\mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq {Q(\gamma,K)\over k}$, where $Q(\gamma,K)\triangleq \max \left\{{\gamma^2c\over 2a\gamma-\gamma^2b/K-1},K(f(x_K)-f(x^*))\right\}$ and $K\triangleq\lceil {\gamma^2b\over 2a\gamma-1} \rceil+1$. \end{proposition}
\begin{remark} It is worth emphasizing that the proof techniques, while aligned with avenues adopted in~\cite{byrd12,FriedlanderSchmidt2012,bollapragada2018progressive}, extend the results of~\cite{bollapragada2018progressive} to the regime of state-dependent noise~\cite{FriedlanderSchmidt2012}, while the oracle complexity statements are classical (cf.~\cite{byrd12}). We also observe that in the analysis of deterministic/stochastic first-order methods, non-asymptotic rate statements rely on utilizing problem parameters (e.g., the strong convexity modulus, Lipschitz constants, etc.). In the context of quasi-Newton methods, obtaining non-asymptotic bounds also requires ${\underline{\lambda}}$ and ${\overline{\lambda}}$ (cf.~\cite[Theorem 3.1]{bollapragada2018progressive}, \cite[Theorem 3.4]{berahas2016multi}, and \cite[Lemma~2.2]{wang2017stochastic}), since the impact of $H_k$ needs to be addressed. One avenue for weakening the dependence on such parameters lies in using line search schemes. However, when the problem is expectation-valued, the steplength arising from a line search leads to a dependence between the steplength (which is now random) and the direction.
Consequently, standard analysis fails, and one has to appeal to tools such as empirical process theory (cf.~\cite{iusem2017variance}). This remains the focus of future work. \end{remark}
\subsection{Nonsmooth strongly convex optimization}
Consider \eqref{main problem} where $f(x)$ is a strongly convex but nonsmooth function. In this subsection, we examine two avenues for solving this problem, of which the first utilizes Moreau smoothing with a fixed smoothing parameter, while the second requires $(\alpha, \beta)$ smoothability with a diminishing smoothing parameter.\\
\noindent {\bf (I) Moreau smoothing with fixed $\eta$.} Here, we focus on the special case where $f(x) \triangleq h(x) + g(x)$, $h$ is a closed, convex, and proper function, $g(x) \triangleq \mathbb{E}[F(x,{\omega})]$, and $F(x,\omega)$ is a $\tau$-strongly convex and $L$-smooth function for every $\omega$. We begin by noting that the Moreau envelope of $f$, denoted by $f_{\eta}(x)$ and defined in \eqref{moreau}, retains both the minimizers of $f$ as well as its strong convexity, as captured by the following result based on \cite[Lemma~2.19]{planiden2016strongly}.
\begin{lemma}\label{feta} Consider a convex, closed, and proper function $f$ and its Moreau envelope $f_{\eta}(x)$. Then the following hold: (i) $x^*$ is a minimizer of $f$ over $\mathbb{R}^n$ if and only if $x^*$ is a minimizer of $f_{\eta}(x)$; (ii) $f$ is $\sigma$-strongly convex on $\mathbb{R}^n$ if and only if $f_{\eta}$ is $\tfrac{\sigma}{\eta\sigma+1}$-strongly convex on $\mathbb{R}^n$. \end{lemma}
Consequently, it suffices to minimize the (smooth) Moreau envelope with a {\em fixed} smoothing parameter $\eta$, as shown in the next result. For notational simplicity, we choose $m=1$ (the rate results hold for $m>1$) and define $f_{N_k}(x) \triangleq h(x) + \tfrac{1}{N_k}\sum_{j=1}^{N_k} F(x,\omega_{j,k})$. Throughout this subsection, we consider the smoothed variant of \eqref{VS-SQN}, referred to as the \eqref{sVS-SQN} scheme and defined next, where $H_k$ is generated by the ({\bf sL-BFGS}) update rule, $\nabla_x f_{\eta_k}(x_k)$ denotes the gradient of the Moreau-smoothed function, given by $\tfrac{1}{\eta_k}(x_k- \mbox{prox}_{\eta_k,f}(x_k))$, $\nabla_{x} f_{\eta_k,N_k}(x_k)$, the gradient of the Moreau-smoothed sample-average function $f_{N_k}(x)$, is defined as $\tfrac{1}{\eta_k}(x_k- \mbox{prox}_{\eta_k,f_{N_k}}(x_k))$, and ${\bar w_k} \triangleq \nabla_x f_{\eta_k,N_k}(x_k)-\nabla_x f_{\eta_k}(x_k)$. Consequently, the update rule for $x_k$ becomes the following:
\begin{align}\tag{\bf sVS-SQN}\label{sVS-SQN} x_{k+1}:=x_k-\gamma_kH_k{\left(\nabla_{x} f_{\eta_k}(x_k)+{\bar w_k}\right)}. \end{align}
At each iteration of \eqref{sVS-SQN}, the error in the gradient is captured by ${\bar{w}}_k$. We show that ${\bar{w}}_k$ satisfies Assumption \ref{state noise} (NS) by utilizing the following assumption on the sampled gradient of $g$.
\begin{assumption}\label{Lip_moreau} Suppose there exists $\nu>0$ such that $\mathbb{E}[\|{\bar u}_{{k}}\|^2\mid \mathcal F_k] \leq {\tfrac{\nu^2}{N_k}}$ holds almost surely for all $k\geq 0$, where ${\bar{u}_k}=\nabla_{x} g(x_{k})-{\tfrac{\sum_{j=1}^{N_k}\nabla_x F(x_k,\omega_{j,k})}{N_k}}$. \end{assumption}
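Numerically, the quantities entering \eqref{sVS-SQN} reduce to proximal evaluations, since $\nabla_x f_{\eta}(x)=\tfrac{1}{\eta}\left(x-\mbox{prox}_{\eta,f}(x)\right)$, and analogously for the sampled function $f_{N_k}$. The sketch below evaluates this gradient by solving the strongly convex proximal subproblem inexactly with a few gradient steps; the inner solver, its step size and iteration count, and the oracle \texttt{grad\_f} are illustrative assumptions rather than part of the analysis.
\begin{verbatim}
def prox_grad_descent(x, eta, grad_f, inner_iters=100):
    """Sketch: approximate prox_{eta,f}(x) = argmin_u f(u) + ||u-x||^2/(2 eta)
    by gradient steps on the subproblem, which is at least (1/eta)-strongly
    convex. Assumes f is smooth enough for grad_f to exist; the heuristic
    inner step size is an assumption and may need tuning."""
    u = x.copy()
    step = eta / 2.0
    for _ in range(inner_iters):
        u = u - step * (grad_f(u) + (u - x) / eta)
    return u

def moreau_grad(x, eta, grad_f):
    """Gradient of the Moreau envelope: (x - prox_{eta,f}(x)) / eta."""
    return (x - prox_grad_descent(x, eta, grad_f)) / eta
\end{verbatim}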
\begin{lemma}\label{bound sub} Suppose $F(x,\omega)$ is $\tau$-strongly convex in $x$ for almost every $\omega$, and let $f_{\eta}$ denote the Moreau-smoothed approximation of $f$. Suppose Assumption \ref{Lip_moreau} holds and $\eta<2/L$. Then $\mathbb E[\|\bar w_k\|^2\mid \mathcal F_k]\leq {\nu_1^2\over {N_k}}$ for all $k\geq 0$, where $\nu_1 \triangleq \nu/(\eta\tau)$. \end{lemma}
\begin{proof} We begin by noting that $f_{N_k}(x)$ is $\tau$-strongly convex. Consider the two problems:
\begin{align} \mbox{prox}_{\eta, f}(x_k) & \triangleq \mbox{arg} \min_{u} \left[ f(u) + {1\over 2\eta} \|x_k-u\|^2\right], \label{prox1} \\ \mbox{prox}_{\eta, f_{N_k}}(x_k) & \triangleq \mbox{arg} \min_{u} \left[ f_{N_k}(u) + {1\over 2\eta} \|x_k-u\|^2\right]. \label{prox2} \end{align}
Suppose $x^*_{{k}}$ and $x^*_{N_k}$ denote the unique optimal solutions of \eqref{prox1} and \eqref{prox2}, respectively. From the definition of Moreau smoothing, it follows that
\begin{align*} \bar{w}_k & = \nabla_x f_{\eta,N_k}(x_k) - \nabla_x f_{\eta}(x_k) = {1\over \eta} (x_k - \mbox{prox}_{\eta, f_{N_k}}(x_k)) - {1\over \eta} (x_k - \mbox{prox}_{\eta, f}(x_k)) \\ & = {1\over \eta}\left(\mbox{prox}_{\eta,f}(x_k) - \mbox{prox}_{\eta, f_{N_k}} (x_k)\right) = {1\over \eta} (x_{k}^* - x_{N_k}^*),\end{align*}
which implies $\mathbb E[\|\bar w_k\|^2\mid \mathcal F_k]={1\over \eta^2}\mathbb E[\|x^*_{{k}}-x^*_{N_k}\|^2\mid \mathcal F_k]$. The following inequalities are a consequence of invoking strong convexity and the optimality conditions of \eqref{prox1} and \eqref{prox2}:
\begin{align*} f(x^*_{N_k}) + {1\over 2\eta} \|x^*_{N_k} - x_k\|^2 &\geq f(x^*_{{k}}) + {1\over 2\eta} \|x^*_{{k}} - x_k\|^2 + {1\over 2} \left(\tau + {1\over \eta}\right) \|x^*_{{k}}-x^*_{N_k}\|^2, \\ f_{N_k}(x^*_{{k}}) + {1\over 2\eta} \|x^*_{{k}} - x_k\|^2 &\geq f_{N_k}(x^*_{N_k})+ {1\over 2\eta} \|x^*_{N_k} - x_k\|^2+ {1\over 2} \left(\tau + {1\over \eta}\right) \|x^*_{N_k}-x^*_{{k}}\|^2. \end{align*}
Adding the above inequalities, we have that
\begin{align*} f(x^*_{N_k}) -f_{N_k}(x^*_{N_k}) + f_{N_k}(x_{{k}}^*)-f(x_{{k}}^*) &\geq \left(\tau+{1\over \eta}\right)\|x^*_{N_k}-x_{{k}}^*\| ^2. \end{align*}
From the definition of $f_{N_k}(x)$ and $\beta \triangleq \tau+\tfrac{1}{\eta}$, and by the convexity and $L$-smoothness of $F(x,\omega)$ in $x$ for a.e. $\omega$, we may derive the following:
\begin{align*} &\beta \|x^*_{{k}}-x^*_{N_k}\|^2 \leq {f(x^*_{N_k}) -f_{N_k}(x^*_{N_k}) + f_{N_k}(x_{{k}}^*)-f(x_{{k}}^*)} \\ & = \frac{\sum_{j=1}^{N_k} (g(x^*_{N_k})-F(x^*_{N_k},\omega_{j,k}))}{N_k} + \frac{\sum_{j=1}^{N_k} (F(x^*_{{k}},\omega_{j,k})-g(x^*_{{k}}))}{N_k} \\ & \leq \frac{\sum_{j=1}^{N_k} \left(g(x^*_k) +\nabla_{x} g(x^*_{k})^T(x^*_{N_k}-x^*_k)+ \tfrac{L}{2}\|x^*_k-x^*_{N_k}\|^2 - F(x^*_k,\omega_{j,k}) - \nabla_x F(x^*_{k},\omega_{j,k})^T(x^*_{N_k}-x^*_k)\right)}{N_k}\\ & + \frac{\sum_{j=1}^{N_k} (F(x^*_{{k}},\omega_{j,k})-g(x^*_{{k}}))}{N_k} = \frac{\sum_{j=1}^{N_k} (\nabla_{x} g(x^*_{k})-\nabla_x F(x^*_k,\omega_{j,k}))^T(x^*_{N_k}-x^*_k)}{N_k}+ \tfrac{L}{2}\|x^*_k-x^*_{N_k}\|^2\\ & = {\bar{u}_k^T}(x^*_{N_k}-x^*_k)+ \tfrac{L}{2}\|x^*_{{k}}-x^*_{N_k}\|^2. \end{align*}
Consequently, by taking conditional expectations and using Assumption \ref{Lip_moreau}, we have the following.
\begin{align*} \beta\, \mathbb{E}[\|x^*_{{k}}-x^*_{N_k}\|^2 \mid \mathcal{F}_k] & \leq \mathbb E[{\bar{u}_k^T}(x^*_{N_k}-x^*_k)\mid \mathcal F_k]+ \tfrac{L}{2}\mathbb E[\|x^*_{{k}}-x^*_{N_k}\|^2\mid \mathcal F_k]\\ & \leq \tfrac{1}{2\tau} \mathbb{E}[\|{\bar{u}_k}\|^2 \mid \mathcal{F}_k] + \tfrac{\tau+L}{2}\mathbb{E}[\|x^*_k-x^*_{N_k}\|^2 \mid \mathcal{F}_k]\\ \implies \mathbb{E}[ \|x^*_{{k}}-x^*_{N_k}\|^2 \mid \mathcal{F}_k] & \leq \tfrac{1}{{\tau^2}} \mathbb{E}[\|{\bar u_k}\|^2 \mid \mathcal{F}_k] \leq \tfrac{1}{{\tau^2}} \tfrac{\nu^2}{N_k}, \mbox{ if $\eta < 2/L$.} \end{align*}
We may then conclude that $\mathbb E[\|\bar w_{k}\|^2\mid \mathcal F_k]\leq {\nu^2\over \eta^2\tau^2N_k}$. \end{proof}
Next, we derive bounds on the eigenvalues of $H_k$ under strong convexity (similar to Lemma~\ref{H_k sc}).
\begin{lemma}[{\bf Properties of the Hessian approximations produced by (L-BFGS) and (sL-BFGS)}]\label{H_k ns sc} Let the function $f$ be $\tau$-strongly convex. Consider the \eqref{sVS-SQN} method. Let $s_i$, $y_i$, and $H_k$ be given by \eqref{lbfgs}. Then $H_k$ satisfies Assumption \ref{assump:Hk}(NS) with ${{\underline{\lambda}}_k}={\eta_k\over (m+n)}$ and ${{\overline{\lambda}}_k}=\left({n+m\over \eta_k\tau}\right)^{m}$. \end{lemma}
We now show that, under Moreau smoothing, a linear rate of convergence is retained.
\begin{theorem}\label{moreau_strong} Consider the iterates generated by the \eqref{sVS-SQN} scheme, where $\eta_k = \eta$ for all $k$. Suppose $f(x)=h(x) + g(x)$, where $h$ is a closed, convex, and proper function, $g(x) \triangleq \mathbb{E}[F(x,\omega)]$, and $F(x,\omega)$ is a $\tau$-strongly convex and $L$-smooth function. Suppose Assumptions \ref{assump:Hk} (NS) and \ref{Lip_moreau} hold. Furthermore, suppose $f_\eta(x)$ denotes a Moreau smoothing of $f(x)$. In addition, suppose $m = 1$, $\eta\leq \min\{2/L,(4(n+1)^2/\tau^2)^{1/3}\}$, $d \triangleq 1-{\tau^2\eta^3\over {4}(n+1)^2(1+\eta\tau)}$, $N_k\triangleq \lceil N_0{q^{-k}}\rceil$ for all $k\geq 1$ with $q<1$, $\gamma\triangleq{\tau\eta^2\over {4}(1+n) }$, $c_1 \triangleq \max\{q,d\}$, and $c_2 \triangleq \min\{q,d\}$. (i) Then $\mathbb E[\|x_{{k}+1}-x^*\|^2] \leq D c_1^{{k}+1}$ for all $k$, where
\begin{align*} \ D \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(c_1-c_2)}\right)\right\}. \end{align*}
(ii) Suppose $x_{{K+1}}$ is an $\epsilon$-solution such that $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$.
Then, the iteration and oracle complexity of computing $x_{K+1}$ are $\mathcal{O}(\ln(1/\epsilon))$ steps and $\mathcal O\left({ 1/ \epsilon}\right)$ sampled gradients, respectively. \end{theorem}
\begin{proof} (i) From the Lipschitz continuity of $\nabla f_{\eta}(x)$ and the update rule \eqref{sVS-SQN}, we have the following:
\begin{align*} f_{\eta}(x_{k+1})&\leq f_{\eta}(x_k)+\nabla f_{\eta}(x_k)^T(x_{k+1}-x_k)+{1\over 2\eta}\|x_{k+1}-x_k\|^2\\& =f_{\eta}(x_k)+\nabla f_{\eta}(x_k)^T\left(-{\gamma}H_k(\nabla f_\eta(x_k)+\bar w_{k,N_k})\right)+{1\over 2\eta}{\gamma}^2\left\|H_k(\nabla f_\eta(x_k)+\bar w_{k,N_k})\right\|^2\\ &=f_{\eta}(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\nabla f_\eta(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\bar w_{k,N_k}+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2\\ &+{\gamma^2\over 2\eta}\|H_k\bar w_{k,N_k}\|^2+{\gamma^2\over \eta}\left(H_k\nabla f_\eta(x_k)\right)^TH_k\bar w_{k,N_k}\\ &\leq f_{\eta}(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\nabla f_\eta(x_k)+{\eta\over 4}\|\bar w_{k,N_k}\|^2+{\gamma^2\over \eta}\|H_k\nabla f_{\eta}(x_k)\|^2+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2\\ &+{{\overline{\lambda}}^2\gamma^2\over 2\eta}\|\bar w_{k,N_k}\|^2+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2+{{\overline{\lambda}}^2\gamma^2\over 2\eta}\|\bar w_{k,N_k}\|^2, \end{align*}
where in the last inequality we used the fact that $a^Tb\leq {\eta\over 4\gamma}\|a\|^2+{\gamma\over \eta}\|b\|^2$ (applied with $a=\bar w_{k,N_k}$ and $b=H_k\nabla f_\eta(x_k)$) together with the bound $\|H_k\bar w_{k,N_k}\|\leq{\overline{\lambda}}\|\bar w_{k,N_k}\|$. From Lemma \ref{bound sub}, $\mathbb E[\|\bar w_{k,N_k}\|^2\mid \mathcal F_k]\leq {\nu_1^2\over {N_k}}$, where $\nu_1=\nu/ (\eta\tau)$. Now, by taking conditional expectations with respect to $\mathcal F_k$ and using Lemma \ref{H_k ns sc} and Assumption \ref{assump:Hk} (NS), we obtain the following:
\begin{align}\label{sc_nonsmooth_bound1} \nonumber\mathbb E\left[f_{\eta}(x_{k+1})-f_{\eta}(x_k)\mid \mathcal F_k\right]& \leq -{\gamma}\nabla f_{\eta}(x_k)^TH_k\nabla f_{\eta}(x_k)+{2\gamma^2\over \eta}\|H_k\nabla f_{\eta}(x_k)\|^2+{\left({{{\overline{\lambda}}}^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\ \nonumber& ={\gamma}\nabla f_{\eta}(x_k)^TH_k^{1/2}\left(-I+{2\gamma\over \eta}H_k\right)H_k^{1/2}\nabla f_{\eta}(x_k)+{\left({{{\overline{\lambda}}}^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\ \nonumber& \leq -{\gamma} \left(1-{2\gamma\over \eta}{\overline{\lambda}}\right)\|H_k^{1/2}\nabla f_{\eta}(x_k)\|^2+{\left({{\overline{\lambda}}^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\ &= {-{\gamma}\over 2}\|H_k^{1/2}\nabla f_{\eta}(x_k)\|^2+{5\eta\nu_1^2\over 16{N_k}}, \end{align}
where the last equality follows from the choice $\gamma={\eta\over 4{\overline{\lambda}}}$. Since $f_{\eta}$ is $\tau/(1+\eta\tau)$-strongly convex (Lemma~\ref{feta}), $\|\nabla f_{\eta}(x_k)\|^2\geq {2\tau\over 1+\eta\tau} \left(f_{\eta}(x_k)-f_{\eta}(x^*)\right)$.
Consequently, by subtracting $f_\eta(x^*)$ from both sides and invoking Lemma \ref{H_k ns sc}, we obtain:
\begin{align}\label{strong_nonsmooth_moreau} \mathbb E\left[f_{\eta}(x_{k+1})-f_\eta(x^*)\mid \mathcal F_k\right] & \leq f_{\eta}(x_{k})-f_\eta(x^*)-{\gamma{\underline{\lambda}}\over 2}\|\nabla f_{\eta}(x_k)\|^2+{5\eta\nu_1^2\over 16{N_k}}\\ & \leq\left(1-{\tau\over 1+\eta\tau}\gamma{\underline{\lambda}}\right)(f_{\eta}(x_{k})-f_\eta(x^*))+{5\eta\nu_1^2\over 16{N_k}}.\notag \end{align}
Then, by taking unconditional expectations, we obtain the following sequence of inequalities:
\begin{align} \notag \qquad \mathbb E\left[f_{\eta}(x_{k+1})-f_\eta(x^*)\right] & \leq \left(1-{\tau\over 1+\eta\tau}\gamma{\underline{\lambda}}\right)\mathbb E\left[f_{\eta}(x_{k})-f_\eta(x^*)\right]+{5\eta\nu_1^2\over 16{N_k}} \\ \label{bound f_eta_moreau} & =\left(1-{\tau^2\eta^3\over 4(n+1)^2(1+\eta\tau)}\right)\mathbb E\left[f_{\eta}(x_{k})-f_\eta(x^*)\right]+{5\eta\nu_1^2\over 16{N_k}}, \end{align}
where the last equality arises from choosing ${\underline{\lambda}}={\eta\over 1+n}$ and ${\overline{\lambda}}={1+n\over\tau \eta}$ (by Lemma \ref{H_k ns sc} for $m=1$) and $\gamma={\eta\over 4{\overline{\lambda}}}={\tau\eta^2\over 4(1+n) }$. Let $d \triangleq 1-{\tau^2\eta^3\over 4(n+1)^2(1+\eta\tau)}$ and $b_k \triangleq {5\eta\nu_1^2\over 16{N_k}}$. Then, for $\eta<(4(n+1)^2/\tau^2)^{1/3}$, we have $d<1$. Furthermore, by recalling that $N_k=\lceil N_0q^{-k}\rceil$, it follows that $b_k \leq \tfrac{5\eta \nu_1^2q^k}{16N_0}$, and we obtain the following bound from \eqref{bound f_eta_moreau}:
\begin{align*} \mathbb E\left[f_{\eta}(x_{K+1})-f_\eta(x^*)\right]& \leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+\sum_{i=0}^{K}d^{K-i}b_i \\ & \leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+{5\eta\nu_1^2\over 16{N_0}}\sum_{i=0}^{K}d^{K-i}q^{i}. \end{align*}
If $q<d$, then $\sum_{i=0}^{K}d^{K-i}q^{i}=d^K \sum_{i=0}^K (q/d)^i\leq d^K\left({1\over 1-q/d}\right)$. Since $f_{\eta}$ retains the minimizers of $f$ and is $\tau/(1+\eta\tau)$-strongly convex, ${\tau\over 2(1+\eta\tau)}\|x_k-x^*\|^2\leq f_{\eta}(x_k)-f_\eta(x^*)$, implying the following:
\begin{align*} {\tau\over 2(1+{\eta}\tau)}\mathbb E[\|x_{K+1}-x^*\|^2]& \leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+d^K\left({5\eta{\nu_1}^2\over 16{{N_0}}(1-q/d)}\right). \end{align*}
Dividing both sides by ${\tau\over 2(1+\eta\tau)}$, the desired result is obtained:
\begin{align*} \mathbb E[\|x_{K+1}-x^*\|^2] & \leq d^{K+1}\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right)+d^K\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(1-q/d)}\right) \leq D d^{K+1}, \\ \mbox{ where } D & \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(d-q)}\right)\right\}. \end{align*}
Similarly, if $d<q$, then $\mathbb E[\|x_{K+1}-x^*\|^2]\leq D q^{K+1}$, where $$\ D \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(q-d)}\right)\right\}.$$
(ii) To find an $x_{K+1}$ such that $\mathbb E[\|x_{K+1}-x^*\|^2]\leq \epsilon$, suppose $d<q$ with no loss of generality. Then, for some $C>0$, ${Cq^K}\leq \epsilon$, implying that $K=\lceil{\log}_{1/q}(C/\epsilon)\rceil$.
It follows that
\begin{align*} \sum_{k=0}^K N_k\leq\sum_{k=0}^{1+{\log}_{1/q}\left({C\over \epsilon}\right)} N_0q^{-k} = N_0 \frac{\left({1\over q}\right)^{2}\left( {1\over q}\right)^{\log_{1/q}\left(\tfrac{C}{\epsilon}\right)}-1} {{1\over q}-1} \leq \frac{N_0\left(\tfrac{C}{\epsilon}\right)}{q(1-q)} ={\mathcal O(1/\epsilon)}. \end{align*} \end{proof}
\begin{remark} While a linear rate has been proven via Moreau smoothing, the effort to compute a gradient of the Moreau map \eqref{moreau} may be expensive. In addition, $f(x)$ is defined as the sum of a deterministic closed, convex, and proper function $h(x)$ and an expectation-valued $L$-smooth and strongly convex function $g(x)$. This motivates considering more general expectation-valued functions with nonsmooth convex integrands. We examine smoothing avenues for such problems next; this necessitates driving the smoothing parameter to zero, which leads to a significantly poorer convergence rate, although the per-iteration complexity can be much smaller. \end{remark}
\noindent {\bf (II) $(\alpha,\beta)$ smoothing with diminishing $\eta$.} Consider \eqref{main problem} where $f(x)$ is strongly convex and nonsmooth, while $F(x,\omega)$ is assumed to be an $(\alpha,\beta)$-smoothable function for every $\omega \in \Omega$. Instances include settings where $f(x) { \ \triangleq \ } h(x)+g(x)$, $h(x)$ is strongly convex and smooth, and $g(x)$ is convex and nonsmooth. In contrast with (I), in this subsection we do not require such a structure and allow for the stochastic component to be afflicted by nonsmoothness. We impose the following assumption on the sequence of smoothed functions.
\begin{assumption}\label{nonsmooth_bound_all} Let $f_{\eta_k}(x)$ be a smoothed counterpart of $f(x)$ with parameter $\eta_k$, where $\eta_{k+1} \leq \eta_k$ for $k \geq 0$. There exists a scalar $B$ such that $f_{\eta_{k+1}}(x) \leq f_{\eta_k}(x)+{1\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}\right)B^2$ for all $x$. \end{assumption}
We observe that Assumption~\ref{nonsmooth_bound_all} holds for some common smoothings of convex nonsmooth functions~\cite{beck17fom} that satisfy $(\alpha,\beta)$ smoothability, as verified next.
\begin{lemma} Consider a convex function $f(x)$ and any $\eta>0$. Then Assumption~\ref{nonsmooth_bound_all} holds for the following smoothings for any $x$.
\begin{enumerate} \item[(i)] $f(x) { \ \triangleq \ } \|x\|_2$ and $f_{\eta}(x) \ \triangleq \ \sqrt{\|x\|^2_2+\eta^2}-\eta.$ \item[(ii)] $f(x) \ \triangleq \ \max\{x_1,x_2\}$ and $f_{\eta}(x) \ \triangleq \ \eta \ln( e^{x_1/ \eta} +e^{x_2/\eta})-\eta \ln(2)$. \end{enumerate} \end{lemma}
\begin{proof} {\bf (i)} The following holds for any $B$ and $\eta_k\geq \eta_{k+1}>0$ such that ${1\over 2}B^2 {\eta_k\over \eta_{k+1}}\geq 1$:
\begin{align*} f_{\eta_{k+1}}(x)=\sqrt{\|x\|^2_2+\eta_{k+1}^2}-\eta_{k+1}\leq \sqrt{\|x\|^2_2+\eta_k^2}-\eta_k+(\eta_{k}-\eta_{k+1})\leq f_{\eta_k}(x)+{1\over 2}B^2{\eta_k\over \eta_{k+1}}(\eta_k-\eta_{k+1}).
\end{align*} {\bf (ii)} Using the fact that ${\eta_{k+1}\over \eta_k}\leq 1$, the following holds if $x_2 < x_1$ (without loss of generality):
\begin{align*} f_{\eta_{k+1}}(x)&= \eta_{k+1} \ln\left(e^{x_1/ \eta_{k+1}} + e^{x_2/\eta_{k+1}}\right)-\eta_{k+1} \ln(2)\\ &=\eta_k \ln\left(e^{x_1/ \eta_{k+1}} + e^{x_2/\eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}-\eta_{k+1} \ln(2)-\eta_k \ln(2)+\eta_k \ln(2)\\ & = \eta_k \ln\left(\left(e^{x_1/\eta_{k+1}}\right)^{\eta_{k+1} \over \eta_{k}}\left( 1+ e^{{(x_2-x_1)}/ \eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & = \eta_k \ln\left(\left(e^{x_1/\eta_{k}}\right)\left( 1+ e^{{(x_2-x_1)}/ \eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & \leq \eta_k \ln\left(\left(e^{x_1/\eta_{k}}\right)\left( 1+ {\eta_{k+1}\over \eta_k}e^{{(x_2-x_1)}/ \eta_{k+1}}\right)\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & = \eta_k \ln\left(e^{x_1/\eta_{k}}+ {\eta_{k+1}\over \eta_k}e^{{x_1/\eta_k}+{(x_2-x_1)}/ \eta_{k+1}}\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & = \eta_k \ln\left(e^{x_1/\eta_{k}}+ {\eta_{k+1}\over \eta_k}e^{{x_2/\eta_k}+(x_2-x_1)\left(\tfrac{1}{\eta_{k+1}}-\tfrac{1}{\eta_k}\right)}\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & = \eta_k \ln\left(e^{x_1/\eta_{k}}+ e^{x_2/\eta_k} \underbrace{\left({\eta_{k+1}\over \eta_k}e^{\left(x_2-x_1\right)\left(\tfrac{1}{\eta_{k+1}}-\tfrac{1}{\eta_k}\right)}\right)}_{ \scriptsize \ \leq \ 1 }\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2) \end{align*}
\begin{align*} & \leq \eta_k \ln\left(e^{x_1/\eta_{k}}+ e^{x_2/\eta_k}\right)-\eta_{k}\ln(2)+(\eta_k-\eta_{k+1}) \ln(2)\\ & = f_{\eta_k}(x)+(\eta_k-\eta_{k+1})\ln(2)\leq f_{\eta_k}(x)+{1\over 2}\frac{\eta_k}{\eta_{k+1}}\left({\eta_k}-{\eta_{k+1}}\right)B^2, \end{align*}
where the first inequality follows from ${{a^y} \leq 1 + y(a-1)}$ for $y \in [0,1]$ and $a \geq 1$, the second inequality follows from $x_2<x_1$, and the third is a result of noting that $\tfrac{\eta_k}{2\eta_{k+1}}B^2 \geq 1$. \end{proof}
We are now ready to provide our main convergence rate for more general smoothings. Note that, without loss of generality, we assume that $F(x,\omega)$ is $(1,\beta)$-smoothable for every $\omega \in \Omega$.
\begin{lemma}[{\bf Smoothability of $f$}]\label{lemma-smooth-f} Consider a function $f(x) \triangleq \mathbb{E}[F(x,\omega)]$ such that $F(x,\omega)$ is $(\alpha,\beta)$ smoothable for every $\omega \in \Omega$. Then $f(x)$ is $(\alpha,\beta)$ smoothable. \end{lemma}
\begin{proof} By hypothesis, $ F_{\eta}(x,\omega) \leq F(x,\omega) \leq F_{\eta}(x,\omega)+\eta \beta$ for every $x$. Then, by taking expectations, we have that $f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta \beta$ for every $x$. In addition, by the $\alpha/\eta$-smoothness of $F_{\eta}$ and Jensen's inequality, we have $ \| \nabla_x f_{\eta}(x) - \nabla_x f_{\eta}(y) \| \overset{\tiny \mbox{Jensen's}}{\leq} \mathbb{E}\left[\|\nabla_x F_{\eta}(x,\omega) - \nabla_x F_{\eta}(y,\omega) \|\right] \leq {\alpha \over \eta} \|x-y\| $ for all $x,y$. \end{proof}
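As a quick numerical sanity check of the two smoothings above and of the $(1,\beta)$-smoothability bracketing $f_{\eta}\leq f\leq f_{\eta}+\eta\beta$ used in the proof of Lemma \ref{lemma-smooth-f}, one may verify the inequalities directly at random test points; the snippet below is a sketch with illustrative test data.
\begin{verbatim}
import numpy as np

def f1(x):          return np.linalg.norm(x)
def f1_eta(x, eta): return np.sqrt(np.linalg.norm(x)**2 + eta**2) - eta

def f2(x):          return max(x[0], x[1])
def f2_eta(x, eta): # numerically stable log-sum-exp smoothing of max{x1,x2}
    return eta * np.logaddexp(x[0] / eta, x[1] / eta) - eta * np.log(2)

rng = np.random.default_rng(0)
for eta in (1.0, 0.1, 0.01):
    for _ in range(1000):
        x = rng.normal(size=2)
        assert f1_eta(x, eta) <= f1(x) <= f1_eta(x, eta) + eta              # beta = 1
        assert f2_eta(x, eta) <= f2(x) <= f2_eta(x, eta) + eta * np.log(2)  # beta = ln 2
\end{verbatim}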
We now prove our main convergence result.
\begin{theorem}[{\bf Convergence in mean}]\label{thm:mean:nonsmooth:strong} Consider the iterates generated by the \eqref{sVS-SQN} scheme. Suppose $f$ and $f_{\eta_k}$ are $\tau$-strongly convex and Assumptions~\ref{assum:convex2}, \ref{state noise} (NS-M), \ref{state noise} (NS-B), \ref{assump:Hk} (NS), and \ref{nonsmooth_bound_all} hold. In addition, suppose $m = 1$, $\eta_k \triangleq \left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$, ${N_0=\lceil {2^{4/3}\nu_1^2(n+1)^{1/3}\over \tau^{5/3}}\rceil}$, $N_k \triangleq \lceil N_0(k+2)^{a+2/3}\rceil$ for some $a>1$, and $\gamma_k \triangleq {\tau\eta_k^2\over 1+n }$ for all $k\geq 1$. (i) Then, for any $K\geq 1$, the following holds:
\begin{align*} \mathbb E\left[f(x_{K+1})-f(x^*)\right]&\leq{f(x_0)-f(x^*)\over K+2}+\left(\frac{(n+1)^{1/3}}{2^{2/3}\tau^{2/3}(a-1)}\right){2\nu_1^2\|x^*\|^2+\nu_2^2\over K+2}\\&+\left(\frac{2(n+1)^{{2/3}}}{ \tau^{2/3}}\right){B^2\over (K+3)^{1/3}}+\left(\frac{2^{5/3}(n+1)^{2/3}}{\tau^{7/3}(a-2/3)}\right){B^2\nu_1^2\over K+2}=\mathcal O(1/K^{1/3}). \end{align*}
(ii) Suppose $x_{K+1}$ is an $\epsilon$-solution such that $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$. Then the iteration and oracle complexity of \eqref{sVS-SQN} are $\mathcal{O}(1/\epsilon^{3})$ steps and $\mathcal O\left({ 1\over \epsilon^{8+\varepsilon}}\right)$ sampled gradients (for arbitrarily small $\varepsilon>0$), respectively. \end{theorem}
\begin{proof} (i) By Lemma~\ref{lemma-smooth-f} and Assumption~\ref{assum:convex2}, $f$ is $(1,\beta)$-smoothable and $f_{\eta}$ has $1/\eta$-Lipschitz gradients. From the Lipschitz continuity of $\nabla f_{\eta_k}(x)$ and the definition of \eqref{sVS-SQN}, the following holds:
\begin{align*} f_{\eta_k}(x_{k+1})&\leq f_{\eta_k}(x_k)+\nabla f_{\eta_k}(x_k)^T(x_{k+1}-x_k)+{1\over 2\eta_k}\|x_{k+1}-x_k\|^2\\& =f_{\eta_k}(x_k)+\nabla f_{\eta_k}(x_k)^T\left(-\gamma_kH_k(\nabla f_{\eta_k}(x_k)+\bar w_{k,N_k})\right)+{1\over 2\eta_k}\gamma_k^2\left\|H_k(\nabla f_{\eta_k}(x_k)+\bar w_{k,N_k})\right\|^2. \end{align*}
Now, by taking conditional expectations with respect to $\mathcal F_k$ and using Lemma \ref{rsLBFGS-matrix}(c), Assumptions \ref{state noise} (NS-M) and (NS-B), Assumption \ref{assump:Hk} (NS), and \eqref{unbias_smooth}, we obtain:
\begin{align}\label{sc_nonsmooth_bound2} \nonumber&\mathbb E\left[f_{\eta_k}(x_{k+1})-f_{\eta_k}(x_k)\mid \mathcal F_k\right]\leq -\gamma_k \nabla f_{\eta_k}(x_k)^TH_k\nabla f_{\eta_k}(x_k)+{1\over 2\eta_k}\gamma_k^2\|H_k\nabla f_{\eta_k}(x_k)\|^2\\ \nonumber&+{\gamma_k^2{\overline{\lambda}}_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\ \nonumber& =-\gamma_k\nabla f_{\eta_k}(x_k)^TH_k^{1/2}\left(I-{1\over 2\eta_k}\gamma_kH_k\right)H_k^{1/2}\nabla f_{\eta_k}(x_k)+{\gamma_k^2{\overline{\lambda}}_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\ \nonumber& \leq -\gamma_k \left(1-{1\over 2\eta_k}\gamma_k{\overline{\lambda}}_k\right)\|H_k^{1/2}\nabla f_{\eta_k}(x_k)\|^2+{\gamma_k^2{\overline{\lambda}}_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\ &\leq {-\gamma_k\over 2}\|H_k^{1/2}\nabla f_{\eta_k}(x_k)\|^2+{\eta_k( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2N_k}, \end{align}
where in the first inequality we use the fact that $\mathbb E[\bar w_{k,N_k}\mid \mathcal F_k]=0$, in the second inequality we employ $H_k\preceq {\overline{\lambda}}_k \mathbf{I}$, and the last inequality follows from the choice $\gamma_k= {\eta_k\over {\overline{\lambda}}_k}$. Since $f_{\eta_k}$ is strongly convex with modulus $\tau$, $\|\nabla f_{\eta_k}(x_k)\|^2\geq 2\tau \left(f_{\eta_k}(x_k)-f_{\eta_k}(x^*)\right)\geq 2\tau \left(f_{\eta_k}(x_k)-f(x^*)\right)$.
By subtracting $f(x^*)$ from both sides, invoking Lemma \ref{rsLBFGS-matrix} (c), and taking unconditional expectations, we obtain:
\begin{align}\label{strong_nonsmooth} &\mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber\leq \mathbb E[f_{\eta_k}(x_{k})-f(x^*)]-{\gamma_k{\underline{\lambda}}_{k}\over 2}\mathbb E[\|\nabla f_{\eta_k}(x_k)\|^2]+{\eta_k( \nu_1^2\mathbb E[\|x_k\|^2]+\nu_2^2)\over 2N_k}\\ \nonumber& \leq\left(1-\tau\gamma_k{\underline{\lambda}}_{k}\right)\mathbb E[f_{\eta_k}(x_{k})-f(x^*)]+{\eta_k( \nu_1^2\mathbb E[\|x_k-x^*+x^*\|^2]+\nu_2^2)\over 2N_k}\\& \leq\left(1-\tau\gamma_k{\underline{\lambda}}_{k}\right)\mathbb E[f_{\eta_k}(x_{k})-f(x^*)]+\frac{{2}\eta_k\nu_1^2\mathbb E[\|x_k-x^*\|^2]}{2N_k}+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}. \end{align}
By the strong convexity of $f$ and the relationship between $f$ and $f_{\eta_k}$, ${\tau\over 2}\|x_k-x^*\|^2\leq f(x_k)-f(x^*)\leq f_{\eta_k}(x_k)-f(x^*)+\eta_k{\beta}$. Therefore, \eqref{strong_nonsmooth} can be written as follows:
\begin{align}\label{strong:nonsmooth} \mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber &\leq \left(1-\tau\gamma_k{\underline{\lambda}}_{k}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{2\nu_1^2\eta_k^2{\beta}\over \tau N_k}\\&+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}. \end{align}
By choosing $m=1$, ${\underline{\lambda}}_{k}={\eta_k\over 1+n}$, ${\overline{\lambda}}_{k}={1+n\over\tau \eta_k}$, and $\gamma_k={\eta_k\over {\overline{\lambda}}_k}={\tau\eta_k^2\over 1+n }$, \eqref{strong:nonsmooth} can be rewritten as follows:
\begin{align}\label{bound f_eta} \mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber& \leq\left(1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{2\nu_1^2\eta_k^2{\beta}\over \tau N_k}\\&+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}. \end{align}
By using Assumption \ref{nonsmooth_bound_all}, we have the following for any $x_{k+1}$:
\begin{align}\label{bound f_eta_k+1} f_{\eta_{k+1}}(x_{k+1})\leq f_{\eta_k}(x_{k+1})+{1\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}\right)B^2. \end{align}
Substituting \eqref{bound f_eta_k+1} in \eqref{bound f_eta} leads to the following:
\begin{align*} \mathbb E\left[f_{\eta_{k+1}}(x_{k+1})-f(x^*)\right]& \leq\left(1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}\\&+{{\max \{B^2,\beta\}}\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}+{4\nu_1^2\eta_k^2\over \tau N_k}\right). \end{align*}
Let $d_k \triangleq 1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}$, $b_k \triangleq {\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}$, and $c_k \triangleq {\eta_k^2\over \eta_{k+1}}-{\eta_k}+{4\nu_1^2\eta_k^2\over \tau N_k}$. Therefore, the following is obtained recursively by using the fact that $\mathbb E[f_{\eta_0}(x_0)]\leq \mathbb E[f(x_0)]$:
\begin{align*} \mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right]& \leq \left(\prod_{k=0}^{K}d_k\right)\mathbb E[f(x_0)-f(x^*)]+\sum_{i=0}^{K}\left(\prod_{j=0}^{K-i-1}d_{K-j}\right)b_i\\&+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^{K}\left(\prod_{j=0}^{K-i-1}d_{K-j}\right)c_i.
\end{align*} By choosing $\eta_k=\left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$, $N_k=\lceil N_0(k+2)^{a+2/3}\rceil$ for all $k\geq 1$ with $a>1$, and $N_0=\lceil {2^{4/3}\nu_1^2(n+1)^{1/3}\over \tau^{5/3}}\rceil$, and noting that $f(x_0)-f(x^*)\geq0$, we obtain that
\begin{align}\prod_{k=0}^{K} d_k\leq\prod_{k=0}^{K}\left(1-{2\over k+2}+{1\over (k+2)^{a+1}}\right)\leq \prod_{k=0}^{K}\left(1-{1\over k+2}\right) = {1\over K+2}\end{align}
and $\prod_{j=0}^{K-i-1} d_{K-j}\leq{i+2\over K+2}$. Hence, we have that
\begin{align}\label{bound f simplify} &\mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right] \leq {1\over K+2}\left(f(x_0)-f(x^*)\right)+\sum_{i=0}^{K}{b_i(i+2)\over K+2}+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^K {c_i(i+2)\over K+2}\\ \nonumber&={{\left(f(x_0)-f(x^*)\right)} \over K+2}+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)2^{1/3}(n+1)^{1/3}\over 2\tau^{2/3}}\sum_{i=0}^{K}{(i+2)^{2/3}\over (K+2)N_i}+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^K {c_i(i+2)\over K+2}. \end{align}
Note that we have the following inequality from the definition of $c_i=\overbrace{\left({\eta_i^2\over \eta_{i+1}}-{\eta_i}\right)}^{A_i}+\overbrace{{4\nu_1^2\eta_i^2\over \tau N_i}}^{D_i}$ and by recalling that $\eta_k =\left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$:
\begin{align}\label{bound c_i} \sum_{i=0}^K A_i(i+2)&\nonumber= \sum_{i=0}^K \left({\eta_i^2\over \eta_{i+1}}-{\eta_i}\right) (i+2) = {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}}\sum_{i=0}^K \left({(i+3)^{1/3}\over (i+2)^{2/3}}-{1\over (i+2)^{1/3}}\right){(i+2)}\\&\leq {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}}\sum_{i=0}^K \left((i+3)^{2/3}-(i+2)^{2/3}\right)\leq {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}} (K+3)^{2/3}. \end{align}
Additionally, for any $a>1$, the following holds:
\begin{align} \label{integ-bound} \sum_{i=0}^{K}{1\over (i+2)^a}\leq \int_{-1}^{K}{1\over (x+2)^a}dx={1\over 1-a}(K+2)^{1-a}+{1\over a-1}\leq {1\over a-1}. \end{align}
We also have that the following inequality holds if $N_k=\lceil N_0(k+2)^{a+2/3}\rceil $:
\begin{align} \sum_{i=0}^K D_i(i+2)&={2^{8/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}}\sum_{i=0}^K {1\over (i+2)^{a +1/3}}\overset{\tiny \eqref{integ-bound}}{\leq} {2^{8/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)}\label{bound D_i}. \end{align}
Therefore, substituting \eqref{bound c_i} and \eqref{bound D_i} within \eqref{bound f simplify}, we have:
\begin{align*} \mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right]& \ \leq \ {1\over K+2}\mathbb E[f(x_0)-f(x^*)]+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)(n+1)^{1/3}\over 2^{2/3}(K+2)\tau^{2/3}(a-1)}\\& +{{\max \{B^2,\beta\}}(n+1)^{{2/3}}(K+3)^{2/3}\over 2^{2/3}N_0\tau^{2/3}(K+2)}+{{\max \{B^2,\beta\}}2^{5/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)(K+2)} . \end{align*}
Now, by using the fact that $f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta {\beta}$, we obtain:
\begin{align*} \mathbb E&\left[f(x_{K+1})-f(x^*)\right]\leq{1\over K+2}\mathbb E[f(x_0)-f(x^*)]+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)(n+1)^{1/3}\over 2^{2/3}(K+2)\tau^{2/3}(a-1)}\\&+{{\max \{B^2,\beta\}}(n+1)^{{2/3}}\over \tau^{2/3}}\left({(K+3)^{2/3}\over 2^{2/3}(K+2)}+{1\over (K+3)^{1/3}}\right)+{{\max \{B^2,\beta\}}2^{5/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)(K+2)}\\&\leq{\mathbb E[f(x_0)-f(x^*)]\over K+2}+\left(\frac{(n+1)^{1/3}}{2^{2/3}\tau^{2/3}(a-1)}\right){2\nu_1^2\|x^*\|^2+\nu_2^2\over K+2}\\&+\left(\frac{2(n+1)^{{2/3}}}{ \tau^{2/3}}\right){{\max \{B^2,\beta\}}\over (K+3)^{1/3}}+\left(\frac{2^{5/3}(n+1)^{2/3}}{\tau^{7/3}(a-2/3)}\right){{\max \{B^2,\beta\}}\nu_1^2\over K+2}=\mathcal O(1/K^{1/3}).
\end{align*} (ii) To find $x_{K+1}$ such that $\mathbb E[f(x_{ K+1})]-f^*\leq \epsilon$, we require ${C\over K^{1/3}}\leq \epsilon$, where $C>0$ denotes the constant implied by the $\mathcal O(1/K^{1/3})$ bound in part (i); this implies that $K=\lceil {\left(C\over \epsilon\right)^{3}}\rceil$. Therefore, by utilizing the identity $ \lceil x \rceil \leq 2x$ for $x \geq 1$, we have the following for $a=1+\varepsilon$:
\begin{align*} \sum_{k=0}^K N_k\leq\sum_{k=0}^{1+\left({C\over \epsilon}\right)^3} 2 N_0 (k+2)^{5/3+\varepsilon} \leq \int_0^{2+\left({C\over \epsilon}\right)^3}2N_0 (x+2)^{5/3+\varepsilon}dx\leq\frac{2N_0\left(4+\left({C\over \epsilon}\right)^3\right)^{8/3+\varepsilon}}{8/3+\varepsilon} = \mathcal O(1/\epsilon^{8+3\varepsilon}), \end{align*}
which, upon relabeling $3\varepsilon$ as $\varepsilon$, yields the claimed $\mathcal O(1/\epsilon^{8+\varepsilon})$ oracle complexity. \end{proof}
\begin{remark} Instead of iteratively reducing the smoothing parameter, one may employ a fixed smoothing parameter for all $k$, i.e., $\eta_k=\eta$. By similar arguments, we obtain the following inequalities for $N_k=\lceil N_0 \rho^{-k}\rceil$, where $0<\rho<1$ and $N_0>{4\nu_1^2(n+1)\over \tau^3 \eta^2}$:
\begin{align*} \mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq \alpha_0^k\mathbb E\left[f_{\eta}(x_0)-f_\eta(x^*)\right]+{\eta\alpha_0^k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2(1-{\rho\over \alpha_0})}+{\eta B^2\over {1\over \alpha_0}-1}+{2B^2\nu_1^2\eta^2\alpha_0^k\over \tau(1-{\rho\over \alpha_0})}, \end{align*}
where $\alpha_k=1-{\tau^2\eta^3\over n+1}+{2\nu_1^2\over \tau N_k}$. To find $x_{K+1}$ such that $\mathbb E[f(x_{ K+1})]-f^*\leq \epsilon$, one can verify that $K>\mathcal O\left({\ln(1/\epsilon)\over \ln(1/(1-\epsilon^3))}\right)$, which is slightly worse than the $\mathcal O(\epsilon^{-3})$ iteration complexity obtained with iterative smoothing. Note that in Section 3.2 (I), we merely require a uniform bound on the subgradients of $F(x,\omega)$, a requirement that allows for applying Moreau smoothing (but does not require unbiasedness). However, in Section 3.2 (II), we do assume that an unbiased gradient of the smoothed function $f_{\eta}(x)$ is available (Assumption~\ref{state noise} (NS-B)). This holds, for instance, when we have access to the true gradient of $ F_{\eta}(x,\omega)$, i.e., $\nabla_x F_{\eta}(x,\omega)$. In this case, unbiasedness follows directly, as seen next. Let $f_\eta(x)\triangleq \mathbb E[F_\eta(x,\omega)]$. By using Theorem 7.47 in \cite{shapiro09lectures} (interchangeability of the derivative and the expectation), we have:
\begin{align}\label{unbias_smooth}\nabla f_\eta(x)=\nabla \mathbb E[F_\eta(x,\omega)]=\mathbb E[\nabla F_\eta(x,\omega)]\implies \mathbb E[\nabla f_\eta(x)-\nabla F_\eta(x,\omega)]=0.\end{align}
In an effort to be more general, we assume that there exists an oracle that can produce an unbiased estimate of $\nabla_x f_{\eta}(x)$, where $f_\eta(x) \triangleq \mathbb{E}[F_{\eta}(x,\omega)]$, for every $\eta > 0$, as formalized by Assumption~\ref{state noise} (NS-B). \end{remark}
\section{Smooth and nonsmooth convex optimization}\label{sec:4}
In this section, we weaken the strong convexity requirement and analyze the rate and oracle complexity of \eqref{rVS-SQN} and \eqref{rsVS-SQN} in the smooth and nonsmooth regimes, respectively.
\subsection{Smooth convex optimization}
Consider the setting where $f$ is an $L$-smooth convex function.
In such an instance, a regularization of $f$ and its gradient can be defined as follows.
\begin{definition}[{\bf Regularized function and gradient map}]\label{def:regularizedf} Given a sequence $\{\mu_k\}$ of positive scalars, the function $f_{\mu_k}$ and its gradient $\nabla f_{\mu_k}(x)$ are defined as follows for any $x_0\in \mathbb R^n$:
\begin{align*} f_{\mu_k}(x)&\triangleq f(x)+\frac{\mu_k}{2}{\|x-x_0\|^2},\quad \hbox{for any } k \geq 0, \qquad \nabla f_{\mu_k}(x)\triangleq\nabla f(x)+\mu_k(x-x_0),\quad \hbox{for any } k \geq 0. \end{align*}
Then $f_{\mu_k}$ and ${\nabla} f_{\mu_k}$ satisfy the following: (i) $f_{\mu_k}$ is $\mu_k$-strongly convex; (ii) $f_{\mu_k}$ has Lipschitzian gradients with parameter $L+\mu_k$; (iii) $f_{\mu_k}$ has a unique minimizer over $\mathbb R^n$, denoted by $x^*_k$. Moreover, for any $x \in \mathbb R^n$~\cite[sec. 1.3.2]{polyak1987introduction},
\begin{align*} 2\mu_k (f_{\mu_k}(x)-f_{\mu_k}(x^*_k))& \leq \|\nabla f_{\mu_k}(x)\|^2\leq 2(L+\mu_k) \left(f_{\mu_k}(x)-f_{\mu_k}(x^*_k)\right).\end{align*} \end{definition}
We consider the following update rule \eqref{rVS-SQN}, where $H_k$ is generated by the ({\bf rL-BFGS}) scheme:
\begin{align}\tag{\bf rVS-SQN}\label{rVS-SQN} x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} \nabla_x F_{\mu_k}(x_k,\omega_{j,k})}{N_k}}. \end{align}
For a subset of the results, we assume a quadratic growth property.
\begin{assumption}{\bf(Quadratic growth)}\label{growth} Suppose that the function $f$ has a nonempty set $X^*$ of minimizers, and that there exists $\alpha>0$ such that $f(x)\geq f(x^*)+{\alpha\over 2}\mbox{dist}^2(x,X^*)$ holds for all $x\in \mathbb R^n$. \end{assumption}
In the next lemma, bounds on the eigenvalues of $H_k$ are derived (see Lemma 6 in \cite{yousefian2017stochastic}).
\begin{lemma}[{\bf Properties of Hessian approximations produced by (rL-BFGS)}]\label{rLBFGS-matrix} Consider the \eqref{rVS-SQN} method. Let $H_k$ be given by the update rule \eqref{eqn:H-k}--\eqref{eqn:H-k-m} with $\eta_k = 0$ for all $k$, and let $s_i$ and $y_i$ be defined as in \eqref{equ:siyi-LBFGS}. Suppose $\mu_k$ is updated according to the procedure \eqref{eqn:mu-k}. Let Assumption~\ref{assum:convex-smooth}(a,b) hold. Then the following hold.
\begin{itemize} \item [(a)] For any odd $k > 2m$, $s_k^T{y_k} >0$; \item[(b)] For any odd $k > 2m$, $H_{k}{y}_k=s_k$; \item [(c)] For any $k > 2m$, $H_k$ satisfies Assumption \ref{assump:Hk}(S) with ${{\underline{\lambda}}}={\frac{1}{(m+n)(L+\mu_0^{\bar \delta})}}$ and ${{\overline{\lambda}}_k}= \lambda \mu_k^{-\bar \delta(n+m)}$, where $\lambda \triangleq {\frac{(m+n)^{n+m-1}{(L+\mu_0^{\bar \delta})}^{n+m-1}}{(n-1)!}}$, for scalars $\delta,\bar \delta>0$. Moreover, for all $k$, $H_k = H_k^T$, $H_k$ is $\mathcal F_k$-measurable (so $\mathbb E[{H_k\mid\mathcal F_k}]=H_k$), and ${\underline{\lambda}}\mathbf{I} \preceq H_{k} \preceq {\overline{\lambda}}_k \mathbf{I}$ holds in an a.s. fashion. \end{itemize} \end{lemma}
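Before deriving an error bound, we note that \eqref{rVS-SQN} differs from \eqref{VS-SQN} only in that each sampled gradient is shifted by the regularizer gradient $\mu_k(x_k-x_0)$ and in the $\mu_k$-dependent matrix update. The following minimal sketch illustrates the update; the schedule callables and the oracles \texttt{grad\_F} and \texttt{apply\_H} are illustrative assumptions rather than part of the analysis.
\begin{verbatim}
def rvs_sqn(x0, grad_F, apply_H, gamma, mu, batch, K):
    """Sketch of (rVS-SQN): x_{k+1} = x_k - gamma(k) H_k g_k, where g_k
    averages N_k = batch(k) sampled gradients of the regularized function
    F_{mu_k}(x, omega) = F(x, omega) + (mu(k)/2) ||x - x0||^2."""
    x = x0.copy()
    for k in range(K):
        N_k = batch(k)
        g = sum(grad_F(x) for _ in range(N_k)) / N_k
        g = g + mu(k) * (x - x0)          # gradient of the regularization term
        x = x - gamma(k) * apply_H(k, g)
    return x

# e.g., polynomial schedules (as at the end of this subsection):
# gamma = lambda k: gamma0 * (k + 1) ** (-b); mu = lambda k: mu0 * (k + 1) ** (-c)
\end{verbatim}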
\begin{lemma}[An error bound]\label{lemma:main-ineq} Consider the \eqref{rVS-SQN} method and suppose Assumptions \ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B), \ref{assump:Hk}(S), and \ref{growth} hold. Suppose $\{\mu_k\}$ is a non-increasing sequence, and $\gamma_k$ satisfies
\begin{align}\label{mainLemmaCond}\gamma_k \leq \frac{{{\underline{\lambda}}}}{{{{\overline{\lambda}}}_k ^2}(L+\mu_0)},\quad \hbox{for all }k\geq 0. \end{align}
Then the following inequality holds for all $k$:
\begin{align}\label{ineq:cond-recursive-F-k} \nonumber\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k}]-f^* &\leq (1-{{{\underline{\lambda}}}}\mu_k\gamma_k)(f_{\mu_k}(x_k)-f^*) +\frac{{{\underline{\lambda}}}\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k\\&+\frac{ (L+\mu_k){{\overline{\lambda}}_k ^2}{( \nu_1^2\|x_k\|^2+\nu_2^2)}}{2N_k}\gamma_k^2. \end{align} \end{lemma}
\begin{proof} By the Lipschitzian property of $\nabla f_{\mu_k}$, the update rule \eqref{rVS-SQN}, and Def.~\ref{def:regularizedf}, we obtain
\begin{align}\label{ineq:term1-2} & \quad f_{\mu_k}(x_{{k+1}}) \nonumber\leq f_{\mu_k}(x_k)+\nabla f_{\mu_k}(x_k)^T(x_{k+1}-x_k)+\frac{ (L+\mu_k)}{2}\|x_{k+1}-x_k\|^2 \\&\leq f_{\mu_k}(x_k)-\gamma_k\underbrace{\nabla f_{\mu_k}(x_k)^TH_k(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})}_{\tiny\hbox{Term } 1}+ \frac{ (L+\mu_k)}{2}\gamma_k^2\underbrace{\|H_k(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})\|^2}_{\tiny\hbox{ Term } 2}, \end{align}
where $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left({\nabla_{x}} {F}_{\mu_k}(x_k,\omega_{j,k})-\nabla f_{\mu_k}(x_k)\right)}{N_k}$. Next, we estimate the conditional expectations of Terms 1 and 2. From Assumption \ref{assump:Hk}, we have
\begin{align*} \hbox{Term }1 &= \nabla f_{\mu_k}(x_k)^TH_k\nabla f_{\mu_k}(x_k)+\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}\geq {{\underline{\lambda}}}\|\nabla f_{\mu_k}(x_k)\|^2+\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}. \end{align*}
Thus, taking conditional expectations, we obtain
\begin{align}\label{equ:Term1} \notag\mathbb E[{\hbox{Term } 1\mid\mathcal F_k}] &\notag\geq {{\underline{\lambda}}}\|\nabla f_{\mu_k}(x_k)\|^2+\mathbb E[{\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}\mid\mathcal F_k}]\\ &={{\underline{\lambda}}}\|\nabla f_{\mu_k}(x_k)\|^2+\nabla f_{\mu_k}(x_k)^TH_k\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}] ={{\underline{\lambda}}}\|\nabla f_{\mu_k}(x_k)\|^2,\end{align}
since $\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}]=0$ and $\mathbb E[{H_k\mid\mathcal F_k}]=H_k$ a.s. Similarly, invoking Assumption~\ref{assump:Hk}(S), we may bound Term 2.
\begin{align*} \hbox{Term } 2&= (\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})^TH_k^2(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k}) \leq {{{\overline{\lambda}}_k} ^2}\|\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k}\|^2 \\&={{{\overline{\lambda}}_k} ^2}\left(\|\nabla f_{\mu_k}(x_k)\|^2+\|\bar w_{k,N_k}\|^2+2\nabla f_{\mu_k}(x_k)^T\bar w_{k,N_k}\right).\end{align*}
Taking conditional expectations in the preceding inequality and using Assumptions \ref{state noise} (S-M) and \ref{state noise} (S-B), we obtain
\begin{align}\label{equ:Term2} \mathbb E[{\hbox{Term } 2\mid\mathcal F_k}]\notag&\leq{{\overline{\lambda}}_k^2}\Big(\|\nabla f_{\mu_k}(x_k)\|^2+\mathbb E[{\|\bar w_{k,N_k}\|^2\mid\mathcal F_k}]\\&+2\nabla f_{\mu_k}(x_k)^T\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}]\Big) \leq {{\overline{\lambda}}^2_k}\left(\|\nabla f_{\mu_k}(x_k)\|^2+{{\nu_1^2\|x_k\|^2+\nu_2^2}\over N_k}\right).\end{align}
By taking conditional expectations in \eqref{ineq:term1-2} and using \eqref{equ:Term1}--\eqref{equ:Term2}, we obtain
\begin{align*} \quad \mathbb E[{f_{\mu_k}(x_{k+1})\mid\mathcal F_k}] &\leq f_{\mu_k}(x_k)-\gamma_k{{\underline{\lambda}}}\|\nabla f_{\mu_k}(x_k)\|^2+{{{\overline{\lambda}}}_k ^2}\frac{ (L+\mu_k)}{2}\gamma_k^2\left(\|\nabla f_{\mu_k}(x_k)\|^2+{{\nu_1^2\|x_k\|^2+\nu_2^2}\over N_k}\right) \\ &\leq f_{\mu_k}(x_k)-\frac{\gamma_k{\underline{\lambda}}}{2}\|\nabla f_{\mu_k}(x_k)\|^2\left(2-\frac{{{\overline{\lambda}}_k ^2}\gamma_k(L+\mu_k)}{{{\underline{\lambda}}}}\right)+{{\overline{\lambda}}_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}. \end{align*}
From \eqref{mainLemmaCond}, $\gamma_k\leq \frac{{{\underline{\lambda}}}}{{{\overline{\lambda}}_k ^2}(L+\mu_0)}$ for any $k \geq 0$. Since $\{\mu_k\}$ is a non-increasing sequence, it follows that
\begin{align*} \gamma_k \leq \frac{{{\underline{\lambda}}}}{{{\overline{\lambda}}_k ^2}(L+\mu_k)} \implies 2-\frac{{{\overline{\lambda}}_k ^2}\gamma_k(L+\mu_k)}{{{\underline{\lambda}}}} \geq 1.\end{align*}
Hence, the following holds:
\begin{align*} \mathbb E[{f_{\mu_k}(x_{k+1}) \mid\mathcal F_k}]&\leq f_{\mu_k}(x_k)-\frac{\gamma_k{{\underline{\lambda}}}}{2}\|\nabla f_{\mu_k}(x_k)\|^2+{{\overline{\lambda}}_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}\\ &\hspace{-0.2in} \overset{\tiny \mbox{Def.~\ref{def:regularizedf}}}{\leq} f_{\mu_k}(x_k)-{{\underline{\lambda}}}\mu_k\gamma_k(f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k))+{{\overline{\lambda}}_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}. \end{align*}
By using Definition \ref{def:regularizedf} and the non-increasing property of $\{\mu_k\}$,
\begin{align} \label{ineq:lemmaLastIneq} \notag &\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid \mathcal F_k}] \leq\mathbb E[{f_{\mu_k}(x_{k+1})\mid \mathcal F_k}]\implies\\& \mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k}]\leq f_{\mu_k}(x_k)-{{\underline{\lambda}}}\mu_k\gamma_k(\overbrace{f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k)}^{\tiny{\mbox{Term 3}}})+{{\overline{\lambda}}^2 _k}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}. \end{align}
Next, we derive a lower bound for Term 3. Since $x_k^*$ is the unique minimizer of $f_{\mu_k}$, we have $f_{\mu_k}(x_k^*) \leq f_{\mu_k}(x^*)$.
Therefore, invoking Definition \ref{def:regularizedf}, for an arbitrary optimal solution $x^* \in X^*$,
\begin{align*}
f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k) & \geq f_{\mu_k}(x_k)-f_{\mu_k}(x^*) =f_{\mu_k}(x_k)-f^*-\frac{\mu_k}{2}\|x^*-x_0\|^2.
\end{align*}
From the preceding relation and \eqref{ineq:lemmaLastIneq}, we have
\begin{align*}
\mathbb E[f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k]& \leq f_{\mu_k}(x_k)-\underline{\lambda}\mu_k\gamma_k(f_{\mu_k}(x_k)-f^*)+\frac{\underline{\lambda}\|x^*-x_0\|^2\mu_k^2\gamma_k}{2} \\&+\frac{(L+\mu_k)\overline{\lambda}^2_k(\nu_1^2\|x_k\|^2+\nu_2^2)\gamma_k^2}{2N_k}.
\end{align*}
By subtracting $f^*$ from both sides and noting that this inequality holds for every $x^* \in X^*$, where $X^*$ denotes the solution set, the desired result is obtained.
\end{proof}
We now derive the rate for sequences produced by \eqref{rVS-SQN} under the following assumption.
\begin{assumption}\label{assum:sequences-ms-convergence}
Let the positive sequences $\{N_k,\gamma_k,\mu_k,t_k\}$ satisfy the following conditions:
\begin{itemize}
\item [(a)] $\{\mu_k\}, \{\gamma_k\}$ are non-increasing sequences such that $\mu_k,\gamma_k \to 0$; $\{t_k\}$ is an increasing sequence;
\item [(b)] $\left(1-{\underline{\lambda}\mu_k\gamma_k}+{2(L+\mu_0)\overline{\lambda}_k^2\nu_1^2\gamma_k^2\over N_k\alpha}\right)t_{k+1}\leq t_k, \ \forall k\geq \tilde K$ for some $\tilde K\geq 1$;
\item [(c)] $\sum_{k=0}^{\infty}{\mu_k^2\gamma_k}{t_{k+1}}={\bar c_0}<\infty$;
\item [(d)] $\sum_{k=0}^\infty {\mu_k^{-2\bar\delta(n+m)}\gamma_k^2\over N_k}{t_{k+1}}={\bar c_1}<\infty$.
\end{itemize}
\end{assumption}
\begin{theorem}[{\bf Convergence of \eqref{rVS-SQN} in mean}]\label{thm:mean}
Consider the \eqref{rVS-SQN} scheme and suppose Assumptions~\ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B), \ref{assump:Hk}(S), \ref{growth}, and~\ref{assum:sequences-ms-convergence} hold. Then there exist $\tilde K\geq 1$ and scalars $\bar c_0, \bar c_1$ (defined in Assumption~\ref{assum:sequences-ms-convergence}) such that the following inequality holds for all $K\geq \tilde K+1$:
\begin{align}\label{ineq:bound}
\mathbb E[f(x_{K})-f^*] \leq {t_{\tilde K}\over t_K}\mathbb E[f_{\mu_{\tilde K}}(x_{\tilde K})-f^*] +{\bar c_0+\bar c_1\over t_K}.
\end{align}
\end{theorem}
\begin{proof}
We begin by noting that Assumption~\ref{assum:sequences-ms-convergence}(a,b) implies that \eqref{ineq:cond-recursive-F-k} holds for $k \geq \tilde K$, where $\tilde K$ is defined in Assumption~\ref{assum:sequences-ms-convergence}(b). Since the conditions of Lemma \ref{lemma:main-ineq} are met, taking expectations on both sides of \eqref{ineq:cond-recursive-F-k} yields
\begin{align*}
\mathbb E[f_{\mu_{k+1}}(x_{k+1})-f^*] & \leq \left(1-\underline{\lambda}{\mu_k\gamma_k}\right)\mathbb E[f_{\mu_k}(x_k)-f^*] +\frac{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k \\&+\frac{(L+\mu_0)\overline{\lambda}^2_k(\nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2)}{2N_k}\gamma_k^2 \quad \forall k\geq \tilde K.
\end{align*}
Now, by using the quadratic growth property, i.e.,
$\|x_k-x^*\|^2\leq {2\over \alpha}\left(f(x_k)-f(x^*)\right)$, and the fact that $\|x_k-x^*+x^*\|^2\leq 2\|x_k-x^*\|^2+2\|x^*\|^2$, we obtain the following relationship:
\begin{align*}
\mathbb E[f_{\mu_{k+1}}(x_{k+1})-f^*] & \leq \left(1-\underline{\lambda}\mu_k\gamma_k+{2(L+\mu_0)\overline{\lambda}_k^2\nu_1^2\gamma_k^2\over N_k\alpha}\right)\mathbb E[f_{\mu_k}(x_k)-f^*] +\frac{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k \\&+\frac{(L+\mu_0)\overline{\lambda}^2_k(2\nu_1^2\|x^*\|^2+\nu_2^2)}{2N_k}\gamma_k^2.
\end{align*}
By multiplying both sides by $t_{k+1}$, using Assumption~\ref{assum:sequences-ms-convergence}(b) and $\overline{\lambda}_k=\lambda \mu_k^{-\bar\delta(n+m)}$, we obtain
\begin{align}\label{ineq:cond-recursive-F-k-expected2}
t_{k+1}\mathbb E[f_{\mu_{k+1}}(x_{k+1})-f^*] \leq t_k\mathbb E[f_{\mu_k}(x_k)-f^*] +A_1\mu_k^2\gamma_kt_{k+1} +\frac{A_2\mu_k^{-2\bar\delta(n+m)}}{N_k}\gamma_k^2t_{k+1},
\end{align}
where $A_1\triangleq \tfrac{\underline{\lambda}\,\mbox{\scriptsize dist}^2(x_0,X^*)}{2}$ and $A_2\triangleq\frac{(L+\mu_0)\lambda^2(2\nu_1^2\|x^*\|^2+\nu_2^2)}{2}$. By summing \eqref{ineq:cond-recursive-F-k-expected2} from $k=\tilde K$ to $K-1$, for $K\geq \tilde K+1$, and dividing both sides by $t_K$, we obtain
\begin{align*}
\mathbb E[f_{\mu_K}(x_{K})-f^*] \leq {t_{\tilde K}\over t_K}\mathbb E[f_{\mu_{\tilde K}}(x_{\tilde K})-f^*] +{\sum_{k={\tilde K}}^{K-1}A_1\mu_k^2\gamma_kt_{k+1}\over t_K} +{\sum_{k={\tilde K}}^{K-1}A_2\mu_k^{-2\bar\delta(n+m)}\gamma_k^2t_{k+1}N_k^{-1}\over t_K}.
\end{align*}
From Assumption \ref{assum:sequences-ms-convergence}(c,d), $\sum_{k={\tilde K}}^{K-1}\left( A_1 \mu_k^2\gamma_kt_{k+1}+ A_2 \mu_k^{-2\bar\delta(n+m)}\gamma_k^2{t_{k+1}\over N_k}\right)\leq {A_1\bar c_0+A_2\bar c_1}$. Therefore, by using the fact that $f(x_K)\leq f_{\mu_K}(x_K)$ and absorbing the constants $A_1$ and $A_2$ into $\bar c_0$ and $\bar c_1$, respectively (without loss of generality), we obtain
$\mathbb E[f(x_{K})-f^*] \leq {t_{\tilde K}\over t_K}\mathbb E[f_{\mu_{\tilde K}}(x_{\tilde K})-f^*] +{\bar c_0+\bar c_1\over t_K}.$
\end{proof}
We now show that the requirements of Assumption~\ref{assum:sequences-ms-convergence} are satisfied under suitable assumptions.
\begin{corollary}
Let $N_k\triangleq\lceil N_0 k^a\rceil$, $\gamma_k\triangleq\gamma_0k^{-b}$, $\mu_k\triangleq\mu_0k^{-c}$, and $t_k\triangleq t_0(k-1)^{h}$ for some $a,b,c,h>0$, and let $2\bar \delta (m+n)=\varepsilon$ for $\varepsilon>0$. Then Assumption~\ref{assum:sequences-ms-convergence} holds if $a+2b-c\varepsilon\geq b+c$, $N_0 \geq{(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\underline{\lambda}\mu_0}$, $b+c<1$, $h\leq 1$, $b+2c-h>1$, and $a+2b-h-c\varepsilon>1$.
\end{corollary}
\begin{proof}
From $N_k=\lceil N_0 k^a\rceil\geq N_0 k^a$, $\gamma_k=\gamma_0k^{-b}$, and $\mu_k=\mu_0k^{-c}$, the requirements to satisfy Assumption \ref{assum:sequences-ms-convergence} are as follows:
\begin{itemize}
\item [(a)] $\lim_{k \to \infty }{\gamma_0}k^{-b}=0, \lim_{k \to \infty }{\mu_0}k^{-c}=0 \Leftrightarrow b ,c>0$;
\item [(b)] $\left(1-{\underline{\lambda}\mu_k\gamma_k}+{2(L+\mu_0)\overline{\lambda}_k^2\nu_1^2\gamma_k^2\over N_k\alpha}\right)\leq {t_k\over t_{k+1}} \Leftrightarrow \left(1-{1\over k^{b+c}}+{1\over k^{a+2b-c\varepsilon}}\right)\leq (1-1/k)^h$.
From the Taylor expansion of the right-hand side and assuming $h\leq 1$, we get $\left(1-{1\over k^{b+c}}+{1\over k^{a+2b-c\varepsilon}}\right)\leq 1-M/k$ for some $M>0$ and all $k\geq \tilde K$, which means that $\left(1-{\underline{\lambda}\mu_k\gamma_k}+{2(L+\mu_0)\overline{\lambda}_k^2\nu_1^2\gamma_k^2\over N_k\alpha}\right)\leq {t_k\over t_{k+1}}$ holds if $h\leq1$, $b+c<1$, $a+2b-c\varepsilon\geq b+c$, and $N_0\geq{(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\underline{\lambda}\mu_0}$;
\item [(c)] $\sum_{k=0}^{\infty}{\mu_k^2\gamma_k}{t_{k+1}}<\infty\Leftarrow \sum_{k=0}^\infty {1\over k^{b+2c-h}}<\infty\Leftrightarrow b+2c-h>1$;
\item [(d)] $\sum_{k=0}^\infty {\mu_k^{-2\bar\delta(n+m)}\gamma_k^2\over N_k}{t_{k+1}}<\infty\Leftarrow \sum_{k=0}^\infty {1\over k^{a+2b-h-c\varepsilon}}<\infty\Leftrightarrow a+2b-h-c\varepsilon>1$.
\end{itemize}
\end{proof}
One can easily verify that $a=2+\varepsilon$, $b=\varepsilon$, $c=1-{2\over 3}\varepsilon$, and $h=1-\varepsilon$ satisfy these conditions. We now derive complexity statements for \eqref{rVS-SQN} for this specific choice of parameter sequences.
\begin{theorem}[{\bf Rate statement and oracle complexity}]\label{oracle smooth}
Consider the \eqref{rVS-SQN} scheme and suppose Assumptions~\ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B), \ref{assump:Hk}(S), \ref{growth}, and \ref{assum:sequences-ms-convergence} hold. Suppose $\gamma_k\triangleq{\gamma_0k^{-b}}$, $\mu_k\triangleq{\mu_0k^{-c}}$, $t_k\triangleq t_0(k-1)^h$, and $N_k\triangleq\lceil N_0k^{a}\rceil$, where $N_0\geq{(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\underline{\lambda}\mu_0}$, $a=2+\varepsilon$, $b=\varepsilon$, $c=1-{2\over 3}\varepsilon$, and $h=1-\varepsilon$.
\noindent (i) Then the following holds for $K \geq \tilde{K}$, where $\tilde K\geq 1$ and $\tilde C \triangleq f_{\mu_{\tilde K}}(x_{\tilde K})-f^*$:
\begin{align}\label{rate K}
\mathbb E[f(x_{K})-f^*] \leq {\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}.
\end{align}
(ii) Let ${\epsilon>0}$ and $K\geq \tilde K+1$ be such that $\mathbb E[f(x_{K})]-f^*\leq {\epsilon}$. Then $\sum_{k=0}^{K}N_k\leq \mathcal O\left(\epsilon^{-{3+\varepsilon\over 1-\varepsilon}}\right)$.
\end{theorem}
\begin{proof}
(i) By choosing the sequence parameters as specified, the result follows immediately from Theorem \ref{thm:mean}.
\noindent (ii) To find an $x_{K}$ such that $\mathbb E[f(x_{K})]-f^*\leq {\epsilon}$, it suffices that ${\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}\leq {\epsilon}$, which holds for $K=\lceil \left({C\over \epsilon}\right)^{1\over1-\varepsilon}\rceil$ with $C\triangleq\tilde C+\bar c_0+\bar c_1$. Hence, the following holds:
\begin{align*}
\sum_{k=0}^{K} N_k\leq \sum_{k=0}^{1+{(C/{\epsilon})^{1\over 1-\varepsilon}}}2N_0 k^{2+\varepsilon} \leq 2N_0\int_0^{1+{(C/{\epsilon})}^{1\over 1-\varepsilon}} x^{2+\varepsilon} \, dx=\frac{2N_0\left(1+\left(C/{\epsilon}\right)^{1\over 1-\varepsilon}\right)^{3+\varepsilon}}{3+\varepsilon} \leq \mathcal O\left(\epsilon^{-{3+\varepsilon\over 1-\varepsilon}}\right).
\end{align*}
\end{proof}
One may instead consider the following requirement on the conditional second moment of the sampled gradient in place of the state-dependent noise requirement (Assumption \ref{state noise}).
\begin{assumption}\label{assum_error}
Let $\bar{w}_{k,N_k} \triangleq \nabla_x f(x_k) - \tfrac{\sum_{j=1}^{N_k} \nabla_x F(x_k,\omega_{j,k})}{N_k}$.
Then there exists $\nu>0$ such that $\mathbb{E}[\|\bar{w}_{k,N_k}\|^2\mid \mathcal{F}_k] \leq \tfrac{\nu^2}{N_k}$ and $\mathbb{E}[\bar{w}_{k,N_k} \mid \mathcal{F}_k] = 0$ hold a.s. for all $k$, where $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$.
\end{assumption}
By invoking Assumption \ref{assum_error}, we can derive the rate result without requiring a quadratic growth property of the objective function.
\begin{corollary}[{\bf Rate statement and oracle complexity}]
Consider \eqref{rVS-SQN} and suppose Assumptions~\ref{assum:convex-smooth}, \ref{assump:Hk}(S), \ref{assum:sequences-ms-convergence}, and \ref{assum_error} hold. Suppose $\gamma_k={\gamma_0k^{-b}}$, $\mu_k={\mu_0k^{-c}}$, $t_k=t_0(k-1)^h$, and $N_k=\lceil k^{a}\rceil$, where $a=2+\varepsilon$, $b=\varepsilon$, $c=1-{4\over 3}\varepsilon$, and $h=1-\varepsilon$.
\noindent (i) Then for $K \geq \tilde{K}$, where $\tilde K\geq 1$ and $\tilde C \triangleq f_{\mu_{\tilde K}}(x_{\tilde K})-f^*$, $\mathbb E[f(x_{K})-f^*] \leq {\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}$.
(ii) Let ${\epsilon>0}$ and $K\geq \tilde K+1$ be such that $\mathbb E[f(x_{K})]-f^*\leq {\epsilon}$. Then $\sum_{k=0}^{K}N_k\leq \mathcal O\left(\epsilon^{-{3+\varepsilon\over 1-\varepsilon}}\right)$.
\end{corollary}
\begin{remark}
Although the oracle complexity of \eqref{rVS-SQN} is poorer than the canonical $\mathcal{O}(1/\epsilon^2)$, there are several reasons to consider using SQN schemes when faced with a choice between gradient-based counterparts. (a) Sparsity. In many machine learning problems, the sparsity properties of the estimator are of relevance. However, averaging schemes tend to have a detrimental impact on the sparsity properties, while non-averaging schemes do a far better job of preserving such properties. Both accelerated and unaccelerated gradient schemes for smooth stochastic convex optimization rely on averaging, and this significantly impacts the sparsity of the estimators (see Table \ref{compare_spars} in Section \ref{sec:5}). (b) Ill-conditioning. As is relatively well known, quasi-Newton schemes do a far better job of contending with ill-conditioning in practice, in comparison with gradient-based techniques (see Tables \ref{quad_ill} and \ref{convex_ill} in Section \ref{sec:5}).
\end{remark}
\subsection{Nonsmooth convex optimization}
We now consider problem~\eqref{main problem} when $f$ is nonsmooth but $(\alpha,\beta)$-smoothable and consider the \eqref{rsVS-SQN} scheme, defined as follows, where $H_k$ is generated by the {\bf rsL-BFGS} scheme:
\begin{align}\tag{\bf rsVS-SQN}\label{rsVS-SQN}
x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} \nabla_x F_{\eta_k,\mu_k}(x_k,\omega_{j,k})}{N_k}}.
\end{align}
Note that in this section, we set $m=1$ for the sake of simplicity, but the analysis can be extended to $m>1$. Next, we generalize Lemma \ref{rLBFGS-matrix} to show that Assumption \ref{assump:Hk} is satisfied and that both the secant condition ({\bf SC}) and the secant equation ({\bf SE}) hold. (See the Appendix for the proof.)
\begin{lemma}[{\bf Properties of Hessian approximations produced by (rsL-BFGS)}]\label{rsLBFGS-matrix}
Consider the \eqref{rsVS-SQN} method, where $H_k$ is updated by \eqref{eqn:H-k}-\eqref{eqn:H-k-m}, $s_i$ and $y_i$ are defined in \eqref{equ:siyi-LBFGS}, and $\eta_k$ and $\mu_k$ are updated according to procedure \eqref{eqn:mu-k}. Let Assumption \ref{assum:convex2} hold. Then the following hold.
\begin{itemize}
\item [(a)] For any odd $k > 2m$, (SC) holds, i.e., $s_k^T{y_k} >0$;
\item [(b)] For any odd $k > 2m$, (SE) holds, i.e., $H_{k}{y}_k=s_k$;
\item [(c)] For any $k > 2m$, $H_k$ satisfies Assumption~\ref{assump:Hk}(NS) with $\underline{\lambda}_{k}={1\over (m+n)(1/\eta_k^\delta+\mu_0^{\bar \delta})}$ and $\overline{\lambda}_{k}={(m+n)^{n+m-1}(1/\eta_k^\delta+\mu_0^{\bar \delta})^{n+m-1}\over (n-1)!\mu_k^{(n+m)\bar \delta}}$ for scalars $\delta,\bar \delta>0$. Then, for all $k$, $H_k = H_k^T$, $\mathbb E[H_k\mid\mathcal F_k]=H_k$, and $\underline{\lambda}_{k}\mathbf{I} \preceq H_{k} \preceq \overline{\lambda}_k \mathbf{I}$ all hold in an a.s. fashion.
\end{itemize}
\end{lemma}
We now derive a rate statement for the mean sub-optimality.
\begin{theorem}[{\bf Convergence in mean}]\label{thm:mean:nonsmooth}
Consider the \eqref{rsVS-SQN} scheme. Suppose Assumptions~\ref{assum:convex2}, \ref{state noise}(NS-M), \ref{state noise}(NS-B), \ref{assump:Hk}(NS), and \ref{growth} hold. Let $\gamma_k=\gamma$, $\mu_k=\mu$, and $\eta_k=\eta$ be chosen such that \eqref{mainLemmaCond} holds (where $L=1/\eta$). If $\bar x_K \triangleq \frac{\sum_{k=0}^{K-1}x_k(\underline{\lambda}\mu\gamma-C/N_k)}{\sum_{k=0}^{K-1}(\underline{\lambda}\mu\gamma-C/N_k)}$, then \eqref{non_smooth_lemma} holds for $K \geq 1$ and $C={2(1+\mu\eta)\overline{\lambda}^2\nu_1^2\gamma^2\over \alpha \eta}$:
\begin{align}\label{non_smooth_lemma}
\left(K\underline{\lambda} \mu \gamma-\sum_{k=0}^{K-1}{C\over N_k}\right)\mathbb E[f_{\eta,\mu}(\bar x_{K})-f^*] \nonumber&\leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]+\eta B^2+{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K\\&+\sum_{k=0}^{K-1}{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta}.
\end{align}
\end{theorem}
\begin{proof}
Since Lemma \ref{lemma:main-ineq} may be invoked, we take expectations on both sides of \eqref{ineq:cond-recursive-F-k} for any $k\geq 0$, where we let $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left(\nabla_{x} F_{\eta_k,\mu_k}(x_k,\omega_{j,k})-\nabla f_{\eta_k,\mu_k}(x_k)\right)}{N_k}$, $\underline{\lambda}\triangleq {1\over (m+n)(1/\eta^\delta+\mu^{\bar \delta})}$, and $\overline{\lambda}\triangleq {(m+n)^{n+m-1}(1/\eta^\delta+\mu^{\bar \delta})^{n+m-1}\over (n-1)!\mu^{(n+m)\bar \delta}}$. Using the quadratic growth property, i.e.,
$\|x_k-x^*\|^2\leq {2\over \alpha}\left(f(x_k)-f(x^*)\right)$, and the fact that $\|x_k-x^*+x^*\|^2\leq 2\|x_k-x^*\|^2+2\|x^*\|^2$, we obtain the following:
\begin{align*}
\mathbb E[f_{\eta,\mu}(x_{k+1})-f^*] & \leq \left(1-\underline{\lambda}{\mu\gamma}+{2(1+\mu \eta)\overline{\lambda}^2\nu_1^2\gamma^2\over \alpha N_k\eta}\right)\mathbb E[f_{\eta,\mu}(x_k)-f^*] + {\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma\\& +{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta}
\end{align*}
\begin{align*}
\implies \left(\underline{\lambda}{\mu\gamma}-{2(1+\mu \eta)\overline{\lambda}^2\nu_1^2\gamma^2\over \alpha N_k\eta}\right)\mathbb E[f_{\eta,\mu}(x_k)-f^*] & \leq \mathbb E[f_{\eta,\mu}(x_k)-f^*]- \mathbb E[f_{\eta,\mu}(x_{k+1})-f^*] \\
+{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\mu^2\gamma\over 2} & +{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta}.
\end{align*}
Summing from $k=0$ to $K-1$ and invoking Jensen's inequality, we obtain
\begin{align*}
\left(K\underline{\lambda} \mu \gamma-\sum_{k=0}^{K-1}{C\over N_k}\right)\mathbb E[f_{\eta,\mu}(\bar x_{K})-f^*] &\leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]-\mathbb E[f_{\eta,\mu}(x_K)-f^*]\\&+{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K+\sum_{k=0}^{K-1}{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta},
\end{align*}
where $C={2(1+\mu\eta)\overline{\lambda}^2\nu_1^2\gamma^2\over \alpha\eta}$ and $\bar x_K \triangleq \frac{\sum_{k=0}^{K-1}x_k(\underline{\lambda}\mu\gamma-C/N_k)}{\sum_{k=0}^{K-1}(\underline{\lambda}\mu\gamma-C/N_k)}$. Since $f(x)\leq f_{\eta}(x)+\eta B^2$ and $f_\mu(x)=f(x)+{\mu\over 2}\|x-x_0\|^2$, we have that $-\mathbb E[f_{\eta,\mu}(x_K)-f^*]\leq -{\mathbb{E}}[f_\mu(x_K)-f^*]+\eta B^2\leq \eta B^2$. Therefore, we obtain the following:
\begin{align*}
\left(K\underline{\lambda} \mu \gamma-\sum_{k=0}^{K-1}{C\over N_k}\right)\mathbb E[f_{\eta,\mu}(\bar x_{K})-f^*] & \leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]+\eta B^2\\&+{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K+\sum_{k=0}^{K-1}{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta}.
\end{align*}
\end{proof}
We refine this result for a specific set of parameter sequences.
\begin{theorem}[{\bf Rate statement and oracle complexity}]\label{thm:rate K}
Consider \eqref{rsVS-SQN} and suppose Assumptions~\ref{assum:convex2}, \ref{state noise}(NS-M), \ref{state noise}(NS-B), \ref{assump:Hk}(NS), and \ref{growth} hold, $\gamma {\triangleq} c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu {\triangleq} {K^{-1/3}}$, $\eta \triangleq K^{-1/3}$, and $N_k\triangleq \lceil N_0{(k+1)}^{a}\rceil$, where $\bar \varepsilon \triangleq \tfrac{5\varepsilon}{3}$, $\varepsilon>0$, $N_0>{C\over \underline{\lambda} \mu \gamma}$, $C={2(1+\mu\eta)\overline{\lambda}^2\nu_1^2\gamma^2\over \alpha\eta}$, and $a>1$. Let $\delta={\varepsilon\over n+m-1}$ and $\bar \delta={\varepsilon\over n+m}$.
\noindent (i) For any $K \geq 1$, $\mathbb{E}[f(\bar x_{K})]-f^*\leq {\mathcal O}(K^{-1/3})$.
\noindent (ii) Let ${\epsilon>0}$, $a = 1+\epsilon$, and $K\geq 1$ be such that $\mathbb E[f(\bar x_{K})]-f^*\leq {\epsilon}$. Then $\sum_{k=0}^{K}N_k\leq \mathcal O\left(\epsilon^{-3(2+\varepsilon)}\right)$.
\end{theorem}
\begin{proof}
(i) First, note that for $a>1$ and $N_0>{C\over \underline{\lambda}\mu\gamma}$, we have $\sum_{k=0}^{\infty} {C\over N_k}<\infty$, so we may define $C_4\triangleq \sum_{k=0}^{\infty}{C\over N_k}$. Dividing both sides of \eqref{non_smooth_lemma} by $K\underline{\lambda}\mu\gamma-C_4$ and recalling that $f_\eta(x)\leq f(x)\leq f_\eta(x)+\eta B^2$ and $f(x)\leq f_\mu(x)$, we obtain
\begin{align*}
\mathbb E[f(\bar x_{K})-f^*] & \leq{\mathbb E[f_\mu(x_0)-f^*]\over K\underline{\lambda} \mu \gamma-C_4}+{\eta B^2\over K\underline{\lambda} \mu \gamma-C_4}+\frac{{\underline{\lambda}\,\mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K}{K\underline{\lambda}\mu\gamma-C_4} \\&+\frac{\sum_{k=0}^{K-1}{(1+\mu\eta)\overline{\lambda}^2(2\nu_1^2\|x^*\|^2+\nu_2^2)\gamma^2\over 2N_k\eta}}{K\underline{\lambda}\mu\gamma-C_4}+\eta B^2.
\end{align*}
Note that by choosing $\gamma=c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu={K^{-1/3}}$, and $\eta=K^{-1/3}$, where $\bar \varepsilon=\tfrac{5\varepsilon}{3}$, inequality \eqref{mainLemmaCond} is satisfied for sufficiently small $c_\gamma$. By choosing $N_k=\lceil N_0{(k+1)}^a\rceil\geq N_0 (k+1)^a$ for any $a>1$ and $N_0>{C\over \underline{\lambda}\mu\gamma}$, we have that
\begin{align*}
& \sum_{k=0}^{K-1}{1\over (k+1)^a} \leq 1+\int_{0}^{K-1} (x+1)^{-a}dx\leq 1+{1-K^{1-a}\over a-1}\leq {a\over a-1} \\
\implies &\mathbb E[f(\bar x_{K})-f^*] \leq{C_1\over K\underline{\lambda} \mu \gamma-C_4}+{\eta B^2\over K\underline{\lambda} \mu \gamma-C_4}+{C_2\underline{\lambda}\mu^2\gamma K \over K\underline{\lambda}\mu\gamma-C_4}+{C_3(1+\mu\eta)\overline{\lambda}^2\gamma^2\over \eta N_0(K\underline{\lambda} \mu \gamma-C_4)}+\eta B^2,
\end{align*}
where $C_1=\mathbb E[f_\mu(x_0)-f^*]$, $C_2={\mbox{dist}^2(x_0,X^*)\over 2}$, and $C_3={(2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2}\cdot{a\over a-1}$. Choosing the parameters $\gamma,\mu$, and $\eta$ as stated, we note that $\underline{\lambda}= {1\over (m+n)(1/\eta^\delta+\mu^{\bar \delta})}=\mathcal O(\eta^\delta)= \mathcal O(K^{-\delta/3})$ and $\overline{\lambda}={(m+n)^{n+m-1}(1/\eta^\delta+\mu^{\bar \delta})^{n+m-1}\over (n-1)!\mu^{(n+m)\bar \delta}}=\mathcal O(\eta^{-\delta(n+m-1)}/\mu^{\bar \delta(n+m)})= \mathcal O(K^{2\varepsilon/3})$, where we used the assumption that $\delta={\varepsilon\over n+m-1}$ and $\bar \delta={\varepsilon\over n+m}$. Therefore, we obtain $\mathbb E[f(\bar x_{K})-f^*] \leq \mathcal O(K^{-1/3-5\varepsilon/3+\delta/3})+\mathcal O(K^{-2/3-5\varepsilon/3+\delta/3})+\mathcal O(K^{-1/3})+\mathcal O(K^{-2/3+3\varepsilon+\delta/3})+\mathcal O(K^{-1/3})= \mathcal O(K^{-1/3})$.
(ii) The proof is similar to that of part (ii) of Theorem \ref{oracle smooth}.
\end{proof}
\begin{remark}
Note that in Theorem \ref{thm:rate K} we choose the steplength, regularization, and smoothing parameters as constants whose values depend on the length of the simulation trajectory $K$; i.e., $\gamma,\mu,\eta$ are constants. This is akin to the avenue chosen by Nemirovski et al.~\cite{nemirovski_robust_2009}, where the steplength is chosen in accordance with the length of the simulation trajectory.
\end{remark}
Next, we relax Assumption \ref{growth} (the quadratic growth property) and impose a stronger bound on the conditional second moment of the sampled gradient.
\begin{assumption}\label{non growth}
Let $\bar{w}_{k,N_k} \triangleq \nabla_x f_{\eta_k}(x_k) - \tfrac{\sum_{j=1}^{N_k} \nabla_x F_{\eta_k}(x_k,\omega_{j,k})}{N_k}$.
Then there exists $\nu>0$ such that $\mathbb{E}[\|\bar{w}_{k,N_k}\|^2\mid \mathcal{F}_k] \leq \tfrac{\nu^2}{N_k}$ and $\mathbb{E}[\bar{w}_{k,N_k} \mid \mathcal{F}_k] = 0$ hold almost surely for all $k$ and $\eta_k > 0$, where $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$.
\end{assumption}
\begin{corollary}[{\bf Rate statement and oracle complexity}]
Consider the \eqref{rsVS-SQN} scheme. Suppose Assumptions~\ref{assum:convex2}, \ref{assump:Hk}(NS), and \ref{non growth} hold, and $\gamma {\triangleq} c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu {\triangleq} {K^{-1/3}}$, $\eta \triangleq K^{-1/3}$, and $N_k\triangleq \lceil{(k+1)}^{a}\rceil$, where $\bar \varepsilon \triangleq \tfrac{5\varepsilon}{3}$, $\varepsilon>0$, and $a>1$.
\noindent (i) For any $K \geq 1$, $\mathbb E[f(\bar x_{K})]-f^*\leq \mathcal O(K^{-1/3})$.
\noindent (ii) Let ${\epsilon>0}$, $a = 1+\epsilon$, and $K\geq 1$ be such that $\mathbb E[f(\bar x_{K})]-f^*\leq {\epsilon}$. Then $\sum_{k=0}^{K}N_k\leq \mathcal O\left(\epsilon^{-3(2+\varepsilon)}\right)$.
\end{corollary}
\section{Numerical Results}\label{sec:5}
In this section, we compare the behavior of the proposed VS-SQN schemes with their accelerated/unaccelerated gradient counterparts on a class of strongly convex/convex and smooth/nonsmooth stochastic optimization problems, with the intent of examining the empirical error and the sparsity of the estimators (in machine learning problems), as well as the ability to contend with ill-conditioning.
\noindent {\bf Example 1.} First, we consider the logistic regression problem, defined as follows:
\begin{align}\tag{LRM}
\min_{x \in \mathbb R^n} \ f(x) \triangleq \frac{1}{N}\sum_{i=1}^N\ln \left(1+{\exp} \left(-u_i^Txv_i\right)\right),
\end{align}
where $u_i \in \mathbb R^n$ is the input binary vector associated with article $i$ and $v_i \in \{-1,1\}$ represents the class of the $i$th article. A $\mu$-regularized variant of this problem is defined as follows:
\begin{align}\label{logisticReg}\tag{reg-LRM}
\min_{x \in \mathbb R^n} \ f(x) \triangleq \frac{1}{N}\sum_{i=1}^N\ln \left(1+{\exp}\left(-u_i^Txv_i\right)\right)+\frac{\mu}{2}\|x\|^2.
\end{align}
We consider the {\sc sido0} dataset~\cite{lewis2004rcv1}, where $N = 12678$ and $n = 4932$.
\noindent {\bf (1.1) Strongly convex and smooth problems}: To apply \eqref{VS-SQN}, we consider \eqref{logisticReg}, where the problem is strongly convex and $\mu=0.1$. We compare the behavior of the scheme with an accelerated gradient scheme~\cite{jalilzadeh2018optimal} and set the overall sampling budget equal to $1e4$. We observe that \eqref{VS-SQN} competes well with ({\bf VS-APM}) (see Table~\ref{SC_tab_smooth} and Fig.~\ref{fig}~(a)).
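To convey the structure of the scheme used in these experiments, the following is a minimal, self-contained Python sketch of a variable sample-size L-BFGS loop in the spirit of \eqref{VS-SQN}, applied to a synthetic instance of \eqref{logisticReg}. It is a simplified illustration rather than the implementation used here: the data are randomly generated, the update uses a plain two-loop recursion (omitting the regularization and smoothing of the {\bf rsL-BFGS} update), and all parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n, mu = 2000, 50, 0.1                      # synthetic problem data
U = rng.standard_normal((N, n))
v = np.sign(U @ rng.standard_normal(n) + 0.1 * rng.standard_normal(N))

def grad(x, idx):
    """Mini-batch gradient of the regularized logistic loss over idx."""
    z = -v[idx] * (U[idx] @ x)
    s = 0.5 * (1.0 + np.tanh(0.5 * z))        # numerically stable sigmoid(z)
    return -(U[idx] * (v[idx] * s)[:, None]).mean(axis=0) + mu * x

def lbfgs_direction(g, S, Y):
    """Two-loop recursion: returns H_k g from stored curvature pairs."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (s @ y)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])  # initial scaling of H_k^0
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (s @ y)) * s
    return q

x, S, Y = np.zeros(n), [], []
gamma, rho, m = 0.1, 0.97, 5                  # steplength, batch growth, memory
x_old = g_old = None
for k in range(200):
    Nk = min(N, int(rho ** (-k)) + 1)         # increasing sample size N_k
    idx = rng.choice(N, size=Nk, replace=False)
    g = grad(x, idx)
    if g_old is not None and (x - x_old) @ (g - g_old) > 1e-12:
        S.append(x - x_old); Y.append(g - g_old)
        S, Y = S[-m:], Y[-m:]                 # keep only the last m pairs
    x_old, g_old = x.copy(), g.copy()
    x = x - gamma * lbfgs_direction(g, S, Y)
\end{verbatim}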
\begin{table}[htb]
\centering \scriptsize
\begin{tabular}{|c|c|c||c|c|}
\hline
&\multicolumn{2}{|c||}{SC, smooth}&\multicolumn{2}{|c|}{SC, nonsmooth (Moreau smoothing)}\\
\hline
& {\bf VS-SQN}& ({\bf VS-APM}) &{\bf sVS-SQN}& ({\bf sVS-APM}) \\
\hline \hline
sample size: $N_k$& $\rho^{-k}$&$\rho^{-k}$&$\lfloor q^{-k}\rfloor$&$\lfloor q^{-k}\rfloor$\\
\hline
steplength: $\gamma_k$&0.1&0.1&$\eta_k^2$&$\eta_k^2$\\
\hline
smoothing: $\eta_k$&-&-&$0.1$&$0.1$\\
\hline
$f(x_k)$& $5.015$e-$1$&$5.015$e-$1$&$8.905$e-$1$&$1.497$e+$0$\\
\hline
\end{tabular}
\caption{{\bf sido0:} SC, smooth and nonsmooth}
\label{SC_tab_smooth}
\vspace{-0.2in}
\end{table}
\noindent {\bf (1.2) Strongly convex and nonsmooth}: We consider a nonsmooth variant where an $\ell_1$ regularization is added with $\lambda=\mu=0.1$:
\begin{align}\label{SC nonsmooth LRM}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+{\mu\over 2}\|x\|^2+\lambda\|x\|_1.
\end{align}
From~\cite{beck12smoothing}, a smooth approximation of $\|x\|_1$ is given by $\sum_{i=1}^n H_\eta (x_i)$, where
$$H_\eta(x_i) = \begin{cases} x_i^2/(2\eta), &\mbox{if } |x_i|\leq \eta \\ |x_i|-\eta/2, & \mbox{otherwise} \end{cases}$$
and $\eta$ is a smoothing parameter. The performance of \eqref{sVS-SQN} is shown in Figure \ref{fig}~(b), while the parameter choices are provided in Table \ref{SC_tab_smooth}; the total sampling budget is $1e5$. We see that the empirical behavior of \eqref{VS-SQN} and \eqref{sVS-SQN} is similar to that of ({\bf VS-APM})~\cite{jalilzadeh2018optimal} and ({\bf sVS-APM})~\cite{jalilzadeh2018optimal}, respectively. Note that while in the strongly convex regime both schemes display similar (linear) rates, we do not have a rate statement for smoothed ({\bf sVS-APM})~\cite{jalilzadeh2018optimal}.
\begin{figure}[htb]
\vspace{-0.1in}
\centering
{ \includegraphics[scale=0.085]{SC_smooth_comp} \includegraphics[scale=0.085]{moreau} \includegraphics[scale=0.085]{C_smooth_comp} \includegraphics[scale=0.085]{C_nonsmooth_comp}}
\caption{Left to right: (a) SC smooth, (b) SC nonsmooth, (c) C smooth, (d) C nonsmooth\label{fig}}{}
\end{figure}
\noindent {\bf (1.3) Convex and smooth}: We implement \eqref{rVS-SQN} on the (LRM) problem and compare the results with VS-APM~\cite{jalilzadeh2018optimal} and r-SQN~\cite{yousefian2017stochastic}. We again consider the {\sc sido0} dataset with a total budget of $1e5$, while the parameters are tuned to ensure good performance. In Figure \ref{fig}~(c) we compare the three methods; the choices of steplength and sample size can be seen in Table~\ref{compare_tab}. We note that (VS-APM) produces slightly better solutions, which is not surprising since it enjoys a rate of $\mathcal{O}(1/k^2)$ with an optimal oracle complexity.
However, \eqref{rVS-SQN} is competitive and appears to be better than (r-SQN) by a significant margin in terms of the function value.
\begin{table}[htb]
\centering \scriptsize
\begin{tabular}{|c|c|c|c||c|c|}
\hline
&\multicolumn{3}{|c||}{convex, smooth}&\multicolumn{2}{|c|}{convex, nonsmooth}\\
\hline
& {\bf rVS-SQN}& r-SQN & VS-APM & {\bf rsVS-SQN}& sVS-APM \\
\hline \hline
sample size: $N_k$& $k^{2+\varepsilon}$&1&$k^{2+\varepsilon}$& $(k+1)^{1+\varepsilon}$&$(k+1)^{1+\varepsilon}$\\
\hline
steplength: $\gamma_k$&$k^{-\varepsilon}$&$k^{-2/3}$&$1/(2L)$&$K^{-1/3+\varepsilon}$&$1/(2k)$\\
\hline
regularizer: $\mu_k$&$k^{2/3\varepsilon-1}$&$k^{-1/3}$&-&$K^{-1/3}$&-\\
\hline
smoothing: $\eta_k$&-&-&-&$K^{-1/3}$&$1/k$\\
\hline
$f(x_k)$&1.38e-1&2.29e-1&9.26e-2&6.99e-1&7.56e-1\\
\hline
\end{tabular}
\caption{{\bf sido0:} C, smooth and nonsmooth}
\label{compare_tab}
\end{table}
\noindent {\bf (1.4) Convex and nonsmooth}: Now we consider the nonsmooth problem in which $\lambda=0.1$:
\begin{align}\label{nonsmooth LRM}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+\lambda\|x\|_1.
\end{align}
We implement the \eqref{rsVS-SQN} scheme with a total budget of $1e4$ (see Table~\ref{compare_tab} and Fig.~\ref{fig}~(d)) and observe that it competes well with (sVS-APM)~\cite{jalilzadeh2018optimal}, which has a superior convergence rate of $\mathcal{O}(1/k)$.
\noindent {\bf (1.5) Sparsity}: We now compare the sparsity of the estimators obtained via the \eqref{rVS-SQN} scheme with that of averaging-based stochastic gradient schemes. Consider the following example, in which a smooth approximation of $\|\cdot\|_1$ leads to a convex and smooth problem:
\begin{align*}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+\lambda\sum_{i=1}^n\sqrt{x_i^2+\lambda_2},
\end{align*}
where we set $\lambda=1$e-$4$. We choose the parameters according to Table \ref{compare_tab}, the total budget is $1e5$, and $\|x_K\|_0$ denotes the number of entries in $x_K$ that are greater than $1$e-$4$. Consequently, $n_0 \triangleq n - \|x_K\|_0$ denotes the number of ``zeros'' in the vector. As can be seen in Table \ref{compare_spars}, the solution obtained by \eqref{rVS-SQN} is significantly sparser than that obtained by ({\bf VS-APM}) or standard stochastic gradient. In fact, SGD produces nearly dense vectors, while for $\lambda_2 = 1$e-$6$, roughly $10\%$ of the entries of the \eqref{rVS-SQN} estimator are zero.
\begin{table}[htb]
\centering \scriptsize
\begin{tabular}{|c|c|c|c|}
\hline
&{\bf rVS-SQN}&({\bf VS-APM})&SGD\\
\hline
$N_k$&$k^{2+\epsilon}$&$k^{2+\epsilon}$&1\\
\hline
$\#$ of iter.&66&66&1e5 \\
\hline
$n_0$ for $\lambda_2=1$e-$5$&144&31&0\\
\hline
$n_0$ for $\lambda_2=1$e-$6$&497&57&2\\
\hline
\end{tabular}
\caption{{\bf sido0:} Convex, smooth }
\label{compare_spars}
\end{table}
\noindent {\bf Example 2. Impact of size and ill-conditioning.} In Example 1, we observed that \eqref{rVS-SQN} competes well with VS-APM for a subclass of machine learning problems. We now consider a stochastic quadratic program over a general probability space and observe similarly competitive behavior. In fact, \eqref{rVS-SQN} often outperforms ({\bf VS-APM})~\cite{jalilzadeh2018optimal} (see Tables~\ref{sc_tab_example} and \ref{c_tab_example}). We consider the following problem:
\begin{align*}
\min_{x\in \mathbb R^n} \mathbb E\left[{1\over 2}x^TQ(\omega)x+c(\omega)^Tx\right],
\end{align*}
where $Q(\omega)\in \mathbb R^{n\times n}$ is a random symmetric matrix whose eigenvalues are chosen uniformly at random, with the minimum eigenvalue equal to one for the strongly convex problems and equal to zero for the merely convex problems. Furthermore, $c(\omega)=-Q(\omega)x^0$, where $x^0\in \mathbb R^{n}$ is a vector whose elements are chosen randomly from the standard Gaussian distribution.
\begin{table}[htb]
\begin{minipage}[b]{0.5\linewidth}
\scriptsize
\begin{tabular}{|c|c|c|}
\hline
&\eqref{VS-SQN}&({\bf VS-APM})\\
\hline
n&$\mathbb E[f(x_k)-f(x^*)]$&$\mathbb E[f(x_k)-f(x^*)]$\\
\hline
20&$3.28$e-$6$& $5.06$e-$6$ \\
\hline
60&$9.54$e-$6$& $1.57$e-$5$\\
\hline
100&$1.80$e-$5$&$2.92$e-$5$\\
\hline
\end{tabular}
\caption{Strongly convex: \\ \eqref{VS-SQN} vs ({\bf VS-APM})}
\label{sc_tab_example}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\scriptsize
\begin{tabular}{|c|c|c|}
\hline
&\eqref{rVS-SQN}&({\bf VS-APM})\\
\hline
n&$\mathbb E[f(x_k)-f(x^*)]$&$\mathbb E[f(x_k)-f(x^*)]$\\
\hline
20&$9.14$e-$5$&$1.89$e-$4$ \\
\hline
60&$2.67$e-$4$&$4.35$e-$4$\\
\hline
100&$5.41$e-$4$&$8.29$e-$4$\\
\hline
\end{tabular}
\caption{Convex: \\ \eqref{rVS-SQN} vs ({\bf VS-APM})}
\label{c_tab_example}
\end{minipage}
\end{table}
In Tables \ref{quad_ill} and \ref{convex_ill}, we compare the behavior of \eqref{rVS-SQN} and ({\bf VS-APM}) when the problem is ill-conditioned, in the strongly convex and merely convex regimes, respectively. In the strongly convex regime, we set the total budget equal to $2e8$ and maintain the same steplength for both schemes. The sample-size sequence is chosen as $N_k=\lceil 0.99^{-k}\rceil$, leading to $1443$ steps for both methods. We observe that as $m$ grows, the relative quality of the solution compared to ({\bf VS-APM}) improves even further. These findings are reinforced in Table \ref{convex_ill}: for merely convex problems, although the convergence rate of ({\bf VS-APM}) is $\mathcal O(1/k^2)$ (superior to the $\mathcal O(1/k)$ rate of \eqref{rVS-SQN})), \eqref{rVS-SQN} outperforms ({\bf VS-APM}) in terms of empirical error. Note that the parameters are chosen as in Table \ref{compare_tab}.
\begin{table}[htbp]
\begin{minipage}[b]{0.5\linewidth}
\centering \tiny
\begin{tabular}{|c|c|c|c|}
\hline
&\multicolumn{3}{|c|}{$\mathbb E[f(x_k)-f(x^*)]$}\\
\hline
$\kappa$&\eqref{VS-SQN}, $m=1$&\eqref{VS-SQN}, $m=10$&({\bf VS-APM})\\
\hline
$1e5$ &$9.25$e-$4$&$2.656$e-$4$& $2.600$e-$3$\\
\hline
$1e6$ &$9.938$e-$5$&$4.182$e-$5$&$4.895$e-$4$\\
\hline
$1e7$ &$1.915$e-$5$&$1.478$e-$5$&$1.079$e-$4$\\
\hline
$1e8$ &$1.688$e-$5$&$6.304$e-$6$&$4.135$e-$5$\\
\hline
\end{tabular}
\caption{Strongly convex: \\Performance vs condition number (as $m$ changes)}
\label{quad_ill}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering \tiny
\begin{tabular}{|c|c|c|c|}
\hline
&\multicolumn{3}{|c|}{$\mathbb E[f(x_k)-f(x^*)]$}\\
\hline
$L$&\eqref{rVS-SQN}, $m=1$&\eqref{rVS-SQN}, $m=10$&({\bf VS-APM})\\
\hline
$1e3$ &$4.978$e-$4$&$1.268$e-$4$&$1.942$e-$4$\\
\hline
$1e4$ &$3.288$e-$3$&$2.570$e-$4$&$3.612$e-$2$\\
\hline
$1e5$ &$8.571$e-$2$&$3.075$e-$3$&$2.794$e+$0$\\
\hline
$1e6$ &$3.367$e-$1$&$3.203$e-$1$&$4.293$e+$0$\\
\hline
\end{tabular}
\caption{Convex: \\Performance vs condition number (as $m$ changes)}
\label{convex_ill}
\end{minipage}
\end{table}
\noindent {\bf Example 3. Constrained problems.} We consider the isotonic constrained LASSO problem:
\begin{align}\label{isotonic}
\min_{x =[x_i]_{i=1}^n\in \mathbb R^n}~ \left\{ \frac{1}{2}\sum_{i=1}^p \|A_ix-b_i\|^2 \mid x_1\leq x_2\leq \hdots\leq x_n \right\},
\end{align}
where $A=[A_i]_{i=1}^p\in\mathbb{R}^{p\times n}$ is a matrix whose elements are chosen randomly from the standard Gaussian distribution such that $A^\top A\succeq 0$, and $b=[b_i]_{i=1}^p\in\mathbb{R}^p$ is such that $b=A(x_0+ {\sigma})$, where $x_0\in\mathbb{R}^n$ is chosen such that its first and last $\frac{n}{4}$ elements are drawn from $U([-10,0])$ and $U([0,10])$ in ascending order, respectively, while the remaining elements are set to zero. Further, ${\sigma}\in\mathbb{R}^n$ is a random vector whose elements are independent normally distributed random variables with mean zero and standard deviation $0.01$. Let $C\in\mathbb{R}^{(n-1)\times n}$ be the matrix that captures the constraint, i.e., $C(i,i)=1$ and $C(i,i+1)=-1$ for $1\leq i\leq n-1$, with all other components equal to zero, and let $X\triangleq \{x~:~Cx\leq 0\}$. Hence, we can rewrite problem \eqref{isotonic} as $\min_{x \in \mathbb R^n} f(x):=\frac{1}{2}\sum_{i=1}^p \|A_ix-b_i\|^2+\mathcal{I}_{X}(x)$. A smooth approximation of the indicator function is $\mathcal I_{X,\eta}(x)={1\over 2\eta} d^2_{X}(x)$, where $d_X(x)$ denotes the Euclidean distance of $x$ from $X$. Therefore, we apply \eqref{rsVS-SQN} to the following problem:
\begin{align}\label{isotonic_smooth}
\min_{x \in \mathbb R^n} f(x) & \triangleq \frac{1}{2}\sum_{i=1}^p \|A_ix-b_i\|^2+{1\over 2\eta} d^2_{X}(x).
\end{align}
The parameter choices are similar to those in Table \ref{compare_tab}, and we note from Fig.~\ref{fig_isotonic} (Left) that the empirical behavior appears to be favorable.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.1]{isotonic_c_diffn} \includegraphics[scale=0.1]{example_sc_com}
\caption{Left: \eqref{rsVS-SQN} on \eqref{isotonic_smooth}. Right: ({\bf s-QN}) vs. BFGS}
\label{fig_isotonic}
\end{figure}
\noindent {\bf Example 4. Comparison of ({\bf s-QN}) with BFGS.} In~\cite{lewis2008behavior}, the authors show that a nonsmooth BFGS scheme may take null steps and fail to converge to the optimal solution (see Fig.~\ref{fig:nssqn}); they consider the following problem:
\begin{align*}
\min_{x {\in \mathbb R^2}} \qquad {1\over 2}\|x\|^2+\max\{2|x_1|+x_2,3x_2\}.
\end{align*}
In this problem, BFGS takes a null step after two iterations (the steplength is zero); however, ({\bf s-QN}) (the deterministic version of \eqref{sVS-SQN}) converges to the optimal solution. Note that the optimal solution is $(0,-1)$ and ({\bf s-QN}) reaches $(0,-1.0006)$ in just $0.095$ seconds (see Fig.~\ref{fig_isotonic} (Right)).
\section{Conclusions}
Most SQN schemes can process smooth and strongly convex stochastic optimization problems, and there appears to be a gap in the asymptotics and rate statements for merely convex and possibly nonsmooth settings. Furthermore, a clear difference exists between deterministic rates and their stochastic counterparts, paving the way for developing variance-reduced schemes. In addition, many of the available statements rely on a somewhat stronger assumption of uniform boundedness of the conditional second moment of the noise, which is often difficult to satisfy in unconstrained regimes. Accordingly, the present paper makes three sets of contributions. First, a regularized smoothed L-BFGS update is proposed that combines regularization and smoothing, providing a foundation for addressing nonsmoothness and a lack of strong convexity.
Second, we develop a variable sample-size SQN scheme \eqref{VS-SQN} for strongly convex problems and its Moreau smoothed variant \eqref{sVS-SQN} for nonsmooth (but smoothable) problems, both of which attain a linear rate of convergence and an optimal oracle complexity. Notably, when more general smoothing techniques are employed, the convergence rate can also be quantified. Third, in merely convex regimes, we develop a regularized VS-SQN scheme \eqref{rVS-SQN} and its smoothed variant \eqref{rsVS-SQN} for smooth and nonsmooth problems, respectively. The former achieves a rate of $\mathcal{O}(1/K^{1-\epsilon})$, while the rate degenerates to $\mathcal{O}(1/K^{1/3-\epsilon})$ in the case of the latter. Finally, numerics suggest that the SQN schemes compare well with their variable sample-size accelerated gradient counterparts and perform particularly well when the problem is afflicted by ill-conditioning.
\bibliographystyle{siam}
{ "timestamp": "2019-10-01T02:22:02", "yymm": "1804", "arxiv_id": "1804.05368", "language": "en", "url": "https://arxiv.org/abs/1804.05368" }
\section{Introduction}
\label{sec:intro}
A statistical test for two composite hypotheses is called minimax optimal if it minimizes the maximum risk over the two corresponding sets of feasible distributions. In the context of robust statistics, these sets are referred to as \emph{uncertainty sets}. In contrast to adaptive procedures \cite{Zeitouni1992_glrt}, the minimax approach provides strict guarantees on the error probabilities for all feasible distributions. Moreover, minimax tests are often easy to implement since they typically reduce to an optimal test for two simple hypotheses, where each hypothesis is represented by a \emph{least favorable distribution}. A common way of specifying uncertainty sets is via a neighborhood around a nominal distribution, which represents an ideal system state or model \cite{Kassam1981_robustness_survey}. In many works on robust detection, the use of $f$-divergence balls has been proposed as a useful and versatile model to construct such neighborhoods \cite{McKellips_binary_input, McKellips1998_maximin, Levy2009_entropy_tolerance, Gul2013_modelling_errors, Gul2014_Hellinger_distance, Gul2015_composite_distances, Gul2016_alpha_divergence, Gul2017_minimax_robust}. In contrast to outlier models, such as $\varepsilon$-contamination \cite{Huber1965_robust_PRT}, $f$-divergence balls do not allow for arbitrarily large deviations from the nominals and, therefore, have been argued to better represent scenarios where the shape of a distribution is subject to uncertainty, but there are no gross outliers in the data \cite{Levy2009_entropy_tolerance}.

In order to present the result of this paper, the concepts of single sample and fixed sample size tests need to be introduced. A single sample test is based on the observation of a single, possibly vector-valued, random variable $X_1$. Consequently, the uncertainty sets are defined in terms of all possible \emph{joint} distributions of the elements of $X_1$. Such an uncertainty model is suitable in some cases, but more often the observations are obtained by repeatedly performing independent random experiments, so that the test is based on a sequence of independent random variables $X_1, \ldots, X_N$, $N > 1$. By definition, this independence constraint cannot be incorporated into a single sample test. Hence, tests whose observations are realizations of multiple independent random variables need to be considered separately. In order to highlight the difference from tests whose sample size is random \cite{Wald1947_sequential_analysis}, they are referred to as fixed sample size tests in what follows.

For most commonly used uncertainty models---including the density band model, which will be discussed in detail later on---it can be shown that a single sample minimax optimal test is also fixed sample size minimax optimal. More precisely, the least favorable distributions for $X_1$ in the single sample case are also least favorable for all $X_1, \ldots, X_N$ in the fixed sample size case. However, in general, this does not hold true for uncertainty sets of the $f$-divergence ball type, where the fixed sample size minimax optimal solution is typically intractable. Nevertheless, a commonly applied heuristic is to use the single sample least favorable distributions for the fixed sample size case anyhow, regardless of the fact that this extension does not hold in theory; compare \cite[Sec.~V.A]{Gul2017_minimax_robust}. Tests constructed this way are referred to as single sample minimax optimal tests with repeated observations.
Evidently, such tests are no longer minimax optimal. However, in practice, it can be observed that they are still robust, meaning that they meet the specified error probabilities for most, if not all, feasible distributions. In this paper, the favorable robustness properties of single sample minimax optimal tests with repeated observations are explained in a rigorous manner. It is shown that they are indeed fixed sample size minimax optimal, but for a density band uncertainty model instead of the original $f$-divergence ball model. That is, single sample minimax optimal tests can be applied to repeated observations without sacrificing minimax optimality, if one is willing to accept a change in the uncertainty model. This result is proved by showing that for every $f$-divergence ball model, there exists an equivalent density band model that admits the same single sample minimax optimal solution. For the density band model, however, this solution is known to be fixed sample size optimal as well, so that it automatically extends to the case of repeated observations.

The paper is organized as follows: a brief review of minimax optimal detection is given in Section~\ref{sec:minimax_optimal_detection}. The two uncertainty models of interest, i.e., $f$-divergence balls and density bands, are introduced in Section~\ref{sec:uncertainty_sets}. The main result is stated and proved in Section~\ref{sec:main_result}, followed by a brief discussion in Section~\ref{sec:discussion}. An illustrative example is shown in Section~\ref{sec:example}, which also concludes the paper.

\section{Minimax Optimal Detection}
\label{sec:minimax_optimal_detection}
The single sample case is considered first. Let $(\mathcal{X},\mathcal{F})$ be a measurable space and let $X_1$ be an $(\mathcal{X},\mathcal{F})$-valued random variable that is distributed according to a probability measure (distribution) $P$. Throughout the paper it is assumed that all distributions on $(\mathcal{X},\mathcal{F})$ have a continuous density function with respect to some $\sigma$-finite reference measure $\mu$. The set of all distributions on $(\mathcal{X},\mathcal{F})$ that admit this property is denoted by $\mathcal{M}_\mu$. The goal of a simple, non-robust binary hypothesis test is to decide between the two hypotheses
\begin{align*}
\mathcal{H}_0\colon \; P = P_0, \qquad \mathcal{H}_1\colon \; P = P_1,
\end{align*}
where $P_0, P_1 \in \mathcal{M}_\mu$ are two given distributions. The test is defined by a decision $d \in \{0,1\}$ and a, possibly randomized, decision rule $\delta\colon \mathcal{X} \to [0,1]$, where $\delta(x)$ denotes the conditional probability of deciding for $\mathcal{H}_1$, given the observation $X_1 = x$. The set of all decision rules is denoted by $\Delta$. The type I and type II error probabilities are given by
\begin{align*}
P_0[d=1] &= E_{P_0}[\,\delta(X_1) \,], \\
P_1[d=0] &= E_{P_1}[1-\delta(X_1)].
\end{align*}
The optimal decision rule $\delta^*$ for the simple binary hypothesis test is a threshold comparison of the likelihood ratio, i.e.,
\begin{equation*}
\delta^*(x) = \begin{cases} 1, & l(x) > \lambda \\ \kappa, & l(x) = \lambda \\ 0, & l(x) < \lambda \end{cases},
\end{equation*}
where $\lambda > 0$ is the threshold value, $\kappa \in [0,1]$ can be chosen arbitrarily, and $l(x)$ denotes the likelihood ratio
\begin{equation*}
l(x) = \frac{p_1(x)}{p_0(x)}.
\end{equation*}
The likelihood ratio test is optimal in a very general sense \cite{Christensen2005_Fisher-Neyman-Pearson-Bayes}.
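To make the decision rule concrete, the following is a minimal numerical sketch of the randomized likelihood ratio test $\delta^*$. It is merely an illustration, using the Gaussian nominal densities from the example in Section~\ref{sec:example} and assuming Python with SciPy as the environment.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Randomized likelihood ratio test delta^*: returns the probability of
# deciding H1 for each observation x, given threshold lam and randomization
# kappa on the boundary set {l(x) = lam}.
def delta_star(x, lam, kappa=0.5):
    p0 = norm.pdf(x, loc=-1.0, scale=1.0)           # nominal density under H0
    p1 = norm.pdf(x, loc=1.0, scale=np.sqrt(2.0))   # nominal density under H1
    l = p1 / p0                                     # likelihood ratio l(x)
    return np.where(l > lam, 1.0, np.where(l < lam, 0.0, kappa))

x = np.linspace(-4.0, 4.0, 9)
print(delta_star(x, lam=1.0))
\end{verbatim}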
In particular, it minimizes the weighted sum of the error probabilities, i.e., it solves
\begin{equation} \label{eq:decision_rule_simple}
\min_{\delta \in \Delta} \; E_{P_1}[1-\delta(X_1)] + \lambda \, E_{P_0}[\,\delta(X_1)\,].
\end{equation}
In robust detection, the distribution under each hypothesis is assumed not to be known exactly. The distributional uncertainty is modeled by two disjoint sets $\mathcal{P}_0,\mathcal{P}_1 \subset \mathcal{M}_\mu$ so that the hypotheses become
\begin{align*}
\mathcal{H}_0\colon \; P \in \mathcal{P}_0, \qquad \mathcal{H}_1\colon \; P \in \mathcal{P}_1.
\end{align*}
The minimax problem corresponding to \eqref{eq:decision_rule_simple} is thus given by
\begin{equation} \label{eq:minimax_problem}
\min_{\delta \in \Delta} \; \max_{\substack{H_0 \in \mathcal{P}_0 \\ H_1 \in \mathcal{P}_1}} \; E_{H_1}[1-\delta(X_1)] + \lambda \, E_{H_0}[\,\delta(X_1)\,].
\end{equation}
Problem \eqref{eq:minimax_problem} is central to robust detection. By definition, its solution is minimax optimal with respect to the weighted sum of error probabilities, but it can also be shown to be minimax optimal in the Neyman--Pearson sense and in the Bayesian sense. This property is formalized in the following definition.
\begin{definition}
A triplet $(\delta^*,Q_0,Q_1)$ that solves \eqref{eq:minimax_problem} for a given $\lambda > 0$ is called \emph{single sample minimax optimal} with respect to the threshold $\lambda$ and the uncertainty sets $\mathcal{P}_0$, $\mathcal{P}_1$. This is written as
\begin{equation*}
(\delta^*,Q_0,Q_1) \in \{\mathcal{P}_0,\mathcal{P}_1\}_{\lambda}^*.
\end{equation*}
\end{definition}
In \cite{Huber1965_robust_PRT} and \cite{Fauss2016_old_bands} it is shown that if the solution of \eqref{eq:minimax_problem} is independent of the threshold $\lambda$, it is also minimax optimal for fixed sample size tests with arbitrary thresholds and arbitrary sample sizes. This property is formalized in the next definition.
\begin{definition}
A triplet $(\delta^*,Q_0,Q_1)$ that jointly solves \eqref{eq:minimax_problem} for all $\lambda > 0$ is called \emph{fixed sample size minimax optimal} with respect to $\mathcal{P}_0$, $\mathcal{P}_1$. This is written as
\begin{equation*}
(\delta^*,Q_0,Q_1) \in \{\mathcal{P}_0,\mathcal{P}_1\}^*.
\end{equation*}
\end{definition}
\section{Uncertainty Sets}
\label{sec:uncertainty_sets}
Two types of uncertainty sets are introduced in this section. The first one is the $f$-divergence ball model, which specifies uncertainty sets in terms of a maximum feasible distance from a nominal distribution and allows the use of arbitrary $f$-divergences to define this distance. Formally, $f$-divergence ball uncertainty sets are of the form
\begin{equation} \label{eq:f-divergence_uncertainty}
\mathcal{P}_f(P,\varepsilon) = \{ H \in \mathcal{M}_{\mu} : D_f(H \Vert P) \leq \varepsilon \},
\end{equation}
where $P$ denotes the nominal distribution and $D_f$ denotes the $f$-divergence induced by the function $f$, i.e.,
\begin{align*}
D_f(H \Vert P) &= \int_{\mathcal{X}} f\biggl( \frac{\mathrm{d} H}{\mathrm{d} P}(x) \biggr) \, \mathrm{d} P(x) \\
&= \int_{\mathcal{X}} f\biggl( \frac{h(x)}{p(x)} \biggr) p(x) \, \mathrm{d} \mu(x),
\end{align*}
where $f \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ is convex and satisfies $f(1) = 0$. The definition of the $f$-divergence ball in terms of $D_f(H \Vert P)$ instead of $D_f(P \Vert H)$ is arbitrary since for every feasible function $f$ it holds that $D_f(P \Vert H) = D_{\tilde{f}}(H \Vert P)$, with $\tilde{f}(x) = f(\tfrac{1}{x})x$.
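As a small illustration of how the ball constraint can be checked numerically, the following sketch evaluates $D_f(H \Vert P)$ by quadrature for the Kullback--Leibler choice $f(x) = x\log(x)$; the candidate density $h$ and all numerical values are illustrative assumptions, and SciPy is assumed to be available.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Membership check for the f-divergence ball P_f(P, eps) with f(x) = x*log(x),
# i.e., verify whether D_f(H || P) <= eps holds for a candidate density h.
p = lambda x: norm.pdf(x, loc=-1.0, scale=1.0)   # nominal density
h = lambda x: norm.pdf(x, loc=-0.8, scale=1.1)   # candidate density (assumed)

def f(t):
    return t * np.log(t) if t > 0.0 else 0.0

D_f, _ = quad(lambda x: f(h(x) / p(x)) * p(x), -np.inf, np.inf)
print(D_f, D_f <= 0.03)   # compare against an illustrative tolerance eps
\end{verbatim}
For the Gaussian densities above, the quadrature result can be checked against the closed-form Kullback--Leibler divergence between the two distributions.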
Owing to the mild constraints on $f$, uncertainty sets of the form \eqref{eq:f-divergence_uncertainty} offer a great amount of flexibility and have attracted increased attention in recent years. In \cite{Levy2009_entropy_tolerance} and \cite{Gul2017_minimax_robust}, minimax optimal tests based on the Kullback--Leibler divergence were derived under varying assumptions. Minimax optimal tests have also been derived for the squared Hellinger distance \cite{Gul2013_modelling_errors, Gul2014_Hellinger_distance}, the total variation distance \cite{Gul2015_composite_distances}, and $\alpha$-divergences \cite{Gul2016_alpha_divergence, Gul2017_minimax_robust}. However, a disadvantage of robust tests with $f$-divergence ball uncertainty is that no fixed sample size minimax optimal solution is guaranteed to exist. In fact, most of the works cited above only consider the single sample case.

The second type of uncertainty model is the density band model. In a robust detection context, it was first proposed by Kassam \cite{Kassam1981} and covers sets of the form
\begin{equation} \label{eq:density_band_uncertainty}
\mathcal{P}_\text{b}(P',P'') = \{ H \in \mathcal{M}_{\mu} : p' \leq h \leq p'' \},
\end{equation}
where $P',P''$ are nonnegative measures on $(\mathcal{X},\mathcal{F})$ that admit densities $p',p''$ with respect to $\mu$ and satisfy
\begin{equation*}
P'(\mathcal{X}) \leq 1, \quad P''(\mathcal{X}) \geq 1, \quad \text{and} \quad 0 \leq p' \leq p''.
\end{equation*}
In words, the density band model restricts the true density to lie within a band specified by $p'$ and $p''$. Similar to the choice of $f$ in the $f$-divergence ball model, the choice of $P'$ and $P''$ allows for \emph{a priori} knowledge about the type of contamination to be incorporated into the model. Therefore, although it is still based on the concept of outliers, the band model can capture a much larger variety of contamination types and mismatches than the standard $\varepsilon$-contamination model. Another useful property of the density band model is that for every pair of uncertainty sets of the form \eqref{eq:density_band_uncertainty}, a fixed sample size minimax optimal test is guaranteed to exist. Moreover, the corresponding least favorable densities can be calculated in a generic manner using a simple and efficient algorithm. See \cite{Fauss2016_old_bands} for a more detailed discussion of the band model and the calculation of its least favorable densities.

In the next section, it is shown to what extent the two uncertainty models can be considered equivalent in the single sample case and how the density band model can be used to construct fixed sample size minimax optimal tests from tests that are only single sample minimax optimal under $f$-divergence ball uncertainty.

\section{Main Result}
\label{sec:main_result}
In this section, the main result of the paper is stated and proved. A more detailed discussion is deferred to Section~\ref{sec:discussion}.
\begin{theorem*}
Let $\mathcal{P}_{f_0}(P_0,\varepsilon_0)$ and $\mathcal{P}_{f_1}(P_1,\varepsilon_1)$ be two uncertainty sets of the form \eqref{eq:f-divergence_uncertainty}. If it holds that
\begin{equation*}
(\delta^*, Q_0, Q_1) \in \{\mathcal{P}_{f_0}(P_0,\varepsilon_0) \,,\, \mathcal{P}_{f_1}(P_1,\varepsilon_1)\}_{\lambda}^*,
\end{equation*}
then there exist nonnegative scalars $a_0 \leq b_0$ and $a_1 \leq b_1$ such that
\begin{equation*}
(\delta^*, Q_0, Q_1) \in \{\mathcal{P}_\text{b}(a_0 P_0,b_0 P_0) \,,\, \mathcal{P}_\text{b}(a_1 P_1,b_1 P_1)\}^*.
\end{equation*}
\end{theorem*}
In words, the theorem states that if $(\delta^*,Q_0,Q_1)$ is single sample minimax optimal for an $f$-divergence ball uncertainty model, a density band model can be constructed from scaled versions of the nominal densities such that $(\delta^*,Q_0,Q_1)$ is fixed sample size minimax optimal with respect to this band model. A proof is detailed below.
\begin{proof*}
Let $P_0$, $P_1$, $f_0$, $f_1$, $\varepsilon_0$, $\varepsilon_1$, and $\lambda$ be given. Rewriting the optimization problem \eqref{eq:minimax_problem} in terms of the densities and with explicit constraints yields
\begin{gather}
\max_{\substack{h_0 > 0 \\ h_1 > 0}} \; \min_{\delta \in \Delta} \; \int_{\mathcal{X}} h_1 (1-\delta) + \lambda h_0 \delta \, \mathrm{d}\mu \quad \text{s.t.} \label{eq:minimax_objective} \\
\int_{\mathcal{X}} f_0\biggl( \frac{h_0}{p_0} \biggr) p_0 \, \mathrm{d}\mu \leq \varepsilon_0, \quad \int_{\mathcal{X}} f_1\biggl( \frac{h_1}{p_1} \biggr) p_1 \, \mathrm{d}\mu \leq \varepsilon_1 \label{eq:f-divergence_constraints} \\
\int_{\mathcal{X}} h_0 \, \mathrm{d}\mu = 1, \quad \int_{\mathcal{X}} h_1 \, \mathrm{d}\mu = 1. \label{eq:density_constraints}
\end{gather}
By assumption, $(\delta^*,Q_0,Q_1)$ solves this minimax problem, which implies that $(\delta^*,Q_0,Q_1)$ is a saddle point of \eqref{eq:minimax_objective}. This, in turn, implies that $(\delta^*,Q_0,Q_1)$ satisfies the corresponding Karush--Kuhn--Tucker conditions, which are first order necessary conditions for optimality \cite{Guignard1969}. In particular, stationarity of the saddle point solution implies that scalars $\eta_0,\eta_1$ and nonnegative scalars $\nu_0,\nu_1$ exist such that
\begin{align}
\lambda\delta^* &= \nu_0 f_0'\biggl( \frac{q_0}{p_0} \biggr) - \eta_0, \label{eq:stationarity_0}\\
1-\delta^* &= \nu_1 f_1'\biggl( \frac{q_1}{p_1} \biggr) - \eta_1, \label{eq:stationarity_1} \\
\delta^* &= \begin{cases} 1, & q_1 > \lambda q_0 \\ \kappa \in (0,1), & q_1 = \lambda q_0 \\ 0, & q_1 < \lambda q_0 \end{cases}, \label{eq:stationarity_delta}
\end{align}
where $f_0'$, $f_1'$ denote the (sub)derivatives of $f_0$ and $f_1$, $\eta_0$, $\eta_1$ denote the Lagrange multipliers corresponding to the constraints \eqref{eq:density_constraints}, and $\nu_0$, $\nu_1$ denote the Lagrange multipliers corresponding to the constraints \eqref{eq:f-divergence_constraints}. Since $f_0$ and $f_1$ are convex, their (sub)derivatives are nondecreasing. Moreover, the inverse functions $g_0$ and $g_1$, which are implicitly defined by
\begin{equation*}
g_0(f_0'(x)) = x \quad \text{and} \quad g_1(f_1'(x)) = x \quad \forall x \in \mathbb{R}_{\geq 0},
\end{equation*}
exist and are nondecreasing as well. Solving \eqref{eq:stationarity_0} and \eqref{eq:stationarity_1} for $q_0$ and $q_1$ yields
\begin{align}
q_0 &= g_0\biggl(\frac{\lambda\delta^* + \eta_0}{\nu_0}\biggr) p_0, \label{eq:q0_g0} \\
q_1 &= g_1\biggl(\frac{(1-\delta^*) + \eta_1}{\nu_1}\biggr) p_1.
\label{eq:q1_g1}
\end{align}
Combining \eqref{eq:stationarity_delta}, \eqref{eq:q0_g0}, and \eqref{eq:q1_g1}, it follows that the least favorable densities are of the form
\begin{equation}
q_0 = \begin{cases} a_0 \, p_0, & \delta^* = 0 \\ \frac{1}{\lambda} \, q_1, & \delta^* \in (0,1) \\ b_0 \, p_0, & \delta^* = 1 \end{cases} \label{eq:lfds_piecewise_0}
\end{equation}
and
\begin{equation} \label{eq:lfds_piecewise_1}
q_1 = \begin{cases} b_1 \, p_1, & \delta^* = 0 \\ \lambda \, q_0, & \delta^* \in (0,1) \\ a_1 \, p_1, & \delta^* = 1 \end{cases},
\end{equation}
where
\begin{equation} \label{eq:coefficients_a}
a_0 = g_0\left(\frac{\eta_0}{\nu_0}\right) \leq b_0 = g_0\left(\frac{\lambda + \eta_0}{\nu_0} \right)
\end{equation}
and
\begin{equation} \label{eq:coefficients_b}
a_1 = g_1\left(\frac{\eta_1}{\nu_1}\right) \leq b_1 = g_1\left(\frac{1 + \eta_1}{\nu_1}\right).
\end{equation}
Note that since $q_0$ and $q_1$ are valid densities, $a_0, b_0$ and $a_1, b_1$ are nonnegative. The next step of the proof is to show that the least favorable densities in \eqref{eq:lfds_piecewise_0} and \eqref{eq:lfds_piecewise_1} can be written as
\begin{align}
q_0 &= \min \bigl\{ b_0 p_0 \,,\, \max\bigl\{ \tfrac{1}{\lambda} q_1 \,,\, a_0 p_0 \bigr\} \bigr\}, \label{eq:lfd0} \\
q_1 &= \min \bigl\{ b_1 p_1 \,,\, \max\bigl\{ \lambda q_0 \,,\, a_1 p_1 \bigr\} \bigr\}. \label{eq:lfd1}
\end{align}
Only \eqref{eq:lfd0} is shown here since the proof for \eqref{eq:lfd1} can be given analogously. From \eqref{eq:stationarity_delta} and \eqref{eq:lfds_piecewise_0} it follows that on $\{ x \in \mathcal{X} : \delta^*(x) = 0\}$
\begin{align}
q_1 < \lambda q_0 \quad &\Rightarrow \quad \frac{1}{\lambda} q_1 < q_0 = a_0 p_0, \label{eq:lfd0_d0} \\
\intertext{on $\{ x \in \mathcal{X} : \delta^*(x) \in (0,1)\}$}
q_1 = \lambda q_0 \quad &\Rightarrow \quad \frac{1}{\lambda} q_1 = q_0, \label{eq:lfd0_d01} \\
\intertext{and on $\{ x \in \mathcal{X} : \delta^*(x) = 1\}$}
q_1 > \lambda q_0 \quad &\Rightarrow \quad \frac{1}{\lambda} q_1 > q_0 = b_0 p_0. \label{eq:lfd0_d1}
\end{align}
Combining \eqref{eq:lfd0_d0}, \eqref{eq:lfd0_d01}, and \eqref{eq:lfd0_d1} yields \eqref{eq:lfd0}. From \cite[Theorem 4]{Fauss2016_old_bands}, it follows immediately that \eqref{eq:lfd0} and \eqref{eq:lfd1} are fixed sample size least favorable for a density band model with bounds
\begin{align*}
p_0' &= a_0 p_0 \leq b_0 p_0 = p_0'', \\
p_1' &= a_1 p_1 \leq b_1 p_1 = p_1''.
\end{align*}
\end{proof*}
\section{Discussion}
\label{sec:discussion}
The result presented in the previous section states that every single sample minimax optimal test under $f$-divergence ball uncertainty is fixed sample size minimax optimal under the equivalent density band uncertainty. This not only makes it possible to use single sample results for fixed sample size tests without sacrificing minimax optimality, but also to specify the exact sets of distributions for which the minimax property holds. In this sense, the theorem lifts the $f$-divergence ball model to the same level of usefulness as the classic outlier models, whose single sample results automatically carry over to the fixed sample size case. In addition to this generalization, the fact that for every $f$-divergence ball model an equivalent density band model can be constructed offers some deeper insights and also suggests an alternative approach to the design of robust tests under $f$-divergence ball uncertainty.
One useful aspect of the equivalent band model is that it simplifies comparing the amount and type of uncertainty that is allowed for by different $f$-divergence ball models. Such comparisons are non-trivial since the $\varepsilon$-tolerances in \eqref{eq:f-divergence_uncertainty} do not directly translate to contamination ratios and might be of different scales altogether. While, for example, the Kullback--Leibler divergence can take on any nonnegative value, the Hellinger distance is bounded between zero and one. In such cases, one cannot simply compare the $\varepsilon$-tolerances in order to compare the maximum amount of uncertainty in the distributions. The corresponding band model, however, makes such comparisons possible. The lower bounds on the density functions, $a_0 p_0$ and $a_1 p_1$, determine the minimum probability mass contributed by the nominal distributions, namely $a_0$ and $a_1$. Consequently, the outliers can at most contribute the remaining probability masses $1-a_0$ and $1-a_1$, which can hence be interpreted as contamination ratios. The larger they are, the more uncertainty a model allows. On the other hand, the upper bounds, $b_0 p_0$ and $b_1 p_1$, offer an insight into what type of uncertainty is allowed. For $b_0, b_1 \gg 1$ the contamination is almost unconstrained, which corresponds to gross outliers. For $b_0,b_1 \approx 1$, the outlier distributions are close to the nominals, which corresponds to more subtle model mismatches. In general, the outlier distributions under $\mathcal{H}_i$, $i \in \{0,1\}$, are constrained to lie within the set \begin{equation*} \left\{ H \in \mathcal{M}_\mu : h \leq \frac{b_i-a_i}{1-a_i} p_i \right\}. \end{equation*} This interpretation of a density band model as a constrained $\varepsilon$-contamination model often offers a useful intuition for the amount and type of contamination that cannot be obtained by inspection of the $f$-divergence ball model. There are several ways to determine the coefficients $a_0$, $b_0$ and $a_1$, $b_1$ in practice. If expressions for the least favorable distributions can already be found in the literature, the coefficients can be determined by a simple comparison with the expressions in \eqref{eq:lfds_piecewise_0} and \eqref{eq:lfds_piecewise_1}. If the least favorable densities are unknown, the KKT conditions \eqref{eq:stationarity_0}--\eqref{eq:stationarity_delta} can be solved for $\nu_0$, $\nu_1$ and $\eta_0$, $\eta_1$. The scaling coefficients can then be calculated according to \eqref{eq:coefficients_a} and \eqref{eq:coefficients_b}. In practice, however, this approach might be prohibitively complex. An alternative to solving the KKT conditions for $\nu_0,\nu_1$ and $\eta_0,\eta_1$ is to solve them directly for $a_0,b_0$ and $a_1,b_1$. From the result in the previous section, it follows that the least favorable densities are of the form \eqref{eq:lfd0} and \eqref{eq:lfd1}. For given scaling coefficients $a_0,b_0$ and $a_1,b_1$, these equations can be solved for $q_0$ and $q_1$ by finding a threshold $\lambda$ so that the right-hand sides of \eqref{eq:lfd0} and \eqref{eq:lfd1} are valid densities, i.e., they integrate to one. Finally, an outer search over $a_0,b_0$ and $a_1,b_1$ can be performed such that the primal constraints are fulfilled, i.e., \begin{equation*} \int_{\mathcal{X}} f_0 \biggl( \frac{q_0}{p_0} \biggr) p_0 \, \mathrm{d} \mu = \varepsilon_0 \quad \text{and} \quad \int_{\mathcal{X}} f_1 \biggl( \frac{q_1}{p_1} \biggr) p_1 \, \mathrm{d} \mu = \varepsilon_1.
\end{equation*} This approach can be expected to be less efficient than a solution that exploits properties of a given function $f$, but is applicable in general and does not require prior analysis of the problem. Yet another option to determine the scaling coefficients is to directly solve the primal problem \eqref{eq:minimax_objective} using a suitable convex optimization algorithm. Even if the latter does not calculate the dual variables explicitly, $a_0,b_0$ and $a_1,b_1$ can be obtained from the ratio of the least favorable and the nominal densities \begin{equation} \label{eq:lfd_ratio} \begin{aligned} \frac{q_0}{p_0} &= \min \left\{ b_0 \,,\, \max \left\{ \frac{1}{\lambda} \frac{q_1}{p_0} \,,\, a_0 \right\} \right\}, \\ \frac{q_1}{p_1} &= \min \left\{ b_1 \,,\, \max \left\{ \lambda \frac{q_0}{p_1} \,,\, a_1 \right\} \right\}. \end{aligned} \end{equation} By inspection of \eqref{eq:lfd_ratio}, the scaling coefficients can be identified from the regions where the likelihood ratio is constant. The result in Section~\ref{sec:main_result} also motivates further research into how the two uncertainty models are related. Given equivalent $f$-divergence balls and density bands, does one uncertainty set contain the other, i.e., is one model a relaxation of the other one? Does every band model whose bounds are constructed by scaling a nominal density admit an equivalent $f$-divergence ball model? If so, how can the corresponding function $f$ be determined? Another question that might be asked is whether similar equivalences exist for other types of uncertainty models as well, and to what extent there is a hierarchy between these models, i.e., whether the set of all possible least favorable distributions under one model is a sub- or super-set of all possible least favorable distributions under another model. \section{Example} \label{sec:example} \begin{figure}[t] \centering \includegraphics{band_lfds.pdf} \caption{Least favorable densities and equivalent density bands for uncertainty sets $\mathcal{P}_{x\log}\bigl(\mathcal{N}(-1,1),0.03\bigr)$ and $\mathcal{P}_{x\log}\bigl(\mathcal{N}(1,2),0.02\bigr)$.} \label{fig:bands} \end{figure} In order to highlight the connection to existing results, we consider the example from \cite[Sec.~VI.A]{Gul2017_minimax_robust}, where the nominal distributions under $\mathcal{H}_0$ and $\mathcal{H}_1$ are chosen as $P_0 = \mathcal{N}(-1,1)$ and $P_1 = \mathcal{N}(1,2)$, respectively, and $\mathcal{N}(m,\sigma^2)$ denotes a Gaussian distribution with mean $m$ and variance $\sigma^2$. The Kullback--Leibler divergence is used as a distance measure, i.e., $f_0(x) = f_1(x) = x\log(x)$, the tolerances are chosen as $\varepsilon_0 = 0.03$, $\varepsilon_1 = 0.02$, and the likelihood ratio threshold is set to $\lambda = 1$. Using Theorem~2 in \cite{Gul2017_minimax_robust}, the least favorable densities for this model can be calculated efficiently by solving two integral equations. The coefficients for the corresponding band model can be identified by comparing (6) in \cite{Gul2017_minimax_robust} to \eqref{eq:lfds_piecewise_0} and \eqref{eq:lfds_piecewise_1} in this paper. For the numerical values given above, they evaluate to $a_0 \approx 0.9047$, $b_0 \approx 2.2519$, $a_1 \approx 0.8319$, and $b_1 \approx 1.3009$. The least favorable densities and the equivalent density bands are depicted in Fig.~\ref{fig:bands}.
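The fixed-point structure of \eqref{eq:lfd0} and \eqref{eq:lfd1} also makes this example easy to reproduce numerically. The following sketch (illustrative only; it assumes NumPy/SciPy are available and that the simple alternating iteration converges, which is not guaranteed in general) evaluates the coupled clipping equations on a grid, using the band coefficients quoted above:

\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Nominal densities and band coefficients from the example above
x = np.linspace(-10.0, 10.0, 4001)
p0 = norm.pdf(x, loc=-1.0, scale=1.0)
p1 = norm.pdf(x, loc=1.0, scale=np.sqrt(2.0))
a0, b0 = 0.9047, 2.2519
a1, b1 = 0.8319, 1.3009
lam = 1.0

# Alternating evaluation of the coupled clipping equations
q0, q1 = p0.copy(), p1.copy()
for _ in range(200):
    q0 = np.minimum(b0 * p0, np.maximum(q1 / lam, a0 * p0))
    q1 = np.minimum(b1 * p1, np.maximum(lam * q0, a1 * p1))

# Sanity check: both least favorable densities should integrate to ~1
dx = x[1] - x[0]
print(q0.sum() * dx, q1.sum() * dx)
\end{verbatim}

Whether the resulting integrals indeed come out close to one provides a quick consistency check of the identified coefficients.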
Interestingly, the difference in the radii of the $f$-divergence balls is not reflected in the contamination ratio, which is smaller under $\mathcal{H}_0$ ($\approx 10\%$) than under $\mathcal{H}_1$ ($\approx 17\%$). However, the band under $\mathcal{H}_0$ is wider, meaning that it allows for larger deviations from the nominal distribution. Under $\mathcal{H}_1$, the contamination ratio is higher, but the outlier distribution is much more restricted. This example illustrates how existing results on robust tests under $f$-divergence ball uncertainty can be used to construct fixed sample size minimax optimal tests for the equivalent density band uncertainty sets and how the latter provide additional insight into the amount and type of contamination induced by the uncertainty model. \bibliographystyle{IEEEbib}
{ "timestamp": "2018-04-17T02:16:22", "yymm": "1804", "arxiv_id": "1804.05632", "language": "en", "url": "https://arxiv.org/abs/1804.05632" }
\section{Introduction} \label{intro} The existence of non-Fourier heat conduction has been proven experimentally under various conditions in several different ways. First, the Maxwell-Cattaneo-Vernotte equation (MCV) \cite{Max1867, Cattaneo58, Vernotte58}, \begin{equation} \tau_q \partial_{tt} T + \partial_t T = \alpha \partial_{xx} T, \label{MCV} \end{equation} is used to describe the dissipative wave form of heat propagation called second sound. Here, $\tau_q$ is the relaxation time, $\alpha$ stands for the thermal diffusivity, $\partial_t$ denotes the time derivative and $\partial_{xx}$ denotes the second spatial derivative in one dimension. It is the simplest extension of Fourier's law, and there are several different theorems in the literature which lead to this type of hyperbolic generalization \cite{JosPre89, JosPre90a, Gyar77a, JouVasLeb88ext, Tzou95, MulRug98, VanFul12, KovVan15, BerVan15, Cimmelli09nl, Cimm09diff}. The existence of second sound was predicted by Tisza and Landau \cite{Tisza38, Lan47}, prior to its experimental discovery. Peshkov then managed to measure it in superfluid He \cite{Pesh44}, which stimulated further research in this direction. Later on, several new ideas were developed on how to measure similar phenomena in different materials. One of the most important results is related to Guyer and Krumhansl, who derived the so-called window condition, significantly supporting the measurement of second sound in solids \cite{GK64}. The next extension of Fourier's equation bears their names and is called the Guyer-Krumhansl (GK) equation \cite{GuyKru66a1, GuyKru66a2, Van01a}, \begin{equation} \tau_q \partial_{tt} T + \partial_t T = \alpha \partial_{xx} T + \kappa^2 \partial_{txx} T, \label{GK} \end{equation} where $\kappa^2$ is the dissipation parameter \cite{KovVan15}, strongly related to the mean free path from the aspect of kinetic theory \cite{MulRug98}. It contains the MCV equation (\ref{MCV}); however, it is a parabolic-type model and is able to recover the solution of the Fourier equation when $\kappa^2 / \tau_q = \alpha$ holds, a condition called Fourier resonance \cite{Botetal16, Vanetal17, VanKovFul15}. Despite the disadvantageous infinite propagation speed of parabolic models, it is still a valid and thermodynamically consistent realisation of non-Fourier heat conduction at room temperature \cite{Botetal16, Vanetal17, KovVan18dpl}. Regarding the experiments, one should mention the ballistic-type heat conduction measured by Jackson et al. \cite{JacWalMcN70, JacWal71, McNEta70a, McN74t} in NaF crystals and modeled by several authors \cite{DreStr93a, Ma13a, Ma13a1, Ma13a2}. The most recent result can be found in \cite{KovVan18}, where quantitative agreement is obtained between the theory and experiments. The theory is based on non-equilibrium thermodynamics with internal variables and Nyíri multipliers \cite{KovVan15, BerVan15, Nyiri91}. The experimental success of measuring second sound and the universal theory of non-equilibrium thermodynamics have motivated researchers to search for wave-form non-Fourier heat conduction, described by the MCV equation (\ref{MCV}), at room temperature. For example, such an endeavor is related to the experiments of Mitra et al. \cite{MitEta95}, where frozen meat is used to find a similar phenomenon. Unfortunately, no one was able to reproduce these experimental results and the measurements of Mitra et al. are widely criticized \cite{TilVic09, HerBec00, HerBec00b}.
However, it turned out that the GK equation could be the relevant measurable extension of Fourier's law; the related non-Fourier effects have been measured several times in different materials \cite{Botetal16, Vanetal17}. In many other cases the dual phase lag model is also considered as an adequate generalization \cite{TanEtal07, AkbPas14, AfrinEtal12, LiuChen10, Zhang09}; however, this model contradicts basic physical principles \cite{KovVan18dpl} and its validity is questionable \cite{Ruk14, Ruk17, Fabetal14, FabLaz14a, FabEtal16, Quin07, ChirCiaTib17}. All the aforementioned experiments are of the heat pulse type; the underlying principle is the same, only the equipment differs. It is a standard method to measure the thermal diffusivity and is used widely in engineering practice. The importance of the Guyer-Krumhansl equation (\ref{GK}) in the evaluation of such experiments indicated the need to find an analytic solution. The work of Zhukovsky has to be mentioned here \cite{Zhukov16, Zhu16a, Zhu16b, ZhuSri17}. Recently, Zhukovsky obtained an exact solution of the GK equation using an operational method on an infinite spatial domain. Moreover, different initial conditions are considered, among which the wave-like initial condition together with decaying boundary conditions is of particular importance. Despite these valuable results, this setting is still quite far from the experiments. Therefore, the goal of this paper is to complement the results of the aforementioned papers so that they become more applicable to a real experimental setup, as described below. \section{Experimental setup and boundary conditions} \label{bcs} Measurements detecting non-Fourier heat conduction in heterogeneous materials are performed at room temperature and, as described in detail in \cite{Botetal16, Vanetal17}, have the following setup; see Fig. \ref{fig:exp1}. \begin{figure}[h] \centering \includegraphics[width=10cm,height=8cm]{exp1.PNG} \caption{Arrangement of the experiment, original figure from \cite{Vanetal17}.} \label{fig:exp1} \end{figure} The front side boundary condition depicts the heat pulse which excites the heterogeneous sample. The pulse has a finite length, given as $t_p=0.01$ s \cite{Botetal16, Vanetal17}. The exact shape of the pulse has not been taken into account in \cite{Botetal16, Vanetal17} during the evaluation process; nevertheless, its length is critical and greatly influences the solution \cite{GrofPhD02}. As highlighted and applied in \cite{Botetal16, Vanetal17, KovVan15, BCTFGGPV13}, the following function is considered to model the heat pulse, \begin{center} $q(x=0, t)= \left\{ \begin{array}{cc} q_{max} \left(1-\cos\left(2 \pi \cdot \frac{ t}{t_p}\right)\right) & \textrm{if } 0< t \leq t_p,\\ 0 & \textrm{if } t> t_p, \end{array} \right. $ \end{center} that is, the front side boundary condition is given by prescribing the heat flux in time; here $q_{max}$ is the amplitude of the signal. When the experimental results are evaluated, cooling at the boundary has to be considered. Although it is crucial to model these effects in the evaluation, cooling is neglected in the analytic solution to simplify the mathematical problem. Thereby an adiabatic condition is applied to the rear side for every time instant, $q(x=L, t) = 0$. Regarding the initial conditions, all the time derivatives are zero at the initial state and the sample is in equilibrium with its environment, i.e. $T(x,t=0) = T_0$. \section{Dimensionless quantities} In order to ease the solution of the GK equation, dimensionless quantities are used (see \cite{KovVan15} for details).
From now on, the same formalism is applied, that is, the following parameters are introduced, \begin{eqnarray} \hat{t} =\frac{\alpha t}{L^2} \quad &\text{with}& \quad \alpha=\frac{\lambda}{\rho c}; \quad \hat{x}=\frac{x}{L};\nonumber \\ \hat{T}=\frac{T-T_{0}}{T_{\text{end}}-T_{0}} \quad &\text{with}&\quad T_{\text{end}}=T_{0}+\frac{\bar{q}_0 t_p}{\rho c L}; \nonumber \\ \hat{q}=\frac{q}{\bar{q}_0} \quad &\text{with}&\quad \bar{q}_0=\frac{1}{t_p} \int_{0}^{t_p} q_{0}(t)dt, \label{ndvar}\end{eqnarray} where $L$ is the length of the sample, and $\lambda$, $\rho$ and $c$ are the thermal conductivity, mass density and specific heat, respectively. The time averaged heat flux $\bar q_0$ is used to define the equilibrium temperature $T_{\text{end}}$. The material parameters are converted as \begin{equation} \hat{\tau}_\Delta =\frac{\alpha t_p}{L^2}; \quad \hat{\tau}_q = \frac{\alpha \tau_{q}}{L^2}; \quad \hat{\kappa} = \frac{\kappa}{L}, \end{equation} where $\hat{\tau}_\Delta$ stands for the dimensionless heat pulse length and $\hat \tau_q$ denotes the relaxation time related to the heat flux. For the sake of simplicity, the ``hat'' notation is omitted from now on and we restrict ourselves to dimensionless quantities. Using this formalism, the GK-type heat equation reads \begin{equation} \tau_q \partial_{tt} T + \partial_t T = \partial_{xx} T + \kappa^2 \partial_{txx} T, \label{ndGK} \end{equation} which can be decomposed into two equations: the balance of internal energy \begin{equation} \tau_{\Delta} \partial_t T + \partial_x q = 0, \label{ndbalen} \end{equation} and the GK-type constitutive equation \begin{equation} \tau_q \partial_t q + q +\tau_{\Delta} \partial_x T - \kappa^2 \partial_{xx} q=0. \label{ndcongkeq} \end{equation} Since the boundary conditions are prescribed as a given heat flux in time, it is suitable to eliminate $T$ from the equations (\ref{ndbalen}) and (\ref{ndcongkeq}): \begin{equation} \tau_q \partial_{tt} q + \partial_t q = \partial_{xx} q + \kappa^2 \partial_{txx} q. \label{ndGKforq} \end{equation} After obtaining the solution for $q(x,t)$, one can use eq. (\ref{ndbalen}) to integrate $\partial_x q$ with respect to time and calculate $T(x,t)$. Applying dimensionless quantities, the heat pulse boundary condition at the front side reads as \begin{center} $q(x=0, t)=q_0(t)= \left\{ \begin{array}{cc} \left(1-\cos\left(2 \pi \cdot \frac{ t}{\tau_{\Delta}}\right)\right) & \textrm{if } 0< t \leq \tau_{\Delta},\\ 0 & \textrm{if } t> \tau_{\Delta}, \end{array} \right. $ \end{center} and for the rear side $q(x=1,t)=q_L(t)=0$ holds, together with the dimensionless initial condition $T(x,t=0)=0$. \section{Solution method} According to the front side boundary condition, it is reasonable to split the solution into two sections in time. The first one goes from $0$ to $\tau_{\Delta}$, and the second interval starts at $\tau_{\Delta}$ and reaches up to an arbitrary time instant $t$. The basic mathematical principles and procedures can be found in \cite{CarJae59b, Farlow93b, GTvN11b}. The sample length $L$ is intentionally left unchanged in the following as it highlights the integration limits in the non-dimensionless formalism. In the case of dimensionless quantities, $L$ can simply be set to $L=1$, since $0\leq x \leq 1$. \subsection{Section I.
($0<t<\tau_{\Delta}$)} Due to the time-dependent boundary condition, let us split the solution of $q(x,t)$ as \begin{equation} q(x,t) = w(x,t) + v(x,t), \label{qbont} \end{equation} where $w(x,t)$ is used to separate the time dependence of the boundary condition from the part $v(x,t)$. The form of $w(x,t)$ can be chosen arbitrarily; for the sake of simplicity it is sufficient to assume a linear form, i.e. \begin{equation} w(x,t) := q_0(t) + \frac{x}{L} \big ( q_L(t) - q_0(t) \big ) = \big (1-\frac{x}{L} \big ) q_0(t), \end{equation} as the rear side is adiabatic. For further calculations let us simplify and shorten our notation of partial derivatives: $\partial_t = \dot \Box$ and $\partial_x = \Box ' $. Substituting (\ref{qbont}) into (\ref{ndGKforq}) yields \begin{equation} \tau_q (\ddot w + \ddot v) + \dot w + \dot v = w'' + v'' + \kappa^2 (\dot w'' + \dot v''). \label{eq1} \end{equation} Therefore $v(x,t)$ has boundary conditions that are constant in time, but an inhomogeneous term appears since $\dot w = \big (1-\frac{x}{L} \big ) \dot q_0(t)$ and $\ddot w = \big (1-\frac{x}{L} \big ) \ddot q_0(t)$ hold, while $w'' = 0$. At this point one obtains an inhomogeneous equation for $v(x,t)$, \begin{equation} \tau_q \ddot v + \dot v = v'' + \kappa^2 \dot v'' - f(x,t), \label{eq2} \end{equation} where $f(x,t) = \dot w + \tau_q \ddot w$. The splitting (\ref{qbont}) preserves the initial conditions: $v(x,t=0) = 0$ and $\dot v(x,t=0)=0$. Regarding the boundary conditions, $v(x=0,t)=0$ and $v(x=L,t)=0$ hold. The inhomogeneous term $f(x,t)$ can be determined from $q_0(t)$ as \begin{eqnarray} \dot w = \frac{2 \pi}{\tau_{\Delta}} \big (1-\frac{x}{L} \big ) \sin (2 \pi \frac{t}{\tau_{\Delta}}), \\ \ddot w = \frac{4 \pi^2 }{\tau_{\Delta}^2} \big (1-\frac{x}{L} \big ) \cos (2 \pi \frac{t}{\tau_{\Delta}}). \end{eqnarray} Let us now suppose that the variables can be separated, i.e. that a solution of the form \begin{equation} v(x,t) = \varphi (t) X(x) \label{vbont} \end{equation} exists, and decompose the partial differential equation (\ref{eq2}) into two ordinary differential equations (ODEs). As equation (\ref{eq2}) is inhomogeneous, one should also assume that the eigenfunctions $X(x)$ of the homogeneous case ($f(x,t)=0$) solve the inhomogeneous equation, too. This system of eigenfunctions is used to expand $f(x,t)$ in the function space spanned by the solutions $X(x)$. Thus, one first has to calculate the homogeneous part of $v(x,t)$, which is done as follows. The separation of variables, eq. (\ref{vbont}), leads to the equation for the homogeneous part \begin{equation} \frac{\tau_q \ddot \varphi + \dot \varphi}{\varphi + \kappa^2 \dot \varphi} = \frac{X''(x)}{X(x)} = - \beta, \quad \beta \in \mathbb{R}^+, \end{equation} hence the eigenfunctions are determined by the equation \begin{equation} X'' + \beta X =0, \quad X(x=0)=0, \quad X(x=L) =0. \end{equation} The general solution reads as \begin{equation} X(x)=A \cos (\sqrt{\beta }x ) + B \sin (\sqrt{\beta }x ), \end{equation} where the constants $A$ and $B$ are determined according to the boundary conditions for $v(x,t)$. The condition $X(x=0)=0$ implies that $A=0$, and $X(x=L)=0$ determines the eigenvalues. As $B\neq0$ (otherwise it would lead to the trivial solution), $\sin(\sqrt{\beta }L )=0$ must hold, hence \begin{equation} \beta_n = \big ( \frac{n \pi}{L} \big )^2, \end{equation} where $0<n \in \mathbb{N}$. In summary, \begin{equation} X_n(x) = \sin \big ( \frac{n \pi}{L} x \big ) \label{sfv} \end{equation} is an eigenfunction of the operator $-\frac{d^2}{dx^2}$ with positive eigenvalues $\beta_n$.
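The expansion in this eigenbasis can be cross-checked numerically. A minimal sketch (an illustrative check added here, not part of the derivation, assuming NumPy and SciPy are available) verifying that the sine-series coefficient of the linear profile $(1-x/L)$ appearing in $w(x,t)$ equals $2/(n\pi)$, as obtained in the next step:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sine-series coefficients (2/L) * int_0^L (1 - x/L) sin(n*pi*x/L) dx
# of the linear profile in w(x,t); they should equal 2/(n*pi).
L = 1.0
for n in range(1, 6):
    integral, _ = quad(lambda x: (1.0 - x / L) * np.sin(n * np.pi * x / L),
                       0.0, L)
    print(n, (2.0 / L) * integral, 2.0 / (n * np.pi))
\end{verbatim}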
The constant $B$ will be combined with the emerging solution of the time evolution part $\varphi (t)$. Using eq. (\ref{sfv}) one obtains \begin{equation} v(x,t) = \sum\limits_{n=1}^{\infty} \varphi_n(t) \sin \big ( \frac{n \pi}{L} x \big ), \end{equation} that is, the inhomogeneous term $f(x,t)$ has to be accounted for now, \begin{eqnarray} -f(x,t) &=& \tau_q \ddot v + \dot v - v'' -\kappa^2 \dot v'' = \\ &=& \sum\limits_{n=1}^{\infty} \big [\tau_q \ddot \varphi_n + \dot \varphi_n + \big ( \frac{n \pi}{L} \big )^2 \varphi_n + \kappa^2 \big ( \frac{n \pi}{L} \big )^2 \dot \varphi_n \big ] \sin \big ( \frac{n \pi}{L} x \big ). \label{eqvart} \end{eqnarray} It can be solved for every $n$ if the function $f(x,t)$ is decomposed according to the eigenfunctions; this yields an ODE for $\varphi_n$. The Fourier series of $f(x,t)$ is given as \begin{equation} f(x,t) = \sum\limits_{n=1}^{\infty} f_n(t) \sin \big ( \frac{n \pi}{L} x \big ), \end{equation} where \begin{equation} f_n(t) = \frac{2}{L} \big [ \frac{2 \pi}{\tau_{\Delta}} \sin \big (2 \pi \frac{t}{\tau_{\Delta}} \big ) + \tau_q \frac{4 \pi^2 }{\tau_{\Delta}^2} \cos \big (2 \pi \frac{t}{\tau_{\Delta}} \big) \big] \int\displaylimits_0^L \big (1-\frac{x}{L} \big ) \sin \big ( \frac{n \pi}{L} x \big ) dx. \end{equation} Calculating the integral on the right hand side yields \begin{equation} f_n(t) = \big [ \frac{2 \pi}{\tau_{\Delta}} \sin \big (2 \pi \frac{t}{\tau_{\Delta}} \big ) + \tau_q \frac{4 \pi^2 }{\tau_{\Delta}^2} \cos \big (2 \pi \frac{t}{\tau_{\Delta}} \big) \big] \frac{2}{n \pi} = f(t) \frac{2}{n \pi}, \end{equation} \begin{equation} f(x,t) = \sum\limits_{n=1}^{\infty}f(t) \frac{2}{n \pi} \sin \big ( \frac{n \pi}{L} x \big ). \end{equation} Now the resulting ODE can be solved for $\varphi_n(t)$ with initial conditions $\varphi_n(0)=0$ and $\dot \varphi_n(0) =0$: \begin{equation} \tau_q \ddot \varphi_n + \big ( 1 + \kappa^2 \big (\frac{n \pi}{L} \big )^2 \big ) \dot \varphi_n + \big (\frac{n \pi}{L} \big )^2 \varphi_n = -f(t)\frac{2}{n \pi}. \end{equation} Its solution is calculated using Wolfram Mathematica and reads \begin{eqnarray} \varphi_n(t) = \frac{1}{2 \sqrt{a^2-4 b} \left(a^2 g^2+\left(b-g^2\right)^2\right)}e^{-\frac{1}{2} \left(a+\sqrt{a^2-4 b}\right) t} \cdot \left(a^2 c \left(-1+e^{\sqrt{a^2-4 b} t}\right) g -\right. \nonumber \\ -\left(\sqrt{a^2-4 b} d \left(1+e^{\sqrt{a^2-4 b} t}\right)+2 c \left(-1+e^{\sqrt{a^2-4 b} t}\right) g\right) \left(b-g^2\right)+ \nonumber \\ +a \left(\sqrt{a^2-4 b} c g+\sqrt{a^2-4 b} c e^{\sqrt{a^2-4 b} t}g+d \left(b+g^2\right)-d e^{\sqrt{a^2-4 b} t} \left(b+g^2\right)\right)+ \nonumber \\ \left.+2 \sqrt{a^2-4 b} e^{\frac{1}{2} \left(a+\sqrt{a^2-4 b}\right) t} ((b d-g (a c+d g)) \cos(g t)+(b c+g (a d-c g)) \sin(g t))\right), \end{eqnarray} where the constants $a,b,c,d,g$ are given as \begin{eqnarray} a = \frac{1}{\tau_q} \big ( 1 + \kappa^2 \big (\frac{n \pi}{L} \big )^2 \big ), \quad b= \frac{1}{\tau_q} \big (\frac{n \pi}{L} \big )^2, \nonumber \\ c =-\frac{4}{ n \tau_{\Delta} \tau_q}, \quad d = -\frac{8 \pi}{n \tau_{\Delta}^2}, \quad g = \frac{2 \pi}{\tau_{\Delta}}. \end{eqnarray} Now $v(x,t)$ is obtained together with the solution of the first section $q_I(x,t) = w(x,t) + v(x,t)$. \newpage \subsection{Section II. ($\tau_\Delta <t$)} For section II, the initial condition is determined based on the functions $q_I(x,t=\tau_\Delta)$ and $\dot q_I(x,t=\tau_\Delta)$. Here we seek the solution of eq. (\ref{ndGKforq}) with time-independent boundary conditions.
These are prescribed as adiabatic conditions on both sides. However, the initial conditions are more difficult to consider. Let us introduce $\tilde t$ as $\tilde t = t - \tau_\Delta$ to ease the calculations. The initial conditions are \begin{equation} q_{II}(x, \tilde t=0)=q_I(x,t=\tau_{\Delta}), \quad \dot q_{II}(x, \tilde t =0) = \dot q_I(x,t=\tau_{\Delta}). \end{equation} Moreover, the inhomogeneous term $f(x,t)$ vanishes for this section due to the constant boundary conditions. Let us separate the variables again and assume that \begin{equation} q_{II}(x,\tilde t) = \gamma(\tilde t) X(x), \end{equation} where the eigenfunctions $X(x)$ and eigenvalues $\beta_n$ are already calculated in the previous section. In order to determine $\gamma(\tilde t)$ an ODE has to be solved, \begin{equation} \tau_q \ddot \gamma_n + (1 + \beta_n \kappa^2) \dot \gamma_n + \beta_n \gamma_n = 0 \label{eq21} \end{equation} with initial conditions $\gamma_n (0) = \varphi_n (\tau_\Delta)$ and $\dot \gamma_n (0) = \dot \varphi_n (\tau_\Delta)$. Its general solution is \begin{equation} \gamma_n(\tilde t) = C_{1n} e^{r_{1n} \tilde t} + C_{2n} e^{r_{2n} \tilde t}, \end{equation} where the characteristic exponents are \begin{equation} r_{1,2} = \frac{1}{2 \tau_q} \big ( -1 -\beta_n \kappa^2 \pm \sqrt{(1+\beta_n \kappa^2)^2 - 4 \tau_q \beta_n} \big ). \end{equation} Taking into account the initial conditions, the constants $C_{1n}$ and $C_{2n}$ are determined by \begin{eqnarray} C_{1n} + C_{2n} = \varphi_n (t=\tau_\Delta), \nonumber \\ C_{1n} r_{1n} + C_{2n} r_{2n}= \dot \varphi_n (t=\tau_\Delta). \end{eqnarray} This system is again solved using Wolfram Mathematica, where the notation $R = \sqrt{a^2 - 4b}$ is applied. \begin{align} &C_{1n} = -\frac{1}{r_1-r_2}\left(-\frac{1}{4 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R) \tau_\Delta} (-a-R)\cdot\right. \nonumber \\ &\cdot \left(a^2 c \left(-1+e^{R \tau_\Delta}\right) g+2 e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R-\left(b-g^2\right) \left(2 c \left(-1+e^{R \tau_\Delta}\right) g+\right.\right. \nonumber \\ &+\left.\left.+d \left(1+e^{R \tau_\Delta}\right) R\right)+a \left(d \left(b+g^2\right)-d e^{R \tau_\Delta} \left(b+g^2\right)+c g R+c e^{R \tau_\Delta} g R\right)\right)- \nonumber \\ &-\frac{1}{2 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R) \tau_\Delta} \left(a^2 c e^{R \tau_\Delta} g R+2 e^{\frac{1}{2} (a+R) \tau_\Delta} g (b c+g (a d-c g)) R+\right. \nonumber \\ &+e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R (a+R)-\left(b-g^2\right) \left(2 c e^{R \tau_\Delta} g R+d e^{R \tau_\Delta} R^2\right)+ \nonumber \\ &\left.+a \left(-d e^{R \tau_\Delta} \left(b+g^2\right) R+c e^{R \tau_\Delta} g R^2\right)\right)+\frac{1}{2 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R) \tau_\Delta} \cdot \nonumber \\ &\cdot \left(a^2 c \left(-1+e^{R \tau_\Delta}\right) g+2 e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R-\right.\left(b-g^2\right) \left(2 c \left(-1+e^{R \tau_\Delta}\right) g+ \right.\nonumber \\ &+\left.\left.\left.d \left(1+e^{R \tau_\Delta}\right) R\right)+a \left(d \left(b+g^2\right)-d e^{R \tau_\Delta} \left(b+g^2\right)+c g R+c e^{R \tau_\Delta} g R\right)\right) r_2\right), \nonumber \\ &C_{2n} =\frac{1}{2 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R)\tau_\Delta} \left(a^2 c \left(-1+e^{R \tau_\Delta}\right) g+2 e^{\frac{1}{2} (a+R) \tau_\Delta} \cdot \right.
\nonumber \\ &\cdot (b d-g (a c+d g)) R-\left(b-g^2\right) \left(2 c \left(-1+e^{R \tau_\Delta}\right) g+d \left(1+e^{R \tau_\Delta}\right) R\right)+a \left(d \left(b+g^2\right)-\right. \nonumber \\ &\left.\left.-d e^{R \tau_\Delta} \left(b+g^2\right)+c g R+c e^{R\tau_\Delta} g R\right)\right)+\frac{1}{r_1-r_2}\left(-\frac{1}{4 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}\cdot \right. \nonumber \\ &\cdot e^{-\frac{1}{2} (a+R) \tau_\Delta} (-a-R) \left(a^2 c \left(-1+e^{R \tau_\Delta}\right) g+2 e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R-\right. \nonumber \\ &-\left(b-g^2\right) \left(2 c \left(-1+e^{R \tau_\Delta}\right) g+d \left(1+e^{R \tau_\Delta}\right) R\right)+a \left(d \left(b+g^2\right)-d e^{R \tau_\Delta} \left(b+g^2\right)+c g R+\right. \nonumber \\ &\left.\left.+c e^{R \tau_\Delta} g R\right)\right)-\frac{1}{2 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R) \tau_\Delta} \left(a^2 c e^{R \tau_\Delta} g R+\right. \nonumber \\ &+2 e^{\frac{1}{2} (a+R) \tau_\Delta} g (b c+g (a d-c g)) R+e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R (a+R)- \nonumber \\ &\left.-\left(b-g^2\right) \left(2 c e^{R \tau_\Delta} g R+d e^{R \tau_\Delta} R^2\right)+a \left(-d e^{R \tau_\Delta} \left(b+g^2\right) R+c e^{R \tau_\Delta} g R^2\right)\right)+\nonumber \\ &+\frac{1}{2 \left(a^2 g^2+\left(b-g^2\right)^2\right) R}e^{-\frac{1}{2} (a+R) \tau_\Delta} \left(a^2 c \left(-1+e^{R \tau_\Delta}\right) g+\right. \nonumber \\ &+2 e^{\frac{1}{2} (a+R) \tau_\Delta} (b d-g (a c+d g)) R-\left(b-g^2\right) \left(2 c \left(-1+e^{R \tau_\Delta}\right) g+\right. \nonumber \\ &\left.\left.\left.+d \left(1+e^{R \tau_\Delta}\right) R\right)+a \left(d \left(b+g^2\right)-d e^{R \tau_\Delta} \left(b+g^2\right)+c g R+c e^{R \tau_\Delta} g R\right)\right) r_2\right). \end{align} \section{Temperature distribution} So far we have seen the solution for the field of heat flux $q$. It uniquely determines the temperature field by using the balance equation of internal energy, eq. (\ref{ndbalen}). Again, one has to perform the calculations for both sections. \begin{align} \tau_{\Delta} \dot T + q' = 0, \Rightarrow \dot T = -\frac{1}{\tau_{\Delta}} q' = -\frac{1}{\tau_{\Delta}} \sum\limits_{n=1}^{\infty} \Gamma_n(t) \frac{n \pi}{L} \cos \big ( \frac{n \pi}{L} x \big ), \\ T = -\frac{1}{\tau_{\Delta}} \int\displaylimits_0^{t} \sum\limits_{n=1}^{\infty} \Gamma_n(\alpha) \frac{n \pi}{L} \cos \big ( \frac{n \pi}{L} x \big ) d\alpha, \end{align} where $\Gamma_n(t)$ could be $\varphi_n$ or $\gamma_n$ depending on which section is considered. The initial condition for temperature in section I is $T_I(x,t=0)=0$, for section II is $T_{II}(x,\tilde t=0)=T_I(x,t=\tau_\Delta)$. For section I it reads as \begin{align} T_I (x,t) &= -\frac{1}{\tau_{\Delta}} \int\displaylimits_0^{t} (w'(\alpha) + v'(\alpha)) d\alpha =& \nonumber \\&= -\frac{1}{\tau_{\Delta}} \int\displaylimits_0^{t} \left ( -\frac{1}{L} q_0(\alpha) + \sum\limits_{n=1}^{\infty} \varphi_n(\alpha) \frac{n \pi}{L} \cos \big ( \frac{n \pi}{L} x \big ) \right) d\alpha, \\ \int\displaylimits_0^{t} w'(\alpha) d\alpha &= \frac{1}{L} \big ( -t + \frac{t_p \sin (2 \pi t / t_p)}{2 \pi} \big ), \end{align} \begin{align} &\int\displaylimits_0^{t} \sum\limits_{n=1}^{\infty} \varphi_n(\alpha) d\alpha = \sum\limits_{n=1}^{\infty} \Phi_n(t)= \frac{1}{g \left(a^2 g^2+\left(b-g^2\right)^2\right) (a-R) R (a+R)} \cdot \nonumber\\ &\cdot \left((b c+g (a d-c g)) (a-R) R (a+R)+g(a+R) \left(a^2 c g-a d \left(b+g^2\right)+\right.\right. 
\nonumber\\ &+ \left.a c g R-\left(b-g^2\right) (2 cg+d R)\right)-g (a-R) \left(a^2 c g-\left(b-g^2\right) (2 c g-d R)-\right. \nonumber\\ &- \left.a \left(d \left(b+g^2\right)+c g R\right)\right)-e^{-\frac{1}{2} (a+R) t} \left(e^{Rt} g (a+R) \cdot\right. \nonumber\\ &\cdot(a^2 c g - a d (b + g^2) + a c g R - (b - g^2) (2 c g + d R)) - \nonumber \\ &-g (a-R) \left(a^2 c g-\left(b-g^2\right) (2 c g-d R)-a \left(d \left(b+g^2\right)+c g R\right)\right)+\nonumber \\ &\left.\left.+e^{\frac{1}{2} (a+R) t} (a-R) R (a+R) ((b c+g (a d-c g)) \sin(g t)+(-b d+g (a c+d g)) \sin(gt))\right)\right), \end{align} It follows from $\varphi_n(t=0) = 0$ that \begin{equation} \sum\limits_{n=1}^{\infty} \Phi_n(t=0)=0 \end{equation} is true at the time instant $t=0$; hence the initial condition for section I is automatically fulfilled. In the case of section II, the temperature distribution has to be fitted to $T_I(x,t=\tau_\Delta)$, i.e. \begin{equation} T_{II}= -\frac{1}{\tau_{\Delta}} \int\displaylimits_0^{\tilde t} \sum\limits_{n=1}^{\infty} \gamma_n(\alpha) \frac{n \pi}{L} \cos \big ( \frac{n \pi}{L} x \big ) d\alpha, \end{equation} \begin{equation} \int\displaylimits_0^{\tilde t} \sum\limits_{n=1}^{\infty} \gamma_n(\alpha) d\alpha = \sum\limits_{n=1}^{\infty} \Big [ \frac{C_{1n}}{r_{1n}} \big (e^{r_{1n} \tilde t} -1 \big ) + \frac{C_{2n}}{r_{2n}} \big (e^{r_{2n} \tilde t} -1 \big ) \Big ] =\sum\limits_{n=1}^{\infty} \Omega_n (\tilde t). \end{equation} Since $\Omega_n(\tilde t=0) = 0$ holds, one has to exploit the integration constant and determine its value to fulfill the initial condition. Let us now consider the integration constant $K_{n}$, which is calculated as follows: \begin{align} &T_{II}(x,\tilde t=0)=T_I(x,t=\tau_{\Delta}) = -\frac{1}{\tau_{\Delta}} \left ( -\frac{\tau_{\Delta}}{L} + \sum\limits_{n=1}^{\infty} \Phi_n(t=\tau_{\Delta}) \frac{n \pi}{L} \cos (\frac{n \pi}{L} x) \right ) = \nonumber \\ &= -\frac{1}{\tau_{\Delta}} \left (\sum\limits_{n=1}^{\infty} \Omega_n(\tilde t=0) \frac{n \pi}{L} \cos (\frac{n \pi}{L} x) \right ) + \sum\limits_{n=1}^{\infty} K_{n} \frac{n \pi}{L} \cos (\frac{n \pi}{L} x) + \frac{1}{L}, \end{align} that is, $K_{n} = \Phi_n(t=\tau_\Delta)$. Since the rear side temperature history has importance during the evaluation of heat pulse experiments, let us check its convergence by considering an increasing number of terms in the sum (see Fig. \ref{fig:analgk1}). In this case the solution of the Fourier equation is presented ($\tau_q = \kappa^2, \tau_\Delta=0.04$) and $N=1, 3, 10, 40$ terms are considered. It is visible that the initial region is considerably sensitive, but the difference disappears after a certain time and the first term alone seems to be sufficient. \begin{figure}[h] \includegraphics[width=12cm,height=7cm]{conv.jpg} \caption{The convergence of the rear side temperature history with an increasing number of terms.} \label{fig:analgk1} \end{figure} \section{Validation of solution} The presented analytic solution is compared to the available numerical code \cite{KovVan15} as a validation (see Figs. \ref{fig:analgk2}, \ref{fig:analgk3} and \ref{fig:analgk4}). Naturally, the analytic solution runs much faster, especially in the over-damped region ($\kappa^2>\tau_q$), without producing any unphysical temperature history. The over-damped solutions have greater importance as all the measurements confirm such behavior \cite{Botetal16, Vanetal17}. The comparison is performed in three different cases: \begin{enumerate} \item Fourier's solution: $\tau_q=\kappa^2=0.02$ (Fig. \ref{fig:analgk2}), \item MCV's solution: $\tau_q=0.02$, $\kappa^2=0$ (Fig.
\ref{fig:analgk3}), \item Over-damped solution: $\tau_q=0.02$, $\kappa^2=0.2$ (Fig. \ref{fig:analgk4}). \end{enumerate} The dimensionless pulse length is $\tau_\Delta=0.04$ in every case. \begin{figure} \includegraphics[width=12cm,height=7cm]{c1.jpg} \caption{The rear side temperature history considering $\tau_q=\kappa^2=0.02$, using $40$ terms. } \label{fig:analgk2} \end{figure} \begin{figure} \includegraphics[width=12cm,height=7cm]{c2.jpg} \caption{The rear side temperature history considering $\tau_q=0.02$, $\kappa^2=0$, using $200$ terms. } \label{fig:analgk3} \end{figure} \begin{figure} \includegraphics[width=12cm,height=7cm]{c3.jpg} \caption{The rear side temperature history considering $\tau_q=0.02$, $\kappa^2=0.2$, using $10$ terms. } \label{fig:analgk4} \end{figure} \section{Conclusions} The analytic solution of the Guyer-Krumhansl equation is presented, considering a finite heat pulse length on the front side and an adiabatic condition on the rear side. It should be emphasized that a finite spatial region is also considered, which makes the results more applicable to practical cases. The solution is obtained in the form of an infinite sum. It converges quickly to the exact solution in the case of a smooth temperature history. In the case of the MCV equation, $200$ terms are sufficient to model the sharp wavefront. It is easier to define boundary conditions for the field of heat flux and to calculate the temperature field as a consequence. Applying the same idea to numerical codes leads to the shifted field concept described in \cite{KovVan15} and tested in several cases \cite{Botetal16, Vanetal17, KovVan18dpl, KovVan18, KovVan16}. The analytic solution is validated by an explicit numerical method for every parameter domain that can appear in the GK equation. The obtained analytic solution could also be of good use for investigating the entropy production paradox discussed by Barletta and Zanchini \cite{BarZan97a} in connection with Taitel's paradox \cite{Taitel72}. It was highlighted by Zhukovsky \cite{Zhukov16} that the GK equation could violate the maximum principle under over-damped (or over-diffusive) conditions. Here, in the presented solutions, no negative temperature domain appears even in the over-damped region. The next step is to move on to the more difficult case of cooling boundary conditions in order to widen the range of applications. \section{Acknowledgements} \label{ackn} The work was supported by the grant National Research, Development and Innovation Office – NKFIH, NKFIH 124366 and NKFIH 124508. \bibliographystyle{elsarticle-num}
{ "timestamp": "2018-04-17T02:06:39", "yymm": "1804", "arxiv_id": "1804.05225", "language": "en", "url": "https://arxiv.org/abs/1804.05225" }
\section{Introduction} Disc self-gravity (SG) can play a crucial role in the dynamics of accretion discs in AGN or in early-stage protoplanetary (PP) discs; it can allow an outward transfer of angular momentum or result in disc fragmentation, a process which has been linked with the formation of stellar/sub-stellar or planetary bodies. The (inverse) strength of a disc's SG is usually quantified by means of the Toomre parameter \citep{Toomre1964} \begin{equation} Q \equiv \frac{c_s \kappa}{\pi G \Sigma}, \end{equation} where $c_s$ represents the sound speed, $\kappa$ the epicyclic frequency, $G$ is the gravitational constant and $\Sigma$ the surface density. The temperature of the disc strongly affects the outcome of gravitational instability (GI), with cold discs being more susceptible to its onset. Using the simple $\beta$-cooling prescription, \citet{Gammie2001} showed that disc fragmentation would be triggered if the cooling timescale $\tau_c$ of the disc obeyed \begin{equation} \beta \equiv \tau_c \Omega \lesssim 3, \end{equation} although more recent numerical works have found somewhat different threshold values \citep{Riceetal2003, Riceetal2005}. Furthermore, \citet{Paardekooper2012} found the fragmentation process to be of a stochastic nature, with its probability simply decreasing with longer cooling times. Cooling less efficient than $\beta \simeq 3$, on the other hand, results in the development of a self-sustaining gravito-turbulent state, where the Toomre parameter is maintained at roughly $Q\sim1$ by the opposing actions of shock dissipation and cooling. The self-sustenance of this turbulent state, which can be thought of as a sub-critical instability, however requires a continuous extraction of energy from the background flow, the mechanism of which is still unclear. This work addresses the question of gravito-turbulence self-sustenance by exploring the possibility of zonal flows being involved in the process. Zonal flows are coherent structures of axisymmetric nature exhibiting alternating bands, with an axisymmetric slow mode instability believed to be involved in their formation in discs \citep{VanonOgilvie2017}. They represent equilibrium solutions to the equations of disc flow dynamics in the presence of a geostrophic balance between the pressure gradient and the Coriolis force. Zonal flows occur frequently in astrophysical fluids; the closest example is Earth's atmosphere, where the resulting sharp temperature gradients may trigger cyclones and precipitation. They are also observed in the atmospheres of other Solar System planets, the most notable example being Jupiter's striped pattern. Zonal flows have also been observed in simulations of MHD turbulent accretion discs \citep{Johansenetal2009, Simonetal2012, KunzLesur2013, BaiStone2014} performed using the shearing box approximation; these works found the emergence of zonal flows to be independent of both initial conditions and box size. The incompressible inviscid hydrodynamical calculations by \citet{Lithwick2007, Lithwick2009} -- carried out using the shearing sheet model in non-SG conditions -- showed that zonal flows can be broken up into vortices by the action of the Kelvin-Helmholtz (KH) instability. These vortices, observed in several other simulations \citep[e.g.][]{UmurhanRegev2004, JohnsonGammie2005}, appear to be long-lived despite the modest Reynolds numbers that can be applied in numerical simulations of accretion discs.
More refined work on the stability of zonal flows by \citet{VanonOgilvie2016} -- carried out in compressible and SG conditions -- found that, as well as being affected by the KH instability as identified by \citet{Lithwick2007}, zonal flows can also be gravitationally unstable for $Q \lesssim 2$. The aim of this paper is to investigate whether zonal flows play a central role in the self-sustenance of gravito-turbulence, and how this self-sustaining process is maintained. This analysis is carried out by means of a 2D pseudo-spectral code specially written for this work; the code makes use of the shearing sheet model, with the modelled flow being fully compressible, viscous and self-gravitating. A simple $\beta$-cooling prescription is also employed, as well as a horizontal thermal diffusion. Section~\ref{sec:casper} introduces the equations solved by the pseudo-spectral method employed, together with its specifications. The main results are presented in Section~\ref{sec:results}, with particular emphasis given to the mechanism of self-sustenance of the turbulent state; the implications of these findings and the concluding remarks are presented in Section~\ref{sec:discussion}. \section{The \texttt{CASPER} code} \label{sec:casper} For the purpose of this analysis, a pseudo-spectral code based on the shearing sheet model was developed. The code, which was named \texttt{CASPER}, takes into account the self-gravity of the disc and solves fully compressible, viscous non-linear equations for the evolution of the flow. \texttt{CASPER} makes use of a third-order Runge-Kutta iteration method, representing the best compromise between accuracy and performance. Several reasons were at the root of the decision to employ a pseudo-spectral code to tackle this problem. One of these is the obvious selling point of spectral methods' accuracy, which allows for a much faster (i.e. exponential) error convergence than other methods. Furthermore, spectral methods' affinity for systems with periodic boundary conditions (which are applied in both $x$ and $y$ in this case) and their ease in dealing with disc self-gravity represented two further strong advantages of this choice. The method's ability to resolve and analyse each individual wavelength independently was also a useful tool for this work, allowing a close comparison to previous zonal flow stability analyses \citep{VanonOgilvie2016, VanonOgilvie2017}. Of course spectral methods do come with their own weaknesses; in particular, and most relevantly for this problem, their difficulty in resolving shocks. The problem was however circumvented with the use of viscosities, as explained in more detail in Section~\ref{sec:specs}. \subsection{The shearing sheet model} The \texttt{CASPER} code is based upon the local shearing sheet model first used by \citet{GoldreichLynden-Bell1965} in the context of galactic discs. The model employs a Cartesian frame of reference with periodic boundary conditions for both spatial coordinates, and it is centred around the fiducial radius $R_0$. The sheet, which has dimensions $L_x$ and $L_y$ obeying $L_x$, $L_y \ll R_0$, corotates with the disc.
In this corotating frame of reference, a viscous, compressible flow is described by the continuity and Navier-Stokes equations: \begin{equation} \label{eq:continuity-sigma} \frac{\partial \Sigma}{\partial t} + \nabla \cdot \left(\Sigma \boldsymbol{\varv}\right) = 0, \end{equation} \begin{equation} \frac{\partial \boldsymbol{\varv}}{\partial t} + \boldsymbol{\varv} \cdot \nabla \boldsymbol{\varv} + 2\boldsymbol{\Omega} \times \boldsymbol{\varv} = - \nabla \Phi - \nabla \Phi_{d,m} - \frac{1}{\Sigma}\nabla P - \frac{1}{\Sigma}\nabla \cdot \boldsymbol{T}, \end{equation} where $\boldsymbol{\varv}$ is the flow velocity vector, $\boldsymbol{\Omega} = \Omega \boldsymbol{e}_z$ the angular velocity of the disc ($\boldsymbol{e}_z$ being the unit vector parallel to the $z$-axis), $\Phi = -q \Omega^2 x^2$ the effective potential, $q=-\mathrm{d}\ln \Omega/\mathrm{d}\ln r$ the dimensionless shear rate (its value being $q=3/2$ for Keplerian discs), $\Phi_{d,m}$ the disc potential being evaluated at the disc midplane and P the 2-dimensional pressure. Also, \begin{equation} \boldsymbol{T} = 2\mu_s \boldsymbol{S} + \mu_b \left(\nabla \cdot \boldsymbol{\varv}\right) \boldsymbol{I} \end{equation} represents the stress tensor for the shear ($\mu_s$) and bulk ($\mu_b$) dynamic viscosities ($\mu_i = \Sigma \nu_i$, with $\nu_i$ being the corresponding kinematic viscosity), $\boldsymbol{I}$ is the unit tensor and $\boldsymbol{S}$ represents the traceless shear tensor, which is given by \begin{equation} \boldsymbol{S} = \frac{1}{2} \left[ \nabla \boldsymbol{\varv} + \left(\nabla \boldsymbol{\varv}\right)^T \right] - \frac{1}{3} \left(\nabla \cdot \boldsymbol{\varv}\right) \boldsymbol{I}. \end{equation} The continuity equation (Equation~\ref{eq:continuity-sigma}) is transformed, by means of the introduction of the quantity $h=\ln \Sigma + \mathrm{const}$, to \begin{equation} \frac{\partial h}{\partial t} + \boldsymbol{\varv}\cdot \nabla h + \nabla \cdot \boldsymbol{\varv} = 0. \end{equation} The self-gravity of the disc is regulated by the $\nabla \Phi_\mathrm{d,m}$ term; this can be evaluated using the Poisson equation \begin{equation} \nabla^2 \Phi_\mathrm{d} = 4\pi G\Sigma \delta (z), \end{equation} with $\delta(z)$ representing the Dirac delta function and $z$ being the height from the disc midplane. The solution to the equation is most easily expressed in Fourier space, with the full form of the disc potential being \begin{equation} \tilde{\Phi}_\mathrm{d} = - \frac{2\pi G \tilde{\Sigma}}{\sqrt{k_x^2 + k_y^2}} \mathrm{e}^{- \lvert \boldsymbol{k}\rvert \lvert z\rvert}, \end{equation} where $\tilde{\Sigma}$ represents the Fourier transform of the respective quantity, and with $k_x$ and $k_y$ being the radial and azimuthal wavenumbers. While this expression is a function of the height from the disc midplane $z$, the midplane form of the disc potential can be readily found by setting $z=0$, obtaining \begin{equation} \tilde{\Phi}_\mathrm{d,m} = - \frac{2\pi G\tilde{\Sigma}}{\sqrt{k_x^2 + k_y^2}}. 
\end{equation} Other important quantities in the analysis include the potential vorticity $\zeta$ and the specific entropy $s$, which are given by: \begin{equation} \zeta = \frac{2\Omega + \left(\nabla \times \boldsymbol{\varv}\right)_z}{\Sigma}, \end{equation} \begin{equation} s = \frac{1}{\gamma} \ln P - \ln \Sigma, \end{equation} where $\gamma$ represents the adiabatic index (which is taken as\footnote{Although $\gamma=2$ does not necessarily represent the most physically realistic scenario, it offers a direct comparison with much of the literature, which have adopted this value after \citet{Gammie2001}. } $\gamma=2$ in this analysis) and the pressure $P$ is given by \begin{equation} P = (\gamma-1) \Sigma e; \end{equation} here $e$ represents the specific internal energy, whose temporal evolution is dictated by \begin{equation} \frac{\partial e}{\partial t} + \boldsymbol{\varv}\cdot \nabla e = - \frac{P}{\Sigma} \nabla \cdot \boldsymbol{\varv} + 2\nu_s \boldsymbol{S}^2 + \nu_b \left( \nabla \cdot \boldsymbol{\varv}\right)^2 + \frac{1}{\Sigma} \nabla \cdot \left( \nu_t \Sigma \nabla e\right) - \frac{e}{\tau_c}. \end{equation} Three types of diffusive effects feature in the equation: bulk ($\nu_b$) and shear ($\nu_s$) viscosities, and (horizontal) thermal diffusion ($\nu_t$); a constant $\beta$-cooling time $\tau_c$ is also considered. During the simulation runs, the flow evolves away from its background state. It is therefore useful to express each quantity as a sum of its background state and its departure away from it, e.g. $\Sigma = \Sigma_0 + \Sigma^\prime$ (with $\Sigma_0$ representing the background state value and $\Sigma^\prime$ the departure), or $\boldsymbol{\varv} = \boldsymbol{\varv}_0 + \boldsymbol{\varv}^\prime$ (with $\boldsymbol{\varv}_0 = (0,-q\Omega x,0)^T$ and $\boldsymbol{\varv}^\prime = (u^\prime,\varv^\prime,0)^T$). The departure from the background state can then be described by the following set of equations: \begin{equation} \mathrm{D}h^\prime = - \left(\partial_x u^\prime + \partial_y \varv^\prime\right) \equiv - \Delta, \end{equation} \begin{multline} \mathrm{D}u^\prime - 2\Omega \varv^\prime = - \partial_x \Phi_\mathrm{d,m}^\prime - (\gamma-1) \left(\partial_x e^\prime + e\, \partial_x h^\prime \right) \\+ \nu_s \nabla^2 u^\prime + \left(\nu_b + \frac{1}{3}\nu_s\right) \partial_x \Delta + T_{xx}\partial_x h^\prime + T_{xy} \partial_y h^\prime , \end{multline} \begin{multline} \mathrm{D}\varv^\prime + (2-q)\Omega u^\prime = - \partial_y \Phi_\mathrm{d,m}^\prime - (\gamma-1)\left(\partial_y e^\prime + e\, \partial_y h^\prime\right) \\+ \nu_s \nabla^2 \varv^\prime + \left(\nu_b + \frac{1}{3}\nu_s\right) \partial_y \Delta + T_{yx} \partial_x h^\prime + T_{yy} \partial_y h^\prime , \end{multline} \begin{equation} \label{eq:e-full} \mathrm{D}e^\prime = - (\gamma-1)e \Delta + \nu_s U + \left(\nu_b - \frac{2}{3}\nu_s\right) \Delta^2 + \nu_t \nabla^2 e^\prime - \frac{e}{\tau_c} . 
\end{equation} Here $\mathrm{D} = \partial_t + u^\prime \partial_x + \varv^\prime \partial_y$ is the Lagrangian derivative, and the above equations have been simplified by the quantities $T_{xx}$, $T_{xy}$, $T_{yx}$, $T_{yy}$ and $U$, which are given by \begin{equation} T_{xx} = 2\nu_s \partial_x u^\prime + \left( \nu_b - \frac{2}{3}\nu_s \right) \Delta, \end{equation} \begin{equation} T_{yy} = 2\nu_s \partial_y \varv^\prime + \left( \nu_b - \frac{2}{3}\nu_s \right) \Delta, \end{equation} \begin{equation} T_{xy}=T_{yx} = \nu_s \left( -q\Omega + \partial_x \varv^\prime + \partial_y u^\prime\right), \end{equation} \begin{equation} U = 2 \left(\partial_x u^\prime\right)^2 + 2\left(\partial_y \varv^\prime\right)^2 + \left(-q\Omega + \partial_x \varv^\prime + \partial_y u^\prime\right)^2. \end{equation} It is possible to notice through Equation~\ref{eq:e-full} that the background state of the flow does not represent a steady state solution of the equations. For this reason, as discussed in more detail in Section~\ref{sec:turb-visc}, where viscous and thermal effects are also considered, the initial state is not in thermal equilibrium. \subsubsection{Stresses} Another quantity of interest in analysing the flow dynamics is the stress tensor. In particular, its Reynolds (or hydrodynamical) and gravitational components are of interest, as they can be used to estimate the amount of angular momentum transport $\alpha$. We consider spatially averaged gravitational and Reynolds stresses, $\left\langle G_{xy}\right\rangle$ and $\left\langle H_{xy}\right\rangle$ respectively, which are given by \begin{align} \left\langle G_{xy}\right\rangle = & \frac{1}{L_x L_y} \int \int G_{xy} \, \mathrm{d}x \, \mathrm{d}y \nonumber \\ = &- \frac{1}{4\pi G} \sum_k \frac{k_x k_y}{\lvert \boldsymbol{k}\rvert} \left\lvert \tilde{\Phi}_{d,m} (\boldsymbol{k})\right\rvert^2, \end{align} \begin{align} \left\langle H_{xy} \right\rangle = & \frac{1}{L_x L_y} \int \int H_{xy} \, \mathrm{d}x \, \mathrm{d}y \nonumber \\ = & \frac{1}{L_x L_y} \int \int \Sigma u \varv \, \mathrm{d}x \, \mathrm{d}y, \end{align} where $G_{xy}$ and $H_{xy}$ are the respective local stresses. Once the stresses have been obtained, the Reynolds stress having been calculated in real space to avoid a Fourier convolution, they can be used to calculate the value of $\alpha$ according to \begin{equation} \label{eq:stress-alpha} \alpha = \frac{\left\langle G_{xy} + H_{xy}\right\rangle}{qP}. \end{equation} \subsection{Specifications} \label{sec:specs} \subsubsection{Diffusive processes} As mentioned previously, the analysis conducted with \texttt{CASPER} employs three types of diffusive processes: bulk and shear viscosities, and (horizontal) thermal diffusion. All three kinematic diffusion coefficients are taken to be independent of radius, temperature or surface density for reasons of simplicity. This also ensures that the flow is viscously stable, as \citep{LightmanEardley1974} \begin{equation} \frac{\partial \left(\nu \Sigma\right)}{\partial \Sigma} > 0. \end{equation} The kinematic diffusion coefficients are therefore initialised as constants and are expected to retain their initial values for the remainder of the simulation. However, if the spatial resolution is not sufficiently high to allow strong shocks to be appropriately resolved using that viscous configuration, the code can increase the viscosity coefficients to avoid numerical errors.
This is done by continuously identifying the largest $x$- and $y$-velocities in the flow \begin{align} U_\mathrm{max} = &\, \lvert u\rvert_\mathrm{max} + c_s,\nonumber \\ V_\mathrm{max} = &\, \lvert \varv\rvert_\mathrm{max} + c_s, \end{align} and checking whether the initial viscosity is larger than the viscosity needed to resolve flows moving at $U_\mathrm{max}$ or $V_\mathrm{max}$ at the given spatial resolution. Regardless of the existence of this safety measure, employed to avoid numerical artefacts such as the Gibbs phenomenon, it is important to stress that steps have been taken to make sure the viscosity coefficients remain constant, with the rare deviations not exceeding $5\%$ of the initial value. \subsubsection{Time stepping} The time step of each simulation $\Delta t$ was likewise continuously adapted to the evolving flow to ensure stability according to \begin{equation} \Delta t = \min \left( \tau_\mathrm{visc}, \min\left( \tau_{\mathrm{adv,}x}, \tau_{\mathrm{adv,}y}\right) \right), \end{equation} where $\tau_\mathrm{visc}$, $\tau_{\mathrm{adv,}x}$ and $\tau_{\mathrm{adv,}y}$ are the viscous and the radial and azimuthal advection timescales, respectively, given by \begin{align} & \tau_\mathrm{visc} \simeq C_\nu \frac{1}{\nu k_\mathrm{max}^2}, \nonumber \\ & \tau_{\mathrm{adv,}x} = C_\mathrm{CFL} \frac{\Delta x}{U_\mathrm{max}}, \\ & \tau_{\mathrm{adv,}y} = C_\mathrm{CFL} \frac{\Delta y}{V_\mathrm{max}} \nonumber. \end{align} Here $k_\mathrm{max}$ is the largest wavenumber resolved, and $C_\nu$ and $C_\mathrm{CFL}$ are safety factors, the latter being controlled by the Courant-Friedrichs-Lewy (CFL) condition. \subsubsection{Anti-aliasing} Another of \texttt{CASPER}'s features is the presence of an anti-aliasing filter, which can easily be turned on or off. This is particularly important in the periodic boundary condition setting used, as radial wavenumbers exceeding the maximum absolute values set by the run parameters are remapped back on the other side of the $k_x$ range using the classical \begin{equation} k_x(t) = k_x(0) + k_y q\Omega t. \end{equation} This causes trailing waves exceeding the largest resolved wavenumber to be remapped as leading waves and vice versa. For this purpose, \texttt{CASPER} employs a truncation (or 2/3-rule) anti-aliasing method, where shearing waves having wavenumbers exceeding 2/3 of the largest resolved wavenumber are discarded. This results in a continuous, but minimal, energy loss. Although useful, such a method is only able to remove aliasing errors arising from quadratic non-linearities. This means that -- while the method would be able to completely eliminate aliasing errors in an incompressible, non-gravitational flow -- in the gravitational, compressible case errors resulting from cubic or higher order non-linearities remain. \subsection{Shock resolving test} Several tests were carried out to ensure the code worked as expected on simplified problems, before tackling the one at hand. One such test was to verify the ability of the code to handle and resolve shocks, given spectral methods' known weakness in dealing with flow discontinuities. \begin{figure} \includegraphics[width=\columnwidth]{./figures/capture_shocks.eps} \caption{Comparison between the Mach numbers obtained from $h$ and $P$ to test the efficiency of the \texttt{CASPER} code to resolve shocks. The dashed line represents the ideal case where $\mathcal{M}_h = \mathcal{M}_P$, while the shaded area encloses values falling within $10\%$ of this ideal case.
\subsection{Shock resolving test} Several tests were carried out to ensure the code worked as expected on simplified problems, before tackling the one at hand. One such test was to verify the ability of the code to handle and resolve shocks, given spectral methods' known weakness in dealing with flow discontinuities. \begin{figure} \includegraphics[width=\columnwidth]{./figures/capture_shocks.eps} \caption{Comparison between the Mach numbers obtained from $h$ and $P$ to test the efficiency of the \texttt{CASPER} code to resolve shocks. The dashed line represents the ideal case where $\mathcal{M}_h = \mathcal{M}_P$, while the shaded area encloses values falling within $10\%$ of this ideal case. The code is seen to handle shocks adequately, with its performance not deteriorating for stronger shocks.} \label{fig:shock-test} \end{figure} The test was carried out by computing the Mach numbers for specific shocks using both $h$ and the pressure $P$ from the Rankine-Hugoniot conditions. The flow considered was fully self-gravitating, with $Q\sim 1$ and random initial conditions (as further explained in Section~\ref{sec:ic}). In an ideal case, the two Mach number values $\mathcal{M}_h$ and $\mathcal{M}_P$ should of course be equal. Figure~\ref{fig:shock-test} illustrates the result of this basic test; most of the data points lie close to the dashed line, which illustrates the ideal case $\mathcal{M}_h=\mathcal{M}_P$. In fact, all but three points lie within the shaded region, which represents values within $10\%$ of the ideal case. Furthermore, the accuracy of the code's shock resolution seems unaffected by the strength of the shock, which confirms the ability of the code to deal with the problem at hand. \subsection{Initial conditions} \label{sec:ic} Similarly to other works, the runs presented here are initialised with random velocity initial conditions (ICs). The spectrum of the applied velocity perturbations, which obeys a uniform distribution in the range $[-0.5,0.5]$ and is scaled by a factor $10^{-3} c_s$, is bounded by minimum and maximum wavenumbers according to \begin{align} k_\mathrm{min}=&\,\frac{2\pi}{L},\nonumber \\ k_\mathrm{max}=&\,32 k_\mathrm{min}. \end{align} Density and internal energy, on the other hand, are kept uniform with their background values being \begin{align} \label{eq:h0} h_0 = &\,\ln \Sigma_0 = 0, \\ \label{eq:e0} e_0 = &\,\frac{c_s^2}{\gamma(\gamma-1)}. \end{align} These ICs aim to mimic early residual disc turbulence following the collapse of its parent cloud \citep{GodonLivio2000}. Unlike other works employing random velocity perturbation ICs \citep{JohnsonGammie2005, Shenetal2006}, no incompressibility condition was applied in this instance. The runs examined in this work are carried out on a $1024\times 1024$ grid with a square box of dimension $L=8\pi (\pi G\Sigma_0)/\Omega^2$. The box parameters are such that the value of the intrinsic shear viscosity is $\alpha_s\approx 0.004$; as mentioned previously, the code will not increase this value by more than $\sim5\%$. The adiabatic index is set to $\gamma=2$, and ranges of the initial Toomre parameter ($1\leq Q_0 \leq 2$) and of the cooling timescale ($7 \leq \tau_c \Omega \leq 15$) are used.
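One way of realising these ICs is sketched below in Python. Whether the uniform amplitudes are drawn in real or in spectral space is not specified above, so real-space white noise followed by a spectral band-pass is an assumption made here for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, cs, gamma = 1024, 1.0, 2.0
L = 8 * np.pi              # box size in units with pi*G*Sigma_0 = Omega = 1
kmin, kmax = 2 * np.pi / L, 32 * 2 * np.pi / L

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing='ij')
band = (np.hypot(KX, KY) >= kmin) & (np.hypot(KX, KY) <= kmax)

def velocity_perturbation():
    # uniform white noise in [-0.5, 0.5], band-limited to [kmin, kmax]
    # and scaled by 1e-3 c_s
    noise = rng.uniform(-0.5, 0.5, (N, N))
    return 1e-3 * cs * np.fft.ifft2(np.fft.fft2(noise) * band).real

u0, v0 = velocity_perturbation(), velocity_perturbation()
h0 = np.zeros((N, N))                                # h_0 = ln Sigma_0 = 0
e0 = np.full((N, N), cs**2 / (gamma * (gamma - 1)))  # Equation (eq:e0)
\end{verbatim}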
\section{Results} \label{sec:results} Having applied the initial conditions mentioned above, the flow was allowed to evolve freely for tens of orbits. The system quickly settled into a self-sustaining state with an average Toomre parameter $\overline{Q} \equiv \sqrt{\gamma (\gamma-1) \bar{e}} \approx 2$ as shown in Figure~\ref{fig:Qbar} (the expression follows from Equation~\ref{eq:e0} and the use of gravitational units, set such that $c_s=\pi G\Sigma_0=1$), with the quantity $\bar{e}=\tilde{e}\left(k_x=0,k_y=0\right)$ representing the mean internal energy. Runs with different box properties showed no significant difference in the average turbulent $Q$. Owing to the system not being in thermal equilibrium with the applied ICs, the disc is observed to cool at first ($\tau_c\Omega=12$ for this run), until the heat generated by the GI reverses the trend at $t\Omega \approx 7$. As $\overline{Q}$ increases, the amount of viscous heating produced by the smoothing of shocks decreases, until the system settles into a state of self-regulating gravito-turbulence with $\overline{Q}\approx 2$ starting from $t\Omega \approx 50$. This gravito-turbulent state, which features recurring weaker heating events on a characteristic timescale of $\sim50 \Omega^{-1}$ ($\sim 8$ orbits), persists until the end of the run. \begin{figure*} \centering \includegraphics[width=.8\textwidth]{./figures/Qbar} \caption{Evolution of the average Toomre parameter $\overline{Q}$ as a function of time with $\tau_c\Omega=12$, showing the system settling into a self-regulated state of gravito-turbulence following an initial period of cooling.} \label{fig:Qbar} \end{figure*} The main question addressed in this work is what mechanism allows the gravito-turbulent state to be self-sustaining. While axisymmetric shearing waves dominate the dynamics for $Q<1$, in this case these are not present as the disc is not sufficiently cool. The shearing sheet model used here only allows non-axisymmetric shearing waves to be transiently amplified. The system is linearly stable to non-axisymmetric perturbations, as in the linear regime viscous effects quickly quench such transient growths. It is however possible for the system to continuously regenerate transiently growing shearing waves by means of a coupling with a non-linear feedback. This coupling would make it possible to continuously extract energy from the background flow and feed it into the non-axisymmetric GI, allowing a sustained state of gravito-turbulence to survive. \begin{figure} \includegraphics[width=\columnwidth]{./figures/entropy_structure_low} \caption{Snapshot of the entropy field showing the presence of a nearly axisymmetric structure.} \label{fig:flow_zf} \end{figure} \begin{figure*} \centering \hspace*{-.24cm}\includegraphics[width=145mm]{./figures/Q_L8pi_N1024}\\ \hspace*{1.2cm} \includegraphics[width=165mm]{./figures/kx_L8pi_N1024_pv_hot_crop}\\ \hspace*{1.1cm} \includegraphics[width=165mm]{./figures/kx_L8pi_N1024_s_hot_crop} \caption{Temporal evolution of $\overline{Q}$ (top) and the axisymmetric power spectrum maps for potential vorticity (middle) and entropy (bottom) in the interval $90 \leq t\Omega \leq 180$ for $\tau_c\Omega=10$. The power maps show the wavenumbers $k_x \pi G\Sigma_0/\Omega^2=1.00$ and $1.25$ dominating over the other components, becoming especially prominent during heating events.} \label{fig:2zf} \end{figure*} Analysing the behaviour of the flow during the gravito-turbulent state can provide the first hints regarding how such a state is maintained. Figure~\ref{fig:flow_zf} shows the spatial structure of the entropy field and reveals a nearly axisymmetric structure (henceforth called a zonal flow), which is also present in the potential vorticity. The zonal flow, which in this case has a wavenumber $k_x \pi G\Sigma_0/\Omega^2=1.25$, is found to persist while the system is in its gravito-turbulent state. In fact, the structure is observed to be disrupted and reformed again on a timescale comparable to that of the heating events seen in Figure~\ref{fig:Qbar}. This correlation between the heating events and the zonal flow evolution is illustrated in Figure~\ref{fig:2zf}, showing the time evolution of $\overline{Q}$ (top) as well as the axisymmetric power spectrum maps for PV (middle) and entropy (bottom).
Figure~\ref{fig:2zf} shows that there are in fact two dominating axisymmetric $k_x$ values, and that their values evolve with the flow, although they remain close to $k_x \pi G\Sigma_0/\Omega^2 \sim 1$. Runs with different box sizes and lower resolutions, not presented here, confirmed that the dominant wavenumbers possess $k_x \pi G\Sigma_0/\Omega^2 \sim1$, indicating that the preference for these zonal flow wavelengths is dictated by the intrinsic flow behaviour, not by the box properties. \subsection{Turbulent viscosity} \label{sec:turb-visc} The ability of the flow to self-sustain at a roughly constant value of $Q$, and therefore at a roughly constant temperature (by means of the relationship $c_s \propto T^{1/2}$), indicates that the system has reached a state of thermal equilibrium in its gravito-turbulent regime. However, the intrinsic viscosities and cooling timescale used in the initial conditions do not allow thermal balance. In fact, according to the thermal equilibrium condition \begin{equation} \label{eq:thermal-eq} \alpha_s \tau_c = \frac{1}{q^2 \Omega (\gamma-1)}, \end{equation} (which is derived by balancing shear viscous heating against cooling) the initial shear viscosity $\alpha_s \sim 0.004$ would only have been thermally balanced by a cooling timescale as long as $\tau_c \Omega \sim 100$. Instead, the cooling timescales used are of the order of $\tau_c \Omega \sim 10$, so the system is thermally unbalanced to begin with; this explains the initial cooling period in Figure~\ref{fig:Qbar}. It is therefore clear that an additional source of viscosity, turbulent in nature, is allowing the system to achieve thermal equilibrium for $\tau_c\Omega \sim 10$. This is confirmed by calculating the effective shear viscosity $\alpha_\mathrm{eff} = \alpha_\mathrm{turb} + \alpha_\mathrm{init}$, where $\alpha_\mathrm{init}$ is the initial shear viscosity and $\alpha_\mathrm{turb}$ the turbulent component given by Equation~\ref{eq:stress-alpha}. The results are found to match the thermal equilibrium condition (Equation~\ref{eq:thermal-eq}) very well. In the same way that the turbulent motions create an enhanced shear viscosity, the horizontal thermal diffusion would also be boosted. Estimating the turbulent component of $\nu_t$ is however much harder than for $\nu_s$, as \texttt{CASPER} was not designed to compute detailed heat transport. Instead, the turbulent thermal diffusion is estimated by means of the turbulent Prandtl number \begin{equation} \mathrm{Pr}_\mathrm{turb} = \frac{\nu_{s,\mathrm{turb}}}{\nu_{t,\mathrm{turb}}} \sim \frac{H_{xy} + G_{xy}}{H_{xy}}. \end{equation} This estimate rests on two assumptions: that fluid motions transport heat and angular momentum in similar ways, and that gravitational interactions carry angular momentum much more efficiently than they transport heat. The value of the turbulent Prandtl number is found to fluctuate around $\mathrm{Pr}\sim 2$.
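The following Python fragment makes this bookkeeping concrete. The stress values are placeholders chosen only to match the magnitudes quoted in the text ($\alpha_\mathrm{eff}\sim 0.03$--$0.04$, $\mathrm{Pr}_\mathrm{turb}\sim 2$), not measured run data.
\begin{verbatim}
# Placeholder spatially averaged stresses <G_xy>, <H_xy> and mean pressure
G_xy, H_xy, P = 0.03, 0.03, 1.0
q, gamma, Omega = 1.5, 2.0, 1.0
alpha_init, tau_c = 0.004, 10.0 / Omega

alpha_turb = (G_xy + H_xy) / (q * P)   # Equation (eq:stress-alpha)
alpha_eff = alpha_turb + alpha_init

# Thermal balance requires alpha_s * tau_c = 1 / (q^2 Omega (gamma - 1))
alpha_required = 1.0 / (q**2 * Omega * (gamma - 1.0) * tau_c)
print(alpha_eff, alpha_required)       # 0.044 vs 0.0444...

# Turbulent Prandtl number estimate: Pr_turb ~ (H_xy + G_xy) / H_xy
print((H_xy + G_xy) / H_xy)            # ~2, as found in the runs
\end{verbatim}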
\subsection{Sustenance through slow mode instability} \label{sec:axi} \begin{figure*} \centering \hspace*{-.3mm}\includegraphics[width=140.5mm]{./figures/full_tc13_Q1-3_L10pi_N1024_Q-noxtics-label}\\ \includegraphics[width=140mm]{./figures/full_tc13_Q1-3_L10pi_N1024_alpha-noxtics-label}\\ \hspace*{2.35mm}\includegraphics[width=143.9mm]{./figures/full_tc13_Q1-3_L10pi_N1024_Pr-label}\\ \vspace*{.5cm} \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_1} \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_2} \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_3}\\ \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_4} \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_5} \includegraphics[width=48mm]{./figures/full_tc13_Q1-3_L10pi_N1024_5b} \caption{Temporal evolution of $Q$, $\alpha_\mathrm{eff}$ and $\mathrm{Pr}_\mathrm{turb}$ (top) during a heating event including six points marked \textit{a--f} in a run with a cooling timescale $\tau_c \Omega =13$, and the corresponding slow mode instability regions at these snapshots (bottom). The red data points represent the dominating zonal flow wavenumbers in the same run. The shaded area illustrates the region where $\omega_0^2<0$ and the flow is therefore axisymmetrically unstable.} \label{fig:axi-casper-grid} \end{figure*} As observed in the power spectrum maps of Figure~\ref{fig:2zf}, the zonal flows appear to be periodically regenerated throughout the run, rather than decaying gradually. This hints at the possibility of an instability acting to grow these axisymmetric structures during heating events. One plausible candidate is the axisymmetric slow mode instability discussed in \citet{VanonOgilvie2017}, which was shown to generate axisymmetric structures at intermediate wavelengths if the disc was sufficiently cool. The critical temperature was however found to depend on disc parameters such as the cooling timescale, effective viscosity, adiabatic index and Prandtl number. The growth rate $\lambda_\mathrm{sm}$ of such an instability was found to be given by \begin{equation} \lambda_\mathrm{sm} = \frac{c_1 + c_4}{2} \pm \sqrt{\frac{\left(c_1-c_4\right)^2}{4}+c_2c_3}, \end{equation} where the coefficients $c_1$, $c_2$, $c_3$ and $c_4$ originate from the linearised equations for the temporal evolution of the zonal flow amplitudes, once these are rewritten as the system \begin{align} \label{eq:slow-system} \partial_t A_s = & \, c_1 A_s + c_2 A_\zeta, \nonumber \\ \partial_t A_\zeta = & \, c_3 A_s + c_4 A_\zeta, \end{align} so that $\lambda_\mathrm{sm}$ is simply an eigenvalue of the $2\times 2$ coefficient matrix of Equation~\ref{eq:slow-system}. Here $A_s$ and $A_\zeta$ represent the dimensionless zonal flow amplitudes in the specific entropy and in the potential vorticity, respectively, and are given by \begin{align} A_s = & \, \frac{1}{\gamma} \left(A_e + A_h\right), \nonumber \\ A_\zeta = & \, \frac{k A_v}{(2-q)\Omega} - A_h. \end{align} The analysis by \citet{VanonOgilvie2017} found the coefficients appearing in Equation~\ref{eq:slow-system} to be given by \begin{align} c_1 = & \, \frac{\gamma_t \left(c_s^2 k^2 (\gamma-1) - \gamma \omega_0^2\right) + \gamma_s q \kappa^2 \gamma(\gamma-1)}{\gamma \omega_0^2}, \nonumber \\ c_2 = & \, \frac{\kappa^2(\gamma-1) \left[\gamma_t c_s^2/\gamma + \gamma_s q/k^2\left(\kappa^2-\omega_0^2\right)\right]}{c_s^2 \omega_0^2} , \nonumber \\ c_3 = & \, -\frac{4\gamma_s c_s^2 k^2 (q-1)\Omega^2}{\kappa^2 \omega_0^2}, \nonumber \\ c_4 = & \, -\frac{\gamma_s \left(\omega_0^2 + 4(q-1)\Omega^2\right)}{\omega_0^2}. \end{align} Here $\gamma_s = \nu_s k^2$ and $\gamma_t = \nu_t k^2 + 1/\tau_c$ are dissipative coefficients and $\omega_0^2 = \kappa^2 - 2\pi G\Sigma_0 k + c_s^2 k^2$.
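As a quick numerical illustration that $\lambda_\mathrm{sm}$ is indeed an eigenvalue of this coefficient matrix, one can evaluate $c_1$--$c_4$ for illustrative parameter values (the numbers below are not taken from the paper's runs; the epicyclic frequency $\kappa^2=2(2-q)\Omega^2$ is the standard shearing-sheet relation):
\begin{verbatim}
import numpy as np

gamma, q, Omega = 2.0, 1.5, 1.0
cs, G_Sigma = 1.0, 1.0 / np.pi      # units with c_s = pi*G*Sigma_0 = 1
k = 1.25                            # zonal-flow wavenumber
nu_s, nu_t, tau_c = 0.01, 0.005, 10.0
kappa2 = 2.0 * (2.0 - q) * Omega**2

gamma_s = nu_s * k**2
gamma_t = nu_t * k**2 + 1.0 / tau_c
w02 = kappa2 - 2.0 * np.pi * G_Sigma * k + cs**2 * k**2

c1 = (gamma_t * (cs**2 * k**2 * (gamma - 1) - gamma * w02)
      + gamma_s * q * kappa2 * gamma * (gamma - 1)) / (gamma * w02)
c2 = kappa2 * (gamma - 1) * (gamma_t * cs**2 / gamma
      + gamma_s * q / k**2 * (kappa2 - w02)) / (cs**2 * w02)
c3 = -4.0 * gamma_s * cs**2 * k**2 * (q - 1) * Omega**2 / (kappa2 * w02)
c4 = -gamma_s * (w02 + 4.0 * (q - 1) * Omega**2) / w02

# eigenvalues of the 2x2 system reproduce the closed-form lambda_sm
print(np.linalg.eigvals(np.array([[c1, c2], [c3, c4]])))
print((c1 + c4) / 2 + np.sqrt((c1 - c4)**2 / 4 + c2 * c3 + 0j))
\end{verbatim}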
To investigate whether the slow mode instability was at the root of the zonal flow growth, the resulting instability region was monitored during a heating event. Figure~\ref{fig:axi-casper-grid} shows the temporal evolution of the critical varying quantities ($Q$, $\alpha_\mathrm{eff}$ and $\mathrm{Pr}_\mathrm{turb}$), as well as the resulting slow mode instability region in the $kc_s/\Omega$--$Q$ plane at six time points (\textit{a}--\textit{f}) for a run with $\tau_c\Omega =13$. The instability regions also feature red points indicating the two dominating zonal flow wavenumbers (converted into acoustic units) at that particular snapshot. The time sequence shows how the evolution of $\alpha_\mathrm{eff}$ and $\mathrm{Pr}_\mathrm{turb}$ heavily affects the size of the instability region. This is particularly sensitive to $\alpha_\mathrm{eff}$, as shown by peaks in the viscosity (coincident with troughs in $Q$) driving deeply unstable conditions in the disc (snapshots \textit{b}, \textit{d}). The mechanism of the instability is particularly clear in frames \textit{d}--\textit{f}. As the system cools down, the viscosity receives a boost to dissipate strong shocks, driving strongly unstable conditions stretching up to $Q\sim4$ (snapshot \textit{d}). As the disc warms up due to this viscous heating, the value of $\alpha_\mathrm{eff}$ gradually drops, continually shrinking the instability region (\textit{e}), eventually allowing the system to regain stability (\textit{f}). The fact that the slow mode instability plays a part in the self-sustenance of the gravito-turbulent state also explains the presence of two dominating axisymmetric wavenumber modes; in fact, Figure~\ref{fig:axi-casper-grid} shows that these modes' wavenumbers are usually centred around the peak of the instability region, where the growth rate (when the system is unstable) is maximised. While ideally the zonal flow would have the exact wavelength corresponding to the fastest growing mode, the wavenumber quantisation due to the finite size of the box means that two modes (one on either side of the fastest growing mode) are activated instead. It is however important to remember that some approximations have been made in arriving at this somewhat surprising result. These include the assumption that the turbulent viscosity behaves similarly to the laminar disc viscosity and can therefore be modelled in the same way \citep{BalbusPapaloizou1999}, the rough estimation of the turbulent Prandtl number, and the fact that the value of $\alpha_\mathrm{eff}$ is not fixed as it was in the slow mode instability analysis of \citet{VanonOgilvie2017}. The first point is the most important, but it is believed that the use of a local shearing sheet model, coupled with the gravito-turbulent nature of the disc, should minimise (if not remove altogether) the presence of global wave transport, thereby allowing the turbulent disc to be described by the $\alpha$ formalism \citep{BalbusPapaloizou1999}. Lastly, while it is possible for the intrinsic viscosities to trigger the slow mode instability, their small magnitude ($\alpha \sim 0.004$) means this is unlikely to happen.
A confirmation of this is given in Figure~\ref{fig:axi-casper-grid}, where the system fails to trigger the slow mode instability for turbulent shear viscosities of $\alpha_\mathrm{eff} \sim 0.03$ (snapshots \textit{a}, \textit{c}, \textit{f}), which is $\sim 8$ times larger than the intrinsic value. This failure to trigger the axisymmetric instability also takes place with a turbulent Prandtl number of $\mathrm{Pr}_\mathrm{turb}\sim 2$, while the intrinsic Prandtl number would be fixed to unity; as found by \citet{VanonOgilvie2017}, smaller Prandtl numbers further hinder the slow mode instability. \subsection{Disruption by non-axisymmetric instability} \label{sec:non-axi} However, the stability analysis carried out in \citet{VanonOgilvie2016} found that zonal flows of intermediate wavelengths can be disrupted by the action of a non-axisymmetric instability. Figure~\ref{fig:2zf} hints at the presence of such an instability, as the amplitudes of the dominating wavenumbers show intermittent phases of decay during heating events. In fact, a power spectrum map for $k_y\neq 0$ shows that the $k_y \pi G\Sigma_0/\Omega^2=0.25$ mode is also activated during heating events. To better understand the role of this non-axisymmetric mode, whose wavelength matches the box size, a simpler test run was conducted with two sets of ICs. In Case 1 the ICs were entirely composed of an imposed zonal flow with wavenumber $k_x \pi G\Sigma_0/\Omega^2=2.0$, making the system fully axisymmetric; in Case 2, on the other hand, the imposed zonal flow was accompanied by random velocity perturbations as described in Section~\ref{sec:ic} to make the system non-axisymmetric. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/zonalflow_damped} \caption{Temporal evolution of the natural log of the imposed zonal flow's amplitude in a test run with a purely axisymmetric nature (black, dashed line) and in one with a non-axisymmetric nature (red, solid). Both test runs were conducted using $\alpha_\mathrm{init}\sim 0.004$ and $\mathrm{Pr}=3$. After the initial exponential growth period the two test runs diverge, the non-axisymmetric case showing a quenched amplitude compared to its axisymmetric counterpart.} \label{fig:axi-inst} \end{figure} The log of the amplitude of the imposed zonal flow, which grows thanks to the axisymmetric slow mode instability discussed above, is plotted in Figure~\ref{fig:axi-inst} as a function of time for both cases. While both runs show a similar exponential growth stage, they subsequently diverge, with the fully axisymmetric run showing a saturation of the zonal flow amplitude (black, dashed line) and the non-axisymmetric case (red, solid) exhibiting a clear amplitude quenching. This shows that some disruptive non-axisymmetric instability is present, acting to limit the maximum amplitude of the zonal flow for a given set of disc parameters (i.e. $Q$, zonal flow wavelength). To check whether this instability is indeed the same as the one described in \citet{VanonOgilvie2016}, a $kc_s/\Omega$--$A_h$ diagram (where $A_h$ is the zonal flow amplitude in $h$) is constructed using the average values from the non-axisymmetric test run (Case 2). This allows a direct comparison with the results from \citet{VanonOgilvie2016}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/nonaxis_inst_Q1-47_gauss0-8_dot_corrected_narrow_black_k3-2} \caption{Growth rate contours maximised over $k_y$ as a function of the zonal flow amplitude and wavenumber for $Q\simeq 1.57$ and $A_e=0$. The data point represents the average zonal flow properties obtained from the non-axisymmetric test run.} \label{fig:nonaxi-gen} \end{figure} The plot, shown in Figure~\ref{fig:nonaxi-gen}, displays the $k_y$-maximised growth rate contours as a function of the zonal flow properties (wavenumber and amplitude) for a given disc temperature (in this case $Q\simeq 1.57$ is used). The black data point represents the test run Case 2, with the zonal flow wavenumber having been converted to acoustic units. The data point lies across the $\lambda \Omega^{-1} = 0$ contour which, taking into consideration the oscillations in $A_h$ featured in Figure~\ref{fig:axi-inst} and the fact that the wavenumber conversion to acoustic units depends on the similarly oscillating $Q$, gives a strong indication that the non-axisymmetric instability may be at play. \begin{figure} \centering \includegraphics[width=\columnwidth]{./figures/nonaxis_Ah0-0105_k3-2_corrected} \caption{Growth rate contours in the $k_y c_s/\Omega$--$Q^{-1}$ domain for the average zonal flow properties from the non-axisymmetric test run: $Q\simeq 1.57$ and $A_h\simeq 0.0105$. The data point again represents the average position on the domain of the test run's zonal flow. Both the shape and size of the unstable regions depend on $k$ and $A_h$.} \label{fig:nonaxis-ky} \end{figure} A more stringent test to check the activation of the non-linear instability in the gravito-turbulent regime consists in constructing a $k_y c_s/\Omega$--$Q^{-1}$ plot for the specific zonal flow wavelength ($k c_s/\Omega \simeq 3.1$, obtained from converting $k \pi G\Sigma_0/\Omega^2 = 2.0$ into acoustic units using the run's average $Q$ value) and average amplitude from the test run. The plot, shown in Figure~\ref{fig:nonaxis-ky}, illustrates to what $Q$ values the instability stretches for a given non-axisymmetric perturbation wavenumber $k_y$. The data point represents $k_y \pi G\Sigma_0/\Omega^2=0.25$ converted into acoustic units, and lies close to the marginal stability contour. It is important to remember, however, that both the location of the data point and the shape and size of the instability region are functions of the run's parameters. The snapshot shown here is constructed using average values from the run, for which the zonal flow amplitude should not be sufficiently large to trigger the non-axisymmetric instability. It is however apparent that an increase in both $Q^{-1}$ and $A_h$ would likely push the data point into the instability region. The plot also indicates that the non-axisymmetric instability would prefer a longer-wavelength mode (which would be accessible by considering a more azimuthally elongated box) over the $k_y \pi G\Sigma_0/\Omega^2=0.25$ mode activated with the current box parameters. \subsection{Structure regeneration} Given the results from Sections~\ref{sec:axi} and~\ref{sec:non-axi}, it seems clear that the gravito-turbulent regime is maintained by the axisymmetric slow mode instability (whose action allows zonal flow growth) and the non-axisymmetric instability (whose action disrupts the zonal flows) balancing each other. This is confirmed visually by the flow dynamics in real space, a few snapshots of which are presented in Figure~\ref{fig:structure-sequence}.
The snapshots, which show the entropy field for a run with $\tau_c \Omega =10$, span a time of $t\Omega \simeq 4.5$. \begin{figure*} \centering \includegraphics[width=45mm]{./figures/entropy_sequence1-low-label-retry} \includegraphics[width=45mm]{./figures/entropy_sequence2-low-label-retry} \includegraphics[width=45mm]{./figures/entropy_sequence3-low-label-retry}\\ \hspace*{1cm}\includegraphics[width=45mm]{./figures/entropy_sequence4-low-label-retry} \includegraphics[width=45mm]{./figures/entropy_sequence5-low-label-retry} \includegraphics[width=55.3mm]{./figures/entropy_sequence6-low-label-retry} \caption{Entropy time sequence illustrating the destruction and regeneration of the zonal flow for $\tau_c\Omega = 10$. The total time elapsed from snapshot \textit{A} to \textit{F} is $t\Omega \simeq 4.5$.} \label{fig:structure-sequence} \end{figure*} The sequence starts (snapshot \textit{A}) with some axisymmetry in the right part of the box, where the remnants of a zonal flow linger; in the left half the zonal flow has however been disrupted, resulting in the formation of leading and trailing shearing waves. In the following two frames (\textit{B}, \textit{C}) the shearing waves steepen into shocks and merge, the merging occurring at separate times in different parts of the box due to the shocks' curved fronts. The hotspots caused by such mergers (\textit{D}) are then sheared by the flow such that a nearly axisymmetric structure forms again in most of the box (\textit{E}). The zonal flow is soon disrupted again by the non-axisymmetric instability, again resulting in the creation of leading and trailing shearing waves (\textit{F}), completing the cycle. Such a cycle repeats multiple times throughout a single heating event. Furthermore, it can be seen that the zonal flow amplitude has increased during the cycle owing to the constructive effect of the slow mode instability. The sustenance of the gravito-turbulent regime, mediated by a balance between the slow mode and non-axisymmetric instabilities, therefore occurs in a cyclic fashion: as the temperature in the disc drops, the heat generated through viscous shock dissipation increases thanks to a growth in $\alpha_\mathrm{eff}$. This causes an enlargement of the slow mode instability region, eventually making the system unstable; while the zonal flow forms, the disc cools down, since radiative cooling dominates over the scarce shock dissipation. The slow mode instability then allows the zonal flow to grow until its amplitude is large enough to trigger the destructive non-axisymmetric instability. This leads to the creation of leading and trailing shearing waves, in turn boosting the Reynolds stress, which causes energy from the background flow to be fed into the turbulent motions. This boosts the kinetic energy, which is then dissipated into heat as the shocks merge again, restarting the cycle. This cycle occurs multiple times during the course of a heating event, as well as at the beginning of each heating event. \section{Discussion and Conclusions} \label{sec:discussion} This work focused on the role played by zonal flows in the self-sustenance of the gravito-turbulent regime in astrophysical discs. The problem was tackled using a local shearing sheet approximation, solving the full non-linear equations of the system by means of a bespoke pseudo-spectral method.
The disc taken into consideration was assumed to be compressible, self-gravitating, viscous (with both bulk and shear viscosity types), thermally diffusive and cooled with a constant cooling timescale ($\beta$ cooling prescription). The system, whose initial conditions were well out of thermal equilibrium, quickly settled into a gravito-turbulent state with an average $Q$ value of $\overline{Q}\sim 2$. The thermal balance attained by the flow in its gravito-turbulent configuration was attributed to turbulent viscosities. The onset of gravito-turbulence was accompanied by that of two axisymmetric structures, whose wavenumbers remained roughly constant during the runs ($k \pi G\Sigma_0/\Omega^2 \sim 1$). Further analysis of the slow mode instability, originally discussed in \citet{VanonOgilvie2017}, showed that this instability acted on the axisymmetric structures accompanying the gravito-turbulent regime, allowing them to grow. Such growth was however limited by the onset of a second instability, this time non-axisymmetric in nature and discussed in \citet{VanonOgilvie2016}, which disrupted the zonal flow, creating leading and trailing shearing waves in its place. It is this creation of shearing waves which directly led to energy being extracted from the background flow and fed back into the gravito-turbulent regime, ensuring its survival. The shearing waves were subsequently seen to steepen into shocks, merging with similar shock fronts shortly after. The hotspots created by such mergers, thanks to the shearing nature of the flow, quickly reformed zonal flows; the resulting increase in the shear viscosity triggered the slow mode instability once again, completing the cycle of zonal flow formation and destruction. Such a cycle repeats several times during the course of a single heating event, and also underlies the turbulent regime's ability to self-sustain on a long-term basis. It is therefore clear that zonal flows may play an important role in the self-sustenance of a gravito-turbulent regime in the conditions described by this work, as it is ultimately their formation and destruction which allows the turbulent state to repeatedly extract energy from the background flow to maintain itself. Both the slow mode and non-axisymmetric instabilities are therefore also key to the self-sustenance of such a state, since without the persistent zonal flow destruction by the non-axisymmetric instability, the gravito-turbulent regime would likely run out of energy. It is however possible that other similar numerical analyses may not find comparable results if the conditions are not suitable for triggering the slow mode instability. In that case the flow would not spontaneously form axisymmetric structures, and any induced zonal flows would quickly decay. To ensure that such a cycle is set up, it is also crucial that the non-axisymmetric instability can be activated; since this instability prefers perturbations of long azimuthal wavelength, considering a box elongated in the azimuthal direction would help. Finally, scope remains to improve the present analysis. Potential improvements include the consideration of more refined physical prescriptions, improving upon simpler ones such as a constant cooling timescale or constant kinematic viscosities. Furthermore, it would be of great interest to test the observed self-sustaining cycle in a 3D setup.
\section*{Acknowledgements} This research was conducted thanks to funding received from the Science \& Technology Facilities Council (STFC). The author would like to thank Prof. Gordon Ogilvie for the help and feedback he provided on this work, and the referee for the useful comments. \bibliographystyle{mnras}
{ "timestamp": "2018-04-17T02:06:18", "yymm": "1804", "arxiv_id": "1804.05215", "language": "en", "url": "https://arxiv.org/abs/1804.05215" }
\section{Introduction} \label{sec:intro} Top quark pair production is one of the most important processes at the Large Hadron Collider (LHC). The total cross section at $\sqrt{s} = \unit{13}{\TeV}$ is about $\unit{800}{\picobarn}$. Both the ATLAS and the CMS experiments have collected nearly $\unit{100}{\invfb}$ of integrated luminosity at $\unit{13}{\TeV}$. This corresponds to 160 million $t\bar{t}$ events in total. With such a large event sample, top quark physics has become one of the precision frontiers of particle physics. Many important measurements related to the top quark, e.g., inclusive and differential cross sections \cite{Aaboud:2017fha, Sirunyan:2017mzl}, the top-quark mass and width \cite{Aaboud:2017ujq, Khachatryan:2014nda}, top-quark polarization \cite{Aaboud:2016hsq}, and so on, can now be performed with unprecedented precision. The large production cross section also allows precision measurements of boosted top quark pairs, which are important for high-mass $t\bar{t}$ resonance searches \cite{Chatrchyan:2012yca}, and for precision studies of boosted top-quark jets \cite{Aad:2015hna}. Currently, the best fixed-order calculations for top-quark pair production are at the next-to-next-to-leading order (NNLO) in QCD \cite{Baernreuther:2012ws, Czakon:2012zr, Czakon:2012pz, Czakon:2013goa, Czakon:2014xsa, Czakon:2015owf, Czakon:2016dgf} and the next-to-leading order (NLO) in the electroweak coupling \cite{Beenakker:1993yr, Bernreuther:2005is, Kuhn:2005it, Bernreuther:2006vg, Kuhn:2006vh, Hollik:2007sw, Bernreuther:2008md, Bernreuther:2010ny, Hollik:2011ps, Kuhn:2011ri, Bernreuther:2012sx, Kuhn:2013zoa, Campbell:2016dks, Pagani:2016caq, Denner:2016jyo}. Recently, these two corrections have been combined to give a more comprehensive description of $t\bar{t}$ production in \cite{Czakon:2017wor}. Despite the high precision of these perturbative calculations, the complicated kinematics of $t\bar{t}$ production makes it necessary to consider even higher order corrections. This is particularly important since the high energy of the LHC has opened up the possibility of producing ``boosted'' top quark pairs, meaning that the energies of the top quarks are much larger than their rest mass $m_t$. In \cite{Czakon:2016dgf}, it was found that the NNLO QCD differential cross sections in the boosted regime are rather sensitive to the choice of factorization and renormalization scales. This scale dependence can be dramatically reduced by resumming certain towers of large logarithms to all orders in the strong coupling $\alpha_s$ \cite{Pecjak:2016nee, Czakon:2018nun}. These include not only the threshold logarithms, which arise when the partonic center-of-mass energy approaches the $t\bar{t}$ invariant mass $M$, but also the mass logarithms of the form $\ln^n(m_t^2/M^2)$, which develop in the boosted region $M \gg m_t$. The resummation of the threshold logarithms in $t\bar{t}$ production requires several ingredients, such as the hard function and the soft function, as well as various anomalous dimensions. The NLO hard and soft functions have been computed in \cite{Ahrens:2010zv}, and the anomalous dimension matrices have been derived in \cite{Ferroglia:2009ep, Ferroglia:2009ii}. These enabled the resummation to be performed at next-to-next-to-leading logarithmic (NNLL) accuracy \cite{Ahrens:2010zv}. Given the NNLO accuracy achieved by fixed-order calculations, it is desirable to extend the threshold resummation to N$^3$LL.
Such a calculation would improve the theoretical predictions over the whole phase space, all the way from low invariant mass to the boosted regime. In order to achieve that, the NNLO hard and soft functions are necessary. The NNLO hard function can in principle be extracted from the virtual amplitude calculated in \cite{Baernreuther:2013caa}. Therefore, the NNLO soft function is a major bottleneck in increasing the resummation accuracy for $t\bar{t}$ production, and is the subject of this article. The soft functions describe the cross sections in the soft limit. The behavior of scattering amplitudes and cross sections in the soft limit is of high interest not only phenomenologically, but also theoretically. For example, the soft theorems in gauge theories and in gravitational theories \cite{Weinberg:1965nx, Gross:1968in, Jackiw:1968zza} are of fundamental importance in understanding their structures. In perturbative calculations in gauge theories, both the exchange of virtual soft particles and the emission of real soft ones can lead to infrared (IR) divergences. These must cancel against each other in order to arrive at meaningful predictions for physical observables. While such cancellations have been proven generically \cite{Kinoshita:1962ur, Lee:1964is}, the practical treatment of the IR divergences is highly non-trivial. Both the virtual and real contributions need to be calculated analytically in order to verify the precise cancellation. For the virtual amplitudes, when all external hard partons are massless, the soft singularities enjoy a dipole form up to two loops \cite{Aybat:2006wq}, thanks to the scaling symmetry that emerges as the energies of the hard partons become large \cite{Becher:2009cu, Gardi:2009qi}. Non-trivial corrections to the dipole form of soft singularities for massless scattering first appear at three loops, and have been computed recently in \cite{Almelid:2015jia}. The situation for massive amplitudes is much more complicated. Non-trivial correlations among three or four partons appear already at two loops \cite{Mitov:2009sv, Becher:2009kw, Ferroglia:2009ep, Ferroglia:2009ii, Chien:2011wz}. These virtual singularities must have the same structure as the real ones, and the soft functions provide a perfect place to investigate the latter. It is therefore interesting to calculate the massive soft functions through to NNLO, in order to study these multi-parton correlations from a different perspective. The soft functions are defined as the vacuum expectation values of certain operators consisting of light-like and time-like soft Wilson lines. In simpler situations, they have been extensively studied in the literature. For processes involving two massless partons, such as the Drell-Yan process and Higgs production through gluon fusion, the soft functions have been calculated up to N$^3$LO \cite{Li:2014afw}. For processes with 4 massless partons, such as di-jet production and boosted heavy quark pair production, the NNLO soft function was obtained in \cite{Ferroglia:2012uy}. When massive partons are involved, the calculation becomes much more complicated. The soft function for the $e^+ e^- \to t\bar{t}$ process has been calculated at NNLO in \cite{vonManteuffel:2014mva}. Much less is known in the case of hadronic production of top quark pairs, for which only the NLO soft function is available \cite{Ahrens:2010zv}. Our result in this work therefore serves as the first example of an NNLO soft function for massive scattering with 4 external partons.
This paper is organized as follows. In Section \ref{sec:form} we lay out the generic definition and renormalization of threshold soft functions. In Section \ref{sec:nlo} we provide the result of the NLO soft function to higher powers of the dimensional regulator $\epsilon$, which is a necessary ingredient in the renormalization of the soft function at NNLO. Section \ref{sec:nnlosoft} describes the main effort of this work, namely the calculation of the NNLO bare soft function. We then perform its renormalization in Section \ref{sec:ren}, and discuss some cross-checks and the numerical impact of our new result. Finally, we conclude and discuss some future applications and extensions of our calculation in Section \ref{sec:con}. \section{Formalism} \label{sec:form} We consider a generic scattering process involving energetic massless quarks, gluons and massive partons (such as top quarks or some new colored particles often present in models beyond the SM). The interactions of soft gluons with these energetic partons can be described by Wilson lines defined as \begin{align} \bm{S}_i(x) = \mathcal{P} \exp \! \left( ig_s \int_{-\infty}^0 ds \, v_i \cdot A^a(x+sv_i) \, \bm{T}_i^a \right) , \end{align} where $\mathcal{P}$ denotes path ordering, and $v_i$ is a 4-vector pointing in the direction of the momentum of the $i$-th parton, which satisfies $v_i^2 = 0$ for massless partons and $v_i^2>0$ for massive partons. Note that here we have taken all vectors $v_i$ to be incoming. The boldface $\bm{T}_i^a$ is the color generator associated with the $i$-th parton in the color-space formalism \cite{Catani:1996jh, Catani:1996vz}. It is evident that the Wilson lines are invariant under the rescaling $v_i \to \lambda v_i$ for any $\lambda > 0$, since this change can be compensated by a change of the integration variable $s \to s / \lambda$. We could employ this freedom to normalize the direction vectors of massive partons to $v_i^2 = 1$. This has the physical meaning that $v_i$ is the 4-velocity of the $i$-th parton: $v_i = \pm p_i / m_i$, where $m_i$ is the mass of the $i$-th parton. However, we would like to keep this possibility open for the sake of generality. Putting the Wilson lines together, the behavior of the $n$-parton scattering amplitude in the soft limit can be obtained by studying the vacuum matrix elements of the Wilson loop operator constructed out of the Wilson lines \begin{align} \bm{W}(x,\{v\}) \equiv \braket{0 | \bar{\TO} \! \left[ \bm{O}_s^\dagger(x) \right] \TO \! \left[ \bm{O}_s(0)\right] |0} \equiv \Braket{0 | \bar{\TO} \! \left[ \prod_{i=1}^n \bm{S}_i^\dagger(x) \right] \TO \! \left[ \prod_{i=1}^n \bm{S}_i(0) \right] |0} \, , \end{align} where $\{v\}$ denotes the collection of the directional vectors $v_i$, $x$ is a time-like vector, and $\TO$ denotes time-ordering. It is well-known that the vacuum matrix elements of the Wilson loop operator, when calculated in perturbation theory, contain ultraviolet (UV) divergences which need to be renormalized \cite{Korchemsky:1987wg, Korchemskaya:1992je}. The renormalization properties of the Wilson loops can be used to study the infrared singularities of scattering amplitudes, as was illustrated in \cite{Becher:2009cu, Becher:2009qa, Becher:2009kw, Ferroglia:2009ep, Ferroglia:2009ii}. The Wilson loop operator is also an essential ingredient in the factorization of scattering cross sections in the soft limit. We consider scattering processes at hadron-hadron colliders with no final-state massless partons at leading order.
These include, for example, top quark pair production (possibly in association with colorless particles such as the Higgs boson and electroweak gauge bosons), the production of four top quarks, squark and gluino production in supersymmetric models, as well as the production of top partners in many new physics models. At higher orders in the strong coupling constant, there will be additional emissions of gluons and quarks in the final state. We are interested in the case where these additional emissions are all soft, i.e., with energies much smaller than the typical momentum transfer of the hard-scattering process. Note that the precise meaning of ``soft'' depends on the reference frame, which leads to different forms of the factorization formula, such as the ``pair-invariant-mass'' (PIM) kinematics and the ``single-particle-inclusive'' (1PI) kinematics in top quark pair production discussed, e.g., in \cite{Ahrens:2010zv, Kidonakis:2010dk, Ahrens:2011mw}. While the formalism can be applied in any reference frame, in the following we will work in the center-of-mass frame of the two incoming partons, which is not only convenient for demonstration purposes, but is also adopted in many existing calculations. For example, this corresponds to the PIM kinematics in \cite{Kidonakis:1996aq, Kidonakis:1997gm, Ahrens:2010zv} for $t\bar{t}$ production, in \cite{Broggio:2013uba, Broggio:2013cia} for stop pair production, and in \cite{Broggio:2015lya, Broggio:2016lfj} for $t\bar{t}H$ production. Schematically, we are considering partonic processes of the form \begin{align} h_1(p_1) + h_2(p_2) \to h_3(p_3) + \cdots + h_n(p_n) + X + X_s(p_s) \, , \end{align} where $h_1$ and $h_2$ are two incoming massless partons, $h_I$ ($I=3,\ldots,n$) are outgoing massive partons, $X$ denotes colorless particles such as the Higgs boson and electroweak gauge bosons, and $X_s$ represents the additional soft radiation which we want to describe. Here, the momenta $p_I$ ($I=3,\ldots,n$) are chosen to be outgoing, but we still take the $v_I$ to be incoming for convenience. In the center-of-mass frame of the two incoming partons, the emissions of additional soft partons are described by the so-called soft function $\bm{S}$, which is simply the momentum-space version of the vacuum matrix element of the Wilson loop operator: \begin{align} \bm{S}(\omega,\{v\}) &\equiv \frac{1}{\sqrt{d_1d_2}} \int \frac{dx_0}{4\pi} \, e^{i\omega x_0/2} \, \Big[ \bm{W}(x,\{v\}) \Big]_{x_\mu=(x_0,0,0,0)} \nonumber \\ \label{eq:Smom} &= \frac{1}{\sqrt{d_1d_2}} \sum_{X_s} \braket{0 | \bm{O}_s^\dagger(0) | X_s} \braket{X_s | \bm{O}(0) | 0} \delta ( \omega - v_0 \cdot p_{X_s} ) \, , \end{align} where the reference vector $v_0=(2,0,0,0)$, and $\omega$ represents (2 times) the energy of the additional soft partons. We have included a normalization factor such that the definition of the soft function coincides with that in \cite{Ahrens:2010zv}. Here $d_1$ and $d_2$ are the dimensions of the $SU(N)_{\text{color}}$ representations to which the partons $h_1$ and $h_2$ belong. For later convenience, it is useful to perform a Mellin or Laplace transform into moment space \begin{align} \label{eq:laplace} \tilde{\bm{s}}(\Lambda,\{v\}) &= \int_0^\infty d\omega \, \exp \! \left( -\frac{\omega}{\Lambda e^{\gamma_E}} \right) \bm{S}(\omega,\{v\}) = \frac{1}{\sqrt{d_1d_2}} \, \bm{W} \! \left( x_0 = \frac{-2i}{\Lambda e^{\gamma_E}} , \{v\} \right) , \end{align} where $\Lambda$ is a soft momentum scale in moment space.
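To see concretely what the moment variable does, note that a single soft emission produces the characteristic scaling $\bm{S}\sim\omega^{-1-2\epsilon}$ (as follows on dimensional grounds from the phase-space integrals of Section~\ref{sec:nlo}). A minimal \texttt{sympy} sketch of its transform, writing $a\equiv-2\epsilon$ and treating $a$ as positive so that the integral converges, is the following:
\begin{verbatim}
import sympy as sp

w, rho, a = sp.symbols('omega rho a', positive=True)  # a stands for -2*epsilon

# Laplace transform of the single-emission profile omega**(-1 - 2*epsilon)
moment = sp.integrate(w**(a - 1) * sp.exp(-w / rho), (w, 0, sp.oo))
print(moment)   # rho**a * gamma(a), with rho = Lambda * exp(gamma_E)
\end{verbatim}
Expanded in $\epsilon$, the factor $\Lambda^{-2\epsilon}$ generates the logarithms $L=\ln(\Lambda^2/\mu^2)$ introduced below, while the LO term $\delta(\omega)$ trivially transforms to a constant.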
As discussed above, the bare soft function contains UV divergences which can be regularized in dimensional regularization with $d=4-2\epsilon$. These UV divergences are removed by the operator renormalization \begin{align} \tilde{\bm{s}}(L,\{v\},\mu) = \bm{Z}_s^\dagger(L,\{v\},\mu) \, \tilde{\bm{s}}_{\text{bare}}(\Lambda,\{v\}) \, \bm{Z}_s(L,\{v\},\mu) \, , \end{align} where $\mu$ is the renormalization scale and $L\equiv\ln(\Lambda^2/\mu^2)$. As indicated in the above formula, both the renormalized soft function $\tilde{\bm{s}}(L,\{v\},\mu)$ and the renormalization factor $\bm{Z}_s(L,\{v\},\mu)$ are $\mu$-dependent and satisfy renormalization group equations (RGEs) \begin{align} \label{eq:zrge} \frac{d}{d\mu} \bm{Z}_s(L,\{v\},\mu) &= - \bm{Z}_s(L,\{v\},\mu) \, \bm{\Gamma}_s(L,\{v\},\mu) \, , \\ \label{eq:srge} \frac{d}{d\mu} \tilde{\bm{s}}(L,\{v\},\mu) &= - \bm{\Gamma}_s^\dagger(L,\{v\},\mu) \, \tilde{\bm{s}}(L,\{v\},\mu) - \tilde{\bm{s}}(L,\{v\},\mu) \, \bm{\Gamma}_s(L,\{v\},\mu) \, . \end{align} The generic form of the soft anomalous dimension matrix $\bm{\Gamma}_s$ up to the two-loop order can be written as \cite{Becher:2009kw} \begin{align} \label{eq:gammaS} \bm{\Gamma}_s(L,\{v\},\mu) &= \frac{\bm{T}_1^2 + \bm{T}_2^2}{2} \, \gamma_{\text{cusp}}(\alpha_s) \, L_s - \sum_{(I,J)} \frac{\bm{T}_I \cdot \bm{T}_J}{2} \, \gamma_{\text{cusp}}(\beta_{IJ},\alpha_s) + \sum_i \gamma^i_s(\alpha_s) + \sum_I \gamma^I(\alpha_s) \nonumber \\ &\, + \sum_{I} \left[ \bm{T}_I \cdot \bm{T}_1 \, \gamma_{\text{cusp}}(\alpha_s) \, \ln\frac{v_1 \cdot v_2 \, \sqrt{v_I^2}}{v_0 \cdot v_2 \; w_{I1}} + \bm{T}_I \cdot \bm{T}_2 \, \gamma_{\text{cusp}}(\alpha_s) \, \ln\frac{v_1 \cdot v_2 \, \sqrt{v_I^2}}{v_0 \cdot v_1 \; w_{I2}} \right] \nonumber \\ &\, + \sum_{(I,J,K)} if^{abc} \, \bm{T}_I^a \, \bm{T}_J^b \, \bm{T}_K^c \, F_1(\beta_{IJ},\beta_{JK},\beta_{KI}) \nonumber \\ &\, + \sum_{(I,J)} \sum_k if^{abc} \, \bm{T}_I^a \, \bm{T}_J^b \, \bm{T}_k^c \, f_2 \! \left( \beta_{IJ}, \ln\frac{w_{Jk} \, \sqrt{v_I^2}}{w_{Ik} \, \sqrt{v_J^2}} \right) + \mathcal{O}(\alpha_s^3) \, , \end{align} where the lower-case indices ($i$, $j$, $k$) run over massless partons 1 and 2, while the capital indices ($I$, $J$, $K$) run over massive partons. We have introduced the abbreviations $L_s \equiv L - i\pi$ and $w_{Ij} \equiv v_I \cdot v_j + i0$. The notation $(I,J,\ldots)$ denotes unordered tuples of distinct parton indices. The functions $\gamma_{\text{cusp}}(\alpha_s)$ and $\gamma_{\text{cusp}}(\beta_{IJ},\alpha_s)$ are the famous cusp anomalous dimensions for light-like Wilson lines and time-like Wilson lines, respectively \cite{Korchemsky:1987wg, Korchemsky:1991zp, Kidonakis:2009ev} \begin{align} \gamma_{\text{cusp}}(\alpha_s) &= \frac{\alpha_s}{\pi} + \left( \frac{\alpha_s}{4\pi} \right)^2 \left[ \left( \frac{268}{9} - \frac{4\pi^2}{3} \right) C_A - \frac{80}{9} \, T_F N_l \right] + \mathcal{O}(\alpha_s^3) \, , \\ \gamma_{\text{cusp}}(b,\alpha_s) &= \gamma_{\text{cusp}}(\alpha_s) \, b \coth(b) \nonumber \\ &\hspace{-3em} + \frac{C_A}{2} \left( \frac{\alpha_s}{\pi} \right)^2 \left\{ \frac{\pi^2}{6} + \zeta_3 + b^2 + \coth^2(b) \left[ \Li_3(e^{-2b}) + b \Li_2(e^{-2b}) - \zeta_3 + \frac{\pi^2}{6} b + \frac{b^3}{3} \right] \right. \\ &\hspace{1em} \left. + \coth(b) \left[ \Li_2(e^{-2b}) - 2 b \ln(1-e^{-2b}) - \frac{\pi^2}{6} (1+b) - b^2 - \frac{b^3}{3} \right] \right\} + \mathcal{O}(\alpha_s^3) \, . \nonumber \end{align} with the cusp angle given by \begin{align} \beta_{IJ} = \arccosh \! 
\left( - \frac{v_I \cdot v_J}{\sqrt{v_I^2 \, v_J^2}} - i0 \right) \, . \end{align} The single-parton soft anomalous dimensions $\gamma_s^i(\alpha_s)$ and $\gamma^I(\alpha_s)$ are \begin{align} \gamma_s^i(\alpha_s) &= \left( \frac{\alpha_s}{4\pi} \right)^2 C_i \left[ \left( -\frac{404}{27} + \frac{11\pi^2}{18} + 14\zeta_3 \right) C_A + \left( \frac{112}{27} - \frac{2\pi^2}{9} \right) T_F N_l \right] + \mathcal{O}(\alpha_s^3) \, , \\ \gamma^I(\alpha_s) &= -C_I \frac{\alpha_s}{2\pi} + \left( \frac{\alpha_s}{4\pi} \right)^2 C_I \left[ \left( -\frac{98}{9} + \frac{2\pi^2}{3} - 4\zeta_3 \right) C_A + \frac{40}{9} \, T_F N_l \right] + \mathcal{O}(\alpha_s^3) \, , \end{align} where $C_{i(I)}=C_F$ for the fundamental representation, and $C_{i(I)}=C_A$ for the adjoint representation of the gauge group. The three-parton correlation functions $F_1$ and $f_2$ were calculated in \cite{Ferroglia:2009ep, Ferroglia:2009ii}. The function $F_1$ describes correlations among three massive partons, and can be written as \begin{align} F_1(\beta_{IJ},\beta_{JK},\beta_{KI}) &= \left( \frac{\alpha_s}{4\pi} \right)^2 \, \frac{4}{3} \sum_{L,M,N} \epsilon_{LMN} \, g(\beta_{LM}) \, \beta_{NL} \coth(\beta_{NL}) + \mathcal{O}(\alpha_s^3) \, , \end{align} where the indices ($L$,$M$,$N$) run over ($I$,$J$,$K$) with $\epsilon_{LMN}=1$ if ($L$,$M$,$N$) is an even permutation of ($I$,$J$,$K$), and \begin{align} g(b) = \coth(b) \left[ b^2 + 2 b \ln(1-e^{-2b}) - \Li_2(e^{-2b}) + \frac{\pi^2}{6} \right] - b^2 - \frac{\pi^2}{6} \, . \end{align} The function $f_2$ describes correlations among two massive partons and one massless parton, and is given by \begin{align} f_2 \! \left( \beta_{IJ}, \ln\frac{w_{Jk} \, \sqrt{v_I^2}}{w_{Ik} \, \sqrt{v_J^2}} \right) &= - \left( \frac{\alpha_s}{4\pi} \right)^2 4 g(\beta_{IJ}) \times \ln\frac{w_{Jk} \, \sqrt{v_I^2}}{w_{Ik} \, \sqrt{v_J^2}} + \mathcal{O}(\alpha_s^3) \, . \end{align} Given the anomalous dimension matrix $\bm{\Gamma}_s$, one can solve the RGE (\ref{eq:zrge}) to obtain the renormalization factor $\bm{Z}_s$. To this end, it is useful to decompose $\bm{\Gamma}_s$ in (\ref{eq:gammaS}) into the form \begin{align} \bm{\Gamma}_s(L,\{v\},\mu) \equiv \frac{\alpha_s}{4\pi} \left( A_0 L_s + \bm{\Gamma}_0 \right) + \left( \frac{\alpha_s}{4\pi} \right)^2 \left( A_1 L_s + \bm{\Gamma}_1 \right) , \end{align} and then \begin{align} \label{eq:Zs} \ln \bm{Z}_s(L,\{v\},\mu) &= \frac{\alpha_s}{4\pi} \left( -\frac{A_0}{2\epsilon^2} + \frac{A_0 L_s + \bm{\Gamma}_0}{2\epsilon} \right) \nonumber \\ &\, + \left( \frac{\alpha_s}{4\pi} \right)^2 \left[ \frac{3A_0\beta_0}{8\epsilon^3} + \frac{-A_1 - 2\beta_0 (A_0 L_s + \bm{\Gamma}_0)}{8\epsilon^2} + \frac{A_1 L_s + \bm{\Gamma}_1}{4\epsilon} \right] + \mathcal{O}(\alpha_s^3) \, , \end{align} where $\beta_0 = (11 C_A - 4 T_F N_l) / 3$. \section{The soft function for $t\bar{t}$ production and the NLO result to arbitrary orders in $\epsilon$} \label{sec:nlo} While the formalism introduced in the last section is very generic and applies to many processes, the actual calculation of the soft function can become very complicated when the number of independent scalar products $v_i \cdot v_j$ becomes large. In this paper, we begin with the special case of $t\bar{t}$ production, where the partonic processes can be described as \begin{align} h_1(p_1) + h_2(p_2) \to t(p_3) + \bar{t}(p_4) + X_s(p_s) \, , \end{align} where $p_3^2 = p_4^2 = m_t^2$ with $m_t$ the mass of the top quark.
In the soft limit $p_s \to 0$, there are three independent Lorentz-invariant kinematic variables, which can be chosen as $m_t$ and \begin{align} M^2 \equiv (p_1+p_2)^2 \, , \quad t_1 \equiv (p_1-p_3)^2-m_t^2 \, . \end{align} It is convenient to introduce the dimensionless quantities $\beta$ and $y = \cos\theta$, defined as \begin{align} \beta = \sqrt{1-\frac{4m_t^2}{M^2}} \, , \quad t_1 = -\frac{M^2}{2} (1-\beta y) \, , \end{align} where $\beta$ and $\theta$ have the physical meanings of the 3-velocity and the scattering angle of the top quark in the partonic center-of-mass frame. The soft function depends on $\beta$ and $y$ through the following combinations of the directional vectors $v_i$: \begin{align} \frac{v_3 \cdot v_1 \, v_2 \cdot v_0}{\sqrt{v_3^2} \, v_1 \cdot v_2} &= \frac{v_4 \cdot v_2 \, v_1 \cdot v_0}{\sqrt{v_4^2} \, v_1 \cdot v_2} = - \frac{1-\beta y}{\sqrt{1-\beta^2}} \, , \quad \frac{v_3 \cdot v_4}{\sqrt{v_3^2 \, v_4^2}} = \frac{1+\beta^2}{1-\beta^2} \, , \nonumber \\ \frac{v_3 \cdot v_2 \, v_1 \cdot v_0}{\sqrt{v_3^2} \, v_1 \cdot v_2} &= \frac{v_4 \cdot v_1 \, v_2 \cdot v_0}{\sqrt{v_4^2} \, v_1 \cdot v_2} = -\frac{1+\beta y}{\sqrt{1-\beta^2}} \, , \end{align} and we also have \begin{align} \beta_{34} \equiv \arccosh \! \left( -\frac{v_3 \cdot v_4}{\sqrt{v_3^2 \, v_4^2}} - i0 \right) = \ln\frac{1+\beta}{1-\beta} - i\pi \, . \end{align} To calculate the bare soft function, it is convenient to start from the momentum-space version $\bm{S}_{\text{bare}}(\omega,\beta,y)$ introduced in Eq.~(\ref{eq:Smom}). Here we have expressed the dependence on the directional vectors $v_i$ through the quantities $\beta$ and $y$. The perturbative expansion of the momentum-space soft function can be written as \begin{align} \bm{S}_{\text{bare}}(\omega,\beta,y) = \delta(\omega) \, \frac{\bm{1}}{\sqrt{d_1d_2}} + \frac{\alpha_s}{4\pi} \, \bm{S}_{\text{bare}}^{(1)}(\omega,\beta,y) + \left( \frac{\alpha_s}{4\pi} \right)^2 \bm{S}_{\text{bare}}^{(2)}(\omega,\beta,y) + \cdots \, , \end{align} where $\bm{1}$ denotes the identity operator in color space, and the NLO soft function $\bm{S}_{\text{bare}}^{(1)}(\omega,\beta,y)$ was already calculated in \cite{Ahrens:2010zv}. In fact, since the NLO soft function involves at most 2-parton correlations, the same calculation can be applied to scattering processes with more than two colored particles in the final state. This has been done, e.g., for the case of $t\bar{t}H$ production in \cite{Broggio:2015lya, Broggio:2016lfj}, and can be extended to more complicated processes such as the simultaneous production of two $t\bar{t}$ pairs. In order to calculate the NNLO soft function, however, we will need the NLO one to higher orders in the dimensional regulator $\epsilon$, which produces a finite contribution to the renormalized NNLO soft function. We will describe such a calculation in the following, while the calculation of the NNLO bare soft function will be discussed in Section~\ref{sec:nnlosoft}. In \cite{Ahrens:2010zv}, the NLO soft function was calculated by brute-force evaluation of the relevant phase-space integrals. While it is possible to continue using such a method to obtain the higher order terms in $\epsilon$, it is useful to employ a more systematic approach which can be extended to the NNLO calculation. The definition (\ref{eq:Smom}) of the soft function involves a summation over the soft final states $X_s$.
It is easy to see that when $\ket{X_s}$ is the vacuum state, i.e., when there is no extra soft emission, the matrix elements involve scaleless integrals in dimensional regularization, which are defined to be zero. Therefore, at NLO, the only contribution is given by \begin{align} \frac{\alpha_s}{4\pi} \, \bm{S}_{\text{bare}}^{(1)}(\omega,\beta,y) = \frac{1}{\sqrt{d_1d_2}} \int \frac{d^dk}{(2\pi)^{d-1}} \, \delta^+(k^2) \braket{0 | \bm{O}_s^\dagger(0) | g(k)} \braket{g(k) | \bm{O}(0) | 0} \delta ( \omega - v_0 \cdot k ) \, , \end{align} where a summation over the helicity and the color of the gluon is understood. We use \texttt{QGRAF} \cite{Nogueira:1991ex} to generate the squared amplitudes in the above formula, and use \texttt{FORM} \cite{Vermaseren:2000nd} to manipulate the resulting expression. The phase-space integrals appearing in the result have the general form \begin{align} \label{eq:Iij} I_{ij}(\epsilon,\omega,\beta,y) = \int [dk] \, \frac{v_i \cdot v_j}{v_i \cdot k \, v_j \cdot k} \, \delta( \omega - v_0 \cdot k ) \, , \end{align} where $[dk] \equiv d^dk \, \delta^+(k^2)$. From symmetry considerations, it is obvious that we only need to calculate $I_{12}$, $I_{13}$, $I_{33}$ and $I_{34}$. At this point, we note that while the soft function itself does not depend on the normalizations of the directional vectors $v_i$, it is convenient to fix them in practical calculations. Therefore we will choose the normalizations $v_1 \cdot v_0 = v_2 \cdot v_0 = v_1 \cdot v_2 = 2$, and $v_3^2=v_4^2 = 1-\beta^2$ in the following. Note that the normalizations of $v_3$ and $v_4$ are unconventional. In the center-of-mass frame of the incoming partons, these vectors are then parameterized by \begin{gather} v_1 = (1,0,0,1) \, , \quad v_3 = - (1,0,\beta\sin\theta,\beta y) \, ,\nonumber \\ v_2 = (1,0,0,-1) \, , \quad v_4 = - (1,0,-\beta\sin\theta,-\beta y) \, . \end{gather} The phase-space integrals appearing in the result are not independent, and we employ the integration-by-parts (IBP) \cite{Chetyrkin:1981qh, Tkachov:1981wb} method to find relations among them. To this end, it is necessary to use the relation \begin{align} \label{eq:cutkosky1} \delta^+(k^2) \equiv \delta(k^2) \, \theta(k^0) = \frac{1}{2\pi i} \left( \frac{1}{k^2 + i0} - \frac{1}{k^2 - i0} \right) , \end{align} known as the reverse unitarity method \cite{Anastasiou:2002yz}, to express the phase-space integrals in terms of loop integrals. The $\delta$-function in Eq.~(\ref{eq:Iij}) can be similarly written as \begin{align} \label{eq:cutkosky2} \delta(\omega - v_0 \cdot k) = \frac{1}{2\pi i} \left( \frac{1}{\omega - v_0 \cdot k + i0} - \frac{1}{\omega - v_0 \cdot k - i0} \right) . \end{align} These integrals are then fed into the program packages \texttt{Reduze2} \cite{vonManteuffel:2012np} and \texttt{FIRE5} \cite{Smirnov:2014hma}, which use the IBP relations to reduce the relevant loop integrals to a small number of master integrals. After the IBP reduction, one can recover the phase-space integrals by reversing the relations (\ref{eq:cutkosky1}) and (\ref{eq:cutkosky2}). In the NLO case, the master integrals can be chosen as $F_{0,0}$, $F_{0,1}$ and $F_{1,1}$, where $F_{a_1,a_2}$ is defined as \begin{align} F_{a_1,a_2} \equiv \int [dk] \, \delta ( \omega - v_0 \cdot k ) \, \frac{1}{(v_1 \cdot k)^{a_1} \, (-v_3 \cdot k)^{a_2}} \, .
\end{align} In order to calculate the master integrals to arbitrary orders in the dimensional regulator $\epsilon$, we employ the method of differential equations \cite{Kotikov:1990kg, Gehrmann:1999as}. Taking the partial derivative of $F_{a_1,a_2}$ with respect to $\beta$ will lead to integrals with the index $a_2$ shifted. However, all these integrals can be expressed as linear combinations of the master integrals. We collect the master integrals into a vector with three components \begin{align} \vec{f}(\epsilon,\beta,y,\omega) \equiv ( F_{0,0}, F_{0,1}, F_{1,1} )^{\mathsf{T}} \, . \end{align} The partial derivative of $\vec{f}$ with respect to $\beta$ then has the form \begin{align} \partial_\beta \vec{f}(\epsilon, \beta, y, \omega) = \hat{A}(\epsilon, \beta, y, \omega) \, \vec{f}(\epsilon, \beta, y, \omega) \, , \end{align} where $\hat{A}$ is a $3 \times 3$ matrix. The matrix $\hat{A}$ is a rather complicated function of $\epsilon$, $\beta$, $y$ and $\omega$, which makes the differential equation not so straightforward to solve. It is possible to simplify the above equation by a linear transformation $\vec{g}(\epsilon,\beta,y) = \hat{T}(\epsilon, \beta, y, \omega) \, \vec{f}(\epsilon, \beta, y, \omega)$, where the matrix $\hat{T}$ is given by \begin{align} \hat{T}(\epsilon, \beta, y, \omega) = \frac{2 \, \Gamma(1-2\epsilon)}{\pi^{1-\epsilon} \, \omega^{1-2\epsilon} \, \Gamma(1-\epsilon)} \begin{pmatrix} 1-2\epsilon & 0 & 0 \\ 0 & \epsilon \, \omega \, \beta & 0 \\ 0 & 0 & \epsilon \, \omega^2 \, (1 - \beta y) \end{pmatrix} \, . \end{align} The new vector $\vec{g}$ (so-called ``canonical basis'') satisfies a simpler differential equation \cite{Henn:2013pwa} \begin{align} \label{eq:nloDEQ} \partial_\beta \vec{g}(\epsilon,\beta,y) = \epsilon \, \hat{B}(\beta,y) \, \vec{g}(\epsilon,\beta,y) \, . \end{align} Note that the matrix $\hat{B}$ does not depend on $\epsilon$ and $\omega$ anymore, and is given by \begin{align} \hat{B}(\beta,y) = -\frac{\hat{a}}{\beta-1} + \frac{\hat{b}}{\beta} + \frac{\hat{c}}{\beta+1} - \frac{\hat{d}}{\beta-1/y} \, , \end{align} where \begin{align} \hat{a} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & 2 & 0 \end{pmatrix} , \quad \hat{b} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 4 & 0 & 2 \end{pmatrix} , \quad \hat{c} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & -1 & 0 \\ -2 & 2 & 0 \end{pmatrix} , \quad \hat{d} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix} . \end{align} The vector $\vec{g}$ has no singularity in $\epsilon$ and can be Taylor-expanded in the form \begin{align} \vec{g}(\epsilon,\beta,y) = \sum_{n=0}^{\infty} \vec{g}^{(n)}(\beta,y) \, \epsilon^n \, . \end{align} The differential equation (\ref{eq:nloDEQ}) can then be solved order by order in $\epsilon$: \begin{align} \label{eq:nloDEQ2} \vec{g}^{(n+1)}(\beta,y) = \int_{\beta_0}^\beta d\beta' \, \hat{B}(\beta',y) \, \vec{g}^{(n)}(\beta',y) + \vec{g}^{(n+1)}(\beta_0,y) \, , \end{align} where the boundary conditions $\vec{g}^{(n)}(\beta_0,y)$ at some boundary point $\beta_0$ need to be obtained through other methods, which will be discussed later. Such iterated integrals give rise to so-called generalized polylogarithms (GPLs) \cite{Goncharov:1998kja}, which are defined by \begin{align} G(a_1,\ldots,a_n;\beta) \equiv \int_0^\beta \frac{d\beta'}{\beta'-a_1} G(a_2,\ldots,a_n;\beta') \, , \end{align} with $G(;\beta) \equiv 1$. The special case where all the $a_i$'s are zero is defined as \begin{align} G(0,\ldots,0;\beta) \equiv \frac{1}{n!} \log^n\beta \, .
\end{align} The GPLs have many good mathematical properties (for a review, see e.g. \cite{Duhr:2014woa}), and can be straightforwardly evaluated by program packages such as \texttt{GiNaC} \cite{Bauer:2000cp}. They therefore form a convenient basis for expressing our results. In order to solve the differential equation (\ref{eq:nloDEQ}), we also need the explicit expression of $\vec{g}(\epsilon,\beta,y)$ at the boundary point $\beta_0$, serving as the boundary condition. In our case, it is convenient to choose the point $\beta = 0$ as the boundary. The calculation of the boundary condition can be simplified by observing that the matrix $\hat{B}$ contains a singular term proportional to $1/\beta$. It is clear that $\partial_\beta F_{a_1,a_2}$ can produce a $1/\beta$ coefficient only if $F_{a_1,a_2}$ itself develops a power-like or logarithmic divergence when $\beta \to 0$. One can easily check that all the master integrals in $\vec{f}$ are regular in the limit $\beta \to 0$. The same applies to the components of $\vec{g}$ since the transformation matrix $\hat{T}$ is also regular. It follows that \begin{align} 0 = \lim_{\beta \to 0} \beta \, \partial_\beta \vec{g}(\epsilon,\beta,y) = \lim_{\beta \to 0} \beta \, \epsilon \, \hat{B}(\beta,y) \, \vec{g}(\epsilon,\beta,y) \, , \end{align} which leads to the conditions \begin{align} \label{eq:boundaryrelation} g_3(\beta=0) = -2g_1(\beta=0) \, , \quad g_2(\beta=0) = 0 \, , \end{align} with $g_i$ being the $i$-th component of $\vec{g}$. We now only need to directly evaluate the component $g_1$ (which actually does not depend on $\beta$) at the boundary $\beta = 0$, which is very simple: \begin{align} g_1(\epsilon,0,y) = \frac{2 \, \Gamma(2-2\epsilon)}{\pi^{1-\epsilon} \, \omega^{1-2\epsilon} \, \Gamma(1-\epsilon)} \int [dk] \, \delta(\omega-v_0\cdot k) = 1 \, . \end{align} We now have everything we need to express the NLO soft function as an abstract matrix in color space, in terms of the inner products of color generators $\bm{T}_i \cdot \bm{T}_j \equiv \bm{T}^a_i \, \bm{T}^a_j$. This abstract form is generic and especially useful if we want to apply our result to more complicated processes. However, for practical computations of $t\bar{t}$ cross sections, it is convenient to choose a color basis and express the soft function as a $2 \times 2$ matrix in the quark--antiquark annihilation channel, and a $3 \times 3$ matrix in the gluon fusion channel. Such matrix elements are defined as \begin{align} \bm{S}_{IJ}(\omega,\beta,y) \equiv \braket{c_I | \bm{S}(\omega,\beta,y) | c_J} \, , \end{align} where $\{\ket{c_I}\}$ is an orthogonal color basis. In accordance with \cite{Ahrens:2010zv}, we choose the singlet-octet basis with \begin{gather} \left(c_1^{q\bar{q}}\right)_{\{a\}} = \delta_{a_1a_2} \delta_{a_3a_4} \, , \quad \left(c_2^{q\bar{q}}\right)_{\{a\}} = t^c_{a_2a_1} t^c_{a_3a_4} \, , \nonumber \\ \label{eq:colorbasis} \left(c_1^{gg}\right)_{\{a\}} = \delta^{a_1a_2} \delta_{a_3a_4} \, , \quad \left(c_2^{gg}\right)_{\{a\}} = if^{a_1a_2c} \, t^c_{a_3a_4} \, , \quad \left(c_3^{gg}\right)_{\{a\}} = d^{a_1a_2c} \, t^c_{a_3a_4} \, , \end{gather} where $a_i$ is the color index of the $i$-th parton. We have compared the resulting NLO matrices with those (up to order $\epsilon^0$) in \cite{Ahrens:2010zv} and find complete agreement. \section{The NNLO bare soft function} \label{sec:nnlosoft} We now turn to the calculation of the NNLO bare soft function, which is the main new result of our paper.
The contributions to the bare NNLO soft function consist of two parts: the virtual-real diagrams and the double-real diagrams. The two-loop purely virtual diagrams lead to scaleless integrals, which vanish in dimensional regularization, and we therefore do not need to consider them. \subsection{Double-real contributions} We first present the calculation of the double-real contributions. As in the NLO calculation, we generate relevant Feynman diagrams and amplitudes using \texttt{QGRAF} \cite{Nogueira:1991ex}. The phase-space integrals in the double-real contribution have the generic form \begin{align} \int [dk_1] \, [dk_2] \, \delta \big( \omega - v_0 \cdot (k_1+k_2) \big) \, \mathcal{F}(\{v\},k_1,k_2) \, , \end{align} where $\mathcal{F}$ denotes the integrand consisting of scalar products among the directional vectors $v_i$ and the two momenta $k_1$ and $k_2$. We generically call these scalar products ``propagators''. There exist many different propagators in our squared amplitudes. However, only a subset of them appears in any individual integral. It is therefore useful to classify all the integrals into a small number of ``integral families'', each defined by a particular set of propagators. For this purpose, we first classify the relevant Feynman diagrams into three categories according to the number of independent Wilson lines involved: 1) those involving one or two Wilson lines; 2) those involving three Wilson lines; and 3) those involving all four Wilson lines. We discuss the calculation of the first two categories in the following. The diagrams involving all four Wilson lines can be trivially expressed as a convolution of two NLO integrals, and we do not discuss them further here. \subsubsection{One- or two-Wilson-line diagrams} \begin{figure}[t!] \begin{center} \includegraphics[width=0.6\textwidth]{./diagrams/One-Wilson-lines-RR.eps} \end{center} \vspace{-2ex} \caption{One-Wilson-line double-real diagrams contributing to the NNLO soft function.} \label{fig:NNLORRoneWilsonLine} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.6\textwidth]{./diagrams/Two-Wilson-lines-RR.eps} \includegraphics[width=0.6\textwidth]{./diagrams/Two-Wilson-lines-RR1.eps} \end{center} \vspace{-2ex} \caption{Two-Wilson-line double-real diagrams contributing to the NNLO soft function.} \label{fig:NNLORRtwoWilsonLine} \end{figure} The Feynman diagrams involving only one Wilson line along the vector $v_i$ are depicted in Figure~\ref{fig:NNLORRoneWilsonLine}. It is clear that such diagrams must be proportional to $v_i^2$, and therefore vanish if $v_i$ is light-like. The shaded blob in the first diagram denotes loops of quarks, gluons and ghosts (we work in the Feynman gauge). Similarly, Figure~\ref{fig:NNLORRtwoWilsonLine} shows the diagrams involving two Wilson lines in the directions $v_i$ and $v_j$. The integrals coming from these diagrams can be classified into two integral families. The first family is defined by the following set of six propagators: \begin{align} \label{eq:family1} \{ (k_1+k_2)^2, \, v_1 \cdot k_2, \, v_1 \cdot (k_1+k_2), \, v_2 \cdot k_1, \, v_3 \cdot k_1, \, v_3 \cdot (k_1+k_2) \} \, . \end{align} The corresponding integrals have the form \begin{align} \label{eq:F1} F^{(1)}_{a_1,a_2,a_3,a_4,a_5,a_6} \equiv \int [dk_1] \, [dk_2] \, \delta \big( \omega - v_0 \cdot (k_1+k_2) \big) \, \prod_{i=1}^6 (D_i)^{-a_i} \, , \end{align} where $D_i$ refers to the propagators in Eq.~(\ref{eq:family1}).
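As an aside, the reason the four-Wilson-line contributions mentioned above are ``trivial'' is that the measure $\delta\big(\omega - v_0\cdot(k_1+k_2)\big)$ factorizes the two emissions up to a one-dimensional $\omega$-convolution, which collapses to an overall power of $\omega$ via Euler's beta function. A quick numerical sketch of this standard identity (the power-law weights below are illustrative stand-ins for the $\omega$-dependence of the single-emission integrals):
\begin{verbatim}
from mpmath import mp, quad, beta, mpf

mp.dps = 20
# int_0^omega dw1 w1^(p-1) (omega - w1)^(q-1) = omega^(p+q-1) B(p, q);
# each NLO emission integral carries such a power-law weight in omega
p, q, omega = mpf('0.8'), mpf('0.8'), mpf('3.0')   # e.g. p = q = 1 - 2*eps
lhs = quad(lambda w1: w1**(p-1)*(omega - w1)**(q-1), [0, omega])
rhs = omega**(p+q-1)*beta(p, q)
print(lhs - rhs)   # ~ 0 to working precision
\end{verbatim}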
We again feed all the integrals into \texttt{Reduze2} and \texttt{FIRE5}, which reduce them to 14 master integrals. We collect these master integrals into a vector \begin{align} \vec{f}^{(1)}(\epsilon,\beta,y,\omega) \equiv \big( &F^{(1)}_{0,0,0,0,0,0}, F^{(1)}_{1,0,0,0,-1,2}, F^{(1)}_{0,0,0,0,1,0}, F^{(1)}_{0,0,0,0,1,1}, F^{(1)}_{0,1,0,0,0,2}, \nonumber \\ &F^{(1)}_{0,0,1,0,0,2}, F^{(1)}_{0,1,0,0,1,1}, F^{(1)}_{1,1,0,0,1,-1}, F^{(1)}_{1,1,-1,0,1,0}, F^{(1)}_{1,1,0,0,1,0}, \nonumber \\ &F^{(1)}_{1,0,1,0,1,0}, F^{(1)}_{0,0,1,0,1,0}, F^{(1)}_{0,1,1,0,1,1}, F^{(1)}_{1,1,0,1,0,0} \big)^{\mathsf{T}} \, . \end{align} We transform the above master integrals to a ``canonical basis'' via a linear transformation $\vec{g}^{(1)}(\epsilon,\beta,y) = \hat{T}^{(1)}(\epsilon,\beta,y,\omega) \, \vec{f}^{(1)}(\epsilon,\beta,y,\omega)$, where the transformation matrix $\hat{T}^{(1)}$ is diagonal with its diagonal entries given by \begin{align} \hat{T}^{(1)}(\epsilon,\beta,y,\omega) = &\frac{8 \, \Gamma(1-4\epsilon)}{\pi^{2-2\epsilon} \, \omega^{3-4\epsilon} \, \Gamma^2(1-\epsilon)} \nonumber \\ \times \diag \Big\{ &(1-2\epsilon)(1-4\epsilon)(3-4\epsilon), \, \epsilon^2(1-2\epsilon)\omega^3\beta, \, \epsilon(1-2\epsilon)(1-4\epsilon)\omega\beta, \nonumber \\ &\epsilon^2(1-4\epsilon)\omega^2\beta^2, \, \epsilon^2\omega^3\beta^2, \, \epsilon(1-2\epsilon)\omega^3\beta^2, \, \epsilon^3\omega^3\beta(1-\beta y), \, \epsilon^3\omega^3, \nonumber \\ &\epsilon^3\omega^3\beta, \, \epsilon^3\omega^4(1-\beta y), \, \epsilon^3\omega^4(1-\beta y), \, \epsilon^2(1-4\epsilon)\omega^2(1+\beta y), \nonumber \\ &\epsilon^3\omega^4(1-\beta y)^2, \, \epsilon^3\omega^4 \Big\} \, . \end{align} The transformed vector of master integrals $\vec{g}^{(1)}$ satisfies the differential equation \begin{align} \label{eq:diffg1} \partial_{\beta}\vec{g}^{(1)}(\epsilon,\beta,y) = \epsilon \left( -\frac{\hat{a}^{(1)}}{\beta-1} + \frac{\hat{b}^{(1)}}{\beta} + \frac{\hat{c}^{(1)}}{\beta+1} + \frac{\hat{d}^{(1)}}{\beta+1/y} - \frac{\hat{e}^{(1)}}{\beta-1/y} \right) \vec{g}^{(1)}(\epsilon,\beta,y) \, , \end{align} where $\hat{a}^{(1)}$, $\hat{b}^{(1)}$, $\hat{c}^{(1)}$, $\hat{d}^{(1)}$ and $\hat{e}^{(1)}$ are $14 \times 14$ matrices with matrix elements independent of $\epsilon$ and $\beta$. We now turn to the second integral family in the one- or two-Wilson-line diagrams. It is defined by the set of propagators \begin{align} \label{eq:family2} \{ (k_1+k_2)^2, \, v_1\cdot k_1, \, v_1 \cdot (k_1+k_2), \, v_4 \cdot k_1, \, v_3 \cdot k_2, \, v_3 \cdot (k_1+k_2) \} \, . \end{align} We denote the corresponding integrals as $F^{(2)}_{a_1,a_2,a_3,a_4,a_5,a_6}$, defined similarly to Eq.~(\ref{eq:F1}), but with the propagators $D_i$ chosen from the above set (\ref{eq:family2}).
The master integrals in this family can be chosen as \begin{align} \vec{f}^{(2)}(\epsilon,\beta,y,\omega) \equiv \big( &F^{(2)}_{0,0,0,0,0,0}, F^{(2)}_{1,0,0,0,-1,2}, F^{(2)}_{0,0,0,0,1,0}, F^{(2)}_{0,0,0,0,1,1}, F^{(2)}_{0,0,0,1,0,0}, \nonumber \\ &F^{(2)}_{0,0,0,1,1,0}, F^{(2)}_{1,0,0,1,1,0}, F^{(2)}_{0,0,0,1,1,1}, F^{(2)}_{1,0,0,1,0,1} \big)^{\mathsf{T}} \, , \end{align} which are transformed into a canonical basis $\vec{g}^{(2)}(\epsilon,\beta,y)$ by the following transformation matrix \begin{align} \hat{T}^{(2)}(\epsilon,\beta,y,\omega) = &\frac{8 \, \Gamma(1-4\epsilon)}{\pi^{2-2\epsilon} \, \omega^{3-4\epsilon} \, \Gamma^2(1-\epsilon)} \nonumber \\ \times \diag \Big\{ &(1-2\epsilon)(1-4\epsilon)(3-4\epsilon), \, \epsilon^2(1-2\epsilon)\omega^3\beta, \, \epsilon(1-2\epsilon)(1-4\epsilon)\omega\beta, \nonumber \\ &\epsilon^2(1-4\epsilon)\omega^2\beta^2, \, \epsilon(1-2\epsilon)(1-4\epsilon)\omega\beta, \, \epsilon^2(1-4\epsilon)\omega^2\beta^2, \nonumber \\ &\epsilon^3\omega^4\beta, \, \epsilon^3\omega^3\beta^2, \epsilon^3\omega^4\beta \Big\} \, . \end{align} The differential equation satisfied by $\vec{g}^{(2)}$ is given by \begin{align} \label{eq:diffg2} \partial_{\beta}\vec{g}^{(2)}(\epsilon,\beta,y) = \epsilon \left( -\frac{\hat{a}^{(2)}}{\beta-1} + \frac{\hat{b}^{(2)}}{\beta} + \frac{\hat{c}^{(2)}}{\beta+1} \right) \vec{g}^{(2)}(\epsilon,\beta,y) \, , \end{align} with $\hat{a}^{(2)}$, $\hat{b}^{(2)}$ and $\hat{c}^{(2)}$ being $9 \times 9$ constant matrices. In order to solve the differential equations (\ref{eq:diffg1}) and (\ref{eq:diffg2}), we also need the boundary conditions at some value of $\beta$. As in the NLO case, we again choose the point $\beta = 0$ as the boundary, where only seven of the integrals in $\vec{g}^{(1)}$ and $\vec{g}^{(2)}$ are non-vanishing. Some of the boundary conditions are related to each other, in analogy with (\ref{eq:boundaryrelation}). The independent ones are given by \begin{align} \label{eq:NNLORRboundary} g_1^{(1)}(\epsilon, 0, y) = g_1^{(2)}(\epsilon, 0, y) &= \frac{4 \, \Gamma(4-4\epsilon)}{\pi^{2-2\epsilon} \, \omega^{3-4\epsilon} \, \Gamma^2(1-\epsilon)} \, \int [dk_1] \, [dk_2] \, \delta \big( \omega - v_0 \cdot (k_1+k_2) \big) = 1 \, , \nonumber \\ g_{12}^{(1)}(\epsilon, 0, y) &= \frac{8 \, \Gamma(2-4\epsilon)}{\pi^{2-2\epsilon} \, \omega^{1-4\epsilon} \, \Gamma^2(-\epsilon)} \, \int [dk_1] \, [dk_2] \, \frac{\delta \big( \omega - v_0 \cdot (k_1+k_2) \big)}{v_1 \cdot (k_1+k_2) \; v_3 \cdot k_1} \, , \nonumber \\ &= \frac{2\pi^2}{3} \, \epsilon^2 + \frac{84\zeta_3}{3} \, \epsilon^3 + \frac{4\pi^4}{3} \, \epsilon^4 + \mathcal{O}(\epsilon^5) \, , \nonumber \\ g_{14}^{(1)}(\epsilon, 0, y) &= \frac{8 \, \epsilon \, \Gamma(1-4\epsilon) \, \omega^{1+4\epsilon}}{\pi^{2-2\epsilon} \, \Gamma^2(-\epsilon)} \, \int [dk_1] \, [dk_2] \, \frac{\delta \big( \omega - v_0 \cdot (k_1+k_2) \big)}{(k_1+k_2)^2 \; v_1 \cdot k_2 \; v_2 \cdot k_1} \, , \nonumber \\ &= -\frac{4 \, \Gamma(-2 \epsilon) \, \Gamma(1-2\epsilon)}{\Gamma(1-\epsilon) \, \Gamma (-3\epsilon)} \, _3F_2(-\epsilon, -\epsilon, -\epsilon; 1-\epsilon, -3 \epsilon; 1) \, . \end{align} The hypergeometric function appearing in the above formula can be expanded in $\epsilon$ with the help of the program package \texttt{HypExp} \cite{Huber:2007dx}. The differential equations of $\vec{g}^{(1)}$ and $\vec{g}^{(2)}$ can then be solved order-by-order in $\epsilon$ in terms of the GPLs, as in the NLO case (\ref{eq:nloDEQ2}).
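To make this order-by-order strategy concrete, the following schematic Python sketch iterates the NLO system, whose matrices $\hat{a}$, $\hat{b}$, $\hat{c}$, $\hat{d}$ and boundary values were given above; the grid-based trapezoidal integration is purely illustrative and stands in for the analytic iterated integrals in terms of GPLs. The NNLO systems are iterated in exactly the same way. Since the boundary vector $(1,0,-2)^{\mathsf{T}}$ is $\epsilon$-independent, the integration constants vanish beyond order $\epsilon^0$; as a cross-check, the sketch reproduces $g^{(1)}_2(\beta) = G(-1;\beta) - G(1;\beta) = \ln\frac{1+\beta}{1-\beta}$:
\begin{verbatim}
import numpy as np

# NLO matrices a, b, c, d quoted earlier in the text
a = np.array([[0,0,0],[1,1,0],[2,2,0]], dtype=float)
b = np.array([[0,0,0],[0,2,0],[4,0,2]], dtype=float)
c = np.array([[0,0,0],[1,-1,0],[-2,2,0]], dtype=float)
d = np.array([[0,0,0],[0,0,0],[0,0,2]], dtype=float)

def B(x, y):
    return -a/(x - 1.0) + b/x + c/(x + 1.0) - d/(x - 1.0/y)

# Boundary values at beta = 0, exact in epsilon: g = (1, 0, -2)^T.
# Note that b.g vanishes there, so the 1/beta term is harmless.
g0 = np.array([1.0, 0.0, -2.0])

def g_orders(beta, y, nmax, npts=20001):
    """Taylor coefficients g^(0),...,g^(nmax) at 'beta', iterating
    d g^(n+1)/dx = B(x) g^(n)(x) with vanishing constants for n >= 0."""
    grid = np.linspace(1e-9, beta, npts)
    Bs = np.stack([B(x, y) for x in grid])          # (npts, 3, 3)
    orders = [np.tile(g0, (npts, 1))]               # g^(0) is constant
    for _ in range(nmax):
        integrand = np.einsum('kij,kj->ki', Bs, orders[-1])
        steps = 0.5*(integrand[1:] + integrand[:-1])*np.diff(grid)[:, None]
        orders.append(np.vstack([np.zeros(3), np.cumsum(steps, axis=0)]))
    return [o[-1] for o in orders]

# Cross-check: g^(1)_2(0.5) = log((1+0.5)/(1-0.5)) = log(3)
vals = g_orders(0.5, 0.3, nmax=2)
print(vals[1][1], np.log(3.0))
\end{verbatim}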
\subsubsection{Three-Wilson-line diagrams} \label{sec:rr3} We now turn to the three-Wilson-line diagrams. In the calculation of the two-loop anomalous dimensions and infrared singularities \cite{Ferroglia:2009ep, Ferroglia:2009ii}, it was found that the three-Wilson-line diagrams are the most complicated ones. They give rise to the three-parton functions $F_1$ and $f_2$ in Eq.~(\ref{eq:gammaS}). It is therefore highly interesting to see how these complications appear in the calculation of the NNLO soft function. As will be clear below, the genuine three-parton correlations only arise from the virtual-real contributions, and the double-real three-parton contributions can always be expressed as convolutions of two NLO integrals. This can be understood as follows: by its definition (\ref{eq:Smom}), the soft function is a Hermitian matrix. On the other hand, the genuine three-parton contributions (such as the functions $F_1$ and $f_2$) multiply the anti-Hermitian color factor $if^{abc}\bm{T}^a_i\bm{T}^b_j\bm{T}^c_k$. Therefore the soft function can only receive contributions from the imaginary parts of the three-parton integrals, which are only present in the virtual-real diagrams but not in the double-real diagrams. \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{./diagrams/Three-Wilson-lines-RR1.eps} \vspace{-2ex} \caption{A set of three-Wilson-line double-real diagrams adding up to a convolution of two NLO integrals.} \label{fig:NNLORRthreeWilsonLine1} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{./diagrams/Three-Wilson-lines-RR2.eps} \vspace{-2ex} \caption{A pair of three-Wilson-line double-real diagrams adding up to zero.} \label{fig:NNLORRthreeWilsonLine2} \end{figure} More practically, one can carry out an analysis similar to that in \cite{Ferroglia:2012uy} (done for the massless soft function). In that paper, it was demonstrated that the three-Wilson-line integrals in the double-real contributions can be combined into convolutions of NLO integrals. The same applies to the massive soft function. To see this, consider for example the set of diagrams shown in Figure~\ref{fig:NNLORRthreeWilsonLine1}. Each of these diagrams gives rise to rather complicated integrals. However, the sum of the four diagrams leads to the simple integral \begin{align} \int [dk_1] \, [dk_2] \, \frac{ v_i \cdot v_j \; v_i \cdot v_k \; \delta \big( \omega - v_0 \cdot (k_1+k_2) \big) }{ v_i \cdot k_1 \; v_j \cdot k_1 \; v_i \cdot k_2 \; v_k \cdot k_2 } \, , \end{align} which is obviously a convolution of two NLO integrals. This fact does not depend on whether $v_i$, $v_j$ and $v_k$ are light-like or time-like, and therefore applies equally well to the massless and the massive soft functions. Another example is the two diagrams involving the three-gluon vertex, shown in Figure~\ref{fig:NNLORRthreeWilsonLine2}. As demonstrated in \cite{Ferroglia:2012uy}, they add up to zero due to the color structure, irrespective of the nature of the Wilson lines involved. \subsection{Virtual-real contributions} In this subsection, we present the calculation of the virtual-real contributions. As discussed in Section~\ref{sec:rr3}, the virtual-real diagrams contain genuine three-parton correlations. In particular, the scale-dependent part of their contributions involves the complicated function $f_2$ in the anomalous dimension matrix (\ref{eq:gammaS}) calculated in \cite{Ferroglia:2009ep, Ferroglia:2009ii}.
It can be expected that the calculation of the scale-independent pieces will be more involved. It was not known a priori whether they can be written in terms of GPLs. In our explicit calculations, we find that a canonical basis can be constructed and therefore all the master integrals for the virtual-real contributions can be solved iteratively as GPLs to all orders in the dimensional regulator $\epsilon$. \begin{figure}[t!] \begin{center} \includegraphics[width=0.6\textwidth]{./diagrams/One-Wilson-lines-RV.eps} \end{center} \vspace{-2ex} \caption{One-Wilson-line virtual-real diagrams contributing to the NNLO soft function.} \label{fig:NNLOVRoneWilsonLine} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.6\textwidth]{./diagrams/Two-Wilson-lines-RV.eps} \includegraphics[width=0.6\textwidth]{./diagrams/Two-Wilson-lines-RV1.eps} \end{center} \vspace{-2ex} \caption{Two-Wilson-line virtual-real diagrams contributing to the NNLO soft function.} \label{fig:NNLOVRtwoWilsonLine} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{./diagrams/Three-Wilson-lines-RV.eps} \caption{Three-Wilson-line virtual-real diagrams.} \label{fig:NNLOVRthreeWilsonLine} \end{figure} The one-, two- and three-Wilson-line virtual-real diagrams are shown in Figure~\ref{fig:NNLOVRoneWilsonLine}, \ref{fig:NNLOVRtwoWilsonLine} and \ref{fig:NNLOVRthreeWilsonLine}, respectively. It is easy to understand that the four-Wilson-line diagrams always lead to scaleless integrals in dimensional regularization and we do not need to consider them. We write a generic virtual-real integral in the form \begin{align} \int d^dk \, d^dl \, \delta \big( \omega - v_0 \cdot k \big) \, \mathrm{Disc} \big[ (k^2)^{-a_1} \big] \, \mathcal{F}(\{v\},k,l) \, , \end{align} where $l$ is the loop momentum, and we have rewritten the on-shell delta-function from the phase-space measure $[dk]$ as \begin{align} \mathrm{Disc} \big[ (k^2)^{-a_1} \big] \equiv \frac{1}{2\pi i} \left[ (k^2+i0)^{-a_1} - (k^2-i0)^{-a_1} \right] . \end{align} We find this convenient since we will use the method of differential equations to raise the power $a_1$. The virtual-real integrals can again be classified into two integral families. The first family is defined by the following set of seven propagators $D_i$ ($i=2,\ldots,8$): \begin{align} \label{eq:family3} \{ l^2, \, (k+l)^2, \, v_1 \cdot k, \, v_1 \cdot l, \, v_2 \cdot (k+l), \, v_3 \cdot k, \, v_3 \cdot l \} \, . \end{align} The corresponding integrals can be expressed as \begin{align} F^{(3)}_{a_1,a_2,a_3,a_4,a_5,a_6,a_7,a_8} = \int d^dk \, d^dl \, \delta \big( \omega - v_0 \cdot k \big) \, \mathrm{Disc} \big[ (k^2)^{-a_1} \big] \prod_{i=2}^8 (D_i)^{-a_i} \, . \end{align} We choose the master integrals in this family to be \begin{align} \vec{f}^{(3)}(\epsilon,\beta,y,\omega) \equiv \big( &F_{1,1,1,0,0,0,-1,2}^{(3)}, F_{2,0,1,0,0,0,0,2}^{(3)}, F_{1,0,1,1,0,0,0,2}^{(3)}, F_{1,1,0,0,0,1,0,1}^{(3)}, F_{1,1,1,0,0,1,-1,1}^{(3)}, \nonumber \\ &F_{1,1,1,-1,0,1,0,1}^{(3)}, F_{1,1,1,0,0,1,0,1}^{(3)}, F_{1,1,1,1,-1,1,0,1}^{(3)}, F_{1,1,1,1,0,1,0,1}^{(3)}, F_{1,1,1,0,1,1,0,0}^{(3)}, \nonumber \\ &F_{1,1,1,-1,1,1,1,0}^{(3)}, F_{1,1,1,0,1,1,1,0}^{(3)}, F_{1,1,1,-1,1,1,1,-1}^{(3)}, F_{1,1,0,0,0,1,1,2}^{(3)}, F_{1,1,0,0,0,1,1,1}^{(3)} \big)^{\mathsf{T}} \, .
\end{align} The transformation matrix \begin{align} \hat{T}^{(3)}(\epsilon, \beta, y, \omega) &= \frac{ i e^{-2i\pi\epsilon} \, \omega^{4\epsilon} \, \Gamma(-2\epsilon) }{ \pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma(1+2\epsilon) } \, \hat{T}'^{(3)}(\epsilon, \beta, y, \omega) \end{align} takes the master integrals into a canonical basis $\vec{g}^{(3)}(\epsilon,\beta,y)$, which satisfies the differential equation \begin{align} \label{eq:diffg3} \partial_{\beta}\vec{g}^{(3)}(\epsilon,\beta,y) = \epsilon \left( -\frac{\hat{a}^{(3)}}{\beta-1} + \frac{\hat{b}^{(3)}}{\beta} + \frac{\hat{c}^{(3)}}{\beta+1} - \frac{\hat{d}^{(3)}}{\beta-1/y} + \frac{\hat{e}^{(3)}}{\beta+1/y} \right) \vec{g}^{(3)}(\epsilon,\beta,y) \, . \end{align} The second integral family in the virtual-real contributions is defined by the propagators \begin{align} \label{eq:family4} \{ l^2, \, (k+l)^2, \, v_1 \cdot k, \, v_1 \cdot l, \, v_4 \cdot (k+l), \, v_3 \cdot k, \, v_3 \cdot l \} \, . \end{align} We choose the master integrals to be \begin{align} \vec{f}^{(4)}(\epsilon,\beta,y,\omega) \equiv \big( &F_{1,1,1,0,0,0,0,1}^{(4)}, F_{2,0,1,0,0,0,0,2}^{(4)}, F_{1,2,0,0,0,1,1,0}^{(4)}, F_{1,1,0,1,0,2,0,0}^{(4)}, F_{1,0,1,1,0,0,0,2}^{(4)}, \nonumber \\ &F_{2,1,0,0,0,1,0,1}^{(4)}, F_{1,1,0,0,0,2,0,1}^{(4)}, F_{1,1,0,0,0,1,1,1}^{(4)}, F_{1,0,1,1,0,1,0,1}^{(4)}, F_{1,1,1,0,0,1,0,1}^{(4)}, \nonumber \\ &F_{1,1,0,1,0,1,0,1}^{(4)}, F_{1,1,1,0,1,1,0,0}^{(4)}, F_{1,1,1,-1,1,1,0,0}^{(4)}, F_{1,1,1,0,1,1,-1,0}^{(4)}, F_{1,1,1,-1,1,1,1,0}^{(4)}, \nonumber \\ &F_{1,1,1,0,1,1,1,0}^{(4)}, F_{1,1,1,0,1,1,1,-1}^{(4)}, F_{1,1,1,1,-1,1,0,1}^{(4)}, F_{1,1,1,1,0,1,-1,1}^{(4)}, F_{1,1,1,1,0,1,0,1}^{(4)}, \nonumber \\ &F_{1,1,1,1,-1,1,-1,1}^{(4)} \big)^{\mathsf{T}} \, , \end{align} with the transformation matrix \begin{align} \hat{T}^{(4)}(\epsilon, \beta, y, \omega) &= \frac{ i e^{-2 i\pi\epsilon} \, \omega^{4\epsilon} \, \Gamma(-2\epsilon) }{ \pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma(1+2\epsilon) } \, \hat{T}'^{(4)}(\epsilon, \beta, y, \omega) \, . \end{align} The resulting canonical basis $\vec{g}^{(4)}(\epsilon,\beta,y)$ satisfies the differential equation \begin{align} \label{eq:diffg4} \partial_{\beta}\vec{g}^{(4)}(\epsilon,\beta,y) = \epsilon \left( -\frac{\hat{a}^{(4)}}{\beta-1} + \frac{\hat{b}^{(4)}}{\beta} + \frac{\hat{c}^{(4)}}{\beta+1} - \frac{\hat{d}^{(4)}}{\beta-1/y} + \frac{\hat{e}^{(4)}}{\beta+1/y} \right) \vec{g}^{(4)}(\epsilon,\beta,y) \, . \end{align} The $15 \times 15$ matrices $\hat{T}'^{(3)}$, $\hat{a}^{(3)}$, $\hat{b}^{(3)}$, $\hat{c}^{(3)}$, $\hat{d}^{(3)}$, $\hat{e}^{(3)}$ as well as the $21 \times 21$ matrices $\hat{T}'^{(4)}$, $\hat{a}^{(4)}$, $\hat{b}^{(4)}$, $\hat{c}^{(4)}$, $\hat{d}^{(4)}$, $\hat{e}^{(4)}$ are non-diagonal and have lengthy expressions. We choose to give them in an electronic file attached to the arXiv submission, together with the matrices $\hat{a}^{(1)}$, $\hat{b}^{(1)}$, $\hat{c}^{(1)}$, $\hat{d}^{(1)}$, $\hat{e}^{(1)}$, $\hat{a}^{(2)}$, $\hat{b}^{(2)}$ and $\hat{c}^{(2)}$ appearing in the calculation of the double-real contributions. In order to solve the differential equations (\ref{eq:diffg3}) and (\ref{eq:diffg4}), we again need to calculate the boundary conditions at $\beta = 0$. In the virtual-real case, an additional complication arises due to the fact that some of the master integrals are logarithmically divergent in the limit $\beta \to 0$.
This happens when the loop integral (involving both $v_3$ and $v_4$) gives rise to Coulomb-like $1/\beta$ singularities which are multiplied by some $\epsilon$-dependent powers of $\beta$. When this is the case, we cannot set $\beta = 0$ before performing the integration, as we did in the double-real case. Instead, we have to explicitly calculate the $\beta$-dependence of such master integrals (at least their asymptotic form as $\beta \to 0$). Fortunately, this only needs to be done for two of the master integrals, which are \begin{align} g^{(4)}_6(\epsilon,\beta,y) &= \frac{ i e^{-2 i \pi\epsilon} \, \Gamma(-2\epsilon) \, \omega^{-1+4\epsilon} \, \beta }{ 2 \pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma(2\epsilon) } \int [dk] \, d^dl \, \frac{\delta(\omega - v_0 \cdot k)}{(l^2+i0) \; (v_3 \cdot l + i0) \; [-v_4 \cdot (k+l) + i0] } \nonumber \\ &\hspace{15em} \times \left[ \frac{\omega}{-v_4 \cdot (k+l) + i0} + 2(1-4\epsilon) \right] , \\ g^{(4)}_9(\epsilon,\beta,y) &= -\frac{ i e^{-2 i \pi\epsilon} \, \Gamma(1-2\epsilon) \, \omega^{4\epsilon} \, \beta }{ 4 \pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma(2\epsilon) } \int [dk] \, d^dl \, \frac{\delta(\omega - v_0 \cdot k)}{(l^2+i0) \; (-v_1 \cdot k) \; (v_3 \cdot l + i0) } \nonumber \\ &\hspace{15em} \times \frac{1}{-v_4 \cdot (k+l) + i0} \, . \end{align} Here, we explicitly write the imaginary parts of the propagators involving the loop momentum $l$, since the loop integrals over $l$ depend on them. In the following, we will suppress the imaginary parts and the $+i0$ prescription is always understood. The results of the integrals over the loop momentum $l$ can be found in \cite{Bierenbaum:2011gg, Czakon:2018iev}. The remaining integrations over the momentum of the real gluon $k$ can be carried out in the limit $\beta \to 0$, since no new divergence arises in this limit. The asymptotic behaviors of $g^{(4)}_6$ and $g^{(4)}_9$ near the boundary can then be obtained and are given by \begin{align} g^{(4)}_6(\epsilon,\beta \to 0,y) &\approx \frac{ (e^{-2 i\pi\epsilon} - 1) \, \beta^{2\epsilon} \, \Gamma(1-2\epsilon) \, \Gamma(1+\epsilon) }{ 4^{1-2\epsilon} \, \Gamma(1-\epsilon) } \, , \\ g^{(4)}_9(\epsilon,\beta \to 0,y) &\approx \frac{ (e^{-2 i\pi\epsilon} - 1) \, \beta^{2\epsilon} \, \Gamma(1-2\epsilon) \, \Gamma(1+\epsilon) }{ 2^{3-4\epsilon} \, \Gamma(1-\epsilon) } \nonumber \\ &= \frac{1}{2} \, g^{(4)}_6(\epsilon,\beta \to 0,y) \, . \end{align} These expressions can be readily expanded in $\epsilon$ to arrive at the boundary conditions.
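For instance, the $\epsilon$-expansion of the asymptotic form of $g^{(4)}_6$ quoted above can be generated with a few lines of \texttt{sympy} (a cross-check sketch, independent of the packages used in the actual calculation):
\begin{verbatim}
import sympy as sp

eps = sp.symbols('epsilon')
beta = sp.symbols('beta', positive=True)

# Asymptotic form of g6^(4) as beta -> 0, as quoted above
g6 = (sp.exp(-2*sp.I*sp.pi*eps) - 1) * beta**(2*eps) \
     * sp.gamma(1 - 2*eps) * sp.gamma(1 + eps) \
     / (4**(1 - 2*eps) * sp.gamma(1 - eps))

# Taylor expansion in eps; the beta^(2*eps) factor produces the
# log(beta) terms reflecting the logarithmic divergence at beta = 0
print(sp.series(g6, eps, 0, 3))
\end{verbatim}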
The other independent boundary conditions are easier to obtain and are given by \begin{align} \label{eq:NNLOVRboundary1} g^{(3)}_2(\epsilon, 0, y) &= g^{(4)}_2(\epsilon, 0, y) = -\frac{ i e^{-2i\pi\epsilon } \, (1-2 \epsilon) \, \Gamma(2-2\epsilon) }{ \pi^{3-2\epsilon} \, \omega^{2-4\epsilon} \, \Gamma^2(1-\epsilon) \Gamma(2\epsilon) } \int [dk] \, d^dl \, \frac{\delta(\omega - v_0 \cdot k)}{(k+l)^2 \; v_3 \cdot l} \, \nonumber \\ &=1 \, , \nonumber \\ g^{(3)}_4(\epsilon, 0, y) &= \frac{ i e^{-2 i \pi\epsilon } \, (1-4\epsilon) \, \Gamma (-2\epsilon ) }{ 2 \pi^{3-2\epsilon} \, \omega^{1-4\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma (2\epsilon)} \int [dk] \, d^dl \, \frac{\delta(\omega - v_0 \cdot k)}{l^2 \; [ - v_2 \cdot (k+l) ] \; v_3 \cdot l} \, \nonumber \\ &= - \frac{ e^{-2 i \pi\epsilon } \, \Gamma(1-3\epsilon) \, \Gamma^2(-2\epsilon) \, \Gamma(1+\epsilon) }{ \Gamma(1-4\epsilon) \, \Gamma^2(-\epsilon) } \, , \nonumber \\ g^{(3)}_9(\epsilon, 0, y) &= \frac{ i e^{-2 i \pi\epsilon } \, \omega^{1+4\epsilon} \, \Gamma (1-2\epsilon ) }{ 4 \pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma (2\epsilon)} \int [dk] \, d^dl \, \frac{\delta(\omega - v_0 \cdot k)}{l^2 \; (k+l)^2 \; [ - v_2 \cdot (k+l) ] \; v_3 \cdot l} \nonumber \\ &\hspace{15em} \times \left[ \frac{\omega}{-v_1 \cdot k} + 2 \right] \nonumber \\ &= \frac{1}{3} - \frac{i\pi}{6} \epsilon - \frac{\pi^2}{6} \epsilon^2 + \left( \frac{38\zeta_3}{9} + \frac{4i\pi^3}{3} \right) \epsilon^3 - \left( \frac{209\pi^4}{240} + \frac{79i\pi\zeta_3}{9} \right) \epsilon^4 + \mathcal{O}(\epsilon^5) \, , \nonumber \\ g^{(3)}_{10}(\epsilon, 0, y) &= \frac{ i e^{-2 i\pi\epsilon } \, \omega^{1+4\epsilon} \, \Gamma(1-2\epsilon) }{ 4\pi^{3-2\epsilon} \, \Gamma^2(-\epsilon) \, \Gamma(2\epsilon)} \int [dk] \, d^dl \, \frac{ \delta(\omega - v_0 \cdot k) }{ l^2 \; (k+l)^2 \; [-v_2 \cdot (k+l)] \; v_2 \cdot l} \, \nonumber \\ &= \frac{ e^{-3 i \pi \epsilon } \, \Gamma^2(1-2\epsilon) \, \Gamma^2(1+\epsilon) }{ 2 \, \Gamma(1-4\epsilon) \, \Gamma(1+2\epsilon) } \, . \end{align} With the above results, it is straightforward to derive the virtual-real contributions to the NNLO soft function. It should be emphasized again that the three-Wilson-line diagrams give non-vanishing contributions, which are necessary to reproduce the correct pole structure in accordance with the anomalous dimension (\ref{eq:gammaS}). On the other hand, these contributions are proportional to the anti-Hermitian color factor $if^{abc}\bm{T}^a_i\bm{T}^b_j\bm{T}^c_k$, and only enter the off-diagonal entries of the soft function in the singlet-octet basis (\ref{eq:colorbasis}). They therefore do not appear at the level of the NNLO cross section, but will enter the resummation formula which encodes higher-order effects beyond the NNLO. \subsection{The bare soft function in the moment space} Assembling all the ingredients from the last two subsections, we obtain the NNLO momentum-space bare soft function $\bm{S}_{\text{bare}}^{(2)}(\omega,\beta,y)$. It is written in terms of star-distributions in $\omega$. For later convenience, it is useful to transform the soft function to the moment space using a Laplace transform (\ref{eq:laplace}). This can be most easily done by observing that the $\omega$-dependence of the NNLO bare soft function comes from an overall factor $\omega^{-1-4\epsilon}$, and \begin{align} \int_0^\infty d\omega \, \exp \!
\left( -\frac{\omega}{\Lambda e^{\gamma_E}} \right) \mu^{4\epsilon} \, \omega^{-1-4\epsilon} = e^{-4\epsilon\gamma_E} \, \Gamma(-4\epsilon) \left( \frac{\mu^2}{\Lambda^2} \right)^{2\epsilon} \, . \end{align} A similar transformation rule can be derived for the NLO bare soft function. The resulting moment-space soft function can then be written as a function of $\Lambda$: \begin{align} \tilde{\bm{s}}_{\text{bare}}(\Lambda,\beta,y) = \int_0^\infty d\omega \, \exp \! \left( -\frac{\omega}{\Lambda e^{\gamma_E}} \right) \bm{S}_{\text{bare}}(\omega,\beta,y) \, . \end{align} In analogy with the momentum-space soft function, we define the matrix elements of the moment-space soft function as \begin{align} \tilde{\bm{s}}_{IJ}(\Lambda,\beta,y) \equiv \braket{c_I | \tilde{\bm{s}}(\Lambda,\beta,y) | c_J} \, , \end{align} where the color basis is chosen as in Eq.~(\ref{eq:colorbasis}). This matrix-valued moment-space soft function will be the main object of study in the following. \section{Renormalized soft function} \label{sec:ren} \subsection{Anomalous dimensions and renormalization constants} The bare soft functions $\tilde{\bm{s}}_{\text{bare}}$ we just calculated contain ultraviolet divergences. As discussed in Section~\ref{sec:form}, these can be renormalized in the form \begin{align} \label{eq:sIJren} \tilde{\bm{s}}(L,\beta,\cos\theta,\mu) = \lim_{\epsilon \to 0} \bm{Z}_s^{\dagger}(L,\beta,\cos\theta,\mu) \, \tilde{\bm{s}}_{\text{bare}}(\Lambda,\beta,\cos\theta) \, \bm{Z}_s(L,\beta,\cos\theta,\mu) \, , \end{align} where the renormalization matrix $\bm{Z}_s$ can be constructed from the anomalous dimension matrix $\bm{\Gamma}_s(L,\beta,\cos\theta,\mu)$. Taking the matrix elements of the above formula in the color basis (\ref{eq:colorbasis}), we arrive at \begin{align} \label{eq:sIJren1} \tilde{\bm{s}}_{IJ}(L,\beta,\cos\theta,\mu) = \lim_{\epsilon \to 0} \sum_{M,N} \frac{ \braket{c_I | \bm{Z}_s^{\dagger}(L,\beta,\cos\theta,\mu) | c_M} }{ \braket{c_M | c_M} } \, \tilde{\bm{s}}^{\text{bare}}_{MN}(\Lambda,\beta,\cos\theta) \, \frac{ \braket{c_N | \bm{Z}_s(L,\beta,\cos\theta,\mu) | c_J} }{ \braket{c_N | c_N} } \, . \end{align} This motivates us to define the matrix elements of the renormalization factor $\bm{Z}_s$ (and similarly for the anomalous dimension $\bm{\Gamma}_s$) as \begin{align} \bm{Z}_{IJ}(L,\beta,\cos\theta,\mu) \equiv \frac{1}{\braket{c_I|c_I}} \braket{c_I | \bm{Z}_s(L,\beta,\cos\theta,\mu) | c_J} \, . \end{align} Eq.~(\ref{eq:sIJren1}) can then be written as \begin{align} \label{eq:sIJren2} \tilde{\bm{s}}_{IJ}(L,\beta,\cos\theta,\mu) = \lim_{\epsilon \to 0} \sum_{M,N} \bm{Z}^\dagger_{IM}(L,\beta,\cos\theta,\mu) \, \tilde{\bm{s}}^{\text{bare}}_{MN}(\Lambda,\beta,\cos\theta) \, \bm{Z}_{NJ}(L,\beta,\cos\theta,\mu) \, . \end{align} We now need to construct the renormalization matrix out of the anomalous dimension matrix given in Eq.~(\ref{eq:gammaS}), using the relation (\ref{eq:Zs}).
In the singlet-octet basis (\ref{eq:colorbasis}), the explicit matrix forms of $\bm{\Gamma}_s$ in the $q\bar{q}$ channel and the $gg$ channel are given by \begin{align} \bm{\Gamma}_s^{q\bar{q}} &= \left[ C_F \, \gamma_{\text{cusp}}(\alpha_s) \left( \ln\frac{\Lambda^2}{\mu^2} - i\pi \right) + C_F \, \gamma_{\text{cusp}}(\beta_{34},\alpha_s) + 2\gamma_s^q(\alpha_s) + 2\gamma^Q(\alpha_s) \right] \bm{1} \nonumber \\ &\quad + \frac{N}{2} \left[ \gamma_{\text{cusp}}(\alpha_s) \left( \ln\frac{t_1^2}{M^2m_t^2} + i\pi \right) - \gamma_{\text{cusp}}(\beta_{34},\alpha_s) \right] \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \nonumber \\ &\quad + \gamma_{\text{cusp}}(\alpha_s) \, \ln\frac{t_1^2}{u_1^2} \left[ \begin{pmatrix} 0 & \frac{C_F}{2N} \\ 1 & -\frac{1}{N} \end{pmatrix} + \frac{\alpha_s}{4\pi} \, g(\beta_{34}) \begin{pmatrix} 0 & \frac{C_F}{2} \\ -N & 0 \end{pmatrix} \right] , \end{align} and \begin{align} \bm{\Gamma}_s^{gg} &= \left[ N \, \gamma_{\text{cusp}}(\alpha_s) \left( \ln\frac{\Lambda^2}{\mu^2} - i\pi \right) + C_F \, \gamma_{\text{cusp}}(\beta_{34},\alpha_s) + 2\gamma_s^g(\alpha_s) + 2\gamma^Q(\alpha_s) \right] \bm{1} \nonumber \\ &\quad + \frac{N}{2} \left[ \gamma_{\text{cusp}}(\alpha_s) \left( \ln\frac{t_1^2}{M^2m_t^2} + i\pi \right) - \gamma_{\text{cusp}}(\beta_{34},\alpha_s) \right] \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \nonumber \\ &\quad + \gamma_{\text{cusp}}(\alpha_s) \, \ln\frac{t_1^2}{u_1^2} \left[ \begin{pmatrix} 0 & \frac{1}{2} & 0 \\ 1 & -\frac{N}{4} & \frac{N^2-4}{4N} \\ 0 & \frac{N}{4} & -\frac{N}{4} \end{pmatrix} + \frac{\alpha_s}{4\pi}\,g(\beta_{34}) \begin{pmatrix} 0 & \frac{N}{2} & 0 \\ -N & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \right] , \end{align} where the various functions were given in Section~\ref{sec:form}. It should be noted that starting from the NNLO one also needs to renormalize the strong coupling $\alpha_s$. Inserting the renormalization matrices and the bare soft functions into Eq.~(\ref{eq:sIJren2}), we find that all the poles in $\epsilon$ cancel for both the $q\bar{q}$ channel and the $gg$ channel. This provides a strong check on the correctness of our calculation. We can then safely take the limit $\epsilon \to 0$ and obtain finite soft function matrices $\tilde{\bm{s}}^{q\bar{q}}_{IJ}(L,\beta,\cos\theta,\mu)$ and $\tilde{\bm{s}}^{gg}_{IJ}(L,\beta,\cos\theta,\mu)$. These are the main results of our paper. Since the expressions are rather lengthy, we do not give them here and provide instead an electronic file included in the arXiv submission.
For illustration purposes, in the following we list the $\mu$-independent terms proportional to the number of light quarks $N_l$ in the octet-octet entry for the $q\bar{q}$ channel: \begin{align} \tilde{\bm{s}}^{q\bar{q},(2)}_{22}(0,\beta,y) \bigg|_{T_FN_l} &= \frac{16 (7\beta^2-126\beta+127)}{243\beta} G_{1} + \frac{8(5\beta^2+90\beta+53)}{81\beta} \big( G_{-1,-1} - G_{-1,1} - 2G_{0,-1} \big) \nonumber \\ &\hspace{-6em} - \frac{16(7\beta^2+126\beta+127)}{243\beta} G_{-1} + \frac{8(5\beta^2 - 90\beta + 53)}{81\beta} \big( G_{1,-1} - G_{1,1} + 2G_{0,1} \big) \nonumber \\ &\hspace{-6em} + \frac{8(\beta^2+18\beta+1)}{27\beta} \big( -G_{-1,-1,-1} + G_{-1,-1,1} + 2G_{-1,0,-1} - 2G_{-1,0,1} - G_{-1,1,-1} + G_{-1,1,1} \nonumber \\ &\hspace{-6em} + 2G_{0,-1,-1} - 2G_{0,-1,1} - 4G_{0,0,-1} \big) + \frac{8(\beta^2-18\beta+1)}{27\beta} \big( 4G_{0,0,1} + 2G_{0,1,-1} - 2G_{0,1,1} \nonumber \\ &\hspace{-6em} - G_{1,-1,-1} + G_{1,-1,1} + 2G_{1,0,-1} - 2G_{1,0,1} - G_{1,1,-1} + G_{1,1,1} \big) \nonumber \\ &\hspace{-6em} + \frac{32}{243} \bigg[ 28G_{-1/y} + 98G_{1/y} + 30 \big( 2G_{0,-1/y} + G_{-1/y,-1} + G_{-1/y,1} - 2G_{-1/y,-1/y} \big) \nonumber \\ &\hspace{-6em} + 105 \big( 2G_{0,1/y} + G_{1/y,-1} + G_{1/y,1} - 2G_{1/y,1/y} \big) + 18 \big( 4G_{0,0,-1/y} + 2G_{0,-1/y,-1} + 2G_{0,-1/y,1} \nonumber \\ &\hspace{-6em} - 4G_{0,-1/y,-1/y} - G_{-1/y,-1,-1} + G_{-1/y,-1,1} + 2G_{-1/y,0,-1} + 2G_{-1/y,0,1} - 4G_{-1/y,0,-1/y} \nonumber \\ &\hspace{-6em} + G_{-1/y,1,-1} - G_{-1/y,1,1} - 2G_{-1/y,-1/y,-1} - 2G_{-1/y,-1/y,1} + 4G_{-1/y,-1/y,-1/y} \big) \nonumber \\ &\hspace{-6em} + 63 \big( 4G_{0,0,1/y} + 2G_{0,1/y,-1} + 2G_{0,1/y,1} - 4G_{0,1/y,1/y} - G_{1/y,-1,-1} + G_{1/y,-1,1} + 2G_{1/y,0,-1} \nonumber \\ &\hspace{-6em} + 2G_{1/y,0,1} - 4G_{1/y,0,1/y} + G_{1/y,1,-1} - G_{1/y,1,1} - 2G_{1/y,1/y,-1} - 2G_{1/y,1/y,1} + 4G_{1/y,1/y,1/y} \big) \nonumber \\ &\hspace{-6em} - \frac{332}{3} - \frac{5\pi^2}{2} + 6\zeta_3 \bigg] \, , \end{align} where we have set the number of colors $N_c = 3$ in order to shorten the expression, and defined the abbreviations $G_{a_1,\ldots,a_n} \equiv G(a_1,\ldots,a_n;\beta)$. \subsection{Validations of the results} While our results for the NNLO soft functions are novel, it is possible to partially validate them by checking their consistency with some known results in the literature. We have performed three checks: 1) that they satisfy RGEs according to the anomalous dimension matrices calculated in \cite{Ferroglia:2009ep, Ferroglia:2009ii}; 2) that they reproduce in the threshold limit the results of \cite{Belitsky:1998tc, Czakon:2013hxa}; 3) that they correctly factorize in the boosted limit according to the factorization formula given in \cite{Ferroglia:2012ku}. As discussed in Section~\ref{sec:form}, the renormalized soft function satisfies a renormalization group equation \begin{align} \label{eq:srge2} \frac{d}{d\mu} \tilde{\bm{s}}(L,\beta,\cos\theta,\mu) &= - \bm{\Gamma}_s^\dagger(L,\beta,\cos\theta,\mu) \, \tilde{\bm{s}}(L,\beta,\cos\theta,\mu) - \tilde{\bm{s}}(L,\beta,\cos\theta,\mu) \, \bm{\Gamma}_s(L,\beta,\cos\theta,\mu) \, . \end{align} This property is closely related to the renormalization in Eq.~(\ref{eq:sIJren}). Given that our result is correctly renormalized, we expect that it naturally satisfies Eq.~(\ref{eq:srge2}). Nevertheless, we have calculated the left-hand and right-hand sides of the above equation and indeed find consistency.
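As an aside, GPLs such as the $G_{-1,1}$ appearing in the expression listed above can be evaluated numerically straight from the defining recursion when a dedicated package such as \texttt{GiNaC} is not at hand. A minimal (slow but transparent) Python sketch, adequate for the convergent cases encountered here:
\begin{verbatim}
import math
from scipy.integrate import quad

def G(indices, x):
    """G(a1,...,an; x) directly from the iterated-integral definition;
    use a dedicated library for production-level speed and precision."""
    if not indices:
        return 1.0
    if all(a == 0 for a in indices):
        return math.log(x)**len(indices) / math.factorial(len(indices))
    val, _ = quad(lambda t: G(indices[1:], t) / (t - indices[0]), 0.0, x)
    return val

b = 0.6
print(G((-1,), b), math.log(1.0 + b))   # G(-1;b) = log(1+b)
print(G((-1, 1), b))                    # the G_{-1,1} above, at beta = 0.6
\end{verbatim}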
In the threshold limit $s \to 4m_t^2$ or $\beta \to 0$, the top quark and the anti-top quark are at rest in the partonic center-of-mass frame. In this case, if the $t\bar{t}$ pair forms a color-singlet state, the soft gluons cannot probe it, and the situation is no different from the Drell-Yan process or the Higgs boson production process. The corresponding elements of the soft function matrix then reduce to those calculated in \cite{Belitsky:1998tc}. On the other hand, if the $t\bar{t}$ pair forms a color-octet state, the corresponding matrix element in the threshold limit has been calculated in \cite{Czakon:2013hxa}. We can therefore check the consistency of our result by taking the limit $\beta \to 0$ in our expressions. This is easy to do in our formalism since $\beta = 0$ is essentially the boundary point of the differential equations. One can also directly take the $\beta \to 0$ limit starting from the explicit expressions of $\tilde{\bm{s}}(L,\beta,\cos\theta,\mu)$, which is expressed in terms of GPLs of $\beta$. Using the property that $G(a_1,\ldots,a_n;0) = 0$ unless all the indices are zero, this limit is rather straightforward to obtain. We have done this simple exercise and find that the results are in perfect agreement with those in \cite{Belitsky:1998tc} and \cite{Czakon:2013hxa}. In particular, for the $q\bar{q}$ channel we find the diagonal entries to be \begin{align} \frac{\tilde{\bm{s}}^{q\bar{q},(2)}_{11}(L,\beta \to 0, \cos\theta)}{\tilde{\bm{s}}^{q\bar{q},(0)}_{11}} &= \frac{C_F^2}{2} \left( 2L^2 + \frac{\pi^2}{3} \right)^2 + C_F C_A \left[ -\frac{22}{9} L^3 + \left( \frac{134}{9} - \frac{2\pi^2}{3} \right) L^2 \right. \nonumber \\ &+ \left. \left( -\frac{808}{27} + 28\zeta_3 \right) L + \frac{2428}{81} + \frac{67\pi^2}{54} - \frac{22\zeta_3}{9} - \frac{\pi^4}{3} \right] \nonumber \\ &+ C_F T_F N_l \left( \frac{8}{9} L^3 - \frac{40}{9} L^2 + \frac{224}{27} L - \frac{656}{81} - \frac{10\pi^2}{27} + \frac{8\zeta_3}{9} \right) , \\ \frac{\tilde{\bm{s}}^{q\bar{q},(2)}_{22}(L,\beta \to 0, \cos\theta)}{\tilde{\bm{s}}^{q\bar{q},(0)}_{22}} &= \frac{1}{2} \left[ C_F \left( 2L^2 + \frac{\pi^2}{3} \right) + C_A (-2L + 4) \right]^2 + C_F C_A \left[ -\frac{22}{9} L^3 \right. \nonumber \\ &+ \left. \left( \frac{134}{9} - \frac{2\pi^2}{3} \right) L^2 + \left( -\frac{808}{27} + 28\zeta_3 \right) L + \frac{2428}{81} + \frac{67\pi^2}{54} - \frac{22\zeta_3}{9} - \frac{\pi^4}{3} \right] \nonumber \\ &+ C_F T_F N_l \left( \frac{8}{9} L^3 - \frac{40}{9} L^2 + \frac{224}{27} L - \frac{656}{81} - \frac{10\pi^2}{27} + \frac{8\zeta_3}{9} \right) \nonumber \\ &+ C_A^2 \left[ \frac{11}{3} L^2 + \left( -\frac{230}{9} + \frac{2\pi^2}{3} - 4\zeta_3 \right) L + \frac{1568}{27} + \frac{2\pi^2}{3} - 10\zeta_3 + \frac{13\pi^4}{180} \right] \nonumber \\ &+ C_A T_F N_l \left( -\frac{4}{3} L^2 + \frac{88}{9} L -\frac{640}{27} \right) . \end{align} The results for the $gg$ channel can be simply obtained from the above expressions by the replacement $C_F \to C_A$. An interesting subtlety in the above exercise is that the non-diagonal entries of the soft function do not vanish in the limit $\beta \to 0$. Instead, they develop logarithmically divergent behavior in that limit. These non-vanishing terms arise from the three-Wilson-line virtual-real diagrams and are therefore purely imaginary. For example, we have \begin{align} \tilde{\bm{s}}^{q\bar{q},(2)}_{12}(L,\beta \to 0, \cos\theta) &= -2i\pi C_F C_A \left[ L^2 - 4 \left( \ln(4\beta) + 1 \right) L + 2\ln^2(4\beta) + 8\ln(4\beta) + \pi^2 \right] .
\end{align} This is actually expected since the $f_2$ function in the anomalous dimension matrix has a similar property in the threshold limit (see, e.g., Section 3.4 of \cite{Ferroglia:2009ii}). However, such a behavior cannot be seen in the calculations of \cite{Belitsky:1998tc, Czakon:2013hxa}, and is a novel feature of our results. Besides the above ``trivial'' checks, a highly non-trivial cross-check of our result is provided by the opposite, boosted limit $\beta \to 1$ or $M^2 \gg 4m_t^2$. In this limit, it was shown in \cite{Ferroglia:2012ku} that the soft function should factorize in the form \begin{align} \label{eq:sfac} \tilde{\bm{s}}_{\text{massive}} \! \left(\ln\frac{\Lambda^2}{\mu^2},\beta,\cos\theta,\mu\right) = \tilde{\bm{s}}_{\text{massless}} \! \left(\ln\frac{\Lambda^2}{\mu^2},\cos\theta,\mu\right) \, \tilde{s}_D^2 \! \left(\ln\frac{m_t^2\Lambda^2}{M^2\mu^2},\mu\right) + \mathcal{O}(m_t^2/M^2) \, , \end{align} where $\tilde{\bm{s}}_{\text{massive}}$ is the massive soft function calculated in this work, $\tilde{\bm{s}}_{\text{massless}}$ is the soft function with top quarks treated as massless (which was calculated at NNLO in \cite{Ferroglia:2012uy}\footnote{It should be noted that \cite{Ferroglia:2012uy} did not calculate the three-Wilson-line virtual-real contributions to the massless soft function $\tilde{\bm{s}}_{\text{massless}}$, which however can be easily extracted from the results in this paper. It should be possible to perform the massless calculation explicitly and compare with the expressions obtained here.}), and $\tilde{s}_D$ is a soft-collinear partonic fragmentation function describing a boosted top quark evolving into a top quark plus soft radiation. The soft-collinear fragmentation function $\tilde{s}_D$ has not been directly calculated in the literature. In \cite{Gardi:2005yi, Neubert:2007je}, it was related to the shape function in $B$-meson decays and was extracted from existing results for the latter. In Appendix~A of \cite{Ferroglia:2012ku}, however, it was found that this result for $\tilde{s}_D$ is inconsistent with other related calculations. To understand this, we note that $\tilde{s}_D$ should satisfy another factorization formula \begin{align} \label{eq:Dfac} \tilde{D}_{t/t}(\Lambda^2/s,m_t,\mu) = C_D(m_t,\mu) \, \tilde{s}_D(L_D,\mu) + \mathcal{O}(\Lambda^2/M^2) \, , \end{align} where $L_D \equiv \ln\big(m_t^2\Lambda^2/M^2\mu^2\big)$ (note that $\tilde{s}_D$ depends on a see-saw scale $m_t\Lambda/M$). In the above formula, $\tilde{D}_{t/t}$ is the full partonic fragmentation function of the top quark in the moment space, and $C_D$ is a hard-collinear matching coefficient. The momentum-space version of $\tilde{D}_{t/t}$ was calculated in \cite{Melnikov:2004bm}. Using this result and the result of $\tilde{s}_D$ from \cite{Gardi:2005yi, Neubert:2007je}, it is possible to extract the form of $C_D$ from Eq.~(\ref{eq:Dfac}). However, the function $C_D$ could also be extracted from the high-energy behavior of scattering amplitudes involving heavy quarks \cite{Mitov:2006xs}. The main conclusion of Appendix~A of \cite{Ferroglia:2012ku} is that these two extractions do not coincide! Using our new result for the NNLO massive soft function, one can for the first time directly validate the factorization formula (\ref{eq:sfac}) at the NNLO, and extract the soft-collinear fragmentation function $\tilde{s}_D$ at this order. It is then possible to resolve the conflict between the two results of $C_D$.
In order to do this, we need to take the limit $m_t \to 0$ or $\beta \to 1$, and carefully extract the logarithms of $m_t$. Such logarithms arise from GPLs which are singular in the limit $\beta \to 1$. In order to extract such singularities, we employ properties of GPLs to convert their argument to $\cos\theta$, and put all $\beta$ dependence into the weights. We have used the program package \texttt{HyperInt} \cite{Panzer:2014caa} to accomplish this. After this conversion, the singular terms become powers of $\ln(1-\beta) \approx \ln(2m_t^2/M^2)$. Finally, we insert the results and the massless soft function from \cite{Ferroglia:2012uy} into Eq.~(\ref{eq:sfac}), and we find the NNLO coefficient of $\tilde{s}_D$ to be \begin{align} \tilde{s}_D^{(2)}(L_D,\mu) &= \frac{8}{9} L_D^4 + \left( \frac{76}{9} - \frac{8}{27} N_l \right) L_D^3 + \left( -\frac{104}{9} + \frac{76\pi^2}{27} + \frac{16}{27} N_l \right) L_D^2 \nonumber \\ & + \left( \frac{440}{27} + \frac{416\pi^2}{27} - 72\zeta_3 + \frac{16}{81} N_l - \frac{16\pi^2}{27} N_l \right) L_D \nonumber \\ & - \frac{1304}{81} - \frac{89\pi^2}{9} + \frac{1213\pi^4}{405} - \frac{1132\zeta_3}{9} + \left( -\frac{16}{243} + \frac{14\pi^2}{27} + \frac{88\zeta_3}{27} \right) N_l \, . \end{align} The above formula differs from that in \cite{Gardi:2005yi, Neubert:2007je} by a constant term $4\pi^2C_AC_F$, which is essentially the inconsistency discussed in Appendix~A of \cite{Ferroglia:2012ku}. Accordingly, the NNLO coefficient of the function $C_D$ is given by \begin{align} C_D^{(2)}(L_m,\mu) &= \frac{8}{9} L_m^4 + \left( \frac{20}{3} - \frac{8}{27} N_l \right) L_m^3 + \left( \frac{406}{9} - \frac{28\pi^2}{27} - \frac{52}{27} N_l \right) L_m^2 \nonumber \\ &+ \left( \frac{2594}{27} + \frac{248\pi^2}{27} - \frac{232\zeta_3}{3} - \frac{308}{81} N_l -\frac{16\pi^2}{27} N_l \right) L_m \nonumber \\ &+ \frac{21553}{162} + \frac{59\pi^2}{3} - \frac{749\pi^4}{405} + \frac{260\zeta_3}{9} + \frac{16\pi^2}{9} \ln 2 - \left( \frac{1541}{243} + \frac{74\pi^2}{81} + \frac{104\zeta_3}{27} \right) N_l \, , \end{align} where $L_m \equiv \ln\big(\mu^2/m_t^2\big)$. The above expression for $C_D$ is consistent with both Eq.~(\ref{eq:Dfac}) and the high-energy limit of the scattering amplitude. The inconsistency in the form of $C_D$ found in Appendix~A of \cite{Ferroglia:2012ku} is thus resolved by our calculation. The remaining discrepancy then boils down to the relation between the $B$-meson shape function and the soft fragmentation function $\tilde{s}_D$. It would be interesting to directly compute $\tilde{s}_D^{(2)}$ from its operator definition in the future, in order to clarify its difference from the shape function. \subsection{Numerical results} The soft function contributes to the differential cross section through the factorization formula in the soft limit \cite{Kidonakis:1996aq, Kidonakis:1997gm, Ahrens:2010zv} \begin{align} \frac{ d\tilde{\sigma} \big( N \big) }{dM \, d\cos\theta} \propto \sum_{ij=q\bar{q},gg} \mathrm{Tr} \! \left[ \bm{H}_{ij} \! \left( \ln\frac{M^2}{\mu^2},\beta,\cos\theta,\mu \right) \tilde{\bm{s}}_{ij} \!
\left( \ln\frac{M^2}{\bar{N}^2\mu^2},\beta,\cos\theta,\mu \right) \right] + \mathcal{O}(1/N) \, , \end{align} where $\bar{N} = Ne^{\gamma_E}$ with $N=M/\Lambda$ being the moment variable, and $\bm{H}_{ij}$ are the hard functions, whose matrix elements at the leading order are given by \begin{align} \bm{H}_{q\bar{q}}^{(0)} &= \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix} \Bigg[ \frac{t_1^2 + u_1^2}{M^4} + \frac{2m_t^2}{M^2} \Bigg] \, , \nonumber \\ \bm{H}_{gg}^{(0)} &= \begin{pmatrix} \frac{1}{N^2} & \frac{1}{N}\,\frac{t_1-u_1}{M^2} & \frac{1}{N} \\ \frac{1}{N}\,\frac{t_1-u_1}{M^2} & \frac{(t_1-u_1)^2}{M^4} & \frac{t_1-u_1}{M^2} \\ \frac{1}{N} & \frac{t_1-u_1}{M^2} & 1 \end{pmatrix} \frac{M^4}{2t_1u_1} \Bigg[ \frac{t_1^2+u_1^2}{M^4} + \frac{4m_t^2}{M^2} - \frac{4m_t^4}{t_1u_1} \Bigg] \, . \end{align} In order to assess the numerical impact of the NNLO correction to the soft function, we define the following quantities: \begin{align} \mathcal{S}_{ij}^{(n)}(\beta,\mu/\mu_{\text{def}}) = \int_{-1}^1 d\cos\theta \, \left( \frac{\alpha_s}{4\pi} \right)^n \mathrm{Tr} \! \left[ \bm{H}^{(0)}_{ij} \! \left( \beta,\cos\theta \right) \tilde{\bm{s}}^{(n)}_{ij} \! \left( \ln\frac{\Lambda^2}{\mu^2},\beta,\cos\theta \right) \right] , \end{align} with $n=0$, 1, 2 denoting the LO, NLO and NNLO soft contributions, respectively. Here, $\mu_{\text{def}}$ is the default soft scale, and we consider two choices for it: $\mu_{\text{def},1} = \Lambda$ (which corresponds to the choice made in \cite{Pecjak:2016nee}) and $\mu_{\text{def},2} = \Lambda \sqrt{1-\beta^2\cos^2\theta}$ (which was found to be a better choice in \cite{Czakon:2018nun}). We then define the ratios \begin{align} R_{ij}^{\text{NLO}}(\beta,\mu/\mu_{\text{def}}) = \frac{\mathcal{S}_{ij}^{(1)}(\beta,\mu/\mu_{\text{def}})}{\mathcal{S}_{ij}^{(0)}(\beta,\mu/\mu_{\text{def}})} \, , \quad R_{ij}^{\text{NNLO}}(\beta,\mu/\mu_{\text{def}}) = \frac{\mathcal{S}_{ij}^{(2)}(\beta,\mu/\mu_{\text{def}})}{\mathcal{S}_{ij}^{(0)}(\beta,\mu/\mu_{\text{def}})} \, , \end{align} to quantify the relative size of the NLO and NNLO soft corrections with respect to the LO contribution. In reality, the strong coupling $\alpha_s$ should also be evaluated at the soft scale $\mu$, and hence depends on the moment variable $N$. Here, for illustration purposes, we fix its value at $\alpha_s = 0.118$. This does not affect the qualitative behavior of the soft corrections shown below. \begin{figure}[t!] \begin{center} \includegraphics[width=0.4\textwidth]{./figs/qqbetaLambda.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/qqbetaHT.eps} \\[1ex] \includegraphics[width=0.4\textwidth]{./figs/ggbetaLambda.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/ggbetaHT.eps} \end{center} \vspace{-2ex} \caption{\label{fig:Sbeta}Relative soft corrections as a function of $\beta$ for two choices of the default scale: $\mu_{\text{def},1} = \Lambda$ (left) and $\mu_{\text{def},2} = \Lambda \sqrt{1-\beta^2\cos^2\theta}$ (right).} \end{figure} In Figure~\ref{fig:Sbeta}, we show the relative soft corrections as a function of $\beta$ for $\mu = \mu_{\text{def}}$, with the two choices of $\mu_{\text{def}}$. It can be seen that in the $q\bar{q}$ channel, the soft function is not very sensitive to the choice of the default soft scale, and the NNLO correction stays below 5\% in the whole range of $\beta$. On the other hand, the behavior of the $gg$ channel soft function is rather different.
With the choice $\mu_{\text{def}} = \Lambda$, both the NLO and NNLO corrections become very large in the boosted limit $\beta \to 1$, where the NNLO correction can reach about 28\%. When changing to the choice $\mu_{\text{def}} = \Lambda \sqrt{1-\beta^2\cos^2\theta}$, the corrections are well under control in the boosted region, where the NNLO correction is only about 8\%. These findings are consistent with the discussion in \cite{Czakon:2018nun}, where the second choice was identified as the better one, especially for boosted top quark pair production. \begin{figure}[t!] \begin{center} \includegraphics[width=0.4\textwidth]{./figs/qqbeta1.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/ggbeta1.eps} \\[0.5ex] \includegraphics[width=0.4\textwidth]{./figs/qqbeta5.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/ggbeta5.eps} \\[0.5ex] \includegraphics[width=0.4\textwidth]{./figs/qqbeta9.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/ggbeta9.eps} \\[0.5ex] \includegraphics[width=0.4\textwidth]{./figs/qqbeta99.eps} \quad \includegraphics[width=0.4\textwidth]{./figs/ggbeta99.eps} \end{center} \vspace{-2ex} \caption{\label{fig:Smu}Relative soft corrections as a function of $\mu/\mu_{\text{def}}$ for two choices of the default scale: $\mu_{\text{def},1} = \Lambda$ (dashed curves) and $\mu_{\text{def},2} = \Lambda \sqrt{1-\beta^2\cos^2\theta}$ (solid curves).} \end{figure} In Figure~\ref{fig:Smu}, we show the relative soft corrections as a function of $\mu/\mu_{\text{def}}$ for four values of $\beta$: $\beta = 0.1$ (the threshold region), $\beta = 0.5$ (the dominant region for the total cross section), $\beta = 0.9$ (the boosted region) and $\beta = 0.99$ (the ultra-boosted region). For $\beta = 0.1$, the two choices of the default soft scale are almost indistinguishable as the solid curves and the dashed curves overlap with each other in the first row of Figure~\ref{fig:Smu}. As $\beta$ becomes larger, the two choices start to differ. We observe that the second option is a good choice in the whole range of $\beta$, in the sense that the soft corrections remain small when the soft scale is varied around the default value. This is particularly true for the $gg$ channel, where the $t$- and $u$-channel propagators in the tree-level hard function push the average value of $\cos^2\theta$ towards unity when the top quarks are highly boosted. In that case the effective soft scale is much smaller than $\Lambda$, and is better modeled by the $\theta$-dependent function $\mu_{\text{def},2} = \Lambda \sqrt{1-\beta^2\cos^2\theta}$. While the above findings have been advocated in \cite{Czakon:2018nun} from studying the massless soft function, our analysis using the massive soft function provides more comprehensive information on the behavior of the soft corrections in the whole range of phase space. \section{Conclusion and outlook} \label{sec:con} In this paper we have calculated the threshold soft function for top quark pair production at the NNLO. We used integration-by-parts identities to reduce the double-real contributions to 23 master integrals, and the virtual-real contributions to 36 master integrals. We then employed the method of differential equations to solve for the master integrals to arbitrary orders in the dimensional regulator $\epsilon$. Our final results are fully analytic, and can be entirely written in terms of GPLs, which makes them efficient to evaluate numerically.
Our result represents the first ever NNLO soft function for processes involving a non-trivial color structure and two massive partons with full velocity dependence. Due to the complicated color structure, the renormalized soft function calculated in this paper is a $2\times 2$ ($3\times 3$) matrix for $q\bar{q} \to t\bar{t}$ ($gg \to t \bar{t}$) in color space. The scale-dependence of the soft function is in full agreement with the two-loop anomalous dimension matrix calculated in \cite{Ferroglia:2009ep, Ferroglia:2009ii, Chien:2011wz} from virtual amplitudes, as expected from the cancellation of infrared singularities. However, the setup of the calculation for the soft function is different from the calculation for virtual amplitudes, and therefore represents an independent confirmation of previous results on the two-loop anomalous dimensions. We find that the previously calculated three-parton correlation function $f_2$ in the anomalous dimension comes entirely from virtual-real diagrams involving the two massive Wilson lines. This suggests a Coulomb/Glauber origin of the three-parton correlation. It would be interesting to investigate this further in the future. Our result contains the full velocity dependence of the massive partons, which generalizes previous results in more restricted kinematic configurations to fully generic configurations. We have checked that in the limit $\beta \to 0$, our result reproduces the corresponding results for color singlet/octet production. In the high energy limit $\beta \to 1$, our result exhibits the expected factorization property of mass logarithms, which leads to a consistent extraction of the soft fragmentation function. We also find full agreement with the NNLO massless soft function in \cite{Ferroglia:2012uy}, up to the three-parton virtual-real contributions not calculated in that paper. Our result is an important ingredient in the resummation of threshold logarithms for top quark pair production beyond the NNLL accuracy. It is interesting to study its phenomenological implications for the LHC experiments and future high energy colliders, following the framework of \cite{Pecjak:2016nee, Czakon:2018nun}. For this purpose, we have studied the numerical impact of the NNLO corrections to the soft function. We find that using a $\theta$-dependent default choice for the soft scale $\mu_{\text{def}} = \Lambda \sqrt{1 - \beta^2 \cos^2\theta}$ makes the perturbative behavior well under control in the whole phase space, from production at rest to highly-boosted production. This is in accordance with the findings of \cite{Czakon:2018nun}. To achieve the next-to-next-to-next-to-leading logarithmic accuracy for resummation, one also needs the two-loop hard function and the three-loop soft anomalous dimension matrix. The two-loop hard function can be extracted from the virtual amplitudes calculated in \cite{Baernreuther:2013caa}. Concerning the three-loop soft anomalous dimensions for multi-leg scattering processes, the result in the purely massless case has been obtained in \cite{Almelid:2015jia}, but the result in the massive case is still lacking and deserves future investigation. Finally, while our formalism is generic, in the practical calculation we have used the fact that the top quark and the anti-top quark are back-to-back in the partonic center-of-mass frame. It would be interesting to consider the more general case, where the $t\bar{t}$ pair is recoiled by additional particles.
This is relevant to the production processes of top quark pairs associated with a Higgs boson or an electroweak gauge boson. At the NLO, the general result of \cite{Ahrens:2010zv} can be utilized, and has been successfully applied to $t\bar{t}H$ production in \cite{Broggio:2015lya, Broggio:2016lfj}, to $t\bar{t}W$ production in \cite{Li:2014ula, Broggio:2016zgg}, and to $t\bar{t}Z$ production in \cite{Broggio:2017kzi}. It remains open whether a similar general result can be derived at the NNLO, which we leave for future work. \section*{Acknowledgments} This work was supported in part by the National Natural Science Foundation of China under Grants No. 11575004 and 11635001.
{ "timestamp": "2018-06-01T02:04:26", "yymm": "1804", "arxiv_id": "1804.05218", "language": "en", "url": "https://arxiv.org/abs/1804.05218" }
\section{Introduction} As part of the proof of his eponymous theorem~\cite{Szemeredi75} on arithmetic progressions in dense sets of integers, Szemer\'edi developed (a variant of what is now known as) the graph {\em regularity lemma}~\cite{Szemeredi78}. The lemma roughly states that the vertex set of every graph can be partitioned into a bounded number of parts such that almost all the bipartite graphs induced by pairs of parts in the partition are quasi-random. In the past four decades this lemma has become one of the (if not the) most powerful tools in extremal combinatorics, with applications in many other areas of mathematics. We refer the reader to~\cite{KomlosShSiSz02,RodlSc10} for more background on the graph regularity lemma, its many variants and its numerous applications. Perhaps the most important and well-known application of the graph regularity lemma is the original proof of the {\em triangle removal lemma}, which states that if an $n$-vertex graph $G$ contains only $o(n^3)$ triangles, then one can turn $G$ into a triangle-free graph by removing only $o(n^2)$ edges (see \cite{ConlonFox13} for more details). It was famously observed by Ruzsa and Szemer\'edi~\cite{RuzsaSz76} that the triangle removal lemma implies Roth's theorem~\cite{Roth54}, the special case of Szemer\'edi's theorem for $3$-term arithmetic progressions. The problem of extending the triangle removal lemma to the hypergraph setting was raised by Erd\H{o}s, Frankl and R\"odl~\cite{ErdosFrRo86}. One of the main motivations for obtaining such a result was the observation of Frankl and R\"odl~\cite{FrankRo02} (see also~\cite{Solymosi04}) that such a result would allow one to extend the Ruzsa--Szemer\'edi~\cite{RuzsaSz76} argument and thus obtain an alternative proof of Szemer\'edi's theorem for progressions of arbitrary length. The quest for a hypergraph regularity lemma, which would allow one to prove a hypergraph removal lemma, took about 20 years. The first milestone was the result of Frankl and R\"odl~\cite{FrankRo02}, who obtained a regularity lemma for $3$-uniform hypergraphs. About 10 years later, the approach of~\cite{FrankRo02} was extended to hypergraphs of arbitrary uniformity by R\"odl, Skokan, Nagle and Schacht~\cite{NagleRoSc06, RodlSk04}. At the same time, Gowers~\cite{Gowers07} obtained an alternative version of the regularity lemma for $k$-uniform hypergraphs (from now on we will use $k$-graphs instead of $k$-uniform hypergraphs). Shortly after, Tao~\cite{Tao06} and R\"odl and Schacht~\cite{RodlSc07,RodlSc07-B} obtained two more versions of the lemma. As it turned out, the main difficulty with obtaining a regularity lemma for $k$-graphs was defining the correct notion of hypergraph regularity that would: $(i)$ be strong enough to allow one to prove a counting lemma, and $(ii)$ be weak enough to be satisfied by every hypergraph (see the discussion in~\cite{Gowers06} for more on this issue). And indeed, the above-mentioned variants of the hypergraph regularity lemma rely on four different notions of quasi-randomness, which to this date are still not known to be equivalent\footnote{This should be contrasted with the setting of graphs in which (almost) all notions of quasi-randomness are not only known to be equivalent but even effectively equivalent. See e.g.~\cite{ChungGrWi89}.} (see~\cite{NaglePoRoSc09} for some partial results). 
What all of these proofs {\em do} have in common however, is that they supply only Ackermann-type bounds for the size of a regular partition.\footnote{Another variant of the hypergraph regularity lemma was obtained in~\cite{ElekSz12}. This approach does not supply any quantitative bounds.} More precisely, if we let $\Ack_1(x)=2^x$ and then define $\Ack_k(x)$ to be the $x$-times iterated\footnote{$\Ack_2(x)$ is thus a tower of exponents of height $x$, $\Ack_3(x)$ is the so-called wowzer function, etc.} version of $\Ack_{k-1}$, then all the above proofs guarantee to produce a regular partition of a $k$-graph whose order can be bounded from above by an $\Ack_k$-type function. One of the most important applications of the $k$-graph regularity lemma was that it gave the first explicit bounds for the multidimensional generalization of Szemer\'edi's theorem, see~\cite{Gowers07}. The original proof of this result, obtained by Furstenberg and Katznelson~\cite{FurstenbergKa78}, relied on Ergodic Theory and thus supplied no quantitative bounds at all. Examining the reduction between these theorems~\cite{Solymosi04} reveals that if one could improve the Ackermann-type bounds for the $k$-graph regularity lemma, by obtaining (say) $\Ack_{k_0}$-type upper bounds (for all $k$), then one would obtain the first primitive recursive bounds for the multidimensional generalization of Szemer\'edi's theorem. Let us note that obtaining such bounds just for van der Waerden's theorem~\cite{Shelah89} and Szemer\'edi's theorem~\cite{Szemeredi75} (which are two special cases) were open problems for many decades until they were finally solved by Shelah~\cite{Shelah89} and Gowers~\cite{Gowers01}, respectively. Further applications of the $k$-graph regularity lemma (and the hypergraph removal lemma in particular) are described in~\cite{RodlNaSkScKo05} and~\cite{RodlTeScTo06} as well as in R\"odl's recent ICM survey~\cite{Rodl14}. A famous result of Gowers~\cite{Gowers97} states that the $\Ack_2$-type upper bounds for graph regularity are unavoidable. Several improvements~\cite{FoxLo17}, variants~\cite{ConlonFo12,KaliSh13,MoshkovitzSh18} and simplifications~\cite{MoshkovitzSh16} of Gowers' lower bound were recently obtained, but no analogous lower bound was derived even for $3$-graph regularity. The numerous applications of the hypergraph regularity lemma naturally lead to the question of whether one can improve upon the Ackermann-type bounds mentioned above and obtain primitive recursive bounds for the $k$-graph regularity lemma. Tao~\cite{Tao06-h} predicted that the answer to this question is negative, in the sense that one cannot obtain better than $\Ack_k$-type upper bounds for the $k$-graph regularity lemma for every $k \ge 2$. The main result presented here and in the followup \cite{MSk} confirms this prediction. \begin{theo}{\bf[Main result, informal statement]}\label{thm:main-informal} The following holds for every $k\geq 2$: every regularity lemma for $k$-graphs satisfying some mild conditions can only guarantee to produce partitions of size bounded by an $\Ack_k$-type function. \end{theo} In this paper we will focus on proving the key ingredient needed for obtaining Theorem~\ref{thm:main-informal}, stated as Lemma~\ref{theo:core} in Subsection~\ref{subsec:overview}, and on showing how it can be used in order to prove Theorem~\ref{thm:main-informal} for $k=3$.
In a nutshell, the key idea is to use the graph construction given by Lemma~\ref{theo:core} in order to construct a $3$-graph: by taking a certain ``product'' of two graphs that are hard for graph regularity, one gets a $3$-graph that is hard for $3$-graph regularity. See the discussion following Lemma~\ref{theo:core} in Subsection~\ref{subsec:overview}. Dealing with $k=3$ in this paper will allow us to present all the new ideas needed in order to actually prove Theorem~\ref{thm:main-informal} for arbitrary $k$, in the slightly friendlier setting of $3$-graphs. In a followup paper~\cite{MSk}, we will show how Lemma~\ref{theo:core} can be used in order to prove Theorem~\ref{thm:main-informal} for all $k \ge 2$. In this paper we will also show how to derive from Theorem~\ref{thm:main-informal} tight lower bounds for the $3$-graph regularity lemmas due to Frankl and R\"odl~\cite{FrankRo02} and to Gowers~\cite{Gowers06}. \begin{coro}\label{coro:FR-LB} There is an $\Ack_3$-type lower bound for the $3$-graph regularity lemmas of Frankl and R\"odl~\cite{FrankRo02} and of Gowers~\cite{Gowers06}. \end{coro} In \cite{MSk} we will show how to derive from Theorem~\ref{thm:main-informal} a tight lower bound for the $k$-graph regularity lemma due to R\"odl and Schacht~\cite{RodlSc07}. \begin{coro}\label{coro:RS-LB} There is an $\Ack_k$-type lower bound for the $k$-graph regularity lemma of R\"odl and Schacht~\cite{RodlSc07}. \end{coro} Before getting into the gory details of the proof, let us informally discuss what we think are some interesting aspects of the proof of Theorem \ref{thm:main-informal}. \paragraph{Why is it hard to ``step up''?} The reason why the upper bound for graph regularity is of tower-type is that the process of constructing a regular partition of a graph proceeds by a sequence of steps, each increasing the size of the partition exponentially. The main idea behind Gowers' lower bound for graph regularity~\cite{Gowers97} lies in ``reverse engineering'' the proof of the upper bound; in other words, in showing that (in some sense) the process of building the partition using a sequence of exponential refinements is unavoidable. Now, a common theme in all proofs of the hypergraph regularity lemma for $k$-graphs is that they proceed by induction on $k$; that is, in the process of constructing a regular partition of the input $k$-graph $H$, the proof applies the $(k-1)$-graph regularity lemma on certain $(k-1)$-graphs derived from $H$. This is why one gets $\Ack_k$-type upper bounds. So with~\cite{Gowers97} in mind, one might guess that in order to prove a matching lower bound one should ``reverse engineer'' the proof of the upper bound and show that such a process is unavoidable. However, this turns out to be false! As we argued in~\cite{MoshkovitzSh18}, in order to prove an {\em upper bound} for (say) $3$-graph regularity it is in fact enough to iterate a relaxed version of graph regularity which we call the ``sparse regular approximation lemma'' (SRAL for short). Therefore, in order to prove an $\Ack_3$-type {\em lower bound} for $3$-graph regularity one cannot simply ``step up'' an $\Ack_2$-type lower bound for graph regularity. Indeed, a necessary condition would be to prove an $\Ack_2$-type lower bound for SRAL. See also the discussion following Lemma~\ref{theo:core} in Subsection \ref{subsec:overview} on how we actually use a graph construction in order to get a $3$-graph construction.
\paragraph{A new notion of graph/hypergraph regularity:} In a recent paper \cite{MoshkovitzSh18} we proved an $\Ack_2$-type lower bound for SRAL. As it turned out, even this lower bound was not enough to allow us to step up the graph lower bound into a $3$-graph lower bound. To remedy this, in the present paper we introduce an even weaker notion of graph/hypergraph regularity which we call $\langle \d \rangle$-regularity. This notion seems to be right at the correct level of ``strength''; on the one hand, it is strong enough to allow one to prove $\Ack_{k-1}$-type lower bounds for $(k-1)$-graph regularity, while at the same time weak enough to allow one to induct, that is, to use it in order to then prove $\Ack_{k}$-type lower bounds for $k$-graph regularity. Another critical feature of our new notion of hypergraph regularity is that it has (almost) nothing to do with hypergraphs! A disconcerting aspect of all proofs of the hypergraph regularity lemma is that they involve a very complicated nested/inductive structure. Furthermore, one has to introduce an elaborate hierarchy of constants that controls how regular one level of the partition is compared to the previous one. What is thus nice about our new notion is that it involves only various kinds of instances of graph $\langle \d \rangle$-regularity. As a result, our proof is (relatively!) simple. \paragraph{How do we find witnesses for $3$-graph irregularity?} The key idea in Gowers' lower bound~\cite{Gowers97} for graph regularity was in constructing a graph $G$, based on a sequence of partitions ${\cal P}_1,{\cal P}_2,\ldots$ of $V(G)$, with the following inductive property: if a vertex partition $\Z$ refines ${\cal P}_i$ but does not refine ${\cal P}_{i+1}$ then $\Z$ is not $\epsilon$-regular. The key step of the proof of~\cite{Gowers97} is in finding witnesses showing that pairs of clusters of $\Z$ are irregular. The main difficulty in extending this strategy to $k$-graphs already reveals itself in the setting of $3$-graphs. In a nutshell, while in graphs a witness to irregularity of a pair of clusters $A,B \in \Z$ is {\em any} pair of large subsets $A' \sub A$ and $B' \sub B$, in the setting of $3$-graphs we have to find three large edge-sets (called a {\em triad}, see Section~\ref{sec:FR}) that have an additional property: they must together form a graph containing many triangles. It thus seems quite hard to extend Gowers' approach already to the setting of $3$-graphs. By working with the much weaker notion of $\langle \d \rangle$-regularity, we circumvent this issue since two of the edge sets in our version of a triad are always complete bipartite graphs. See Subsection~\ref{subsec:definitions}.
In \cite{MSk} we will prove Theorem~\ref{thm:main-informal} in its full generality by extending Theorem~\ref{theo:main} to arbitrary $k$-graphs. This proof, though technically more involved, will be quite similar at its core to the way we derive Theorem~\ref{theo:main} from Lemma~\ref{theo:core} in the present paper. The deduction of Corollary \ref{coro:RS-LB}, which appears in \cite{MSk}, will also turn out to be quite similar to the way Corollary \ref{coro:FR-LB} is derived from Theorem~\ref{theo:main} in the present paper. \paragraph{How strong is our lower bound?} Since Theorem \ref{thm:main-informal} gives a lower bound for $\langle \d \rangle$-regularity and Corollaries \ref{coro:FR-LB} and \ref{coro:RS-LB} show that this notion is at least as weak as previously used notions of regularity, it is natural to ask: $(i)$ is this notion equivalent to one of the other notions? $(ii)$ is this notion strong enough for proving the hypergraph removal lemma, which was one of the main reasons for developing the hypergraph regularity lemma? We will prove that the answer to both questions is {\em negative} by showing that already for graphs, $\langle \d \rangle$-regularity (for $\d$ a fixed constant) is not strong enough even for proving the triangle removal lemma. This of course makes our lower bound even stronger as it already applies to a very weak notion of regularity. In a nutshell, the proof proceeds by first taking a random tripartite graph, showing (using routine probabilistic arguments) that with high probability the graph is $\langle \d \rangle$-regular yet contains a small number of triangles. One then shows that removing these triangles, and then taking a blowup of the resulting graph, gives a triangle-free graph of positive density that is $\langle \d \rangle$-regular. The full details will appear in \cite{MSk}. \paragraph{How tight is our bound?} Roughly speaking, we will show that for a $k$-graph with $pn^k$ edges, every $\langle \d \rangle$-regular partition has order at least $\Ack_k(\log 1/p)$. In a recent paper \cite{MoshkovitzSh16} we proved that in graphs, one can prove a matching $\Ack_2(\log 1/p)$ upper bound, even for a slightly stronger notion than $\langle \d \rangle$-regularity. This allowed us to obtain a new proof of Fox's $\Ack_2(\log 1/\epsilon)$ upper bound for the graph removal lemma \cite{Fox11} (since the stronger notion allows one to count small subgraphs). We believe that it should be possible to match our lower bounds with $\Ack_k(\log 1/p)$ upper bounds (even for a slightly stronger notion analogous to the one used in \cite{MoshkovitzSh16}). We think that it should be possible to deduce from such an upper bound an $\Ack_k(\log 1/\epsilon)$ upper bound for the $k$-graph removal lemma. The best known bounds for this problem are (at least) $\Ack_k(\poly(1/\epsilon))$. \subsection{Paper overview} In Section~\ref{sec:define} we will first define the new notion of hypergraph regularity, which we term $\langle \d \rangle$-regularity, for which we will prove our main lower bound. We will then give the formal version of Theorem~\ref{thm:main-informal} (see Theorem~\ref{theo:main}). This will be followed by the statement of our core technical result, Lemma~\ref{theo:core}, and an overview of how this technical result is used in the proof of Theorem~\ref{theo:main}. The proof of Theorem~\ref{theo:main} appears in Section~\ref{sec:LB}. We refer the reader to~\cite{MSk} for the proof of Lemma~\ref{theo:core}. In Section \ref{sec:FR} we prove Corollary~\ref{coro:FR-LB}.
In Appendix~\ref{sec:FR-appendix} we give the proof of certain technical claims missing from Section~\ref{sec:FR}. \section{$\langle \d \rangle$-regularity and Proof Overview}\label{sec:define} Formally, a \emph{$3$-graph} is a pair $H=(V,E)$, where $V=V(H)$ is the vertex set and $E=E(H) \sub \binom{V}{3}$ is the edge set of $H$. The number of edges of $H$ is denoted $e(H)$ (i.e., $e(H)=|E|$). The $3$-graph $H$ is \emph{$3$-partite} on (disjoint) vertex classes $(V_1,V_2,V_3)$ if every edge of $H$ has a vertex from each $V_i$. The \emph{density} of a $3$-partite $3$-graph $H$ is $e(H)/\prod_{i=1}^3 |V_i|$. For a bipartite graph $G$, the set of edges of $G$ between disjoint vertex subsets $A$ and $B$ is denoted by $E_G(A,B)$; the density of $G$ between $A$ and $B$ is denoted by $d_G(A,B)=e_G(A,B)/|A||B|$, where $e_G(A,B)=|E_G(A,B)|$. We use $d(A,B)$ if $G$ is clear from context. When it is clear from context, we sometimes identify a hypergraph with its edge set. In particular, we will write $V_1 \times V_2$ for the complete bipartite graph on vertex classes $(V_1,V_2)$. For partitions $\P,\Q$ of the same underlying set, we say that $\Q$ \emph{refines} $\P$, denoted $\Q \prec \P$, if every member of $\Q$ is contained in a member of $\P$. We say that $\P$ is \emph{equitable} if all its members have the same size.\footnote{In a regularity lemma one allows the parts to differ in size by at most $1$ so that it applies to all (hyper-)graphs. For our lower bound this is unnecessary.} We use the notation $x \pm \e$ for a number lying in the interval $[x-\e,\,x+\e]$. In the following definition, and in the rest of the paper, we will sometimes identify a graph or a $3$-graph with its edge set when the vertex set is clear from context. \begin{definition}[$2$-partition]\label{def:2-partition} A \emph{$2$-partition} $(\Z,\E)$ on a vertex set $V$ consists of a partition $\Z$ of $V$ and a family of edge disjoint bipartite graphs $\E$ so that: \begin{itemize} \item Every $E \in \E$ is a bipartite graph whose two vertex sets are distinct $Z,Z' \in \Z$. \item For every $Z \neq Z' \in \Z$, the complete bipartite graph $Z \times Z'$ is the union of graphs from $\E$. \end{itemize} \end{definition} Put differently, a $2$-partition consists of a vertex partition $\Z$ and a collection of bipartite graphs $\E$ such that $\E$ is a refinement of the collection of complete bipartite graphs $\{Z \times Z' : Z \neq Z' \in \Z \}$. \subsection{$\langle \d \rangle$-regularity of graphs and hypergraphs}\label{subsec:definitions} In this subsection we define our new\footnote{For $k=3$, related notions of regularity were studied in~\cite{ReiherRoSc16,Towsner17}.} notion of $\langle\d\rangle$-regularity, first for graphs and then for $3$-graphs in Definition~\ref{def:k-reg} below. Let us first recall Szemer\'edi's notion of $\epsilon$-regularity. A bipartite graph on $(A,B)$ is \emph{$\e$-regular} if for all subsets $A' \sub A$, $B' \sub B$ with $|A'|\ge\e|A|$, $|B'|\ge\e|B|$ we have $|d(A',B') -d(A,B)| \le \e$. A vertex partition $\P$ of a graph is $\e$-regular if the bipartite graph induced on all but at most $\e|\P|^2$ of the pairs $(A,B)$ with $A \neq B \in \P$ is $\e$-regular. Szemer\'edi's graph regularity lemma says that every graph has an $\e$-regular equipartition of order at most some $\Ack_2(\poly(1/\e))$. We now introduce a weaker notion of graph regularity which we will use throughout the paper.
\begin{definition}[graph $\langle\d\rangle$-regularity]\label{def:star-regular} A bipartite graph $G$ on $(A,B)$ is \emph{$\langle \d \rangle$-regular} if for all subsets $A' \sub A$, $B' \sub B$ with $|A'| \ge \d|A|$, $|B'|\ge\d|B|$ we have $d_G(A',B') \ge \frac12 d_G(A,B)$.\\ A vertex partition $\P$ of a graph $G$ is \emph{$\langle \d \rangle$-regular} if one can add/remove at most $\d \cdot e(G)$ edges so that the bipartite graph induced on each $(A,B)$ with $A \neq B \in \P$ is $\langle \d \rangle$-regular. \end{definition} For the reader worried that in Definition~\ref{def:star-regular} we merely replaced the $\e$ from the definition of $\e$-regularity with $\d$, we refer to the discussion following Theorem~\ref{theo:main} below. The definition of $\langle\d\rangle$-regularity for hypergraphs involves the $\langle\d\rangle$-regularity notion for graphs, applied to certain auxiliary graphs which are defined as follows. \begin{definition}[The auxiliary graph $G_{H}^i$]\label{def:aux} For a $3$-partite $3$-graph $H$ on vertex classes $(V_1,V_2,V_3)$, we define a bipartite graph $G_{H}^1$ on the vertex classes $(V_2 \times V_3,\,V_1)$ by $$E(G_{H}^1) = \big\{ ((v_2,v_3),v_1) \,\big\vert\, (v_1,v_2,v_3) \in E(H) \big\} \;.$$ The graphs $G_{H}^2$ and $G_{H}^3$ are defined in an analogous manner. \end{definition} Importantly, for a $2$-partition (as defined in Definition~\ref{def:2-partition}) to be $\langle\d\rangle$-regular it must first satisfy a requirement on the regularity of its parts. \begin{definition}[$\langle\d\rangle$-good partition]\label{def:k-good} A $2$-partition $(\Z,\E)$ on $V$ is \emph{$\langle\d\rangle$-good} if all bipartite graphs in $\E$ (between any two distinct vertex clusters of $\Z$) are $\langle \d \rangle$-regular. \end{definition} For a $2$-partition $(\Z,\E)$ of a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$ with $\Z \prec \{V_1,V_2,V_3\}$, for every $1 \le i \le 3$ we denote $\Z_i = \{Z \in \Z \,\vert\, Z \sub V_i\}$, and we denote $\E_i = \{E \in \E \,\vert\, E \sub V_j \times V_k\}$ where $\{i,j,k\}=\{1,2,3\}$. So, for example, $\E_1$ is a partition of $V_2 \times V_3$. \begin{definition}[$\langle\d\rangle$-regular partition]\label{def:k-reg} Let $H$ be a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$ and $(\Z,\E)$ be a $\langle \d \rangle$-good $2$-partition with $\Z \prec \{V_1,V_2,V_3\}$. We say that $(\Z,\E)$ is a \emph{$\langle \d \rangle$-regular} partition of $H$ if for every $1 \le i \le 3$, $\E_i \cup \Z_i$ is a $\langle \d \rangle$-regular partition of $G_H^i$. \end{definition} \subsection{Formal statement of the main result} We are now ready to formally state our tight lower bound for $3$-graph $\langle \d \rangle$-regularity (the formal version of Theorem~\ref{thm:main-informal} above for $k=3$). Recall that we define the {\em tower} function $\twr(x)$ to be a tower of exponents of height $x$, and then define the {\em wowzer} function $\wow(x)$ to be the $x$-times iterated tower function, that is $\wow(x)= \underbrace{\twr(\twr(\cdots(\twr(1))\cdots))}_{x \text{ times}}$. \begin{theo}[Main result]\label{theo:main} For every $s \in \N$ there is a $3$-partite $3$-graph $H$ on vertex classes of equal size and of density at least $2^{-s}$, and a partition $\V_0$ of $V(H)$ with $|\V_0| \le 2^{300}$, such that if $(\Z,\E)$ is a $\langle 2^{-73} \rangle$-regular partition of $H$ with $\Z \prec \V_0$ then $|\Z| \ge \wow(s)$.
\end{theo} Let us draw the reader's attention to an important and perhaps surprising aspect of Theorem~\ref{theo:main}. All the known tower-type lower bounds for graph regularity depend on the error parameter $\epsilon$, that is, they show the existence of graphs $G$ with the property that every $\epsilon$-regular partition of $G$ is of order at least $\Ack_2(\poly(1/\e))$. This should be contrasted with the fact that our lower bound for $\langle \d \rangle$-regularity holds for a {\em fixed} error parameter $\delta$. Indeed, instead of the dependence on the error parameter, our lower bound depends on the {\em density} of the graph. This delicate difference makes it possible for us to prove Theorem~\ref{theo:main} by iterating the construction described in the next subsection. \subsection{The core construction and proof overview}\label{subsec:overview} The graph construction in Lemma~\ref{theo:core} below is the main technical result we will need in order to prove Theorem~\ref{theo:main}. We will first need to define ``approximate'' refinement (a notion that goes back to Gowers~\cite{Gowers97}). \begin{definition}[Approximate refinements] For sets $S,T$ we write $S \sub_\b T$ if $|S \sm T| < \b|S|$. For a partition $\P$ we write $S \in_\b \P$ if $S \sub_\b P$ for some $P \in \P$. For partitions $\P,\Q$ of the same set of size $n$ we write $\Q \prec_\b \P$ if $$\sum_{\substack{Q \in \Q\colon\\Q \notin_\b \P}} |Q| \le \b n \;.$$ \end{definition} Note that for $\Q$ equitable, $\Q \prec_\b \P$ if and only if all but at most $\b|\Q|$ parts $Q \in \Q$ satisfy $Q \in_\b \P$. We note that throughout the paper we will only use approximate refinements with $\b \le 1/2$, and so if $S \in_\b \P$ then $S \sub_\b P$ for a unique $P \in \P$. We stress that in Lemma~\ref{theo:core} below we only use notions related to graphs. In particular, $\langle \d \rangle$-regularity refers to Definition~\ref{def:star-regular}. \begin{lemma}[Core construction]\label{theo:core} Let $\Lside$ and $\Rside$ be disjoint sets. Let $\L_1 \succ \cdots \succ \L_s$ and $\R_1 \succ \cdots \succ \R_s$ be two sequences of $s$ successively refined equipartitions of $\Lside$ and $\Rside$, respectively, that satisfy for every $i \ge 1$ that: \begin{enumerate} \item\label{item:core-minR} $|\R_i|$ is a power of $2$ and $|\R_1| \ge 2^{200}$, \item\label{item:core-expR} $|\R_{i+1}| \ge 4|\R_i|$ if $i < s$, \item\label{item:core-expL} $|\L_i| = 2^{|\R_i|/2^{i+10}}$. \end{enumerate} Then there exists a sequence of $s$ successively refined edge equipartitions $\G_1 \succ \cdots \succ \G_s$ of $\Lside \times \Rside$ such that for every $1 \le j \le s$, $|\G_j|=2^j$, and the following holds for every $G \in \G_j$ and $\d \le 2^{-20}$. For every $\langle \d \rangle$-regular partition $\P \cup \Q$ of $G$, where $\P$ and $\Q$ are partitions of $\Lside$ and $\Rside$, respectively, and every $1 \le i \le j$, if $\Q \prec_{2^{-9}} \R_{i}$ then $\P \prec_{\g} \L_{i}$ with $\g = \max\{2^{5}\sqrt{\d},\, 32/\sqrt[6]{|\R_1|} \}$. \end{lemma} \begin{remark}\label{remequi} Every $G \in \G_j$ is a bipartite graph of density $2^{-j}$ since $\G_j$ is equitable. \end{remark} As mentioned before, the proof of Lemma~\ref{theo:core} appears in~\cite{MSk}. Let us end this section by explaining the role Lemma~\ref{theo:core} plays in the proof of Theorem \ref{theo:main}.
\paragraph{Using graphs to construct $3$-graphs:} Perhaps the most surprising aspect of the proof of Theorem~\ref{theo:main} is that in order to construct a $3$-graph we also use the graph construction of Lemma~\ref{theo:core} in a somewhat unexpected way. In this case, $\Lside$ will be a complete bipartite graph, and the $\L_i$'s will be partitions of this complete bipartite graph, themselves given by another application of Lemma~\ref{theo:core}. The partitions will be of wowzer-type growth, and the second application of Lemma~\ref{theo:core} will ``multiply'' the graph partitions (given by the $\L_i$'s) to give a partition of the complete $3$-partite $3$-graph into $3$-graphs that are hard for $\langle \d \rangle$-regularity. We will take $H$ in Theorem~\ref{theo:main} to be an arbitrary $3$-graph in this partition. \paragraph{Why is Lemma~\ref{theo:core} one-sided?} As is evident from the statement of Lemma~\ref{theo:core}, it is one-sided in nature; that is, under the premise that the partition $\Q$ refines $\R_i$ we may conclude that $\P$ refines $\L_i$. It is natural to ask if one can do away with this assumption, that is, to show that under the same assumptions $\Q$ refines $\R_i$ and $\P$ refines $\L_i$. As we mentioned in the previous item, in order to prove a wowzer-type lower bound for $3$-graph regularity we have to apply Lemma~\ref{theo:core} with a sequence of partitions that grows as a wowzer-type function. Now, in this setting, Lemma~\ref{theo:core} does not hold without the one-sided assumption, because if it did, then one would have been able to prove a wowzer-type lower bound for graph $\langle \d \rangle$-regularity, and hence also for Szemer\'edi's regularity lemma. Put differently, if one wishes to have a construction that holds with arbitrarily fast growing partition sizes, then one has to introduce the one-sided assumption. \paragraph{How do we remove the one-sided assumption?} The proof of Theorem \ref{theo:main} proceeds by first proving a one-sided version of Theorem \ref{theo:main}, stated as Lemma~\ref{lemma:ind-k}. In order to get a construction that does not require such a one-sided assumption, we will need one final trick; we will take $6$ clusters of vertices and arrange $6$ copies of this one-sided construction along the $3$-edges of a cycle. This will give us a ``circle of implications'' that will eliminate the one-sided assumption. See Subsection \ref{subsec:pasting}. \section{Proof of Theorem~\ref{theo:main}}\label{sec:LB} \renewcommand{\k}{r} \renewcommand{\t}{t} \newcommand{\w}{w} \newcommand{\GG}{\mathbf{G}} \newcommand{\FF}{\mathbf{F}} \newcommand{\VV}{\mathbf{V}} \renewcommand{\Hy}[1]{H_{{#1}}} \renewcommand{\A}{A} \newcommand{\subs}{\subset_*} \newcommand{\pad}{P} \renewcommand{\K}{\mathcal{K}} \newcommand{\U}{U} \renewcommand{\k}{k} \renewcommand{\K}{\mathcal{K}} \renewcommand{\r}{k} The purpose of this section is to prove the main result, Theorem~\ref{theo:main}. This section is self-contained save for the application of Lemma~\ref{theo:core}. The key step of the proof, stated as Lemma~\ref{lemma:ind-k} and proved in Subsection \ref{subsec:key}, relies on a subtle construction that uses Lemma~\ref{theo:core} twice. This lemma only gives a ``one-sided'' lower bound for $3$-graph regularity, in the spirit of Lemma~\ref{theo:core}. In Subsection~\ref{subsec:pasting} we show how to use Lemma~\ref{lemma:ind-k} in order to complete the proof of Theorem~\ref{theo:main}.
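Before setting up the proof, it may help to have the growth functions from the statement of Theorem~\ref{theo:main} in concrete form. The following minimal Python sketch is an illustration only; the seed value $\twr(0)=1$ is a convention we adopt here, chosen to match the definition $\wow(x)=\twr(\twr(\cdots(\twr(1))\cdots))$.
\begin{verbatim}
def twr(x):
    # Tower of exponents of height x: twr(0) = 1, twr(x) = 2**twr(x-1),
    # so twr(1) = 2, twr(2) = 4, twr(3) = 16, twr(4) = 65536.
    return 1 if x == 0 else 2 ** twr(x - 1)

def wow(x):
    # Wowzer: the x-times iterated tower applied to 1, e.g.
    # wow(3) = twr(twr(twr(1))) = twr(4) = 65536.
    v = 1
    for _ in range(x):
        v = twr(v)
    return v
\end{verbatim}
The tower-type function $\t$ and the wowzer-type function $\w$ defined in Subsection~\ref{subsec:key} below follow the same iteration pattern, up to the bookkeeping factors $e(i)$.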
We first observe a simple yet crucial property of $2$-partitions, stated as Claim~\ref{claim:uniform-refinement} below, which we will need later. This property relates $\d$-refinements of partitions and $\langle \d \rangle$-regularity of partitions, and relies on Claim~\ref{claim:refinement-union}. Here, as well as in the rest of this section, we will use the definitions and notations introduced in Section \ref{sec:define}. In particular, recall that if a vertex partition $\Z$ of vertex classes $(V_1,V_2,V_3)$ satisfies $\Z \prec \{V_1,V_2,V_3\}$, then for every $1 \le i \le 3$ we denote $\Z_i = \{Z \in \Z \,\vert\, Z \sub V_i\}$. Moreover, if a $2$-partition $(\Z,\E)$ satisfies $\Z \prec \{V_1,V_2,V_3\}$ we denote $\E_i = \{E \in \E \,\vert\, E \sub V_j \times V_k\}$ where $\{i,j,k\}=\{1,2,3\}$. We will first need the following easy claim regarding the union of $\langle \d\rangle$-regular graphs. \begin{claim}\label{claim:star-union} Let $G_1,\ldots,G_\ell$ be mutually edge-disjoint bipartite graphs on the same vertex classes $(Z,Z')$. If every $G_i$ is $\langle \d \rangle$-regular then $G=\bigcup_{i=1}^\ell G_i$ is also $\langle \d \rangle$-regular. \end{claim} \begin{proof} Let $S \sub Z$, $S' \sub Z'$ with $|S| \ge \d|Z|$, $|S'| \ge \d|Z'|$. Then $$d_G(S,S') = \frac{e_G(S,S')}{|S||S'|} = \sum_{i=1}^\ell \frac{e_{G_i}(S,S')}{|S||S'|} = \sum_{i=1}^\ell d_{G_i}(S,S') \ge \sum_{i=1}^\ell \frac12 d_{G_i}(Z,Z') = \frac12 d_{G}(Z,Z') \;,$$ where the second and last equalities follow from the mutual disjointness of the $G_i$, and the inequality follows from the $\langle \d \rangle$-regularity of each $G_i$. Thus, $G$ is $\langle \d \rangle$-regular, as claimed. \end{proof} We use the following claim regarding approximate refinements. \begin{claim}\label{claim:refinement-union} If $\Q \prec_\d \P$ then there exist $P \in \P$ and $Q$ that is a union of members of $\Q$ such that $|P \triangle Q| \le 3\d|P|$. \end{claim} \begin{proof} For each $P\in \P$ let $\Q(P) = \{Q \in \Q \colon Q \sub_\d P\}$, and denote $P_\Q = \bigcup_{Q \in \Q(P)} Q$. We have \begin{align*} \sum_{P \in \P} |P \triangle P_\Q| &= \sum_{P \in \P} |P_\Q \sm P| + \sum_{P \in \P} |P \sm P_\Q| = \sum_{P \in \P} \sum_{\substack{Q \in \Q \colon\\Q \sub_\d P}} |Q \sm P| + \sum_{P \in \P} \sum_{\substack{Q \in \Q \colon\\Q \nsubseteq_\d P}} |Q \cap P| \\ &\le \sum_{P \in \P} \sum_{\substack{Q \in \Q \colon\\Q \sub_\d P}} \d|Q| + \Big( \sum_{\substack{Q \in \Q\colon\\Q \notin_\d \P}} |Q| + \sum_{\substack{Q \in \Q \colon\\Q \in_\d \P}} \d|Q| \Big) \le 3\d\sum_{Q \in \Q} |Q| = 3\d\sum_{P \in \P} |P| \;, \end{align*} where the last inequality uses the statement's assumption $\Q \prec_\d \P$ to bound the middle summand. By averaging, there exists $P \in \P$ such that $|P \triangle P_\Q| \le 3\d|P|$, thus completing the proof. \end{proof} The property of $2$-partitions that we need is as follows. \begin{claim}\label{claim:uniform-refinement} Let $\P=(\Z,\E)$ be a $2$-partition with $\Z \prec \{V_1,V_2,V_3\}$, and let $\G$ be a partition of $V_1\times V_2$ with $\E_3 \prec_\d \G$. If $(\Z,\E)$ is $\langle \d \rangle$-good then $\Z_1 \cup \Z_2$ is a $\langle 3\d \rangle$-regular partition of some $G \in \G$. \end{claim} \begin{proof} Put $\E=\E_3$. By Claim~\ref{claim:refinement-union}, since $\E \prec_\d \G$ there exist $G \in \G$ (a bipartite graph on $(V_1,V_2)$) and $G_\E$ that is a union of members of $\E$ (and thus also a bipartite graph on $(V_1,V_2)$) such that $|G \triangle G_\E| \le 3\d|G|$.
Letting $Z_1 \in \Z_1$, $Z_2 \in \Z_2$, to complete the proof it suffices to show that the induced bipartite graph $G_\E[Z_1,Z_2]$ is $\langle \d \rangle$-regular (recall Definition~\ref{def:star-regular}). By Definition~\ref{def:2-partition}, $G_\E[Z_1,Z_2]$ is a union of bipartite graphs from $\E$ on $(Z_1,Z_2)$. Since every graph in $\E$ is $\langle \d \rangle$-regular by the statement's assumption that $(\Z,\E)$ is $\langle \d \rangle$-good (recall Definition~\ref{def:k-good}), we have that $G_\E[Z_1,Z_2]$ is a union of $\langle \d \rangle$-regular bipartite graphs on $(Z_1,Z_2)$. By Claim~\ref{claim:star-union}, $G_\E[Z_1,Z_2]$ is $\langle \d \rangle$-regular as well, thus completing the proof. \end{proof} We will later need the following easy (but slightly tedious to state) claim. \begin{claim}\label{claim:restriction} Let $H$ be a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$, and let $H'$ be the induced $3$-partite $3$-graph on vertex classes $(V_1',V_2',V_3')$ with $V_i' \sub V_i$ and $\a \cdot e(H)$ edges. If $(\Z,\E)$ is a $\langle \d \rangle$-regular partition of $H$ with $\Z \prec \bigcup_{i=1}^3 \{V_i,\,V_i \sm V_i'\}$ then its restriction $(\Z',\E')$ to $V(H')$ is a $\langle \d/\a \rangle$-regular partition of $H'$. \end{claim} \begin{proof} Recall Definition~\ref{def:k-reg}. Clearly, $(\Z',\E')$ is $\langle \d \rangle$-good. We will show that $\E'_1 \cup \Z'_1$ is a $\langle \d/\a \rangle$-regular partition of $G^1_{H'}$. The argument for $G^2_{H'}$ and $G^3_{H'}$ is analogous, and hence the proof will follow. Observe that $G^1_{H'}$ is an induced subgraph of $G^1_{H}$, namely, $G^1_{H'} = G^1_{H}[V_2' \times V_3',\, V_1']$. By assumption, $e(H') = \a e(H)$, and thus $e(G^1_{H'}) = \a e(G^1_{H})$. By the statement's assumption on $\Z$ and since $\E_1 \cup \Z_1$ is a $\langle \d \rangle$-regular partition of $G^1_{H}$, we deduce---by adding/removing at most $\d e(G^1_{H}) = (\d/\a)e(G^1_{H'})$ edges of $G^1_{H'}$---that $\E'_1 \cup \Z'_1$ is a $\langle \d/\a \rangle$-regular partition of $G_{H'}^1$. As explained above, this completes the proof. \end{proof} Finally, we will need the following claim regarding approximate refinements. \begin{claim}\label{claim:refinement-size} If $\Q \prec_{1/2} \P$ and $\P$ is equitable then $|\Q| \ge \frac14|\P|$. \end{claim} \begin{proof} We claim that the underlying set $U$ has a subset $U^*$ of size $|U^*|\ge \frac14|U|$ such that the partitions $\Q^*=\{Q \cap U^* \,\vert\, Q \in \Q \} \setminus \{\emptyset\}$ and $\P^*=\{P \cap U^* \,\vert\, P \in \P \} \setminus \{\emptyset\}$ of $U^*$ satisfy $\Q^* \prec \P^*$. Indeed, let $U^* = \bigcup_{Q} Q \cap P_Q$ where the union is over all $Q \in \Q$ satisfying $Q \sub_{1/2} P_Q$ for a (unique) $P_Q \in \P$. As claimed, $|U^*| = \sum_{Q \in_{1/2} \P} |Q \cap P_Q| \ge \sum_{Q \in_{1/2} \P} \frac12|Q| \ge \frac14|U|$, using $\Q \prec_{1/2} \P$ for the last inequality. Now, since $\P$ is equitable, $|\P^*| \ge \frac14|\P|$. Thus, $|\Q| \ge |\Q^*| \ge |\P^*| \ge \frac14|\P|$, as desired. \end{proof} \renewcommand{\K}{k} \renewcommand{\w}{w} \subsection{$3$-graph key argument}\label{subsec:key} We next introduce a few more definitions that are needed for the statement of Lemma \ref{lemma:ind-k}. Let $e(i) = 2^{i+10}$. We define the following tower-type function $\t\colon\N\to\N$; \begin{equation}\label{eq:t} \t(i+1) = \begin{cases} 2^{\t(i)/e(i)} &\text{if } i \ge 1\\ 2^{250} &\text{if } i = 0
\end{cases} \end{equation} It is easy to prove, by induction on $i$, that $\t(i) \ge e(i)\t(i-1)$ for $i \ge 2$ (for the induction step, $\t(i+1) \ge 2^{\t(i-1)} = \t(i)^{e(i-1)}$, so $\t(i+1)/e(i+1) \ge \t(i)^{e(i-1)-i-11} \ge \t(i)$). This means that $\t$ is monotone increasing, and that $\t(i)$ is an integer power of $2$ (this follows by induction, as $\t(i)/e(i) \ge 1$ is a positive power of $2$ and in particular an integer). We record the following facts regarding $\t$ for later use: \begin{equation}\label{eq:monotone} \t(i) \ge 4\t(i-1) \quad\text{ and }\quad \text{ $\t(i)$ is a power of $2$} \;. \end{equation} For a function $f:\N\to\N$ with $f(i) \ge i$ we denote \begin{equation}\label{eq:f*} f^*(i) = \t\big(f(i)\big)/e(i) \;. \end{equation} Note that $f^*(i)$ is indeed a positive integer (by the monotonicity of $\t$, $f^*(i) \ge \t(i)/e(i)$ is a positive power of $2$). In fact, $f^*(i) \ge f(i)$ (as $f^*(i) \ge 4^{f(i)}/e(i)$ using~(\ref{eq:monotone})). We recursively define the function $\w\colon\N\to\N$ as follows; \begin{equation}\label{eq:Ak} \w(i+1) = \begin{cases} \w^*(i) &\text{if } i \ge 1\\ 1 &\text{if } i = 0 \;. \end{cases} \end{equation} It is evident that $\w$ is a wowzer-type function; in fact, one can check that: \begin{equation}\label{eq:A_k} \w(i) \ge \wow(i) \;. \end{equation} \begin{lemma}[Key argument]\label{lemma:ind-k} Let $s \in \N$, let $\Vside^1,\Vside^2,\Vside^3$ be mutually disjoint sets of equal size and let $\V^1 \succ\cdots\succ \V^m$ be a sequence of $m=\w^*(s)+1$ successive equitable refinements of $\{\Vside^1,\Vside^2,\Vside^3\}$ with $|\V^i_1|=|\V^i_2|=|\V^i_3|=\t(i)$ for every\footnote{Since we assume that each $\V^i$ refines $\{\Vside^1,\Vside^2,\Vside^3\}$, $\V^i_1$ is (by the notation mentioned before Claim \ref{claim:star-union}) the restriction of $\V^i$ to $\Vside^1$.} $1 \leq i \leq m$. Then there is a $3$-partite $3$-graph $H$ on $(\Vside^1,\Vside^2,\Vside^3)$ of density $d(H)=2^{-s}$ satisfying the following property:\\ If $(\Z,\E)$ is a $\langle 2^{-70} \rangle$-regular partition of $H$ and for some $1 \le i \le \w(s)$ $(< m)$ we have $\Z_3 \prec_{2^{-9}} \V^i_3$ and $\Z_2 \prec_{2^{-9}} \V^i_2$ then we also have $\Z_1 \prec_{2^{-9}} \V^{i+1}_1$. \end{lemma} \begin{proof} Put $s':=\w^*(s)$, so that $m = s'+1$. Apply Lemma~\ref{theo:core} with $$\Lside=\Vside^1,\quad \Rside=\Vside^2 \quad\text{ and }\quad \V^2_1 \succ \cdots \succ \V^{s'+1}_1 ,\quad \V^1_2 \succ \cdots \succ \V^{s'}_2 \;,$$ and let \begin{equation}\label{eq:main-k-colors} \G^1 \succ \cdots \succ \G^{s'} \quad\text{ with }\quad |\G^\ell|=2^\ell \text{ for every } 1 \le \ell \le s' \end{equation} be the resulting sequence of $s'$ successively refined equipartitions of $\Vside^1 \times \Vside^2$. \begin{prop}\label{prop:main-k-hypo} Let $1 \le \ell \le s'$ and $G \in \G^\ell$. For every $\langle 2^{-28} \rangle$-regular partition $\Z_1 \cup \Z_2$ of $G$ (where $\Z_1$ and $\Z_2$ are partitions of $\Vside^1$ and $\Vside^2$, respectively) and every $1 \le i \le \ell$, if $\Z_2 \prec_{2^{-9}} \V^i_2$ then $\Z_1 \prec_{2^{-9}} \V^{i+1}_1$. \end{prop} \begin{proof} First we need to verify that we may apply Lemma~\ref{theo:core} as above. Assumptions~\ref{item:core-minR},~\ref{item:core-expR} in Lemma~\ref{theo:core} hold by~(\ref{eq:monotone}) and the fact that $|\V^j_2|=\t(j)$.
Assumption~\ref{item:core-expL} is satisfied since for every $1 \le j \le s'$ we have $$|\V^{j+1}_1| = \t(j+1) = 2^{\t(j)/e(j)} = 2^{|\V^{j}_2|/e(j)} \;,$$ where the second equality uses the definition of the function $\t$ in~(\ref{eq:t}). We can thus use Lemma~\ref{theo:core} to infer that the fact that $\Z_2 \prec_{2^{-9}} \V^i_2$ implies that $\Z_1 \prec_x \V^{i+1}_1$ with $x=\max\{2^{5}\sqrt{2^{-28}},\, 32/\sqrt[6]{\t(1)} \} = 2^{-9}$, using~(\ref{eq:t}). \end{proof} For each $1 \le j \le s$ let \begin{equation}\label{eq:main-k-dfns} \G^{(j)} = \G^{\w^*(j)} \quad\text{ and }\quad \V^{(j)} = \V^{\w(j)}_3 \;. \end{equation} All these choices are well defined since $\w^*(j)$ satisfies $1 \le \w^*(1) \le \w^*(j) \le \w^*(s) = s'$, and since $\w(j)$ satisfies $1 \le \w(1) \le \w(j) \le \w(s) \le m$. Observe that we have thus chosen two subsequences of $\G^1,\cdots,\G^{s'}$ and $\V^1_3,\ldots,\V^m_3$, each of length $s$. Recalling that each $\G^{(j)}$ is a partition of $\Vside^1 \times \Vside^2$, we now apply Lemma~\ref{theo:core} again with $$ \Lside=\Vside^1 \times \Vside^2,\quad \Rside=\Vside^3 \quad\text{ and }\quad \G^{(1)} \succ \cdots \succ \G^{(s)}, \quad \V^{(1)} \succ \cdots \succ \V^{(s)} \;. $$ The output of this application of Lemma~\ref{theo:core} consists of a sequence of $s$ (successively refined) equipartitions of $(\Vside^1 \times \Vside^2)\times\Vside^3$. We can think of the $s$-th partition of this sequence as a collection of $2^s$ bipartite graphs on vertex sets $(\Vside^1\times\Vside^{2},\,\Vside^3)$. For the rest of the proof let $G'$ be any of these graphs. By Remark \ref{remequi} we have \begin{equation}\label{eq:ind-colors2} d(G')=2^{-s} \;. \end{equation} \begin{prop}\label{prop:ind-prop2} For every $\langle 2^{-70} \rangle$-regular partition $\E \cup \V$ of $G'$ (where $\E$ and $\V$ are partitions of $\Vside^1\times\Vside^{2}$ and $\Vside^3$ respectively) and every $1 \le j' \le s$, if $\V \prec_{2^{-9}} \V^{(j')}$ then $\E \prec_{2^{-30}} \G^{(j')}$. \end{prop} \begin{proof} First we need to verify that we may apply Lemma~\ref{theo:core} as above. Note that $|\G^{(j)}|=2^{\w^*(j)}$ by~(\ref{eq:main-k-colors}) and (\ref{eq:main-k-dfns}), and that $|\V^{(j)}|=\t(\w(j))$ by (\ref{eq:main-k-dfns}) and the statement's assumption that $|\V^i_3|=\t(i)$. Therefore, \begin{equation}\label{eq:main-k-orders} |\G^{(j)}| = 2^{\w^*(j)} = 2^{\t(\w(j))/e(j)} = 2^{|\V^{(j)}|/e(j)} \;, \end{equation} where the second equality relies on~(\ref{eq:f*}). Moreover, note that $\t(\w(1)) = \t(1) = 2^{250}$. Now, Assumptions~\ref{item:core-minR} and~\ref{item:core-expR} in Lemma~\ref{theo:core} follow from the fact that $|\V^{(j)}|=\t(\w(j))$, from~(\ref{eq:monotone}) and the fact that $|\V^{(1)}| = \t(\w(1)) \ge 2^{200}$ by~(\ref{eq:Ak}). Assumption~\ref{item:core-expL} follows from~(\ref{eq:main-k-orders}). We can thus use Lemma~\ref{theo:core} to infer that the fact that $\V \prec_{2^{-9}} \V^{(j')}$ implies that $\E \prec_x \G^{(j')}$ with $x=\max\{2^{5}\sqrt{2^{-70}},\, 32/\sqrt[6]{\t(\w(1))} \} = 2^{-30}$. \end{proof} Let $H$ be the $3$-partite $3$-graph on vertex classes $(\Vside^1,\Vside^2,\Vside^3)$ with edge set $$ E(H) = \big\{ (v_1,v_2,v_3) \,:\, ((v_1,v_{2}),v_3) \in E(G') \big\} \;, $$ and note that we have (recall Definition \ref{def:aux}) \begin{equation}\label{eqH} G'=G_{H}^3\;. \end{equation} We now prove that $H$ satisfies the properties in the statement of the lemma. First, note that by~(\ref{eq:ind-colors2}) and (\ref{eqH}) we have $d(H)=2^{-s}$, as needed.
Assume now that $i$ is such that \begin{equation}\label{eq:ind-i-assumption} 1 \le i \le \w(s) \end{equation} and: \begin{enumerate} \item\label{item:ind-reg} $(\Z,\E)$ is a $\langle 2^{-70} \rangle$-regular partition of $H$, and \item\label{item:ind-refine} $\Z_3 \prec_{2^{-9}} \V^i_3$ and $\Z_2 \prec_{2^{-9}} \V^i_2$. \end{enumerate} We need to show that \begin{equation}\label{eq:ind-goal} \Z_1 \prec_{2^{-9}} \V^{i+1}_1 \;. \end{equation} Since Item~\ref{item:ind-reg} guarantees that $(\Z,\E)$ is a $\langle 2^{-70} \rangle$-regular partition of $H$, we get from Definition~\ref{def:k-reg} and (\ref{eqH}) that \begin{equation}\label{eq:ind-reg} \text{$\E_3 \cup \Z_3$ is a $\langle 2^{-70} \rangle$-regular partition of } G'. \end{equation} Let \begin{equation}\label{eq:ind-j'} 1 \le j' \le s \end{equation} be the unique integer satisfying (the equality here is just (\ref{eq:Ak})) \begin{equation}\label{eq:ind-sandwich} \w(j') \le i < \w(j'+1) = \w^*(j')\;. \end{equation} Note that (\ref{eq:ind-j'}) holds due to~(\ref{eq:ind-i-assumption}). Recalling~(\ref{eq:main-k-dfns}), the lower bound in~(\ref{eq:ind-sandwich}) implies that $\V^i_3 \prec \V^{\w(j')} = \V^{(j')}$. Therefore, the assumption $\Z_3 \prec_{2^{-9}} \V^i_3$ in~\ref{item:ind-refine} implies that \begin{equation}\label{eq:ind-Zk} \Z_3 \prec_{2^{-9}} \V^{(j')} \;. \end{equation} Apply Proposition~\ref{prop:ind-prop2} on $G'$, using~(\ref{eq:ind-reg}),~(\ref{eq:ind-j'}) and~(\ref{eq:ind-Zk}), to deduce that \begin{equation}\label{eq:ind-E} \E_3 \prec_{2^{-30}} \G^{(j')} = \G^{\w^*(j')} \;, \end{equation} where for the equality again recall~(\ref{eq:main-k-dfns}). Since $(\Z,\E)$ is a $\langle 2^{-70} \rangle$-regular partition of $H$ (by Item~\ref{item:ind-reg} above) it is in particular $\langle 2^{-70} \rangle$-good. By~(\ref{eq:ind-E}) we may thus apply Claim~\ref{claim:uniform-refinement} to conclude that \begin{equation}\label{eq:ind-reg2} \Z_1 \cup \Z_2 \text{ is a } \langle 2^{-28} \rangle \text{-regular partition of some $G\in\G^{\w^*(j')}$.} \end{equation} By~(\ref{eq:ind-reg2}) we may apply Proposition~\ref{prop:main-k-hypo} with $G$, $\Z_1\cup\Z_2$, $\ell=\w^*(j')$ and $i$, observing (crucially) that $i \leq \ell$ by (\ref{eq:ind-sandwich}). We thus conclude that the fact $\Z_2 \prec_{2^{-9}} \V^i_2$ (stated in~\ref{item:ind-refine}) implies that $\Z_1 \prec_{2^{-9}} \V^{i+1}_1$, thus proving~(\ref{eq:ind-goal}) and completing the proof. \end{proof} \subsection{Putting everything together}\label{subsec:pasting} We can now prove our main theorem, Theorem~\ref{theo:main}, which we repeat here for convenience. \addtocounter{theo}{-2} \begin{theo}[Main theorem] Let $s \in \N$. There exists a $3$-partite $3$-graph $H$ on vertex classes of equal size and of density at least $2^{-s}$, and a partition $\V_0$ of $V(H)$ with $|\V_0| \le 2^{300}$, such that if $(\Z,\E)$ is a $\langle 2^{-73} \rangle$-regular partition of $H$ with $\Z \prec \V_0$ then $|\Z| \ge \wow(s)$. \end{theo} \addtocounter{theo}{+1} \begin{proof} Let the $3$-graph $B$ be the tight $6$-cycle; that is, $B$ is the $3$-graph on vertex classes $\{0,1,\ldots,5\}$ with edge set $E(B)=\{\{0,1,2\},\{1,2,3\},\{2,3,4\},\{3,4,5\},\{4,5,0\},\{5,0,1\}\}$. Note that $B$ is $3$-partite with vertex classes $(\{0,3\},\{1,4\},\{2,5\})$. Put $m=\w^*(s-1)+1$ and let $n \ge \t(m)$. Let $\Vside^0,\ldots,\Vside^{5}$ be $6$ mutually disjoint sets of size $n$ each.
Let $\V^1 \succ\cdots\succ \V^m$ be an arbitrary sequence of $m$ successive equitable refinements of $\{\Vside^0,\ldots,\Vside^{5}\}$ with $|\V^i_h|=\t(i)$ for every $1 \le i \le m$ and $0 \le h \le 5$, which exists as $n$ is large enough. Extending the notation $\Z_i$ (above Definition~\ref{def:k-reg}), for every $0 \le x \le 5$ we henceforth denote the restriction of the vertex partition $\Z$ to $\Vside^x$ by $\Z_x = \{Z \in \Z \,\vert\, Z \sub \Vside^x\}$. For each edge $e=\{x,x+1,x+2\} \in E(B)$ (here and henceforth when specifying an edge, the integers are implicitly taken modulo $6$) apply Lemma~\ref{lemma:ind-k} with $$s-1,\, \Vside^{x},\Vside^{x+1},\Vside^{x+2} \text{ and } (\V^{1}_x \cup \V^1_{x+1} \cup \V^1_{x+2}) \succ\cdots\succ (\V^{m}_{x}\cup\V^{m}_{x+1}\cup\V^{m}_{x+2}) \;.$$ Let $H_e$ denote the resulting $3$-partite $3$-graph on $(\Vside^{x},\Vside^{x+1},\Vside^{x+2})$. Note that $d(H_e) = 2^{-(s-1)}$. Moreover, let $$c = 2^{-9} \quad\text{ and }\quad K=\w(s-1)+1 \;.$$ Then $H_e$ has the property that for every $\langle 2^{-70} \rangle$-regular partition $(\Z',\E')$ of $H_e$ and every $1 \le i < K$, \begin{equation}\label{eq:paste-property} \text{if $\Z'_{x+2} \prec_{c} \V^i_{x+2}$ and $\Z'_{x+1} \prec_{c} \V^i_{x+1}$ then $\Z'_x \prec_{c} \V^{i+1}_x$.} \end{equation} We construct our $3$-graph on the vertex set $\Vside:=\Vside^0 \cup\cdots\cup \Vside^5$ as $E(H) = \bigcup_{e} E(H_e)$; that is, $H$ is the edge-disjoint union of all six $3$-partite $3$-graphs $H_e$ constructed above. Note that $H$ is a $3$-partite $3$-graph (on vertex classes $(\Vside^0 \cup \Vside^3,\, \Vside^1 \cup \Vside^4,\, \Vside^2 \cup \Vside^5))$ of density $\frac68 2^{-(s-1)} \ge 2^{-s}$, as needed. We will later use the following fact. \begin{prop}\label{prop:restriction} Let $(\Z,\E)$ be a $\langle 2^{-73}\rangle$-regular partition of $H$ and let $e \in E(B)$. If $\Z \prec \{\Vside^0,\ldots,\Vside^{5}\}$ then the restriction $(\Z',\E')$ of $(\Z,\E)$ to $V(H_e)$ is a $\langle 2^{-70} \rangle$-regular partition of $H_e$. \end{prop} \begin{proof} Immediate from Claim~\ref{claim:restriction} using the fact that $e(H_e) = \frac16 e(H)$. \end{proof} Now, let $(\Z,\E)$ be a $\langle 2^{-73} \rangle$-regular partition of $H$ with $\Z \prec \V^1$. Our goal will be to show that \begin{equation}\label{eq:paste-goal} \Z \prec_{c} \V^{K} \;. \end{equation} Proving~(\ref{eq:paste-goal}) would complete the proof, by setting $\V_0$ in the statement to be $\V^1$ here (notice $|\V^1|=6\t(1) \le 2^{300}$ by~(\ref{eq:t})); indeed, Claim~\ref{claim:refinement-size} would imply that $$|\Z| \ge \frac14|\V^{K}| = \frac14 \cdot 6 \cdot \t(K) \ge \t(K) \ge \t(\w(s-1)) \ge \w(s) \ge \wow(s) \;,$$ where the last inequality uses~$(\ref{eq:A_k})$. Assume towards contradiction that $\Z \nprec_{c} \V^{K}$. By averaging, \begin{equation}\label{eq:assumption} \Z_h \nprec_c \V^{K}_h \text{ for some } 0 \le h \le 5. \end{equation} For each $0 \le h \le 5$ let $1 \le \b(h) \le K$ be the largest integer satisfying $\Z_h \prec_c \V^{\b(h)}_h$, which is well defined since $\Z_h \prec_c \V^1_h$ (as in fact $\Z \prec \V^1$). Put $\b^* = \min_{0 \le h \le 5} \b(h)$, and note that by~(\ref{eq:assumption}), \begin{equation}\label{eq:paste-star} \b^* < K \;. \end{equation} Let $0 \le x \le 5$ minimize $\b$, that is, $\b(x)=\b^*$. Therefore: \begin{equation}\label{eqcontra} \Z_{x+2} \prec_c \V^{\b^*}_{x+2} \mbox{~,~} \Z_{x+1} \prec_c \V^{\b^*}_{x+1} \mbox{ and } \Z_{x} \nprec_c \V^{\b^*+1}_{x}.
\end{equation} Let $e=\{x,x+1,x+2\} \in E(B)$. Let $(\Z',\E')$ be the restriction of $(\Z,\E)$ to $V(H_e)=\Vside^{x} \cup \Vside^{x+1} \cup \Vside^{x+2}$, which is a $\langle 2^{-70} \rangle$-regular partition of $H_e$ by Proposition~\ref{prop:restriction}. Since $\Z'_x=\Z_x$, $\Z'_{x+1}=\Z_{x+1}$, $\Z'_{x+2}=\Z_{x+2}$ we get from (\ref{eqcontra}) a contradiction to~(\ref{eq:paste-property}) with $i=\beta^*$. We have thus proved~(\ref{eq:paste-goal}) and so the proof is complete. \end{proof} \section{Wowzer-type Lower Bounds for $3$-Graph Regularity Lemmas}\label{sec:FR} \renewcommand{\K}{\mathcal{K}} The purpose of this section is to apply Theorem \ref{theo:main} in order to prove Corollary~\ref{coro:FR-LB}, thus giving wowzer-type (i.e., $\Ack_3$-type) lower bounds for the $3$-graph regularity lemmas of Frankl and R\"odl~\cite{FrankRo02} and of Gowers~\cite{Gowers06}. We will start by giving the necessary definitions for Frankl and R\"odl's lemma and state our corresponding lower bound. Next we will state the necessary definitions for Gowers' lemma and state our corresponding lower bound. The formal proofs then follow. \subsection{Frankl and R\"odl's $3$-graph regularity} \begin{definition}[$(\ell,t,\e_2)$-equipartition,~\cite{FrankRo02}]\label{def:ve-partition} An \emph{$(\ell,t,\e_2)$-equipartition} on a set $V$ is a $2$-partition $(\Z,\E)$ on $V$ where $\Z$ is an equipartition of order $|\Z|=t$ and every graph in $\E$ is $\e_2$-regular\footnote{Here, and in several places in this section, we of course refer to the ``traditional'' notion of Szemer\'edi's $\e$-regularity, as defined at the beginning of Section \ref{sec:define}. } of density $\ell^{-1} \pm \e_2$. \end{definition} \begin{remark} If $\e_2 \le \frac12\ell^{-1}$ then $(\Z,\E)$ has at most $2\ell$ bipartite graphs between every pair of clusters of $\Z$ (indeed, each such graph has density at least $\ell^{-1}-\e_2 \ge \frac12\ell^{-1}$, and the graphs between a pair of clusters partition the complete bipartite graph between them). \end{remark} A \emph{triad} of a $2$-partition $(\Z,\E)$ is any tripartite graph whose three vertex classes are in $\Z$ and three edge sets are in $\E$. We often identify a triad with a triple of its edge sets $(E_1,E_2,E_3)$. Given a triad $P$, we denote by $T(P)$ the set of vertex triples spanning triangles in $P$. The \emph{density} of a triad $P$ in a $3$-graph $H$ is $d_H(P)=|E(H) \cap T(P)|/|T(P)|$ (and $0$ if $|T(P)|=0$). A \emph{subtriad} of $P$ is any subgraph of $P$ on the same vertex classes. \begin{definition}[$3$-graph $\e$-regularity~\cite{FrankRo02}]\label{def:FR-reg} Let $H$ be a $3$-graph. A triad $P$ is \emph{$\e$-regular} in $H$ if every subtriad $P'$ with $|T(P')| \ge \e|T(P)|$ satisfies $|d_H(P')-d_H(P)| \le \e$.\\ An $(\ell,t,\e_2)$-equipartition $\P$ on $V(H)$ is an \emph{$\e$-regular} partition of $H$ if $\sum_P |T(P)| \le \e|V|^3$ where the sum is over all triads of $\P$ that are not $\e$-regular in $H$. \end{definition} The $3$-graph regularity lemma of Frankl and R\"odl~\cite{FrankRo02} states, very roughly, that for every $\e>0$ and every function $\e_2\colon\N\to(0,1]$, every $3$-graph has an $\e$-regular $(\ell,t,\e_2(\ell))$-equipartition where $t,\ell$ are bounded by a wowzer-type function. In fact, the statement in~\cite{FrankRo02} uses a considerably stronger notion of regularity of a partition than in Definition~\ref{def:FR-reg} that involves an additional function $r(t,\ell)$ which we shall not discuss here (as discussed in~\cite{FrankRo02}, this stronger notion was crucial for allowing them to prove the $3$-graph removal lemma). Our lower bound below applies even to the weaker notion stated above, which corresponds to taking $r(t,\ell)\equiv 1$.
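To illustrate the parameters involved (a side calculation, not used in the sequel): by the remark above, when $\e_2 \le \frac12\ell^{-1}$ every pair of clusters of an $(\ell,t,\e_2)$-equipartition carries at most $2\ell$ bipartite graphs, and hence the total number of triads is at most $$\binom{t}{3}\cdot(2\ell)^3 \;.$$ In particular, in Definition~\ref{def:FR-reg} the irregular triads are few only in the weighted sense that they carry at most an $\e$-fraction of all triples of $V$, rather than in terms of their number.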
Using Theorem~\ref{theo:main} we can deduce a wowzer-type \emph{lower} bound for Frankl and R\"odl's $3$-graph regularity lemma. The proof of this lower bound appears in Subsection~\ref{subsec:FR-LB-proof}. \begin{theo}[Lower bound for Frankl and R\"odl's regularity lemma]\label{theo:FR-LB} Put $c = 2^{-400}$. For every $s \in \N$ there exists a $3$-partite $3$-graph $H$ of density $p=2^{-s}$, and a partition $\V_0$ of $V(H)$ with $|\V_0| \le 2^{300}$, such that if $(\Z,\E)$ is an $\e$-regular $(\ell,t,\e_2(\ell))$-equipartition of $H$, with $\e \le c p$, $\e_2(\ell) \le c \ell^{-3}$ and $\Z \prec \V_0$, then $|\Z| \ge \wow(s)$. \end{theo} \begin{remark} One can easily remove the assumption $\Z \prec \V_0$ by taking the common refinement of $\Z$ with $\V_0$ (and adjusting $\E$ appropriately). Since $|\V_0|=O(1)$ this has only a minor effect on the parameters $\e,\ell,t,\e_2(\ell)$ of the partition and thus one gets essentially the same lower bound. We omit the details of this routine transformation. \end{remark} \subsection{Gowers' $3$-graph regularity} Here we consider the $3$-graph regularity lemma due to Gowers~\cite{Gowers06}. \begin{definition}[$\a$-quasirandomness, see Definition~6.3 in \cite{Gowers06}]\label{def:quasirandom} Let $H$ be a $3$-graph, and let $P=(E_0,E_1,E_2)$ be a triad with $d(E_0)=d(E_1)=d(E_2)=:d$ on vertex classes $(X,Y,Z)$ with $|X|=|Y|=|Z|=:n$. We say that $P$ is \emph{$\a$-quasirandom} in $H$ if $$\sum_{x_0,x_1 \in X}\sum_{y_0,y_1 \in Y}\sum_{z_0,z_1 \in Z} \prod_{i,j,k\in\{0,1\}} f(x_i,y_j,z_k) \le \a d^{12}n^6 \;,$$ where $$f(x,y,z) = \begin{cases} 1-d_H(P) &\text{if } (x,y,z) \in T(P), (x,y,z) \in E(H)\\ -d_H(P) &\text{if } (x,y,z) \in T(P), (x,y,z) \notin E(H)\\ 0 &\text{if } (x,y,z) \notin T(P) \;. \end{cases}$$ An $(\ell,t,\e_2)$-equipartition $\P$ on $V(H)$ is an \emph{$\a$-quasirandom} partition of $H$ if $\sum_P |T(P)| \le \a|V|^3$ where the sum is over all triads of $\P$ that are not $\a$-quasirandom in $H$. \end{definition} The $3$-graph regularity lemma of Gowers~\cite{Gowers06} (see also~\cite{NaglePoRoSc09}) can be equivalently phrased as stating that, very roughly, for every $\a>0$ and every function $\e_2\colon\N\to(0,1]$, every $3$-graph has an $\a$-quasirandom $(\ell,t,\e_2(\ell))$-equipartition where $t,\ell$ are bounded by a wowzer-type function. One way to prove a wowzer-type lower bound for Gowers' $3$-graph regularity lemma is along similar lines to the proof of Theorem~\ref{theo:FR-LB}. However, there is a shorter proof, using the fact that Gowers' notion of quasirandomness implies Frankl and R\"odl's notion of regularity. In all that follows we make the rather trivial assumption that, in the notation above, $\a,1/\ell \le 1/2$. \begin{prop}[\cite{NagleRoSc17}]\label{prop:Schacht} There is $C \ge 1$ such that the following holds: if a triad $P=(E_0,E_1,E_2)$ is $\e^C$-quasirandom and for every $0 \le i \le 2$ the bipartite graph $E_i$ is $d(E_i)^C$-regular then $P$ is $\e$-regular. \end{prop} Our lower bound for Gowers' $3$-graph regularity lemma is as follows. \begin{theo}[Lower bound for Gowers' regularity lemma]\label{theo:Gowers-LB} For every $s \in \N$ there exists a $3$-partite $3$-graph $H$ of density $p=2^{-s}$, and a partition $\V_0$ of $V(H)$ with $|\V_0| \le 2^{300}$, such that if $(\Z,\E)$ is an $\a$-quasirandom $(\ell,t,\e_2(\ell))$-equipartition of $H$, with $\a \le \poly(p)$, $\e_2(\ell) \le \poly(1/\ell)$ and $\Z \prec \V_0$, then $|\Z| \ge \wow(s)$.
\end{theo} \begin{proof} Given $s$, let $H$ and $\V_0$ be as in Theorem~\ref{theo:FR-LB}. Let $\P=(\Z,\E)$ be an $\a$-quasirandom $(\ell,t,\e_2(\ell))$-equipartition of $H$ with $\Z \prec \V_0$, $\a \le (cp)^C$ and $\e_2(\ell) \le \min\{c\ell^{-3},\,(2\ell)^{-C}\}$, where $c$ and $C$ are as in Theorem~\ref{theo:FR-LB} and Proposition~\ref{prop:Schacht} respectively. We will show that $\P$ is a $cp$-regular partition of $H$, which would complete the proof using Theorem~\ref{theo:FR-LB} and the fact that $\e_2 \le c\ell^{-3}$. Let $P=(E_0,E_1,E_2)$ be a triad of $\P$ that is $\a$-quasirandom in $H$. Note that, by our choice of $\e_2(\ell)$, for every $0 \le i \le 2$ we have $d(E_i) \ge 1/\ell - \e_2(\ell) \ge \frac{1}{2\ell}$; thus, since $\e_2(\ell) \le \left(\frac{1}{2\ell}\right)^{C} \le d(E_i)^C$, we have that $E_i$ is $d(E_i)^C$-regular. Applying Proposition~\ref{prop:Schacht} on $P$ we deduce that $P$ is $\e$-regular with $\e=\a^{1/C} \le cp$. Since $\P$ is an $\a$-quasirandom partition of $H$ we have, by Definition~\ref{def:quasirandom} and since $\a \le \e$, that $\P$ is an $\e$-regular partition of $H$, as needed. \end{proof} \subsection{Proof of Theorem~\ref{theo:FR-LB}}\label{subsec:FR-LB-proof} The proof of Theorem~\ref{theo:FR-LB} will follow quite easily from Theorem~\ref{theo:main} together with Claim~\ref{claim:reduction} below. Claim~\ref{claim:reduction} basically shows that a $\langle \d \rangle$-regularity ``analogue'' of Frankl and R\"odl's notion of regularity implies graph $\langle \d \rangle$-regularity. Here it will be convenient to say that a graph partition is \emph{perfectly} $\langle \d \rangle$-regular if all pairs of distinct clusters are $\langle \d \rangle$-regular without modifying any of the graph's edges. Furthermore, we will henceforth abbreviate $t(P)=|T(P)|$ for a triad $P$. We will only sketch the proof of Claim~\ref{claim:reduction}, deferring the full details to Appendix~\ref{sec:FR-appendix}. \begin{claim}\label{claim:reduction} Let $H$ be a $3$-partite $3$-graph on vertex classes $(\Aside,\Bside,\Cside)$, and let $(\Z,\E)$ be an $(\ell,t,\e_2)$-equipartition of $H$ with $\Z \prec \{\Aside,\Bside,\Cside\}$ such that for every triad $P$ of $(\Z,\E)$ and every subtriad $P'$ of $P$ with $t(P') \ge \d \cdot t(P)$ we have $d_H(P') \ge \frac23 d_H(P)$. If $\e_2(\ell) \le (\d^2/88)\ell^{-3}$ then $\E_3 \cup \Z_3$ is a perfectly $\langle 2\sqrt{\d} \rangle$-regular partition of $G_H^3$. \end{claim} \begin{proof}[Proof (sketch)] We remind the reader that the vertex classes of $G_H^3$ are $(\Aside \times \Bside,\, \Cside)$ (recall Definition~\ref{def:aux}), and that $\E_3$ and $\Z_3$ are the partition of $\Aside\times\Bside$ induced by $\E$ and the partition of $\Cside$ induced by $\Z$, respectively. Suppose $(\Z,\E)$ is as in the statement of the claim, and define $\E'$ as follows: for every $A \in \Z_1$ and $C \in \Z_3$, replace all the bipartite graphs between $A$ and $C$ with the complete bipartite graph $A \times C$. Do the same for every $B \in \Z_2$ and $C \in \Z_3$ (we do {\em not} change the partitions between $\Aside$ and $\Bside$). The simple (yet somewhat tedious to prove) observation is that if all triads of $(\Z,\E)$ are regular then all triads of $(\Z,\E')$ are essentially as regular. Once this observation is proved, the proof of the claim reduces to checking definitions. We thus defer the proof to Appendix~\ref{sec:FR-appendix}. \end{proof} Using Claim~\ref{claim:reduction}, we now prove our wowzer lower bound.
\begin{proof}[Proof of Theorem~\ref{theo:FR-LB}] Put $\a = 2^{-73}$. We have \begin{equation}\label{eq:FR-LB-ineq} c = 2^{-400} \le \a^4/1500 \;. \end{equation} Given $s$, let $H$ and $\V_0$ be as in Theorem~\ref{theo:main}. Let $\P=(\Z,\E)$ be an $\e$-regular $(\ell,t,\e_2(\ell))$-equipartition of $H$ with $\e \le c p$, $\e_2(\ell) \le c \ell^{-3}$ and $\Z \prec \V_0$. Thus, the bound $|\Z| \ge \wow(s)$ would follow from Theorem~\ref{theo:main} if we show that $\P$ is an $\langle \a \rangle$-regular partition of $H$. First we need to show that $\P$ is $\langle \a \rangle$-good (recall Definition~\ref{def:k-good}). Let $E$ be a graph with $E \in \E$ on vertex classes $(Z,Z')$ (so $Z \neq Z' \in \Z$). We need to show that $E$ is $\langle \a \rangle$-regular. Since $\P$ is an $(\ell,t,\e_2(\ell))$-equipartition we have (recall Definition~\ref{def:ve-partition}) that $E$ is $\e_2(\ell)$-regular and $d(E) \ge \ell^{-1} - \e_2(\ell)$. The statement's assumption on $\e_2(\ell)$ thus implies $d(E) \ge 2\e_2(\ell)$. It follows that for every $S \sub Z$, $S' \sub Z'$ with $|S| \ge \e_2(\ell)|Z|$, $|S'| \ge \e_2(\ell)|Z'|$ we have $d_E(S,S') \ge d(E)-\e_2(\ell) \ge \frac12 d(E)$. This proves that $E$ is $\langle \e_2(\ell) \rangle$-regular, and since $\e_2(\ell) \le c \le \a$, that $E$ is $\langle \a \rangle$-regular, as needed. It remains to show that the $\langle \a \rangle$-good $\P$ is an $\langle \a \rangle$-regular partition of $H$ (recall Definition~\ref{def:k-reg}). By symmetry, it suffices to show that $\E_3 \cup \Z_3$ is an $\langle \a \rangle$-regular partition of $G_{H}^3$. Let $H'$ be obtained from $H$ by removing all ($3$-)edges in triads of $\P$ that are either not $\e$-regular in $H$ or have density at most $3\e$ in $H$. By Definition~\ref{def:FR-reg}, the number of edges removed from $H$ to obtain $H'$ is at most \begin{equation}\label{eq:FR-LB-modify} \e|V(H)|^3 + 3\e|V(H)|^3 \le 4\cdot c p |V(H)|^3 \le (\a p/27)|V(H)|^3 = \a\cdot e(H) \;, \end{equation} where the second inequality uses~(\ref{eq:FR-LB-ineq}), and the equality uses the fact that all three vertex classes of $H$ are of the same size. Thus, in $H'$, every non-empty triad of $\P$ is $\e$-regular and of density at least $3\e$. Put $\d = (\a/2)^2$. Again by Definition~\ref{def:FR-reg}, for every triad $P$ of $\P$ and every subtriad $P'$ of $P$ with $t(P') \ge \d \cdot t(P)$ ($\ge \e \cdot t(P)$ by~(\ref{eq:FR-LB-ineq})) we have $d_{H'}(P') \ge d_{H'}(P)-\e \ge \frac23 d_{H'}(P)$. It follows from applying Claim~\ref{claim:reduction} with $H'$ and $\d$, using~(\ref{eq:FR-LB-ineq}), that $\E_3 \cup \Z_3$ is a perfectly $\langle \a \rangle$-regular partition of $G_{H'}^3$. Note that~(\ref{eq:FR-LB-modify}) implies that one can add/remove at most $\a \cdot e(G_{H}^3)$ edges of $G_{H}^3$ to obtain $G_{H'}^3$. Thus, $\E_3 \cup \Z_3$ is an $\langle \a \rangle$-regular partition of $G_{H}^3$, and as explained above, this completes the proof. \end{proof}
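We remark that the numerical inequality~(\ref{eq:FR-LB-ineq}) used above can be checked directly: with $\a=2^{-73}$ and $1500 \le 2^{11}$ we have $$\frac{\a^4}{1500} = \frac{2^{-292}}{1500} \ge 2^{-292-11} = 2^{-303} \ge 2^{-400} = c \;.$$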
{ "timestamp": "2019-07-19T02:09:59", "yymm": "1804", "arxiv_id": "1804.05511", "language": "en", "url": "https://arxiv.org/abs/1804.05511" }
\section{Conclusion} \label{sec:conc} This paper successfully demonstrated the advantages of integrating direct and feature-based methods in VO. By relying on a feature-based map when direct tracking fails, the issue of large baselines that is characteristic of direct methods is mitigated, while maintaining the high accuracy of direct methods in both feature-based and direct maps, and at a relatively low computational cost. Both qualitative and quantitative experimental results proved the effectiveness of the collaboration between direct and feature-based methods in the localization part. While these results are exciting, they do not make use of a global feature-based map; as such we are currently developing a more elaborate integration between both frameworks, to improve the mapping accuracy. Furthermore, we anticipate that the benefits to the mapping thread will also lead to added robustness and accuracy to the motion estimation within a full SLAM framework. \section{Experiments and Results} \label{sec:exp} To evaluate FDMO's tracking robustness, experiments were performed on several well-known datasets \cite{euroc_2016} and \cite{mono_dataset}, and both qualitative and quantitative appraisals were conducted. To further validate FDMO's effectiveness, the experiments were also repeated on state-of-the-art open-source systems in both the direct (DSO) and feature-based (ORB SLAM) categories. For fairness of comparison, we evaluate ORB SLAM as an odometry system (not as a SLAM system); therefore, similar to \cite{engel_2016_ARXIV} we disable its loop closure thread but we keep its global failure recovery, local, and global bundle adjustments intact. Note that we've also attempted to include results from SVO \cite{forster_2014_ICRA} but it continuously failed on most datasets, so we excluded it. \subsection{Datasets} \subsubsection{TUM MONO dataset} \cite{mono_dataset} contains 50 sequences of a camera moving along a path that begins and ends at the same location. The dataset is photometrically calibrated: camera response function, exposure times and vignetting are all available; however, ground truth pose information is only available for two small segments at the beginning and end of each sequence; fortunately, such information is enough to compute translation, rotation, and scale drifts accumulated over the path, as described in \cite{mono_dataset}. \subsubsection{EuRoC MAV dataset} \cite{euroc_2016} contains 11 sequences of stereo images recorded by a drone-mounted camera. Ground truth pose for each frame is available from a Vicon motion capture system. \subsection{Computational cost} The experiments were conducted on an Intel Core i7-4710HQ 2.5 GHz CPU, 16 GB memory; no GPU acceleration was used. The time required by each of the processes was recorded and summarized in Table \ref{tab:Time}. Both DSO and ORB SLAM consist of two parallel components, a tracking process (at frame-rate\footnote{Occurs at every frame.}) and a mapping process (keyframe-rate\footnote{Occurs at new keyframes only.}). On the other hand, FDMO has three main processes: (1) a direct tracking process (frame-rate), (2) a direct mapping process (keyframe-rate), and (3) a feature-based mapping process (keyframe-rate). Both of FDMO's mapping processes can run either sequentially for a total computational cost of $200$ ms on a single thread, or in parallel on two threads.
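To make this process layout concrete, the following minimal sketch (ours, in Python; all names are illustrative and the delays simply mimic the mean mapping times reported in Table \ref{tab:Time}) shows how the two keyframe-rate mapping processes could be run either sequentially or on two threads:
\begin{verbatim}
import threading
import time

def direct_mapping(kf):          # keyframe-rate process, ~47 ms mean
    time.sleep(0.047)

def feature_based_mapping(kf):   # keyframe-rate process, ~154 ms mean
    time.sleep(0.154)

def on_new_keyframe(kf, parallel=True):
    if parallel:
        # two threads: latency ~ max(47, 154) ms
        threads = [threading.Thread(target=f, args=(kf,))
                   for f in (direct_mapping, feature_based_mapping)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    else:
        # single thread: ~200 ms total, as stated above
        direct_mapping(kf)
        feature_based_mapping(kf)
\end{verbatim}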
As Table \ref{tab:Time} shows, the mean tracking time for FDMO remains almost the same as that of DSO: we don't extract features at frame-rate; feature-based tracking in FDMO is only performed when the direct tracking diverges; the extra time is reflected in the slightly increased standard deviation of the computational time with respect to DSO. Nevertheless, it is considerably less than ORB SLAM's 23 ms. As for FDMO's mapping processes, its direct part remains the same as DSO's, whereas the feature-based part takes $153$ ms, which is also significantly less than ORB SLAM's feature-based mapping process that requires $236$ ms. \newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}} \begin{table}[!htb] \renewcommand{\arraystretch}{1.1} \caption{Computational time (ms) for continuous processes in DSO, FDMO and ORB SLAM. (Empty means the system does not have the process.)} \label{tab:Time} \centering \begin{tabular}{|c||M{1.3cm}||M{1.3cm}||M{1.45cm}|} \hline \textbf{Process} & \textbf{DSO} & \textbf{FDMO} & \textbf{ORB SLAM}\\ \hline \parbox[c][0.7cm][c]{2.5cm}{\raggedright Tracking (frame-rate)}& 12.35$\pm$9.62 & 13.54$\pm$14.19& 23.04$\pm$4.11\\ \hline \parbox[c][0.7cm][c]{2.5cm}{\raggedright Direct mapping (keyframe-rate)}& 46.94$\pm$51.62 & 46.89$\pm$65.21& ---\\ \hline \parbox[c][0.7cm][c]{2.7cm}{\raggedright Feature-based mapping (keyframe-rate)}& --- & 153.8$\pm$58.08& 236.47$\pm$101.8\\ \hline \end{tabular} \vspace*{-0.5cm} \end{table} \subsection{Quantitative results} \label{sec:qual_res} We assess FDMO, ORB SLAM and DSO using the following experiments. \subsubsection{Two loop experiment} in this experiment, we investigate the quality of the estimated trajectory by comparing ORB SLAM, DSO, and FDMO. We allow all three systems to run on various sequences of the Tum\_Mono dataset \cite{mono_dataset} across various conditions, both indoors and outdoors. Each system is allowed to run through every sequence for two continuous loops, where each sequence begins and ends at the same location. We record the positional, rotational, and scale drifts at the end of each loop, as described in \cite{mono_dataset}. The drifts recorded at the end of the first loop are indicative of the system's performance across that loop, whereas the drifts recorded at the end of the second loop consist of three components: (1) the drift accumulated from the first loop, (2) an added drift accumulated over the second run, and (3) an error caused by a large baseline motion induced at the transition between the loops. The reported results are shown in Table~\ref{tab:LoopExp} and some of the recovered trajectories are shown in Fig.~\ref{fig:Traj}. \begin{table*}[!htb] \setlength{\abovecaptionskip}{-10pt} \centering \caption{Measured drifts after finishing one and two loops over various sequences from the TumMono dataset. The alignment drift (meters), rotation drift (degrees) and scale ($\frac{m}{m}$) drifts are computed similarly to \cite{mono_dataset}.} \label{tab:LoopExp} \includegraphics[trim={0cm 12cm 0 1cm},clip,width=\textwidth]{Resultsfix.pdf} \vspace*{-0.7cm} \end{table*} \begin{figure}[!htb] \centering \includegraphics[trim={0cm 0cm 0 0cm},clip,width=0.5\textwidth]{Paths.pdf} \caption{Sample paths estimated by the various systems on Sequences 30 and 50 of the Tum\_Mono dataset. The paths are all aligned using ground truths available at the beginning and end of each loop. Each solid line corresponds to the first loop of a system while the dashed line corresponds to the second loop.
Ideally, all systems would start and end at the same location, while reporting the same trajectories across the two loops. Note that in Sequence 50, there is no second loop for DSO as it was not capable of dealing with the large baseline between the loops and failed. } \label{fig:Traj} \end{figure} \subsubsection{Frame drop experiment} While the first experiment reports on the system's performance across large-scale scenes in various conditions, this experiment investigates the effects erratic and large baseline motions have on the camera's tracking accuracy. Erratic motion can be defined as a sudden acceleration in the opposite direction of motion, and is quite common in hand-held devices or quad-copters. Another example of erratic motion occurs when the camera's video feed is being transmitted over a network to a ground station where computation is taking place; communication issues may cause frame drops, which are seen by the odometry system as large baseline motions; therefore it is imperative for an odometry system to cope with such motions. To quantify the influence of erratic motions on an odometry system, we set up an experiment to emulate their effects, by dropping a number of frames and measuring the recovered poses before and after dropping them. The experiment is repeated at the same location and the number of frames dropped is increased until each system fails. Various factors can affect the obtained results, such as the distance to the observed scene, skipping frames towards a previously observed or unobserved scene, and/or the type of camera motion (\textit{i.e.}, sideways, forward moving, or rotational motion), to name a few. Therefore we repeat the above experiment for each system in various locations covering the above scenarios. We chose to perform the experiments on the EuroC dataset \cite{euroc_2016}, whose frame-to-frame ground truth is known, thus allowing us to compute the relative Euclidean distance $Translation=||F_i-F_j||$, and the orientation difference between the recovered poses at $F_i$ and $F_j$ as the \textit{geodesic metric of the normalized quaternions on the unit sphere} defined by $Rotation=\cos^{-1}(2|F_i\cdot F_j|^2-1)$. We report on the percent error $\% Error= 100 \times \frac{|Measured-GroundTruth|}{GroundTruth}$ for the recovered Euclidean distance and relative orientation before and after the skipped frames. The obtained results for FDMO, DSO and ORB SLAM are shown in Fig. \ref{fig:jumpexp}. \begin{figure}[!htb] \setlength{\belowcaptionskip}{-10pt} \centering \includegraphics[width=0.45\textwidth]{JumpResopt2.pdf} \caption{$\% Error$ vs. ground truth motion measured by dropping frames and estimating the relative transformation (rotation and translation) before and after the frames were dropped. (A) was conducted in the sequence MH01, (B) in the sequence MH02, and (C) in the sequence MH03 of the EuroC dataset \cite{euroc_2016}. } \label{fig:jumpexp} \end{figure} \subsection{Qualitative assessment} Fig. \ref{fig:quali} compares the feature-based map generated by FDMO to that of ORB SLAM (without loop closure). Notice the difference in the accumulated drift between both maps; FDMO's feature-based map inherited the sub-pixel accuracy of direct methods and did not suffer from severe drift.
\begin{figure}[!htb] \centering \setlength{\belowcaptionskip}{-15pt} \includegraphics[trim={0cm 0cm 0 0cm},clip, width=0.46\textwidth]{qualitativeS.pdf} \caption{Trajectory and feature-based maps estimated by ORB SLAM and FDMO after traversing Sequence 50 of the Tum\_mono dataset. } \label{fig:quali} \end{figure} \subsection{Discussion} The results reported in the first experiment (Table~\ref{tab:LoopExp}) demonstrate FDMO's performance in large-scale indoor and outdoor environments. The importance of the problem FDMO attempts to address is highlighted by analyzing the drifts incurred at the end of the first loop; while no artificial erratic motions or large baselines were introduced over the first loop, FDMO was able to outperform both DSO and ORB SLAM in terms of positional, rotational, and scale drifts on most sequences. The improved performance is due to FDMO's ability to detect and account for inaccuracies in the direct framework using its feature-based map, while benefiting from the sub-pixel accuracy of the direct framework. Furthermore, FDMO was capable of expanding both its direct and feature-based maps in feature-deprived environments (\textit{e.g.}, Sequence 40) whereas ORB SLAM failed to do so. FDMO's robustness is further proven by analyzing the results obtained over the second loop. The drifts accumulated towards the end of the second loop are made of three components; namely, the drift incurred over the first loop, the drift incurred over the second, and an error caused by a large baseline separating the frames at the transition between the loops. If the error caused by the large baseline is negligible, we would expect the drift at the second loop to be double that of the first. While the measured drifts for both ORB SLAM and FDMO do indeed exhibit such behavior, the drifts reported by ORB SLAM are significantly larger than the ones reported by FDMO, as Fig.~\ref{fig:Traj} also highlights. On the other hand, DSO tracking failed entirely on various occasions, and when it did not fail, it reported a significantly larger increase in drifts over the second loop. As DSO went through the transition frames between the loops, its motion model estimate was violated, erroneously initializing its highly non-convex tracking optimization. The optimization subsequently got stuck in a local minimum, which led to a wrong pose estimate. The wrong pose estimate was in turn used to propagate the map, thereby causing large drifts. On the other hand, FDMO was successfully capable of handling such a scenario. The results reported in the second experiment (Fig.~\ref{fig:jumpexp}) quantify the robustness limits of each system to erratic motions. Various factors may affect the obtained results; therefore, we attempted the experiments under various types of motion and by skipping frames towards a previously observed (herein referred to as backward) and previously unobserved part of the scene (referred to as forward). The observed depth of the scene is also an important factor: far-away scenes remain for a longer time in the field of view, thus improving the systems' performance. However, we cannot model all different possibilities of depth variations; therefore, for the sake of comparison, all systems were subjected to the same frame drops at the same locations in each experiment, where the observed scene's depth varied from three to eight meters.
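For reference, the relative-motion error metrics used in this experiment can be computed as in the following minimal sketch (ours; poses are assumed to be given as positions plus unit quaternions, and all names are illustrative):
\begin{verbatim}
import numpy as np

def percent_errors(p_i, p_j, q_i, q_j, gt_trans, gt_rot_deg):
    # Translation = ||F_i - F_j||, the relative Euclidean distance
    trans = np.linalg.norm(p_i - p_j)
    # Rotation = arccos(2|<q_i, q_j>|^2 - 1), the geodesic metric
    # on normalized quaternions, converted to degrees
    d = np.clip(abs(np.dot(q_i, q_j)), 0.0, 1.0)
    rot = np.degrees(np.arccos(2.0 * d * d - 1.0))
    # %Error = 100 * |measured - ground truth| / ground truth
    err = lambda measured, gt: 100.0 * abs(measured - gt) / gt
    return err(trans, gt_trans), err(rot, gt_rot_deg)
\end{verbatim}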
The reported results highlight DSO's brittleness to any violation of its motion model; translations as little as thirty centimeters and rotations as small as three degrees introduced errors of over $50\%$ in its pose estimates. On the other hand, FDMO was capable of accurately handling baselines as large as $1.5$ meters and $20$ degrees towards a previously unobserved scene, after which failure occurred due to feature deprivation, and two meters towards previously observed parts of the scene. ORB SLAM's performance was very similar to FDMO's in forward jumps; however, it significantly outperformed FDMO, by a factor of two, in the backward jumps; ORB SLAM uses a global map for failure recovery whereas FDMO, being an odometry system, can only make use of its immediate surroundings. Nevertheless, FDMO's current limitations in this regard are purely due to our current implementation, as there is no theoretical obstacle to developing FDMO into a full SLAM system. However, using a global relocalization method has its downside; the jitter in ORB SLAM's behavior (shown in Fig. \ref{fig:jumpexp} (C)) is due to its relocalization process erroneously localizing the frame at spurious locations. Another key aspect of FDMO visible in this experiment is its ability to detect failure and not incorporate it into its map. In contrast, towards their failure limits, both DSO and ORB SLAM incorporate spurious measurements for a few frames before failing completely. \section{INTRODUCTION} \label{sec:intro} Visual Odometry (VO) is the process of localizing one or several cameras in an unknown environment. Using a video feed from a moving camera, VO generates a temporary 3D map of the camera's surroundings and uses it to recover the camera's motion within the observed scene. VO is considered indispensable for various tasks, including visual-based robotic navigation and augmented reality applications, to name a few. Two decades of extensive research have led to a multitude of VO systems that can be categorized, based on the type of information they extract from an image, as direct, feature-based, or a hybrid of both \cite{younes_2016_ARXiv}. While the direct framework manipulates photometric measurements (pixel intensities), the feature-based framework extracts and uses visual features as an intermediate image representation. The choice of feature-based or direct method has important ramifications on the performance of the entire VO system, with each type exhibiting its own challenges, advantages, and disadvantages. \begin{figure}[!tb] \centering \includegraphics[trim={0.2cm 0.3cm 0 0.7cm},clip,width=0.48\textwidth]{Front_fig_f.pdf} \vspace*{-0.7cm} \caption{Direct methods failure under large baseline motion. (A) and (B) show the trajectory estimated from a direct odometry system, before and after going through a relatively large baseline between two consecutive frames (shown in (C) and (D)). Notice how the camera's pose in (B) derailed from the actual path to a wrong pose. (C) and (D) show the projected direct point cloud on both frames respectively after erroneously estimating their poses. Notice how the projected point cloud is no longer aligned with the scene. On the other hand, (E) and (F) show how features can be matched across the relatively large baseline, allowing feature-based methods to cope with such motions.} \label{fig:motivation} \vspace{-0.62cm} \end{figure} One disadvantage of particular interest to this paper is the sensitivity of direct methods to their motion model.
This limitation is depicted in Fig. \ref{fig:motivation} (A) and (B), where a direct VO system is subjected to a motion that violates its presumed motion model, causing it to erroneously expand the map as shown in Fig. \ref{fig:motivation} (C) and (D). Inspired by the invariance of feature-based methods across relatively large baselines (as shown in Fig. \ref{fig:motivation} (E) and (F)), this paper proposes to address the shortcomings of direct methods by detecting failure in their frame-to-frame odometry component, and accordingly invoking an efficient feature-based strategy to cope with the large baselines. We call our approach Feature-assisted Direct Monocular Odometry, or FDMO for short. While we don't make use of a complete SLAM formulation (\textit{i.e.}, we don't make use of a global map for failure recovery nor for loop closure), we show that by effectively exploiting information available from the direct framework, FDMO inherits the advantages of direct methods in terms of sub-pixel accuracy, robustness to feature-deprived environments in its feature-based map, and low computational cost at frame rate; all while gaining the advantages of the feature-based framework in terms of handling large baseline motions. \section{Background} \label{sec:Backg} Visual odometry can be broadly categorized as being either direct or feature-based. \subsection{Direct VO} \label{sec:dir} Direct methods process raw pixel intensities with the brightness constancy assumption \cite{baker_2004_IJCV}: \begin{equation} \label{eq:brightness} I_{t}(x)=I_{t-1}(x+g(x)), \end{equation} where $x$ is the 2-dimensional pixel coordinates $(u,v)^T$ and $g(x)$ denotes the displacement function of $x$ between the two images $I_t$ and $I_{t-1}$. \subsubsection{Traits of direct methods} since direct methods rely on the entire image for localization, they are less susceptible to failure in feature-deprived environments, and do not require a time-consuming feature extraction and matching step. More importantly, since the alignment takes place at the pixel intensity level, the photometric residuals can be interpolated over the image domain $\Omega I$, resulting in an image alignment with sub-pixel accuracy, and relatively less drift than feature-based odometry methods \cite{irani_1999_iccv}. However, the objective function to minimize is highly non-convex; its convergence basin is very small, and the optimization will lock onto an erroneous configuration if it is not accurately initialized. Most direct methods cope with this limitation by adopting a pyramidal implementation, by assuming small inter-frame motions, and by relying on relatively high frame rate cameras; however, as a rule of thumb, all parameters involved in the optimization should be initialized such that $x$ and $g(x)$ are within a 1--2 pixel radius of each other. \subsubsection{State of the art in direct methods} Direct Sparse Odometry (DSO) \cite{engel_2016_ARXIV} is a keyframe-based VO that adopts the inverse depth parametrization of \cite{civera_2008_TRO}, which is suitable for estimating depths with small parallax; therefore, it does not suffer from epipolar-geometry-based triangulation degeneracies and can handle points at infinity. DSO employs a pyramidal implementation of the forward additive image alignment \cite{baker_2004_IJCV} to optimize a variant of the brightness constancy assumption over the incremental geometric transformation between the current frame and a reference keyframe.
The optimization can be summarized by: \begin{equation} \label{eq:FAIA} \underset{T_{f_i,KF_j}}{\operatorname{argmin}}\sum_{x} \sum_{x_k\in N(x)}Obj\left(I_{f_i}(\omega(x_k,d,T_{f_i,KF_j}))-I_{KF_j}(x_k)\right) \end{equation} where $T_{f_i,KF_j} \in SE(3)$ is the transformation relating the current frame $f_i$ to a reference keyframe $KF_j$; $x\in \Omega I_d$ ranges over the set of image locations with sufficient intensity gradient and an associated depth value $d$; $x_k\in N(x)$ is the set of pixels surrounding $x$ defined by the local neighborhood $N(x)$; $Obj(.)$ is the Huber norm, and $\omega(.)$ is defined as: \begin{equation} \omega(x_k,d,T_{f_i,KF_j})=\pi(T_{f_i,KF_j}\pi^{-1}(x_k,d)) \end{equation} DSO's tracking front-end takes place on a frame by frame basis, and exploits the nature of small inter-frame motions in a video feed to update its depth filters for each point of interest in a keyframe, as described in \cite{engel_2013_ICCV}. DSO keeps in its map a small set of keyframes $\kappa_{dir}$, in which all current map points exist. DSO's back-end ensures the local consistency of its map through a photometric optimization, defined by: \begin{multline} \label{eq:PhBA} \underset{T_{KF_i},d}{\operatorname{argmin}}\sum_{KF_i\in \kappa_{dir}}\sum_{x}\sum_{\substack{j\in \kappa_{dir}\\ i\neq j}} \sum_{x_k\in N(x)}\\Obj\left(I_{KF_i}(\omega(x_k,d,T_{KF_i,KF_j}))-I_{KF_j}(x_k)\right) \end{multline} \subsection{Feature-based VO}\label{sec:feat} Feature-based methods process 2D images to extract locations that are salient in an image. Let $x=(u,v)^T$ represent a feature's pixel coordinates in the 2-dimensional image domain $\Omega\textit{I}$. Associated with each feature is an $n$-dimensional vector $Q^{n}(x)$, known as a \textit{descriptor}. The set $\Phi\textit{I}\{x,Q(x)\}$ is an intermediate image representation, after which the image itself becomes obsolete and is discarded. \subsubsection{Traits of feature-based methods} on the positive side, features with their associated descriptors are somewhat invariant to viewpoint and illumination changes, such that a feature $x\in\Phi I_1$ in one image can be identified as $x'\in \Phi I_2$ in another, across \textit{relatively large baselines}. Such invariance comes from the properties of a feature extractor. On the downside, and as a result of their discretized image representation space, feature-based solutions offer inferior accuracy when compared to \textit{direct} methods, where the image domain can be interpolated for sub-pixel accuracy. \subsubsection{State of the art in feature-based methods} ORB-SLAM \cite{mur-artal_2015_TRO}, currently considered the state of the art in feature-based methods, associates FAST corners \cite{rosten_2006_ECCV} with ORB descriptors \cite{rublee_2011_ICCV} as an intermediate image representation. The ORB SLAM map consists of 3D map points $X_j(\{x_i,Q(x_i)\}) \in {\rm I\!R}^3$, as well as special frames, referred to as \textit{keyframes} (KF), where $KF_i\in\kappa=\left[ T_{i,w}, \Phi\{x,Q(x)\} \right]$ with $T_{i,w}\in SE(3)$ being the keyframe's pose in the world coordinate frame. The 3D points are triangulated using epipolar geometry \cite{hartley_2003_Cambridge}, from multiple observations of the feature $\{x_i,Q(x_i)\}$ in two or more keyframes. Unfortunately, this adds another shortcoming to feature-based methods, as epipolar-based triangulation is unstable for ``far-away'' features (small parallax) \cite{yang_2017_Arxiv}.
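To illustrate the intermediate representation $\Phi I\{x,Q(x)\}$ just described, the following is a minimal sketch (ours) using OpenCV's FAST/ORB implementation; the brute-force matcher merely stands in for the descriptor invariance discussed above and is not a claim about any particular system's pipeline:
\begin{verbatim}
import cv2

def extract(img):
    # ORB = FAST corners + ORB binary descriptors
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors   # the set Phi_I{x, Q(x)}

def match(desc1, desc2):
    # Hamming-distance matching of binary descriptors; a vocabulary
    # tree would replace this brute-force step in a real pipeline
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(bf.match(desc1, desc2), key=lambda m: m.distance)
\end{verbatim}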
Regular frames are localized by minimizing the geometric re-projection error defined by: \begin{equation} \label{eq:featTracking} \underset{T_{i}}{\operatorname{argmin}} \sum_j Obj(x_j-\pi(T_{i,w},X_j)), \end{equation} where $Obj(.)$ is the Huber norm, $x_j$ is the 2D location, in the current frame, of the feature that matched the 3D point $X_j$, and $\pi$ is the pinhole camera projection model that projects the $3$-dimensional point $X_j$ onto the current frame. The consistency of the map is maintained through a local bundle adjustment process defined by: \begin{equation} \underset{T_{KF_i},X_j}{\operatorname{argmin}} \sum_{i\in\kappa'}\sum_{j} Obj(x_{i,j}-\pi(T_{KF_i},X_j)), \end{equation} where $X_j$ is the set of map points that were observed in the set of keyframes $KF_i \in \kappa'$ and $\kappa'$ is a subset of the map. Both optimizations are resilient to relatively large inter-frame baseline motions and have a large convergence radius. Although ORB SLAM is considered a SLAM system (which maintains a global map and uses it for loop closure), its VO component is considered the state of the art in feature-based methods. Therefore, for the fairness of comparison, and similar to \cite{mono_dataset}, we reduce ORB SLAM to an odometry system by disabling its loop closure detection component. \subsection{Feature-based vs. Direct} \label{sec:hybrid} When the corresponding pros and cons of both feature-based and direct frameworks are placed side by side, a pattern of complementary traits emerges (Table \ref{tab:FeatvsOdom}). An ideal framework would exploit both direct and feature-based advantages to benefit from the direct formulation's accuracy and robustness to feature-deprived scenes, while making use of feature-based methods for large baseline motions. \begin{table}[!htb] \caption{Comparison between the feature-based and direct methods. The more + symbols, the stronger the attribute.} \label{tab:FeatvsOdom} \begin{center} \begin{tabular}{|p{3.5cm}||c||c|} \hline \textbf{Trait} & \textbf{Feature-based} & \textbf{Direct}\\ \hline Large baseline & +++ & +\\ \hline Robust to feature deprivation& + & +++\\ \hline Recovered scene point density &+ & +++\\ \hline Accuracy & + & +++\\ \hline Optimization non-convexity& + & ++\\ \hline \end{tabular} \end{center} \vspace*{-0.5cm} \end{table} \section{Proposed system} \label{sec:proposed} To capitalize on the advantages of both feature-based and direct frameworks, our proposed approach consists of a local direct visual odometry, assisted with a feature-based map, such that it may resort to feature-based odometry only when necessary. Therefore, FDMO does not need to perform a computationally expensive feature extraction and matching step at every frame. During its feature-based map expansion, FDMO exploits the keyframes localized with sub-pixel accuracy by the direct framework, to efficiently establish feature matches in feature-deprived environments using restricted epipolar search lines. To address any ambiguities, the subscript \textit{d} will be assigned to all direct-based measurements and \textit{f} to feature-based measurements. Similar to DSO, FDMO's local temporary map $M_{d}$ is defined by a set of seven direct-based keyframes $\kappa_{d}$ and $2000$ active direct points. Increasing these parameters was found by \cite{engel_2016_ARXIV} to significantly increase the computational cost without much improvement in accuracy. Direct keyframe insertion and marginalization occur frequently, according to conditions described in \cite{engel_2016_ARXIV}.
In contrast, the feature-based map $M_{f}$ is made of an undetermined number of keyframes $\kappa_{f}$, each with an associated set of features and their corresponding ORB descriptors $\Phi(x,Q(x))$. \subsection{Odometry} \subsubsection{Direct image alignment} frame-by-frame operations are handled by the flowchart described in Fig. \ref{fig:DirAlignment}. Similar to \cite{engel_2016_ARXIV}, newly acquired frames are tracked by minimizing \eqref{eq:FAIA} in $M_{d}$, seeded from a constant velocity motion model (CVMM). However, erratic motion or large motion baselines can easily violate the CVMM, erroneously initializing the highly non-convex optimization and yielding unrecoverable tracking failure. We detect tracking failure by monitoring the RMSE of the image alignment process before and after the optimization; if the ratio $\frac{RMSE_{after}}{RMSE_{before}}>1+\epsilon$ we consider that the optimization has diverged and we invoke the feature-based tracking recovery, summarized in the flowchart of Fig. \ref{fig:feaTracking}. The role of $\epsilon$ is to restrict feature-based intervention when the original motion model is accurate; a value of $\epsilon=0.1$ was found to be a good trade-off between continuously invoking the feature-based tracking and not detecting failures in the optimization. To avoid extra computational cost, feature extraction and matching is not performed on a frame-by-frame basis; it is only invoked during feature-based tracking recovery and feature-based KF insertion. \subsubsection{Feature-based tracking recovery} Our proposed feature-based tracking operates in $M_{f}$. When direct tracking diverges, we consider the CVMM estimate to be invalid and seek to estimate a new motion model using the feature-based map. The new motion model is then used to re-initialize the direct image alignment. Our proposed feature-based tracking recovery is a variant of the global re-localization method proposed in \cite{mur-artal_2015_TRO}; we first start by detecting FAST features with their associated ORB descriptors $\Phi f_{f}=\Phi(x,Q(x))$ in the current image, which are then parsed into a vocabulary tree. Since we consider the CVMM to be invalid, we fall back on the last piece of information the system was sure of before failure: the pose of the last successfully added keyframe. We define a set of ten feature-based keyframes $N_{f}$ connected to the last added keyframe $KF_{d}$ through a covisibility graph \cite{strasdat_2011_ICCV}, each with its associated $X\Phi KF_j$, where $KF_j\in N_{f}$, and $X\Phi{KF_j}$ is the set of features from $KF_j$ that are associated with previously triangulated map points. Blind feature matching is then performed between $\Phi f_i$ and $\Phi KF_j$, by restricting feature matching to take place between features that exist in the same node of a vocabulary tree \cite{lopez_2012_TRO}; this is done to reduce the computational cost of blindly matching all features. Once data association is established between $f_i$ and the map points observed in $N_{f}$, we set up an EPnP (Efficient Perspective-n-Point Camera Pose Estimation) \cite{lepetit_2009_IJCV} solver to compute an initial pose $T_{f_i}$ from the 3D-2D correspondences in a non-iterative manner. The new pose is then used to define a search window in $f_i$ surrounding the projected locations of all map points $X\in N_{f}$. Finally, the pose $T_{f_i}$ is refined through the geometric optimization defined by \eqref{eq:featTracking}.
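A minimal sketch (ours) of this initialization step, using OpenCV's EPnP solver; the guided re-matching within the search window and the refinement of \eqref{eq:featTracking} are omitted, and all names are illustrative rather than taken from any actual implementation:
\begin{verbatim}
import cv2
import numpy as np

def initial_pose_epnp(pts3d, pts2d, K):
    """pts3d: Nx3 map points matched via the vocabulary tree;
    pts2d: Nx2 pixel observations in the current frame f_i;
    K: 3x3 camera intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None                  # recovery failed; no pose update
    R, _ = cv2.Rodrigues(rvec)       # initial guess of T_{f_i}
    return R, tvec, inliers
\end{verbatim}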
To achieve sub-pixel accuracy, the recovered pose $T_{f_i}$ is then converted into a local increment over the pose of the last active direct keyframe using $T_{f_i}\cdot T_{d,KF_{d}}$, and then further refined in a direct image alignment optimization \eqref{eq:FAIA}. Note that the EPnP step could have been skipped in favor of using the last correctly tracked keyframe's position as a starting point; however, this would require a larger search window, which in turn increases the computational burden of data association in the subsequent step; data association using a search window was also empirically found to fail when the baseline motion was relatively large. \begin{figure*}[!htb] \centering \includegraphics[width=0.7\textwidth]{FeaOdom.pdf} \caption{FDMO tracking recovery flowchart. Only invoked when direct image alignment fails, it takes over the front-end operations of the system until the direct map is re-initialized. We start by extracting features from the new frame and matching them to 3D features observed in a set of keyframes $N_f$ connected to the last correctly added keyframe. Efficient Perspective-n-Point (EPnP) camera pose estimation is used to compute an initial pose estimate, which is then refined by a guided data association between the local map and the frame. The refined pose is then used to seed a forward additive image alignment step to achieve sub-pixel accuracy.} \label{fig:feaTracking} \end{figure*} \subsection{Mapping} \begin{figure*}[!htb] \centering \includegraphics[width=0.7\textwidth]{Mapping.pdf} \caption{Our proposed feature-based mapping flowchart; it operates after, or in parallel to, the direct photometric optimization of \eqref{eq:PhBA} and is responsible for expanding the feature-based map with new $KF_{f}$. It establishes feature matches using restricted epipolar search lines; the 3D feature-based map is then optimized using a computationally efficient structure-only bundle adjustment, before map maintenance ensures that the map remains outlier-free.} \label{fig:mapping} \end{figure*} The direct-feature-based map is expanded as described in Fig. \ref{fig:mapping}. When a new keyframe is added to $M_{d}$, we create a new feature-based keyframe $KF_{f}$ that inherits its pose from $KF_{d}$ after it is optimized through \eqref{eq:PhBA}. $\Phi KF_{f}(x,Q(x))$ is then extracted and data association takes place between the new keyframe and a set of local keyframes $\kappa'_{f}$ surrounding it, via computationally efficient epipolar search lines. The data association is used to keep track of all map points $X_{f}$ visible in the new keyframe and to triangulate new map points. To ensure an accurate and reliable feature-based map, typical feature-based methods employ local bundle adjustment to optimize both the keyframes' poses and their associated map points; however, this is computationally very expensive and could severely reduce the frame rate of our proposed approach. Instead, we make use of the fact that the new keyframe's pose is locally optimal (optimized in the direct optimization of \eqref{eq:PhBA}) to replace the typical local bundle adjustment with a computationally less demanding structure-only optimization defined by: \begin{equation} \label{eq:sba} \underset{X_j}{\operatorname{argmin}} \sum_{i\in\kappa'_{f}}\sum_{j} Obj(x_{i,j}-\pi(T_{KF_i},X_j)), \end{equation} where $X_j$ spans all 3D map points observed in all keyframes in $\kappa'_{f}$.
We limit the number of iterations in the optimization of \eqref{eq:sba} to ten, since no significant reduction in the feature-based re-projection error was recorded beyond ten iterations. \subsection{Feature-based map maintenance} To ensure a reliable feature-based map, the following practices are employed. For proper operation, direct methods require frequent addition of keyframes, resulting in small baselines between the keyframes, which in turn can cause degeneracies if used to triangulate feature-based points. To avoid numerical instabilities, we prevent feature triangulation between keyframes with a $\frac{baseline}{depth}$ ratio less than the empirically tuned threshold of $0.02$, which is a trade-off between numerically unstable triangulated features and feature deprivation problems. We exploit the frequent addition of keyframes as a feature quality check. In other words, a feature has to be correctly found in at least $4$ of the $7$ keyframes subsequent to the keyframe it was first observed in; otherwise it is considered spurious and is subsequently removed. To ensure no feature deprivation occurs, a feature cannot be removed until at least 7 keyframes have been added since it was first observed. Finally, a keyframe with ninety percent of its points shared with other keyframes is removed from $M_{f}$ only once it is marginalized from $M_{d}$. The aforementioned practices ensure that sufficient reliable map points and features are available in the immediate surroundings of the current frame, and that only necessary map points and keyframes are kept once the camera moves on. \section{Related work} \label{sec:related} Hybrid direct-feature-based systems were previously proposed in \cite{forster_2014_ICRA}, \cite{krombach_2016_ICIAS} and \cite{jellal_2016_ECMR}; however, \cite{forster_2014_ICRA} did not extract feature descriptors; it relied on the direct image alignment to perform data association between the features. While this led to significant speed-ups in the processing required for data association, it could not handle large baseline motions; as a result, their work was limited to high frame rate cameras (which ensured that the frame-to-frame motion remained small). On the other hand, both \cite{krombach_2016_ICIAS} and \cite{jellal_2016_ECMR} adopted a feature-based approach as a front-end to their systems, and subsequently optimized the measurements with a direct image alignment. As such, these systems suffer from the limitations of the feature-based framework and are subject to failure in feature-deprived environments. To address this issue, both systems resorted to stereo cameras. In contrast to these systems, we propose a direct alignment as a front-end, backed by a feature-based map that is invoked whenever the direct alignment fails. Therefore, FDMO can operate using a monocular camera, and can adaptively switch between the two modes when necessary. It is worth mentioning that FDMO can be adapted for stereo and RGBD cameras as well. \section*{ACKNOWLEDGMENT} This work was funded by the University Research Board (URB) at the American University of Beirut, and the Canadian National Science Research Council (NSERC). \bibliographystyle{IEEEtran}
{ "timestamp": "2018-04-17T02:11:55", "yymm": "1804", "arxiv_id": "1804.05422", "language": "en", "url": "https://arxiv.org/abs/1804.05422" }
\section{Introduction} A homogeneous ideal $I\subset R=\K[\P^N]$ in the ring of polynomials with coefficients in a field $\K$ decomposes as the direct sum of graded parts $I=\oplus_{t\geq 0}I_t$. For a nontrivial homogeneous ideal $I$ in $\K[\P^N]$, the \emph{initial degree} $\alpha(I)$ of $I$ is the least integer $t$ such that $I_t\neq 0$. For a positive integer $m$, the $m^{th}$ symbolic power $I^{(m)}$ of $I$ is defined as $$I^{(m)}=\bigcap_{P\in\Ass(I)}\left(I^mR_P\cap R\right),$$ where $\Ass(I)$ is the set of associated primes of $I$ and the intersection takes place in the field of fractions of $R$. We define the \emph{initial sequence} of $I$ as the sequence of integers $\alpha_m=\alpha(I^{(m)})$. If $I$ is a radical ideal determined by the vanishing along a closed subscheme $Z\subset\P^N$, then the Nagata-Zariski Theorem \cite[Theorem 3.14]{EisenbudBook} provides a nice geometric interpretation of symbolic powers of $I$, namely $I^{(m)}$ is the ideal of polynomials vanishing to order at least $m$ along $Z$. This implies, in particular, that the initial sequence is strictly increasing. The study of the relationship between the sequence of initial degrees of symbolic powers of homogeneous ideals and the geometry of the underlying algebraic sets in projective spaces was initiated by Bocci and Chiantini in \cite{BocChi11}. They proved that if $Z\subset\P^2$ is a finite set of points and $I=I(Z)$ is the vanishing ideal of $Z$, then the equality $$\alpha(I^{(2)})=\alpha(I)+1$$ implies that either all points in $Z$ are collinear or they form a star configuration (see Definition \ref{def:star}). This result has been considerably generalized in several directions. Dumnicki, Tutaj-Gasi\'nska and the third author studied higher symbolic powers of ideals supported on points in \cite{planar1} and \cite{planar2}. Natural analogues of the problem have been studied on $\P^1\times\P^1$ in \cite{P1xP1} and on Hirzebruch surfaces in general in \cite{DLS15}. Bauer and the third author proposed in \cite{BauSze15} the following conjecture for points in higher dimensional projective spaces and proved it for $N=3$. \begin{varthm*}{Conjecture}[Bauer, Szemberg]\label{conj:BS} Let $Z$ be a finite set of points in the projective space $\mathbb{P}^N$ and let $I$ be the radical ideal defining $Z$. If \begin{equation*} d :=\alpha(I^{(N)})=\alpha(I) +N-1 \end{equation*} then either $\alpha(I)=1$ and the set $Z$ is contained in a single hyperplane or $Z$ is a star configuration of codimension $N$ associated to $d$ hyperplanes in $\P^N$. \end{varthm*} In recent years many problems stated originally for points in projective spaces have been generalized to arrangements of flats, see e.g. \cite{GHV13, DHST14, MalSzp18, MalSzp17, DFST18}. In particular, Janssen in \cite{Jan15} generalized results of Bocci and Chiantini to configurations of lines in $\P^3$ defined by homogeneous Cohen-Macaulay ideals. Symbolic powers of codimension $2$ Cohen-Macaulay ideals have been studied recently in \cite{codim2}. Results of these two articles, especially Section 3 in \cite{Jan15}, have motivated the research presented here. Our main result is the following (see Definition \ref{def:pseudo-star} for the definition of a pseudo-star configuration). \begin{theorem}[Main result]\label{thm:main} Let $\L$ be the union of a finite set of codimension $2$ projective subspaces in $\P^N$ and let $J$ be its vanishing ideal.
If $J$ is Cohen-Macaulay and $$d:=\alpha(J^{(2)})=\alpha(J)+1,$$ then $\L$ is either contained in a single hyperplane or it is a codimension $2$ pseudo-star configuration determined by $d$ hyperplanes. \end{theorem} Throughout this note we work over a field $\K$ of characteristic zero. \section{Preliminaries} The term ``star configuration'' was coined by Geramita. It is motivated by the observation that five general lines in $\P^2$ resemble a pentagram. The objects defined below have appeared in recent years in various guises in algebraic geometry, commutative algebra and combinatorics, see \cite{GHM13} for a thorough account. \begin{definition}[Star configuration]\label{def:star} Let $\calh=\left\{H_1,\ldots,H_s\right\}$ be a collection of $s\geq 1$ mutually distinct hyperplanes in $\P^N$ defined by linear forms $\left\{h_1,\ldots,h_s\right\}$. We assume that the hyperplanes meet \emph{properly}, i.e., the intersection of any $c$ of them is either empty or has codimension $c$, where $c$ is any integer in the range $1\leq c\leq\min\left\{s,N\right\}$. The union $$S(c,\calh)=\bigcup_{1\leq i_1< \ldots < i_c\leq s} H_{i_1}\cap\ldots\cap H_{i_c}$$ is the \emph{codimension $c$ star configuration} associated to $\calh$. Its ideal is $$I(c,\calh)=I(S(c,\calh))=\bigcap_{1\leq i_1 < \ldots < i_c\leq s} \left(h_{i_1},\ldots,h_{i_c}\right),$$ where $h_i$, $i = 1,\ldots, s$, are the linear forms in $R$ defining the hyperplanes $H_i$. \end{definition} The condition of meeting properly is satisfied by a collection of general hyperplanes. If the collection $\calh$ is clear from the context or irrelevant, we write $$S_N(c,s)$$ to denote a codimension $c$ star configuration determined by $s$ hyperplanes in $\P^N$. For the purpose of this note, it is convenient to use the following terminology: an $r$--flat in a projective space is a linear subspace of (projective) dimension $r$. Thus a codimension $c$ star configuration determined by $s$ hyperplanes is the union of $\binom{s}{c}$ distinguished $(N-c)$--flats. The following notion is essential for our arguments. \begin{definition}[Cohen-Macaulay] A noetherian local ring $(R,\mathfrak{m})$ is called \emph{Cohen-Macaulay} if $$\depth_{\mathfrak{m}}R=\dim(R).$$ A noetherian ring $R$ is Cohen-Macaulay (CM) if all of its local rings at prime ideals are Cohen-Macaulay.\\ A closed subscheme $Z\subset\P^N$ with defining ideal $I(Z)$ is called \emph{arithmetically Cohen-Macaulay (ACM for short)} if its coordinate ring $\K[\P^N]/I(Z)$ is CM. \end{definition} By \cite[Proposition 2.9]{GHM13} every star configuration is ACM. The following feature of ACM subschemes makes them particularly suited for inductive arguments. \begin{proposition}\label{prop:ACM and hyperplane section} Let $Z\subseteq \mathbb{P}^N$ be an ACM subscheme of dimension at least $1$, and let $H\subseteq \mathbb{P}^N$ be a general hyperplane. Then the intersection scheme $Z\cap H$ is ACM and $$\alpha(I(Z))=\alpha(I(Z\cap H)).$$ \end{proposition} \begin{proof} A general hyperplane section of any curve is ACM, because all subschemes of dimension zero are ACM. For general hyperplane sections of higher dimensional subschemes see \cite[Theorem 1.3.3]{MiglioreBook}. The second claim follows from \cite[Corollary 1.3.8]{MiglioreBook} and some basic properties of postulation. \end{proof} In \cite{Jan15} Janssen introduced the notion of a pseudo-star configuration for lines in $\P^3$. In the present note we extend this notion to higher dimensional flats in projective spaces of arbitrary dimension.
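Before proceeding, we record the simplest instance of Definition \ref{def:star} for orientation: take $N=c=2$ and $s=5$ general lines $\calh=\{H_1,\ldots,H_5\}$ in $\P^2$. Then $S(2,\calh)$ consists of the $\binom{5}{2}=10$ pairwise intersection points $H_i\cap H_j$, and $$I(2,\calh)=\bigcap_{1\leq i<j\leq 5}\left(h_i,h_j\right),$$ which is precisely the pentagram picture mentioned at the beginning of this section.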
To begin with, note that if $\calh=\left\{H_1,\ldots,H_s\right\}$ is a collection of $s>N$ hyperplanes in $\P^N$, then the assumption that they intersect properly (see Definition \ref{def:star}) is equivalent to assuming that any $(N+1)$ of them have an empty intersection. For our purposes, we need to weaken this condition, i.e. \begin{definition}[Pseudo-star configuration]\label{def:pseudo-star} Let $\calh=\left\{H_1,\ldots,H_s\right\}$ be a collection of hyperplanes in $\P^N$ and let $1\leq c\leq N$ be a fixed integer. We assume that the intersection of any $c+1$ of the hyperplanes in $\calh$ has codimension $c+1$ (equivalently: no $c+1$ hyperplanes in $\calh$ have the same intersection as any $c$ of them). The union $$P(c,\calh)=\bigcup_{1\leq i_1 < \ldots < i_c\leq s} H_{i_1}\cap\ldots\cap H_{i_c}$$ is called the \emph{codimension $c$ pseudo-star configuration} determined by $\calh$. \end{definition} If $\calh$ is clear from the context or irrelevant, we write $P_N(c,s)$ for a codimension $c$ pseudo-star configuration in $\P^N$ determined by $s$ hyperplanes. Of course, any star configuration is a pseudo-star configuration. If $N=c=2$, then also the converse holds, i.e., any pseudo-star configuration of points in $\P^2$ is a star configuration. In general the two notions go apart, see Section \ref{sec:examples}. Moreover, being a pseudo-star configuration is stable under taking cones over the configuration. \begin{remark} Let $I\subset\K[\P^N]$ be the ideal of a codimension $c$ pseudo-star configuration $P_N(c,s)$. Then the extension of the ideal $I$ to $\K[\P^{N+1}]$ defines a $P_{N+1}(c,s)$. \end{remark} The construction known in Liaison Theory as the \textit{Basic Double Linkage} (see \cite[Chapter 4]{MiglioreBook}) was used in \cite{GHM13} to prove some basic properties of star configurations. These properties are also satisfied by pseudo-star configurations, as the following proposition shows. \begin{proposition}\label{prop:pseudo-star basic} Let $\calh =\lbrace H_1,\cdots , H_s\rbrace $ be a collection of mutually distinct hyperplanes in $\mathbb{P}^N$ such that any $c+1$ of them intersect in a subspace of codimension $c+1$. Let $P(c,\calh)$ be the associated codimension $c$ pseudo-star configuration and let $I$ be its vanishing ideal. Then: \begin{enumerate} \item[1)] $\deg P(c,\calh)={s \choose c}$; \item[2)] $P(c,\calh)$ is ACM; \item[3)] $I^{(m)}$ is CM for all $1\leq m\leq c$; \item[4)] $\alpha(I)=s-c+1$ and all minimal generators of $I$ occur in this degree. \end{enumerate} \end{proposition} \begin{proof} Part 1) is immediate from the definition of a pseudo-star configuration. Properties 2) and 4) were proved in \cite[Proposition 2.9]{GHM13} (see also \cite[Remark 2.13]{GHM13}). Symbolic powers of an ideal defining a pseudo-star configuration are Cohen-Macaulay by the first part of the proof of \cite[Theorem 3.2]{GHM13}. Note that Proposition 2.9 and Theorem 3.2 in \cite{GHM13} are stated for star configurations, but the assumption that the hyperplanes meet properly can be relaxed to the assumption of Proposition \ref{prop:pseudo-star basic}. Hence the proofs of those results work for pseudo-stars too. \end{proof} \begin{remark} Since the ideal of every linear subspace in a projective space is a complete intersection, by the unmixedness theorem we can describe the $m^{th}$ symbolic power of a pseudo-star configuration in a straightforward manner.
In fact, let $\calh=\left\{H_1,\ldots,H_s\right\}$ be a collection of hyperplanes in $\P^N$ as in Definition \ref{def:pseudo-star}, defined by the linear forms $h_1,\ldots,h_s \in R$. Let $c \geq 1$ be a fixed integer. Then $$I=\bigcap_{1\leq i_1 < \ldots < i_c\leq s}(h_{i_1},\ldots,h_{i_c})$$ is the defining ideal of the codimension $c$ pseudo-star configuration associated to $\calh$. Then, by the unmixedness theorem, for any positive integer $m$, one has $$I^{(m)}=\bigcap_{1\leq i_1 < \ldots < i_c\leq s}(h_{i_1},\ldots,h_{i_c})^m.$$ \end{remark} We use this description to compute the second symbolic powers of the ideals in the following section. \section{Examples}\label{sec:examples} In this section, we assume $R$ is $\K[\P^3]$. \begin{example}[Star configurations] Let $\calh=\left\{H_1, H_2, H_3, H_4\right\}$ be a collection of hyperplanes in $\P^3$ defined by the following linear forms $$h_1=x+2y+3z,\; h_2=x+y+w,\; h_3=x+z+w,\; h_4=y+z+w.$$ These hyperplanes meet properly. Let $I$ be the ideal of the star configuration of lines $S(2,\calh)$ and let $J$ be the ideal of the associated star configuration of points $S(3,\calh)$. The minimal free resolutions of $I, I^{(2)}, J$ and $J^{(2)}$, respectively, are as follows \begin{align*} 0\rightarrow R^3(-4)\rightarrow R^4(-3)\rightarrow I \rightarrow 0, \end{align*} \begin{align*} 0\rightarrow R^4(-7)\rightarrow R(-4)\oplus R^4(-6)\rightarrow I^{(2)} \rightarrow 0, \end{align*} \begin{align*} 0\rightarrow R^3(-4)\rightarrow R^8(-3)\rightarrow R^6(-2)\rightarrow J\rightarrow 0, \end{align*} \begin{align*} 0\rightarrow R^6(-6)\rightarrow R^3(-4)\oplus R^{12}(-5) \rightarrow R^4(-3)\oplus R^6(-4)\rightarrow J^{(2)}\rightarrow 0. \end{align*} We see immediately that $\alpha(I)=3$ and $\alpha(I^{(2)})=4$. Similarly for the ideal $J$ we have $\alpha(J)=2$ and $\alpha(J^{(2)})=3$. \end{example} Our next example is a pseudo-star configuration in $\P^3$. \begin{example}[A pseudo-star configuration] Let $\calh=\left\{H_1, H_2, H_3, H_4\right\}$ be a collection of hyperplanes in $\P^3$ defined by the following linear forms $$h_1=x+5z,\; h_2=17x+19y,\; h_3=2x+3y+11z,\; h_4=13x+7z.$$ These hyperplanes do not meet properly, but the intersection of any three of them has codimension $3$ and they all intersect in the point $P=(0:0:0:1)$. Let $I$ be the ideal of the pseudo-star configuration of lines $P(2,\calh)$. The minimal free resolutions of $I$ and $I^{(2)}$ are \begin{align*} 0\rightarrow R^3(-4)\rightarrow R^4(-3)\rightarrow I\rightarrow 0, \end{align*} \begin{align*} 0\rightarrow R^4(-7)\rightarrow R(-4)\oplus R^4(-6)\rightarrow I^{(2)}\rightarrow 0. \end{align*} Now we have also $\alpha(I)=3$ and $\alpha(I^{(2)})=4$. Note that the codimension $3$ pseudo-star configuration defined by the ideal $$J=(h_1, h_2, h_3)\cap (h_1, h_2, h_4)\cap (h_1, h_3, h_4)\cap (h_2, h_3, h_4)$$ is now just a single point $\left\{ P\right\}$, i.e. its defining ideal in $\K[x,y,z,w]$ is $J=\langle x,y,z\rangle$. \end{example} \section{Proof of the Main Result} In the course of proving the main result of this note, the following lemma plays a crucial role. In fact, it is a higher dimensional analogue of \cite[Proposition 2.10]{Jan15}. \begin{lemma}\label{lem:H section} If a collection of $(N-2)$--planes in $\P^N$ with $N\geq 4$ is not contained in a hyperplane in $\P^N$ (in particular, there are at least $t\geq 2$ such planes), then its intersection with a \emph{general} hyperplane $H\subset \P^N$ is not contained in a hyperplane in $H$. \end{lemma} \begin{proof} It suffices to prove the statement for $t=2$.
Let $U, V$ be $(N-2)$--planes in $\P^N$. By the dimension formula $$\dim\left\langle U,V\right\rangle=\dim U+\dim V-\dim (U\cap V)$$ and by the assumption that $U$ and $V$ span $\P^N$, we have $$N=N-2+N-2-\dim(U\cap V),$$ so that $\dim(U\cap V)=N-4$. Since $N\geq 4$ by assumption, the intersection $U\cap V$ is non-empty. With the usual convention that the dimension of the empty set equals $-1$, we have for a general hyperplane $H$ $$\dim(U\cap H)=\dim U-1=N-3,\;\; \dim(V\cap H)=\dim V-1=N-3,\; $$ $$\mbox{and}\;\; \dim((U\cap V)\cap H)=\dim(U\cap V)-1=N-5.$$ Hence $$\dim\left\langle U\cap H, V\cap H\right\rangle=N-3+N-3-N+5=N-1.$$ This means that $(U\cap H)$ and $(V\cap H)$ span $H$ and we are done. \end{proof} \begin{remark} The above proof fails for two lines in $\P^3$. This is the reason that the argument in \cite{Jan15} is somewhat more involved. In fact, the case $N=3$ seems to be the most difficult one, contrary to what one might naively expect. \end{remark} We are now in the position to prove our main result. \proofof{Theorem \ref{thm:main}} Let $\L=L_1\cup\ldots\cup L_t$ be the union of $(N-2)$--flats in $\P^N$ such that the initial sequence $\alpha_m$ of the vanishing ideal $J$ of $\L$ satisfies $$d:=\alpha_2=\alpha_1+1$$ and $J$ is CM. To prove our claim, we proceed by induction on $N$. For $N=2$ see \cite[Theorem 1.1]{BocChi11}, and for $N=3$ see \cite[Theorem 2.13]{Jan15}. Moreover, if $t = 1$, then the claim is clear, so we can assume $N\geq 4$ and $t\geq 2$. If $\L$ is contained in a hyperplane, then there is nothing to prove. So we assume that $\L$ spans the space $\P^N$. Let $H$ be a general hyperplane in $\P^N$. Then, the intersection $\L_H=\L\cap H$ can be represented as $$\L_H=(L_1\cap H)\cup\ldots\cup(L_t\cap H).$$ Since $H$ is general, $\dim(L_i\cap H)=N-3$ for all $i=1,\ldots,t$. By Proposition \ref{prop:ACM and hyperplane section} the ideal $J_H$ of $\L_H$ is CM and its initial sequence $\beta_m=\alpha(J_H^{(m)})$ satisfies $$d=\beta_2=\beta_1+1.$$ By the induction assumption, $\L_H$ is a codimension two pseudo-star configuration determined by hyperplanes $F_1,\ldots,F_d$ in $H$. Indeed, $\L_H$ cannot be contained in a hyperplane since otherwise, by Lemma \ref{lem:H section}, $\L$ would be contained in a hyperplane. The hyperplane $F_1$ contains its intersections with the remaining $(d-1)$ hyperplanes $F_2,\ldots,F_d$. These intersections are traces of some of the $(N-2)$-flats $L_1,\ldots,L_t$. There are exactly $\binom{d}{2}$ intersections among the $F_i$'s by 1) in Proposition \ref{prop:pseudo-star basic}. There must be exactly as many traces, so that $t=\binom{d}{2}$. Since the intersections of $F_1$ with $F_2,\ldots,F_d$ are by definition contained in $F_1$, a hyperplane in $H$, and they are on the other hand intersections of some $L_i$'s with $H$, the corresponding $L_i$'s must themselves be contained in a hyperplane, say $H_1$ in $\P^N$ (this follows from Lemma \ref{lem:H section} applied to the union of these $L_i$'s). Permuting the indices we obtain hyperplanes $H_1,\ldots, H_d$ in $\P^N$ such that $$F_i=H_i\cap H\;\mbox{ for }\; i=1,\ldots,d.$$ Since every $(N-3)$--flat $(L_i\cap H)$ is contained in exactly two of the $F_i$'s (by the definition of a codimension two pseudo-star configuration), every $L_i$ must be contained in \emph{at least} two of the $H_i$'s. But there are $\binom{d}{2}$ of the $L_i$'s and at most that many pair intersections among the $H_i$'s. Hence every $L_i$ is contained in \emph{exactly} two of the $H_i$'s. This shows that $\L$ is the codimension two pseudo-star configuration determined by $H_1,\ldots,H_d$ and we are done.
\endproof We complete the picture by showing that the converse statement holds for \emph{arbitrary} codimension two pseudo-star configurations. \begin{theorem}\label{thm:complement} Let $\L$ be the union of $(N-2)$--flats $L_1,\ldots,L_t$ with the vanishing ideal $J$. If $\L$ is \begin{itemize} \item[a)] contained in a hyperplane, then the initial sequence of its vanishing ideal is $$1,2,3,4,\ldots;$$ \item[b)] a $P_N(2,s)$, then the initial sequence of its vanishing ideal is $$s-1,s,2s-1,2s,3s-1,3s,\ldots.$$ \end{itemize} \end{theorem} \proof In case a), if $h$ is a linear form defining a hyperplane containing $\L$, then $h^m\in J^{(m)}$ for all $m$, so that $\alpha(J^{(m)})\leq m$; since the initial sequence is strictly increasing and $\alpha(J)\geq 1$, we get $\alpha(J^{(m)})=m$ for all $m$.\\ In case b) we have, to begin with, $t=\binom{s}{2}$ for some $s$. Taking subsequent sections by general hyperplanes $H_1,\ldots,H_{N-2}$ we arrive at a pseudo-star (hence a star) configuration of $\binom{s}{2}$ points in $\P^2$. In this case \cite[Proposition 3.2]{BocChi11} implies that $\alpha(J,H_1,\ldots,H_{N-2})=s-1$. This implies \begin{equation}\label{eq:1} \alpha(J)\geq s-1. \end{equation} On the other hand, the union of the $s$ hyperplanes determining the configuration vanishes to order two along $\L$ (each $L_i$ lies on exactly two of them), so that $\alpha(J^{(2)})\leq s$. Combining this with \eqref{eq:1} and the strict monotonicity of the initial sequence, we obtain $\alpha(J)=s-1$ and $\alpha(J^{(2)})=s$. The argument for higher symbolic powers is similar and we leave the details to the reader. \endproof In view of our results, it is natural to conclude this note with the following challenge. \begin{problem} Is there any codimension two pseudo-star configuration which is not ACM? \end{problem} \paragraph*{Acknowledgement.} This research has been carried out while the first author was a visiting fellow at the Department of Mathematics of the Pedagogical University of Cracow in the winter term 2017/18 and spring 2018. We thank Justyna Szpond for helpful comments. The research of the last named author was partially supported by National Science Centre, Poland, grant 2014/15/B/ST1/02197. \bibliographystyle{abbrv}
{ "timestamp": "2018-04-17T02:11:48", "yymm": "1804", "arxiv_id": "1804.05415", "language": "en", "url": "https://arxiv.org/abs/1804.05415" }
\section{Introduction} Quantum gravity is elusive not mainly because we lack computational tools, but because we do not know {\it what} to compute and so how to define the theory for a generic spacetime. One possible exception and a promising path is the case of asymptotically anti-de Sitter (AdS) spacetimes, for which a dual quantum conformal field theory living on the boundary of a bulk spacetime with gravity would amount to a definition of quantum gravity. But, even for this setting, we do not have a realistic four dimensional example. In three dimensions, the situation is slightly better: cosmological Einstein theory (with $\Lambda <0$) has a black hole solution \cite{BTZ} and possesses the right boundary symmetries (a double copy of the centrally extended Virasoro algebra \cite{BH}) for a unitary two dimensional conformal field theory. But as the theory has no local dynamics (namely gravitons), it is not clear exactly how much one can learn from this model as far as quantum gravity is concerned. Having said that, even for this ostensibly simple model, we still do not have a quantum gravity theory. Recasting Einstein's gravity in terms of a solvable Chern-Simons gauge theory is a possible avenue \cite{Witten88}, but this only works for non-invertible dreibein, which cannot be coupled to generic matter. A more realistic gravity in three dimensions is the topologically massive gravity (TMG) \cite{djt}, which has black hole solutions as well as a dynamical massive graviton. But the apparent problem with TMG is that the bulk graviton and the black hole cannot generally be made to have positive energy simultaneously. This obstruction to a viable classical and perhaps quantum theory was observed to disappear in an important work \cite{Strom1}: at a ``chiral point'', defined by tuning the topological mass in terms of the AdS radius, one of the Virasoro algebras has a vanishing central charge (and so admits only a trivial unitary representation) while the other has a positive nonzero central charge with nontrivial unitary representations; moreover, at this point the theory has a positive energy black hole and zero energy bulk gravitons. This tuned version of TMG, called ``chiral gravity'', seems to be a viable candidate for a well-behaved classical and quantum gravity. One of the main objections raised against chiral gravity is that it possesses a negative energy perturbative log-mode about the AdS vacuum which ruins the unitarity of the putative boundary CFT \cite{Grumiller}. Of course, if this is the case, chiral gravity is not even viable at the classical level, since it does not have a vacuum. It was argued in \cite{Strom2,Carlip} that chiral gravity could survive if the theory is linearization unstable about its AdS solution. This means that there would be perturbative modes which cannot be obtained from any exact solution of the theory. In fact, these arguments were supported by the computations given in \cite{emel}, where it was shown that the Taub charges (functionals quadratic in the perturbative modes that must vanish identically due to background diffeomorphism invariance) do not vanish for the log-mode that ruins chiral gravity. This means that the log-mode found from the linearized field equations is an artifact of the linearized equations and does not satisfy the global constraints coming from the Bianchi identities.
In this work, we give a direct proof of the linearization instability of chiral gravity in AdS using the constraint analysis of the full TMG equations defined on a spacelike hypersurface. The crux of the argument that we shall lay out below is the following: the linearized constraint equations of TMG show that there are inconsistencies exactly at the chiral point. Namely, perturbed matter fields do not determine the perturbations of the metric components on the spacelike hypersurface, and there are unphysical constraints on matter perturbations besides the usual covariant conservation. To support our local analysis on the hypersurface, we compute the symplectic structure (which carries all the information about the phase space of the theory) for all perturbative solutions of the linearized field equations and find that the symplectic 2-form is degenerate and so non-invertible; hence these modes do not approximate ({\it i.e.}\thinspace they are not tangent to) actual nonlinear solutions. The symplectic 2-form evaluated for the log-mode is time-dependent (hence not coordinate-invariant); it vanishes on the initial value surface and grows without bound in the future. To carry out the constraint analysis and its linearization (which yields the possible solutions near an exact solution), we shall use the field equations instead of the TMG action, as the latter is not diffeomorphism invariant, which complicates the discussion by introducing tensor densities (momenta) instead of tensors. We shall also work in the metric formulation instead of the first order one, as there can be significant differences between the two formulations. Before we delve into the analysis, let us note that linearization instability, which arises in the perturbative treatment of nonlinear theories, can be confused with dynamical or structural instability, as both are probed with the same linearization techniques. The difference is important: the latter refers to a genuine instability of a system, such as the instability of the vacuum in a theory with ghosts (for example the $R+\beta R_{\mu\nu}^2$ theory with $\beta\neq0$), which is simply not physically acceptable. On the other hand, linearization instability refers to the failure of perturbation theory for a given background solution, and one should resort to another method to proceed. From the point of view of the full solution space of the theory, this means that this (possibly infinite dimensional) space is not a smooth manifold but has conical singularities around certain solutions. Let us expound on this a little bit. \section{Linearization instability in brief} A nonlinear equation $F(x)=0$ is said to be linearization stable at a solution $x_0$ if every solution $\delta x$ to the linearized equation $F^\prime(x_{0})\cdot\delta x=0 $ is tangent to a curve of solutions to the original nonlinear equation. In some nonlinear theories, not all solutions to the linearized field equations represent linearized versions of exact (nonlinear) solutions. As a common algebraic example, let us consider the equation $F(x,y)= x( x^2 + y^2)=0$, where $x,y$ are real. The exact solution space is one dimensional, given as $(0, y)$, and the linearized solution space about a solution with $y\ne0$ is also one dimensional, $(0, \delta y )$. But at exactly the solution $(0, 0)$, the linearized solution space is two dimensional, $(\delta x, \delta y )$, and so there are clearly linearized solutions with $\delta x \ne 0$ which do not come from the linearization of any exact solution.
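The jump in the dimension of the linearized solution space in this toy example can be checked explicitly with a few lines of computer algebra; the following sympy sketch (purely illustrative) computes $F^\prime(x_0,y_0)\cdot(\delta x,\delta y)$ at a generic solution and at the origin.
\begin{verbatim}
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
F = x * (x**2 + y**2)
grad = [sp.diff(F, v) for v in (x, y)]   # [3*x**2 + y**2, 2*x*y]

def linearized(x0, y0):
    # F'(x0, y0) . (dx, dy)
    return sum(g.subs({x: x0, y: y0}) * d for g, d in zip(grad, (dx, dy)))

print(linearized(0, 1))  # dx -> forces dx = 0: one-dimensional space
print(linearized(0, 0))  # 0  -> no condition at all: two-dimensional
\end{verbatim}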
The existence of such spurious solutions depends on the particular theory at hand and the background solution (with its symmetries and topology) about which linearization is carried out. If such so called ``nonintegrable'' solutions exist, perturbation theory in some directions of solution space fails and we say that the theory is not linearization stable at a nonlinear exact solution. What we have just described is not an exotic phenomenon: {\it a priori} no nonlinear theory is immune to linearization instability; one must study the problem case by case. For example, pure general relativity is linearization stable in Minkowski spacetime (with a non-compact Cauchy surface) \cite{Choquet_Deser}, hence perturbation theory makes sense, but it is not linearization stable on a background with compact Cauchy surfaces that possesses at least one Killing symmetry \cite{Moncrief}, which is the case when the Cauchy surface is a flat 3-torus \cite{Deser_Brill}: on $T^3\times R$, at second order of the perturbation theory, one must go back and readjust the first order perturbative solution. As gravity is our main interest here, let us consider some nonlinear gravity field equations, written in a coordinate chart as $\mathscr{E}_{\mu\nu}=0$, which admit $\bar{g}_{\mu\nu}$ as an exact solution. If {\it every} solution ${h}_{\mu\nu}$ of the linearized field equations $\mathscr{E}^{(1)}(\bar{g})\cdot h=0$ is tangent to a curve of exact solutions ${g}_{\mu\nu}(\lambda)$ such that ${g}_{\mu\nu}(0)=\bar{g}_{\mu\nu}$ and $\frac{dg_{\mu\nu}}{d\lambda}|_{\lambda=0}=h_{\mu\nu}$, then, according to our definition above, the theory is linearization stable. Otherwise it is linearization unstable. In general, we do not have a theorem stating the {\it necessary and sufficient} conditions for the linearization stability of a generic gravity theory about a given exact solution. For a detailed discussion on generic gravity models, see our recent work \cite{emel}. But, as discussed in section II of that work, defining the second order perturbation as $\frac{d^2 g_{\mu\nu}}{d\lambda^2}|_{\lambda=0}=k_{\mu\nu}$, if the following second order equation \begin{equation} \mathscr{E}^{(2)}(\bar{g})\cdot [h,h]+\mathscr{E}^{(1)}(\bar{g})\cdot k=0, \label{lin1} \end{equation} has a solution for $k_{\mu \nu}$ without a constraint on the linear solution $h_{\mu\nu}$, then the theory is linearization stable. Of course, at this stage it is not clear that there will arise no further constraints on the linear theory beyond the second order perturbation theory. In fact, besides Einstein's theory, this problem has not been worked out. But in Einstein's gravity, as the constraint equations are related to the zeros of the moment map, one knows that there will be no further constraint for the linear theory coming from higher order perturbation theory beyond the second order \cite{Marsden_lectures}. In Einstein's gravity for compact Cauchy surfaces without a boundary, the necessary and sufficient conditions for linearization stability are known \cite{Moncrief,M1,M2,M3}.
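For orientation, note that (\ref{lin1}) is nothing but the second $\lambda$-derivative of the field equations along a putative curve of exact solutions: by the chain rule, \begin{equation*} 0=\frac{d^{2}}{d\lambda^{2}}\,\mathscr{E}\big(g(\lambda)\big)\Big|_{\lambda=0}=\mathscr{E}^{(2)}(\bar{g})\cdot[h,h]+\mathscr{E}^{(1)}(\bar{g})\cdot k, \end{equation*} so a linearized solution $h_{\mu\nu}$ is integrable to second order only if some $k_{\mu\nu}$ solving this equation exists.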
In practice, it is very hard to show that (\ref{lin1}) is satisfied for {\it all} linearized solutions; therefore, one resorts to a weaker condition by contracting that equation with a Killing vector field and integrating over a hypersurface to obtain $ Q_{Taub}\left[\bar{\text{\ensuremath{\xi}}}\right] +Q_{ADT}\left[\bar{\text{\ensuremath{\xi}}}\right] =0$, where the Taub charge \cite{Taub} is defined as\footnote{As it appears in the second order perturbation theory, the Taub charge is not a widely known quantity in physics; for a more detailed account of it, we invite the reader to study the relevant section of \cite{emel}.} \begin{equation} Q_{Taub}\left[\bar{\text{\ensuremath{\xi}}}\right]:=\intop_{\Sigma} d^{3}\Sigma\thinspace\sqrt{\gamma}\thinspace\hat{n}^{\nu}\thinspace\bar{\text{\ensuremath{\xi}}}^{\mu}\thinspace(\text{\ensuremath{\mathscr{E}}}_{\mu\nu})^{(2)}\cdot [h,h], \label{ttt} \end{equation} and the ADT charge \cite{Abbott_Deser,adt} is defined as \begin{equation} Q_{ADT}\left[\bar{\text{\ensuremath{\xi}}}\right] :=\intop_{\Sigma} d^{3}\Sigma\thinspace\sqrt{\gamma}\thinspace\hat{n}^{\nu}\thinspace\bar{\text{\ensuremath{\xi}}}^{\mu}\left(\text{\ensuremath{\mathscr{E}}}_{\mu\nu}\right)^{(1)}\cdot k. \label{ADT} \end{equation} The latter can be expressed as a boundary integral. For the case of compact Cauchy surfaces without a boundary, $Q_{ADT} =0$, and hence one must have $Q_{Taub}=0$, which leads to the aforementioned quadratic integral constraint on the linearized perturbation $h_{\mu\nu}$, as the integral in (\ref{ttt}) should be zero. This is the case for Einstein's gravity, for example, on a flat 3-torus: $Q_{Taub}$ does not vanish automatically and so the first order perturbative result $h$ is constrained. On the other hand, for extended gravity theories (such as the theory we discuss here), $Q_{ADT}$ vanishes for a different reason, even for non-compact surfaces, as in the case of AdS. The reason is that for some tuned values of the parameters in the theory, the contributions to the conserved charges from various tensors cancel each other exactly, yielding nonvacuum solutions that carry the (vanishing) charges of the vacuum. This is the source of instability. \section{ADM decomposition of TMG} Before restricting to the chiral gravity limit, we first study the full TMG field equations coupled with matter fields as an initial value problem; hence we take \begin{equation} \mathscr{E}_{\mu\nu}=G_{\mu\nu}+\Lambda g_{\mu\nu}+\frac{1}{\mu}C_{\mu\nu}=\kappa\tau_{\mu\nu}. \end{equation} The ADM \cite{ADM} decomposition of the metric reads \begin{equation} ds^{2}=-(n^{2}-n_{i}n^{i})dt^{2}+2n_{i}dtdx^{i}+\gamma_{ij}dx^{i}dx^{j}, \end{equation} where ($n$, $n_{i}$) are the lapse and shift functions and $\gamma_{ij}$ is the $2D$ spatial metric. From now on, the Greek indices will run over the full spacetime, while the Latin indices will run over the hypersurface $\varSigma$, as $i,j...=1,2$. The spatial indices will be raised and lowered by the $2D$ metric. The extrinsic curvature ($k_{i j}$) of the surface is given as \begin{equation} 2 n k_{ij}=\dot{\mathbf{\gamma}}_{ij}-2 D_{(i}n_{j)}, \end{equation} where $D$ is the covariant derivative compatible with $\gamma_{ij}$ and $\dot{\mathbf{\gamma}}_{ij} :=\partial_{0}\gamma_{ij}$, and the round brackets denote symmetrization with a factor of 1/2.
With the convention $R_{\rho\sigma}=\partial_{\mu}\Gamma_{\rho\sigma}^{\mu}-\partial_{\rho}\Gamma_{\mu\sigma}^{\mu}+\Gamma_{\mu\nu}^{\mu}\Gamma_{\rho\sigma}^{\nu}-\Gamma_{\sigma\nu}^{\mu}\Gamma_{\mu\rho}^{\nu}$, one finds the hypersurface components of the three dimensional Ricci tensor as \begin{eqnarray} &&R_{ij}={^{(2)}R}_{ij}+k k_{ij}-2k_{ik}k_{j}^{k} \\ &&+\frac{1}{n}(\dot{k}_{ij} -n^{k}D_{k}k_{ij} -D_{i}\partial_{j}n-2 k_{k(i}D_{j)}n^{k}), \nonumber \end{eqnarray} where ${^{(2)}R}_{ij}$ is the Ricci tensor of the hypersurface and $k\equiv \gamma^{i j} k_{ij}$. Similarly, one finds the double projection onto the normal of the surface to be \begin{align} R_{00}=&\frac{n^{i}n^{j}}{n}(\dot{k}_{ij}-n^{k}D_{k}k_{ij}-D_{i}\partial_{j}n-2k_{kj}D_{i}n^{k}) \nonumber\\&-n^{2}k_{ij}^{2} +n^{i}n^{j}(^{(2)}R_{ij}+k k_{ij}-2k_{ik}k_{j}^{k})\\&+n(D_{k}\partial^{k}n-\dot{k}-n^{k}D_{k}k+2n^{k}D_{m}k_{k}^{m}). \nonumber \end{align} On the other hand, projecting once to the surface and once normal to the surface yields \begin{align} R_{0i}&=\frac{n^{j}}{n}(\dot{k}_{ij}-n^{k}D_{k}k_{ij}-D_{i}\partial_{j}n-2k_{k(i}D_{j)}n^{k})\\&+n^{j} ({^{(2)}R}_{ij}+kk_{ij}-2k_{ik}k_{j}^{k})+n(D_{i}k+D_{m}k_{i}^{m}). \nonumber \end{align} We also need the 3D scalar curvature in terms of the hypersurface quantities, which can be found as \begin{equation} R={}^{(2)}R+k^{2}-k_{ij}^{2}+\frac{2}{n}(\dot{k}+n k_{ij}^{2}-D_{i}D^{i}n-n^{i}D_{i}k). {\label{3DR}} \end{equation} Given the Schouten tensor $S_{\mu\nu}:=R_{\mu\nu}-\frac{1}{4}Rg_{\mu\nu}$, the Cotton tensor is defined as \begin{equation} C_{\mu\nu} :=\frac{1}{2}\epsilon{}^{\rho\alpha\beta}(g_{\mu\rho}\nabla_{\alpha}S_{\beta\nu}+g_{\nu\rho}\nabla_{\alpha}S_{\beta\mu}), \end{equation} where $\epsilon{}^{\rho\alpha\beta}$ is the totally antisymmetric tensor which splits as $\epsilon{}^{0mn}=\frac{1}{n}\epsilon^{mn}=\frac{1}{n}\gamma^{-\frac{1}{2}}\varepsilon^{mn}$, where $\varepsilon^{mn}$ is the antisymmetric symbol. Just as we have done for the ADM decomposition of the Ricci tensor, a rather lengthy computation yields the following expressions for the projections of the Cotton tensor \begin{align} 2 n C_{ij}=&\epsilon{}^{mn}n_{i}( D_{m}S_{nj}-k_{mj}(D_{r}k_{n}^{r}-\partial_{n}k)) \nonumber\\ &+\epsilon{}^m\,_i\bigg \{\dot{S}_{mj}-n k_{j}^{k}S_{mk}-S_{mk}D_{j}n^{k} \nonumber \\ &-(\partial_{j}n+n^{r}k_{rj})(D_{s}k_{m}^{s}-\partial_{m}k) \nonumber \\ &-D_{m}(n^{r}S_{rj}+n(D_{r}k_{j}^{r}-D_{j}k) ) \nonumber \\ &+k_{mj}(D_{k}\partial^{k}n-\dot{k}+n^{k}D_{s}k_{k}^{s}+n(\frac{R}{4}-k_{rs}^{2})) \bigg \} \nonumber \\ &+i\leftrightarrow j, \end{align} and \begin{equation} C_{i0}=n^{j}C_{ij}-\frac{\epsilon{}^{mn}}{2}(n A_{mni}-n_{i}B_{mn}-\gamma_{in}( C_{m}+n E_{m})) \end{equation} and \begin{align} &C_{00}=n^{i}n^{j}C_{ij} \\ &-\epsilon{}^{mn}(nn^{i}A_{mni}-(n_{i}n^{i}-n^{2})B_{mn}-n_{n}(C_{m}+n E_{m})), \nonumber \end{align} where we have defined the following tensors \begin{align*} &A_{mni}\equiv D_{m}S_{ni}-k_{mi}\left(D_{r}k_{n}^{r}-\partial_{n}k\right), \\ &B_{mn}\equiv D_{m}D_{r}k_{n}^{r}-k_{m}^{k}S_{kn}, \\ &E_{m}\equiv 2k_{rs}D_{m}k^{rs}-\frac{1}{4}\partial_{m}R+k_{m}^{k}\left(D_{r}k_{k}^{r}-\partial_{k}k\right), \\ &C_{m}\equiv \partial_{0}D_{r}k_{m}^{r}-S_{m}^{k}\left(\partial_{k}n+n^{r}k_{rk}\right)-D_{m}D_{k}\partial^{k}n\\ &\,\,\,\,\,\,\,-D_{m}\left(n^{k}D_{s}k_{k}^{s}\right)+k_{m}^{k}S_{kr}n^{r}+\partial_{m}n(k_{rs}^{2}-\frac{R}{4}).
\end{align*} Using the above decomposition, we can recast the ADM form of the full TMG equations as \begin{equation} \mathscr{E}_{ij}=S_{ij}-\frac{1}{4}\gamma_{ij}R+\Lambda\gamma_{ij}+\frac{1}{\mu}C_{ij}=\kappa\tau_{ij} \end{equation} and \begin{align} \mathscr{E}_{0i}=&\kappa\tau_{0i}=n^{j}\mathscr{E}_{ij}+n(D_{r}k_{i}^{r}-\partial_{i}k)\\-&\frac{1}{2\mu}\epsilon{}^{mn}(n A_{mni}-n_{i}B_{mn}-\gamma_{in}(C_{m}+n E_{m})) \nonumber \end{align} and \begin{align} \mathscr{E}_{00}&=\kappa\tau_{00}=2n^{i}\mathscr{E}_{0i}-n^{i}n^{j}\mathscr{E}_{ij}-\Lambda n^{2}-\frac{1}{\mu}\epsilon{}^{mn}n^{2}B_{mn}\nonumber \\&+n(D_{k}\partial^{k}n-\dot{k}+n^{k}D_{k}k+n(\frac{R}{2}-k_{rs}^{2})). \end{align} From $\mathscr{E}_{0i}$, we get the momentum constraint as \begin{align} \Phi_{i}&=\kappa(\tau_{0i}-n^{j}\tau_{ij})=n(D_{r}k_{i}^{r}-\partial_{i}k)\\+&\frac{1}{2\mu}\epsilon{}^{mn}(n_{i}B_{mn}-nA_{mni}+\gamma_{in}C_{m}+n\gamma_{in}E_{m}) \nonumber \end{align} and from $\mathscr{E}_{00}$ we get the Hamiltonian constraint as \begin{align} \Phi=&\frac{\kappa}{n^{2}}(\tau_{00}-2n^{i}\tau_{0i}+n^{i}n^{j}\tau_{ij}) \nonumber \\ +&\frac{1}{2}(^{(2)}R+k^{2}-k_{ij}^{2}-2\Lambda) \nonumber \\ -&\frac{1}{\mu}\epsilon{}^{mn}\left(D_{m}D_{r}k_{n}^{r}-k_{m}^{k}S_{kn}\right), \end{align} where in the last equation we made use of the explicit form of $R$ given in (\ref{3DR}) which for TMG is $R=6\Lambda-2 \kappa\tau $. From now on, for our purposes, it will suffice to work in the Gaussian normal coordinates with $n=1$ and $n_{i}=0$ for which $k_{ij}=\frac{1}{2}\dot{\gamma}_{ij}$ and the constraints reduce to \begin{align} &\frac{\epsilon^{mn}}{4\mu}(\dot{\gamma}_{i m}\gamma^{i k}(^{(2)}R_{kn}-\dot{\gamma}_{kp}\dot{\gamma}_{sn}\gamma^{ps}-\ddot{\gamma}_{kn})-2 D_{m}D^{k}\dot{\gamma}_{kn}) \nonumber \\ &-\frac{1}{8}\dot{\gamma}_{ij}\left(\dot{\gamma}_{ab}\gamma^{ab}\gamma^{ij}+\dot{\gamma}^{ij}\right)=\kappa\tau_{00}+\Lambda-\frac{^{(2)}R}{2} \end{align} and \begin{align} &\frac{\epsilon^{m}\thinspace_{i}}{8\mu}\left( \dot{\gamma}^{kp}(2 D_{k}\dot{\gamma}_{pm}-D_{m}\dot{\gamma}_{kp})+2D^{k}\ddot{\gamma}_{km}-\dot{\gamma}_{mk}\gamma^{kl}D^{p}\dot{\gamma}_{pl}\right) \nonumber\\ &-\frac{\epsilon^{mn}}{8\mu}\bigg(\dot{\gamma}_{ab}\gamma^{ab}D_{m}\dot{\gamma}_{in}-2\gamma^{ks}D_{m}(\dot{\gamma}_{kn}\dot{\gamma}_{si}) \nonumber \\ &+2D_{m}\ddot{\gamma}_{in}-\dot{\gamma}_{mi}D^{k}\dot{\gamma}_{kn}\bigg) \\ &+\frac{1}{2}\left(D^{k}\dot{\gamma}_{ki}-\gamma^{ab}D_{i}\dot{\gamma}_{ab}\right)=\kappa\tau_{0i}+\frac{1}{2\mu}\epsilon^{mn}D_{m}{}^{(2)}R_{ni}. \nonumber \end{align} Furthermore, taking a conformally flat $2D$ metric on $\Sigma$, we have $\gamma_{ij}=e^{\varphi}\delta_{ij}$, where $\varphi=\varphi(t,x_i)$, $k_{ij}=\frac{1}{2}\dot{\varphi}\gamma_{ij}$ and the $2D$ Ricci tensor becomes \begin{equation} ^{(2)}R_{ij}=-\frac{1}{4}\gamma_{ij}e^{-\varphi}\left(2D_{k}\partial_{k}\varphi+\partial_{k}\varphi\partial_{k}\varphi\right), \end{equation} whereas the $3D$ Ricci tensor reads \begin{equation} R_{ij}=\frac{1}{2}\gamma_{ij}(-D^{k}\partial_{k}\varphi+\dot{\varphi}^{2}+\ddot{\varphi}-\frac{1}{2}\partial^{k}\varphi\partial_{k}\varphi) \end{equation} and the $3D$ scalar curvature is \begin{equation} R=-D^{k}\partial_{k}\varphi+\frac{3}{2}\dot{\varphi}^{2}+2\ddot{\varphi}-\frac{1}{2}\partial^{k}\varphi\partial_{k}\varphi. 
\end{equation} With all these results in hand, one can obtain from the constraint equations the following relation \begin{equation} \partial_{i}\dot{\varphi}=-J_{i}+\frac{1}{2\mu}\epsilon^{m}\thinspace_{i}\dot{\varphi}\partial_{m}\dot{\varphi}, \label{kolay} \end{equation} where we have introduced the ``source current'' which, on the hypersurface, reads \begin{equation} J_{i} := 2\kappa\tau_{0i}+\frac{\kappa}{\mu}\epsilon^{m}\thinspace_{i}\partial_m\tau_{00}. \end{equation} Contracting (\ref{kolay}) with the epsilon-tensor, one arrives at \begin{equation} \frac{2\mu}{\dot{\varphi}}\epsilon^{mi}\partial_{m}\dot{\varphi}\left(1+\frac{\dot{\varphi}^{2}}{4\mu^{2}}\right)=-\frac{2\mu}{\dot{\varphi}}\epsilon^{mi}J_{m}+J^{i}. {\label{main_equation}} \end{equation} In the vacuum case, $\tau_{\mu\nu}=0$ and so $J_{i}=0$, and the unique solution to (\ref{main_equation}) is of the form $\varphi_{0}= c t$, where $c$ is a constant which can be found from the trace equation, $R=6\Lambda$: since $\varphi_{0}$ has no spatial dependence and $\ddot{\varphi}_{0}=0$, the scalar curvature reduces to $R=\frac{3}{2}c^{2}$, so that $\ensuremath{c}=2\sqrt{\Lambda} \equiv \frac{2}{\ell}$. This is the de Sitter (dS) solution, with $\ell >0$ its radius. Turning on a compactly supported matter perturbation with $\delta\tau_{\mu\nu}\ne0$, one has $\delta J_i\ne0$; perturbing the constraint equations about $\varphi_{0}$ as $\varphi=\varphi_{0}+\delta\varphi$, we find the linearized constraint equation \begin{align}\label{pert} &\mu(1+\frac{1}{\mu^{2} \ell^2})\epsilon^{m}\thinspace_{i}\partial_{m}\delta\dot{\varphi} \\= &(\partial_{i}+\frac{1}{\mu \ell}\epsilon^{m}\thinspace_{i}\partial_{m})\kappa\delta\tau_{00}+2\mu(\epsilon_{i}\thinspace^{m}+\frac{1}{\mu \ell}\delta^{m}\thinspace_{i})\kappa\delta\tau_{0m}\nonumber, \end{align} from which, in the dS case, one can solve for the perturbation $\delta \varphi$, and hence for the perturbed metric, by integration in terms of the perturbed matter fields on the hypersurface. Hence dS is linearization stable in TMG for any finite value of $\mu \ell$. The other linearized constraints are compatible with this solution. Our computation has been analytic in $\ell$; hence, we can perform the following ``Wick'' rotation to study the AdS case: $x^i\rightarrow ix^i$, $t\rightarrow it$, $ \ell \rightarrow i \ell$, yielding $\Lambda=-\frac{1}{\ell^{2}}$ with the Gaussian normal form of the (signature changed) metric $ds^{2}=dt^{2}-e^{-{2 t/ \ell}}\left(dx_{1}^{2}+dx_{2}^{2}\right).$ Then for AdS, (\ref{pert}) becomes \begin{align}\label{pert2} &\mu (1-\frac{1}{\mu^{2} \ell^2})\epsilon^{m}\thinspace_{i}\partial_{m}\delta\dot{\varphi} \\= &-(\partial_{i}-\frac{1}{\mu \ell}\epsilon^{m}\thinspace_{i}\partial_{m})\kappa\delta\tau_{00}-2\mu (\epsilon_{i}\thinspace^{m}+\frac{1}{\mu \ell}\delta^{m}\thinspace_{i})\kappa\delta\tau_{0m}\nonumber \end{align} and once again the perturbation theory is valid for {\it generic} values of $\mu \ell$ in AdS, as in the dS case. But at the chiral point, $\mu \ell =1$, the left-hand side vanishes identically and there is an unphysical constraint on the matter perturbations $\delta\tau_{0m}$ and $\delta\tau_{00}$ in addition to their background covariant conservation. Moreover, the metric perturbation is not determined by the matter perturbation. What this says is that in the chiral gravity limit of TMG, the exact AdS solution is linearization unstable. The above computation has been a local one, and does not depend on the fact that AdS does not have a Cauchy surface on which one can define the initial value problem.
AdS requires initial and boundary values together; what we have computed is a necessary condition for such a formulation (not a sufficient one), and AdS in chiral gravity does not satisfy this necessary condition for the initial-boundary value problem. \section{Symplectic structure of TMG} Let us give another argument for the linearization instability of AdS, making use of the symplectic structure of TMG, which was found in \cite{caner}, following \cite{w}, to be $\omega := \int_\Sigma d \Sigma_\alpha \sqrt{|g|} {\cal{J}}^\alpha$, where $\Sigma$ is the hypersurface. $\omega$ is a closed ($\delta \omega=0$) non-degenerate (except for gauge directions) 2-form for full TMG, including chiral gravity. Here the on-shell covariantly conserved symplectic current reads \begin{align} {\cal{J}}^\alpha&=\delta \Gamma^\alpha_{\ \mu\nu} \wedge ( \delta g^{\mu \nu} + \frac{1}{2} g^{\mu \nu} \delta \ln g ) \nonumber \\ &- \delta \Gamma^\nu_{\ \mu\nu} \wedge ( \delta g^{\alpha \mu} + \frac{1}{2} g^{\alpha \mu} \delta \ln g ) \nonumber \\ &+ \frac{1}{\mu}\epsilon^{\alpha \nu \sigma} ( \delta {S}^\rho_{\ \sigma} \wedge \delta g_{\nu \rho} + \frac{1}{2} \delta \Gamma^\rho_{\ \nu \beta} \wedge \delta \Gamma^\beta_{\ \sigma \rho} ). \label{symplectic_two} \end{align} What is important to understand is that $\omega$ is a gauge invariant object on the solution space, say ${\mathcal {Z}}$, and also on the (more relevant) quotient ${\mathcal {Z}}/Diff$, which is the phase space, where $Diff$ is the group of diffeomorphisms. Therefore, even without knowing the full space of solutions, by studying the symplectic structure one gains a lot of information about both the classical and quantum versions of the theory. Perturbative solutions live in the tangent space of the phase space and hence they are crucial in the discussion. We refer the reader to \cite{caner} for a full discussion of this. Let us show that for the linearized solutions of chiral gravity given in \cite{Strom1} the symplectic 2-form is degenerate and hence not invertible. In the global coordinates, the background metric reads \begin{equation} ds^2 = \ell^2\big(-\cosh^2{\rho}\, d\tau^2 +\sinh^2{\rho}\,d\phi^2+d\rho^2\big). \end{equation} Defining $u=\tau+\phi$ and $v=\tau-\phi$, and making use of the $SL(2, R)\times SL(2, R)$ isometry algebra, \cite{Strom1} found all the primary states (but one) and their descendants. The primary solutions are \begin{equation} h_{\mu \nu} = \Re \left\{e^{ -i \Delta \tau -i S \phi } F_{\mu \nu}(\rho)\right\}, \end{equation} where the real part is taken and the background tensor reads \begin{eqnarray} F_{\mu\nu}(\rho)=f(\rho)\left(\begin{array}{ccc} 1 & {S\over2}& {2i\over\sinh 2\rho} \\ {S\over2} & 1 & {i S\over\sinh 2\rho} \\ {2i\over\sinh 2 \rho} &{i S\over \sinh 2 \rho} & - {4\over\sinh^2 2\rho} \\ \end{array}\right) \end{eqnarray} and $f(\rho)=(\cosh{\rho})^{-\Delta }\sinh^2{\rho}$, where $\Delta \equiv h+\bar{h}$ and $S \equiv h-\bar{h}$.
Components of the symplectic current for these modes (for generic $\mu \ell$) can be found as \begin{align} &{\cal{J}}^\tau =\frac{( 4-S^2)(S + 2 \mu \ell)\Delta}{8 \mu \ell^7 (\cosh\rho)^{2(1+\Delta)}}\sin \left( 2 \Delta \tau +2 S \phi \right), \nonumber \\ & {\cal{J}}^\phi= -\frac{2 \coth^2 \rho}{S+ 2 \mu \ell} {\cal{J}}^\tau, \\ &{\cal{J}}^\rho =-\frac{(S \Delta +4\mu \ell)\coth \rho +(\Delta -2)\mu \ell \sinh 2 \rho} {\Delta ( S+ 2 \mu \ell)}{\cal{J}}^\tau, \nonumber \end{align} which yield a vanishing $\omega$ in the chiral limit: for the left, right and massive modes we have $S^2 = 4$, so the relevant symplectic current component ${\cal{J}}^\tau$ vanishes identically; hence the solution is not viable. Moreover, one can show that its Taub charge diverges, while its ADT charge for the background Killing vector $(-1,0,0)$ is \begin{eqnarray} Q_{ADT}=&&-\lim_{r \rightarrow \infty}\frac{ \sin (\pi S) \cos ( 2 \pi S + \Delta t)}{ 4 \pi S 2^{2-\Delta}\ell}\Delta ( 2 \Delta + S-2) \nonumber \\ &&\times e^{ r(2-\Delta)}, \end{eqnarray} which vanishes for the massive mode $\Delta = S=2$. In addition to the above solutions, there is the log-mode given in \cite{Grumiller}, which reads \begin{align} &h_{ \mu\nu} = f_1(\tau,\rho)\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{array} \right)_{\mu\nu} \nonumber \\ &+f_2(\tau, \rho)\left( \begin{array}{ccl} 1 & 1 & \qquad \qquad 0 \\ 1 & 1 & \qquad \qquad 0 \\ 0 & 0 & - {4\over\sinh^2 2\rho} \end{array} \right)_{\mu\nu}, \label{grumillermetric} \end{align}where the two functions are given as \begin{eqnarray} &&f_1(\tau, \rho) = \frac{\sinh{\rho}}{\cosh^3{\rho}}\,(\tau \cos{2u}-\sin{2u}\,\ln{\cosh{\rho}}) ,\nonumber \\ &&f_2(\tau, \rho) =-\tanh^2\!\!{\rho}\,(\tau \sin{2u}+\cos{2u}\,\ln{\cosh{\rho}}). \nonumber \end{eqnarray} The components of the symplectic current for this mode read \begin{align} &{\cal{J}}^\tau = \frac{1}{\mu \ell^7}\tau ((1-\mu \ell ) \cosh 2 \rho+1)\text{sech}^{10}\rho , \nonumber \\ & {\cal{J}}^\phi= - \frac{2}{\mu \ell^7}\tau (1- \mu \ell) \text{sech}^8 \rho, \\ &{\cal{J}}^\rho=\frac{1}{ \ell^6}\tanh \rho\, \text{sech}^8\rho(4 (\log ^2\cosh \rho+\tau ^2)+\log \text{sech}\,\rho) \nonumber , \end{align} which yield an $\omega$ that grows linearly in $\tau$ and vanishes on the initial value surface. What all this says is that first order perturbation theory simply fails in the chiral gravity limit of TMG. If the theory makes any sense at the classical and/or quantum level, one must resort to a new method to carry out computations. This significantly affects its interpretation in the context of AdS/CFT, as the perturbed metric couples to the energy-momentum tensor of the boundary CFT. This of course does not say anything about the solutions of the theory which are not globally AdS, and one might simply have to define the theory in a different background. \section{Conclusions} The problem studied here is a frequently recurring one \cite{kastor}; for example, it also appears in critical gravity \cite{pope, sisman22}. Linearized solutions by definition satisfy the linearized equations, but this is not sufficient; they should also satisfy a quadratic constraint to actually represent linearized versions of exact solutions. This deep result comes from the Bianchi identities and their linearizations, and it is connected to the conserved quantities. With the observation of gravitational waves, research in general relativity and its modifications and extensions has entered an exciting era in which many theories might possibly be tested.
One major computational tool in nonlinear theories, such as gravity, is perturbation theory, from which one obtains a great deal of information; gravitational wave physics is no exception, as one uses the tools of perturbation theory to obtain the wave profile far away from the sources. Therefore, the issue of linearization instability arises in any use of perturbation theory, as the examples provided here and in \cite{emel} show, even for the ostensibly safe case of spacetimes with noncompact Cauchy surfaces. \vspace{0.4cm}
{ "timestamp": "2018-06-14T02:12:04", "yymm": "1804", "arxiv_id": "1804.05602", "language": "en", "url": "https://arxiv.org/abs/1804.05602" }
\section{Introduction}\label{sec:intro} The success of automatic image captioning \cite{farhadi2010every,mitchell2012midge,karpathy2015deep,vinyals2015show} demonstrates compellingly that end-to-end statistical models can align visual information with language. However, high-quality captions are not merely \emph{true}, but also \emph{pragmatically informative} in the sense that they highlight salient properties and help distinguish their inputs from similar images. Captioning systems trained on single images struggle to be pragmatic in this sense, producing either very general or hyper-specific descriptions. In this paper, we present a neural image captioning system\footnote{The code is available at \url{https://github.com/reubenharry/Recurrent-RSA}} that is a \emph{pragmatic speaker} as defined by the Rational Speech Acts (RSA) model \cite{Frank:Goodman:2012,goodman2}. Given a set of images, of which one is the \emph{target}, its objective is to generate a natural language expression which identifies the target in this context. For instance, the literal caption in Figure~\ref{fig1} could describe both the target and the top two distractors, whereas the pragmatic caption mentions what is most salient about the target. Intuitively, the RSA speaker achieves this by reasoning not only about what is true but also about what it's like to be a listener in this context trying to identify the target. \begin{figure} \includegraphics[width=\linewidth, height=4cm]{Screen_Shot_2018-01-08_at_14_59_56.png} \caption{Captions generated by literal ($S_0$) and pragmatic ($S_1$) model for the target image (in green) in the presence of multiple distractors (in red).} \label{fig1} \end{figure} This core idea underlies much work in referring expression generation \cite{Dale:Reiter:1995,Monroe:Potts:2015,andreas2016reasoning,monroe2016learning} and image captioning \cite{mao2016generation,murphy}, but these models do not fully confront the fact that the agents must reason about all possible utterances, which is intractable. We fully address this problem by implementing RSA at the level of characters rather than at the level of utterances or words: the neural language model emits individual characters, choosing them to balance pragmatic informativeness with overall well-formedness. Thus, the agents reason not about full utterances, but rather only about all possible character choices, a very small space. The result is that the information encoded recurrently in the neural model allows us to obtain global pragmatic effects from local decisions. We show that such character-level RSA speakers are more effective than literal captioning systems at the task of helping a reader identify the target image among close competitors, and outperform word-level RSA captioners in both efficiency and accuracy. \section{Bayesian Pragmatics for Captioning} \label{rsaintro} In applying RSA to image captioning, we think of captioning as a kind of reference game. The \emph{speaker} and \emph{listener} are in a shared context consisting of a set of images $W$, the speaker is privately assigned a target image $w^{\ast} \in W$, and the speaker's goal is to produce a caption that will enable the listener to identify $w^{\ast}$. $U$ is the set of possible utterances. In its simplest form, the \emph{literal speaker} is a conditional distribution $S_{0}(u|w)$ assigning equal probability to all true utterances $u\in U$ and $0$ to all others.
The pragmatic listener $L_{0}$ is then defined in terms of this literal agent and a prior $P(w)$ over possible images: \begin{equation} L_{0}(w|u) = \frac{S_{0}(u|w)*P(w)}{\sum_{w'\in W}S_{0}(u|w')*P(w')} \end{equation} The pragmatic speaker $S_{1}$ is then defined in terms of this pragmatic listener, with the addition of a rationality parameter $\alpha > 0$ governing how much it takes into account the $L_{0}$ distribution when choosing utterances. $P(u)$ is here taken to be a uniform distribution over $U$: \begin{equation} S_{1}(u|w) = \frac{L_{0}(w|u)^{\alpha}*P(u)}{\sum_{u'\in U}L_{0}(w|u')^{\alpha}*P(u')} \end{equation} As a result of this back-and-forth, the $S_{1}$ speaker is reasoning not merely about what is true, but rather about a listener reasoning about a literal speaker who reasons about truth. To illustrate, consider the pair of images 2a and 2b in Figure~\ref{2}. Suppose that $U =\{\emph{bus}, \emph{red bus}\}$. Then the literal speaker $S_{0}$ is equally likely to produce \emph{bus} and \emph{red bus} when the left image 2a is the target. However, $L_{0}$ breaks this symmetry; because \emph{red bus} is false of the right bus, $L_0(\ref{2}\textrm{a}|\mathit{bus}) = \frac{1}{3}$ and $L_0(\ref{2}\textrm{b}|\mathit{bus}) = \frac{2}{3}$. The $S_{1}$ speaker therefore ends up favoring \emph{red bus} when trying to convey 2a, so that $S_1(\emph{red bus}|2\textrm{a}) = \frac{3}{4}$ and $S_1(\mathit{bus}|2\textrm{a}) = \frac{1}{4}$. \begin{figure} \includegraphics[width=\linewidth, height=4cm]{Screen_Shot_2018-01-04_at_17_58_45.png} \caption{Captions for the target image (in green).} \label{2} \end{figure} \section{Applying Bayesian Pragmatics to a Neural Semantics} To apply the RSA model to image captioning, we first train a neural model with a CNN-RNN architecture \cite{karpathy2015deep,vinyals2015show}. The trained model can be considered an $S_{0}$-style distribution $P(\emph{caption}|\emph{image})$ on top of which further listeners and speakers can be built. (Unlike the idealized $S_0$ described above, a neural $S_0$ will assign some probability to untrue utterances.) The main challenge for this application is that the space of utterances (captions) $U$ will be very large for any suitable captioning system, making the calculation of $S_{1}$ intractable due to its normalization over all utterances. The question, therefore, is how best to approximate this inference. The solution employed by \citet{monroe2016learning} and \citet{andreas2016reasoning} is to sample a small subset of probable utterances from the $S_0$ as an approximate prior upon which exact inference can be performed. While tractable, this approach has the shortcoming of only considering a small part of the true prior, which potentially decreases the extent to which pragmatic reasoning can apply. In particular, if a useful caption never appears in the sampled prior, it cannot appear in the posterior. \subsection{Step-Wise Inference} Inspired by the success of the ``emitter-suppressor'' method of \citet{murphy}, we propose an incremental version of RSA. Rather than performing a single inference over utterances, we perform an inference \emph{for each step of the unrolling of the utterance}. We use a character-level LSTM, which defines a distribution over characters $P(u|\emph{pc},\emph{image})$, where $\emph{pc}$ (``partial caption'') is a string of characters constituting the caption so far and $u$ is the next character of the caption.
This is now our $S_0$: given a partially generated caption and an image, it returns a distribution over which character should next be added to the caption. The advantage of using a character-level LSTM over a word-level one is that $U$ is much smaller for the former (${\approx}30$ vs.~${\approx}20,000$), making the ensuing RSA model much more efficient. We use this $S_0$ to define an $L_0$ which takes a partial caption and a new character, and returns a distribution over images. The $S_1$, in turn, given a target image $w^{\ast}$, performs an inference over the set of possible characters to determine which is best with respect to the listener choosing $w^{\ast}$. At timestep $t$ of the unrolling, the listener $L_0$ takes as its prior over images the listener posterior from timestep $(t-1)$. The idea is that as we proceed with the unrolling, the $L_0$ priors on which image is being referred to may change, which in turn should affect the speaker's actions. For instance, the speaker, having made the listener strongly in favor of the target image, is less compelled to continue being pragmatic. \subsection{Model Definition} \label{decision2} In our incremental RSA, speaker models take both a target image and a partial caption \emph{pc}. Thus, $S_0$ is a neurally trained conditional distribution $S_0^t(u|w,\emph{pc}_t)$, where $t$ is the current timestep of the unrolling and $u$ is a character. We define the $L_0^{t}$ in terms of the $S_0^{t}$ as follows, where \emph{ip} is a distribution over images representing the $L_0$ prior: \begin{equation} L_0^t(w|u,\emph{ip}_t,\emph{pc}_t) \propto S_0^t(u|w,\emph{pc}_t) * \emph{ip}_t(w) \end{equation} \noindent Given an $S_0^{t}$ and $L_0^{t}$, we define $S_1^{t}$ and $L_1^{t}$ as: \begin{multline} S_1^t(u|w,\emph{ip}_t,\emph{pc}_t) \propto \\ S_{0}^t(u|w,\emph{pc}_t) * L_{0}^t(w|u,\emph{ip}_t,\emph{pc}_t)^{\alpha} \end{multline} \vspace{-20pt} \begin{multline} L_1^t(w|u,\emph{ip}_t,\emph{pc}_t) \propto \\ L_{0}^t(w|u,\emph{ip}_t,\emph{pc}_t) * S_{0}^t(u|w,\emph{pc}_t) \end{multline} \paragraph{Unrolling} To perform greedy unrolling (though in practice we use a beam search) for either $S_0$ or $S_1$, we initialize the state as a partial caption $\emph{pc}_{0}$ consisting of only the start token and a uniform prior over the images $\emph{ip}_{0}$. Then, for $t > 0$, we use our incremental speaker model $S_0$ or $S_1$ to generate a distribution over the subsequent character $S^{t}(u | w,\emph{ip}_t,\emph{pc}_t)$, and add the character $u$ with highest probability to $\emph{pc}_t$, giving us $\emph{pc}_{t+1}$. We then run our listener model $L_1$ on $u$ to obtain a distribution $\emph{ip}_{t+1}=L_{1}^{t}(w | u,\emph{ip}_t,pc_{t})$ over images that the $L_0$ can use at the next timestep. This incremental approach keeps the inference itself very simple, while placing the complexity of the model in the recurrent nature of the unrolling.\footnote{The move from standard to incremental RSA can be understood as a switching of the order of two operations; instead of unrolling a character-level distribution into a sentence level one and then applying pragmatics, we apply pragmatics and then unroll. This generalizes to any recursive generation of utterances.} While our $S_0$ is character-level, the same incremental RSA model works for a word-level $S_0$, giving rise to a word-level $S_1$. We compare character and word $S_1$s in section \ref{section:5}.
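To make the step-wise inference concrete, the following NumPy sketch implements one timestep of the incremental model defined above (the array names are illustrative and assume the neural $S_0$ has been wrapped as a matrix of per-image character probabilities; the actual implementation is in the repository linked in Section \ref{sec:intro}):
\begin{verbatim}
import numpy as np

def rsa_step(s0, ip, alpha=5.0, target=0):
    # s0[w, u]: S0 probability of character u given image w and the
    # partial caption so far; ip[w]: the current L0 prior over images.
    l0 = s0 * ip[:, None]
    l0 /= l0.sum(axis=0, keepdims=True)   # L0(w|u) ~ S0(u|w) * ip(w)
    s1 = s0[target] * l0[target] ** alpha
    s1 /= s1.sum()                        # S1(u|w) ~ S0(u|w) * L0(w|u)^alpha
    l1 = l0 * s0
    l1 /= l1.sum(axis=0, keepdims=True)   # L1(w|u) ~ L0(w|u) * S0(u|w)
    return s1, l1  # greedy step: u = s1.argmax(); next prior ip = l1[:, u]

# Toy check reproducing the bus example of Section 2 (alpha = 1, with
# whole utterances {bus, red bus} standing in for characters):
s0 = np.array([[0.5, 0.5],    # image 2a: both captions true
               [1.0, 0.0]])   # image 2b: only "bus" true
s1, _ = rsa_step(s0, np.array([0.5, 0.5]), alpha=1.0, target=0)
print(s1)                     # [0.25 0.75]: S1(red bus | 2a) = 3/4
\end{verbatim}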
As well as being incremental, these definitions of $S_1^{t}$ and $L_1^{t}$ differ from the typical RSA described in section \ref{rsaintro} in that $S_1^{t}$ and $L_1^{t}$ draw their priors from $S_{0}^{t}$ and $L_{0}^{t}$ respectively. This generalizes the scheme put forward for $S_1$ by \newcite{andreas2016reasoning}. The motivation is to have Bayesian speakers who are somewhat constrained by the $S_0$ language model. Without this, other methods are needed to achieve English-like captions, as in \newcite{murphy}, where their equivalent of the $S_1$ is combined in a weighted sum with the $S_0$. \section{Evaluation} Qualitatively, Figures~\ref{fig1} and \ref{2} show how the $S_1$ captions are more informative than the $S_0$, as a result of pragmatic considerations. To demonstrate the effectiveness of our method quantitatively, we implement an automatic evaluation. \subsection{Automatic Evaluation} To evaluate the success of $S_1$ as compared to $S_0$, we define a listener $L_{\emph{eval}}(\emph{image}|\emph{caption}) \propto P_{S_0}(\emph{caption}|\emph{image})$, where $P_{S_0}(\emph{caption}|\emph{image})$ is the total probability of $S_0$ incrementally generating $\emph{caption}$ given $\emph{image}$. In other words, $L_{\mathit{eval}}$ uses Bayes' rule to obtain from $S_0$ the posterior probability of each image $w$ given a full caption $u$. The neural $S_0$ used in the definition of $L_{\emph{eval}}$ must be trained on data separate from that used for the neural $S_0$ in the $S_1$ model which produces captions, since otherwise this $S_1$ production model effectively has access to the system evaluating it. As \citet{backprop} note, ``a model might `communicate' better with itself using its own language than with others''. In evaluation, we therefore split the training data in half, with one part for training the $S_0$ used in the caption generation model $S_1$ and one part for training the $S_0$ used in the caption evaluation model $L_{\emph{eval}}$. We say that the caption succeeds as a referring expression if the target has more probability mass under the distribution $L_{\emph{eval}}(\emph{image}|\emph{caption})$ than any distractor. \paragraph{Dataset} We train our production and evaluation models on separate sets consisting of regions in the Visual Genome dataset \cite{krishnavisualgenome} and full images in MSCOCO \cite{mscoco}. Both datasets consist of over 100,000 images of common objects and scenes. MSCOCO provides captions for whole images, while Visual Genome provides captions for regions within images. Our test sets consist of clusters of 10 images. For a given cluster, we set each image in it as the target, in turn. We use two test sets. Test set~1 (TS1) consists of 100 clusters of images, 10 for each of the 10 most common objects in Visual Genome.\footnote{Namely, \emph{man}, \emph{person}, \emph{woman}, \emph{building}, \emph{sign}, \emph{table}, \emph{bus}, \emph{window}, \emph{sky}, and \emph{tree}.} Test set~2 (TS2) consists of regions in Visual Genome images whose ground truth captions have high word overlap, an indicator that they are similar. We again select 100 clusters of 10. Both test sets have 1,000 items in total (10 potential target images for each of 100 clusters). \paragraph{Captioning System} Our neural image captioning system is a CNN-RNN architecture\footnote{\url{https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/03-advanced/image_captioning}} adapted to use a character-based LSTM for the language model.
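Returning to the evaluation listener defined above: it amounts to a softmax over the per-image caption log-probabilities with a uniform prior over the cluster. A minimal sketch (illustrative; it assumes the per-image scores have already been computed by the held-out $S_0$):
\begin{verbatim}
import numpy as np

def l_eval(caption_logprob):
    # caption_logprob[w]: log P_S0(caption | image w) under the held-out
    # S0, i.e. the sum of the character log-probabilities.
    p = np.exp(caption_logprob - caption_logprob.max())
    return p / p.sum()  # Bayes' rule with a uniform prior over the cluster

# The caption succeeds as a referring expression iff
# l_eval(scores).argmax() is the index of the target image.
\end{verbatim}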
\paragraph{Hyperparameters} We use a beam search with width 10 to produce captions, and a rationality parameter of $\alpha=5.0$ for the $S_1$. \subsection{Results} \label{section:5} As shown in Table~\ref{fig3}, the character-level $S_1$ obtains higher accuracy (68\% on TS1 and 65.9\% on TS2) than the $S_0$ (48.9\% on TS1 and 47.5\% on TS2), demonstrating that $S_1$ is better than $S_0$ at referring. \paragraph{Advantage of Incremental RSA} We also observe that in 66\% of the cases in which the $S_1$ caption is referentially successful and the $S_0$ caption is not, for a given image, the $S_1$ caption is not among the top 50 $S_0$ captions generated by the beam search unrolling of $S_0$. This means that in these cases the non-incremental RSA method of \newcite{andreas2016reasoning} could not have generated the $S_1$ caption, if these top 50 $S_0$ captions were the support of the prior over utterances. \begin{table}[t!] \begin{center} \setlength{\tabcolsep}{12pt} \begin{tabular}{l r l} \toprule \bf Model & \bf TS1 & \bf TS2 \\ \midrule Char $S_0$ & $48.9$ & $47.5$ \\ Char $S_1$ & $\mathbf{68.0}$ & $\mathbf{65.9}$ \\ Word $S_0$ & $57.6$ & $53.4$ \\ Word $S_1$ & $60.6$ & $57.6$ \\ \bottomrule \end{tabular} \end{center} \caption{\label{fig3} Accuracy on both test sets.} \end{table} \paragraph{Comparison to Word-Level RSA} We compare the performance of our character-level model to a word-level model.\footnote{Here, we use greedy unrolling, for reasons of efficiency due to the size of $U$ for the word-level model, and set $\alpha=1.0$ from tuning on validation data. For comparison, we note that greedy character-level $S_1$ achieves an accuracy of 61.2\% on TS1.} This model is incremental in precisely the way defined in section \ref{decision2}, but uses a word-level LSTM so that $u\in U$ are words and $U$ is a vocabulary of English. It is evaluated with an $L_{\mathit{eval}}$ model that also operates on the word level. Though the word $S_0$ performs better on both test sets than the character $S_0$, the character $S_1$ outperforms the word $S_1$, demonstrating the advantage of a character-level model for pragmatic behavior. We conjecture that the superiority of the character-level model is the result of the increased number of decisions where pragmatics can be taken into account, but leave further examination for future research. \paragraph{Variants of the Model} We further explore the effect of two design decisions in the character-level model. First, we consider a variant of $S_1$ which has a prior over utterances determined by an LSTM language model trained on the full set of captions. This achieves an accuracy of 67.2\% on TS1. Second, we consider our standard $S_1$ but with unrolling such that the $L_0$ prior is drawn uniformly at each timestep rather than determined by the listener posterior at the previous step. This achieves an accuracy of 67.4\% on TS1. This suggests that neither variation has a large effect on the performance of the model. \section{Conclusion} We show that incremental RSA at the level of characters improves the ability of the neural image captioner to refer to a target image. The incremental approach is key to combining RSA with language models: as utterances become longer, it becomes exponentially slower, for a fixed $n$, to subsample $n$\% of the utterance distribution and \emph{then} perform inference (non-incremental approach). Furthermore, character-level RSA yields better results than word-level RSA and is far more efficient.
\section*{Acknowledgments} Many thanks to Hiroto Udagawa and Poorvi Bhargava, who were involved in early versions of this project. This material is based in part upon work supported by the Stanford Data Science Initiative and by the NSF under Grant No.~BCS-1456077. This work is also supported by a Sloan Foundation Research Fellowship to Noah Goodman.
{ "timestamp": "2018-05-11T02:10:43", "yymm": "1804", "arxiv_id": "1804.05417", "language": "en", "url": "https://arxiv.org/abs/1804.05417" }
\section{Presentation of the results} Our first result is a multidimensional generalization of the well-known convolution identity $$\sum_{n \leqslant x} \mu(n) \left \lfloor \frac{x}{n} \right \rfloor = 1.$$ \begin{theorem} \label{th:id1} Let $r \in \mathbb{Z}_{\geqslant 2}$. For any real number $x \geqslant 1$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = \sum_{n \leqslant x} (1-r)^{\omega(n)}.$$ \end{theorem} Usual bounds in analytic number theory (see Lemma~\ref{lem:unitary} below) lead to the following estimates. \begin{coro} \label{cor:id0} Let $r \in \mathbb{Z}_{\geqslant 2}$. There exist an absolute constant $c_0 >0$ and a constant $c_r \geqslant 1$, depending on $r$, such that, for any $x \geqslant c_r$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor \ll_r x e^{-c_0 (\log x)^{3/5} (\log \log x)^{-1/5}}.$$ Furthermore, the Riemann Hypothesis is true if and only if, for any $\varepsilon > 0$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor \ll_{r,\varepsilon} x^{1/2+\varepsilon}.$$ \end{coro} By a combinatorial argument, we obtain the following asymptotic formula. \begin{coro} \label{cor:id1} Let $r \in \mathbb{Z}_{\geqslant 2}$. There exist an absolute constant $c_0 >0$ and a constant $c_r \geqslant 1$, depending on $r$, such that, for any $x \geqslant c_r$ $$\sum_{1 \leqslant n_1 < \dotsb < n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = \frac{(-1)^{r-1} x}{r(r-2)!} + O_r \left( x e^{-c_0 (\log x)^{3/5} (\log \log x)^{-1/5}} \right).$$ Furthermore, the Riemann Hypothesis is true if and only if, for any $\varepsilon > 0$ $$\sum_{1 \leqslant n_1 < \dotsb < n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = \frac{(-1)^{r-1} x}{r(r-2)!} + O_{r,\varepsilon} \left( x^{1/2+\varepsilon} \right).$$ \end{coro} \bigskip Our second identity is the analogue of Theorem~\ref{th:id1} with $\mu$ replaced by $\mu^2$. \begin{theorem} \label{th:id15} Let $r \in \mathbb{Z}_{\geqslant 2}$. For any real number $x \geqslant 1$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right)^2 \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = \sum_{n \leqslant x} (1+r)^{\omega(n)}.$$ \end{theorem} As for an asymptotic formula, we derive the following estimate from the contour integration method applied to the function $k^\omega$ (see \cite[Exercise~II.4.1]{ten} for instance). \begin{coro} \label{cor:id152} Let $r \in \mathbb{Z}_{\geqslant 2}$ and $\varepsilon > 0$. For any sufficiently large real number $x$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right)^2 \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = x \mathcal{P}_r(\log x) + O \left( x^{1 - \frac{3}{r+7} + \varepsilon} \right)$$ where $\mathcal{P}_r$ is a polynomial of degree $r$ with leading coefficient $$\frac{1}{r!} \prod_p \left( 1 - \frac{1}{p} \right)^{r+1} \left( 1 + \frac{r+1}{p-1} \right).$$ \end{coro} When $r=3$, we can use a recent result of \cite{zha} which allows us to improve on Corollary~\ref{cor:id152}.
\begin{equation} \sum_{n_1 , n_2 , n_3 \leqslant x} \mu \left( n_1 n_2 n_3 \right)^2 \left \lfloor \frac{x}{n_1 n_2 n_3} \right \rfloor = x \mathcal{P}_3(\log x) + O \left( x^{1/2} (\log x)^5 \right) \label{eq:ex1} \end{equation} where $\mathcal{P}_3$ is a polynomial of degree $3$. \bigskip Our third result is quite similar to Theorem~\ref{th:id1}, but is simpler and sheds new light on the Piltz-Dirichlet divisor problem. \begin{theorem} \label{th:id2} Let $r \in \mathbb{Z}_{\geqslant 2}$. For any real number $x \geqslant 1$ $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \left( \mu \left( n_1 \right) + \dotsb + \mu \left( n_r \right) \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = r \sum_{n \leqslant x} \tau_{r-1}(n).$$ \end{theorem} Known results from the Piltz-Dirichlet divisor problem yield the following corollary (see \cite{bouw,ivio,kol,kolp} for the estimates of the remainder term below). \begin{coro} \label{cor:id2} Let $r \in \mathbb{Z}_{\geqslant 3}$ and $\varepsilon >0$. For any real number $x \geqslant 1$ sufficiently large $$\sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \left( \mu \left( n_1 \right) + \dotsb + \mu \left( n_r \right) \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = r \underset{s=1}{\res} \left( \zeta(s)^{r-1} \frac{x^s}{s} \right) + O_{r,\varepsilon} \left( x^{\alpha_r + \varepsilon} \right) $$ where \begin{center} \begin{tabular}{cccccc} $r$ & $3$ & $4$ & $10$ & $\in \left[ 121,161 \right]$ & $ \geqslant 161$ \\ & & & & & \\ \hline & & & & & \\ $\alpha_r$ & $\frac{517}{\np{1648}}$ & $\frac{43}{\np{96}}$ & $\frac{35}{\np{54}}$ & $1 - \frac{1}{3} \left( \frac{2}{\np{4.45} (r-1)} \right)^{2/3}$ & $1 - \left( \frac{2}{\np{13.35} (r-\np{160.9})} \right)^{2/3}$ \\ \end{tabular} \end{center} \end{coro} Once again, the case $r=3$ is certainly one of the most interesting ones. Corollary~\ref{cor:id2} yields \begin{equation} \sum_{n_1 , n_2 , n_3 \leqslant x} \left( \mu \left( n_1 \right) + \mu \left( n_2 \right) + \mu \left( n_3 \right)\right) \left \lfloor \frac{x}{n_1 n_2 n_3} \right \rfloor = 3x \log x + \left( 6 \gamma - 3 \right) x + O_\varepsilon \left( x^{\frac{517}{\np{1648}}+ \varepsilon} \right). \label{eq:ex2} \end{equation} \bigskip Our last identity generalizes the well-known relation \begin{equation} \sum_{d \mid n} \frac{\mu(d) \log d}{d} = - \frac{\varphi(n)}{n} \sum_{p \mid n} \frac{\log p}{p-1} \label{eq:phi} \end{equation} which can be proved in the following way: \begin{eqnarray*} - \frac{\varphi(n)}{n} \sum_{p \mid n} \frac{\log p}{p-1} &=& - \frac{1}{n} \sum_{\substack{p^a \mid n \\ a \geqslant 1}} \varphi \left( \frac{n}{p^a} \right) \log p \\ &=& - \frac{1}{n} \left( \Lambda \star \varphi \right) (n) \\ &=& - \frac{1}{n} \left( - \mu \log \star \mathbf{1} \star \mu \star \id \right) (n) \\ &=& \frac{1}{n} \left( \mu \log \star \id \right) (n) \\ &=& \sum_{d \mid n} \frac{\mu(d) \log d}{d}, \end{eqnarray*} where the first equality follows from the elementary identity $\sum_{a \geqslant 1, \, p^a \mid n} \varphi \left( n/p^a \right) = \varphi(n)/(p-1)$, valid for each prime $p \mid n$. \begin{theorem} \label{th:id3} Let $n \in \mathbb{Z}_{\geqslant 1}$, $e \in \{1,2\}$ and $f$ be a multiplicative function such that $f(p) \neq (-1)^{e+1}$ for every prime $p \mid n$. Then $$\sum_{d \mid n} \mu(d)^e f(d) \log d = \prod_{p \mid n} \left( 1 + (-1)^e f(p) \right) \sum_{p \mid n} \frac{f(p) \log p}{f(p) + (-1)^e}.$$ \end{theorem} Taking $f= \id^{-1}$ yields \eqref{eq:phi}, but many other consequences may be established with this result. We give some of them below. \begin{coro} \label{cor} Let $k,n \in \mathbb{Z}_{\geqslant 1}$ and $e \in \{1,2\}$.
Then \begin{eqnarray*} & & \sum_{d \mid n} \frac{\mu(d) \log d}{d^k} = -\frac{J_k(n)}{n^k} \sum_{p \mid n} \frac{\log p}{p^k-1}. \\ & & \sum_{d \mid n} \frac{\mu(d)^2 \log d}{d^k} = \frac{\Psi_k(n)}{n^k} \sum_{p \mid n} \frac{\log p}{p^k+1}. \\ & & \sum_{d \mid n} \frac{\mu(d)^2 \log d}{\varphi(d)} = \frac{n}{\varphi(n)} \sum_{p\mid n} \frac{\log p}{p}. \\ & & \sum_{d \mid n} \frac{\mu(d) \log d}{\tau(d)} = - 2^{-\omega(n)} \log \gamma(n). \\ & & \sum_{d \mid n} \mu(d)^e \tau_k(d) \log d = \left( 1 + (-1)^e k \right)^{\omega(n)} \times \frac{k \log \gamma(n)}{k+(-1)^e} \quad \left( k \in \mathbb{Z}_{\geqslant 2} \right). \\ & & \sum_{d \mid n} \mu(d) \sigma(d) \log d = (-1)^{\omega(n)} \gamma(n) \left( \log \gamma(n) + \sum_{p\mid n} \frac{\log p}{p} \right). \end{eqnarray*} \end{coro} Note that a similar result has been proved in \cite[Theorems~3 and~5]{wak} in which the completely additive function $\log$ is replaced by the strongly additive function $\omega$. However, let us stress that the methods of proof are completely different. \section{Notation} \noindent \begin{scriptsize} $\triangleright$ \end{scriptsize} We use some classical multiplicative functions such as $\mu$, the M\"{o}bius function, $\varphi$, $\Psi$, $J_k$ and $\Psi_k$, respectively the Euler, Dedekind, $k$-th Jordan and $k$-th Dedekind totients. Recall that, for any $k \in \mathbb{Z}_{\geqslant 1}$ $$J_k (n) := n^k \prod_{p \mid n} \left( 1 - \frac{1}{p^k} \right) \quad \textrm{and} \quad \Psi_k (n) := n^k \prod_{p \mid n} \left( 1 + \frac{1}{p^k} \right).$$ Also, $\varphi = J_1$ and $\Psi = \Psi_1$. Next, $\gamma(n)$ is the \textit{squarefree kernel} of $n$, defined by $$\gamma(n) := \prod_{p \mid n} p.$$ \medskip \noindent \begin{scriptsize} $\triangleright$ \end{scriptsize} Let $q \in \mathbb{Z}_{\geqslant 1}$. The notation $n \mid q^\infty$ means that every prime factor of $n$ is a prime factor of $q$. We define $\mathbf{1}_q^\infty$ to be the characteristic function of the integers $n$ satisfying $n \mid q^\infty$. It is important to note that \begin{equation} \mathbf{1}_{q}^{\infty} (n) = \sum_{\substack{d \mid n \\ (d,q)=1}} \mu(d). \label{eq:infty} \end{equation} This can easily be checked for prime powers $p^\alpha$ and extended to all integers using multiplicativity. \medskip \noindent \begin{scriptsize} $\triangleright$ \end{scriptsize} Finally, if $F$ and $G$ are two arithmetic functions, the Dirichlet convolution product $F \star G$ is given by $$(F \star G)(n) := \sum_{d \mid n} F(d) G(n/d).$$ \medskip \noindent We always use the convention that an empty product is equal to $1$. \section{Proof of Theorem~\ref{th:id1}} \subsection{Lemmas} \begin{lemma} \label{le2} Let $q \in \mathbb{Z}_{\geqslant 1}$. For any $x \in \mathbb{R}_{\geqslant 1}$ $$\sum_{\substack{n \leqslant x \\ (n,q)=1}} \mu(n) \left \lfloor \frac{x}{n} \right \rfloor = \sum_{\substack{n \leqslant x \\ n \mid q^\infty}} 1.$$ \end{lemma} \begin{proof} This follows from \eqref{eq:infty} and the convolution identity $$\sum_{\substack{n \leqslant x \\ (n,q)=1}} \mu(n) \left \lfloor \frac{x}{n} \right \rfloor = \sum_{n \leqslant x} \sum_{\substack{d \mid n \\ (d,q)=1}} \mu(d)$$ as asserted. \end{proof} Note that the sum on the left-hand side has also been investigated in \cite{gup} by a completely different method. \begin{lemma} \label{le5} Let $q \in \mathbb{Z}_{\geqslant 1}$.
For any $x \in \mathbb{R}_{\geqslant 1}$ $$\sum_{\substack{n \leqslant x \\ q \mid n}} \mu(n) \sum_{\substack{k \leqslant x/n \\ k \mid n^\infty}} 1 = \sum_{\substack{n \leqslant x \\ q \mid \gamma(n)}} (-1)^{\omega(n)}.$$ \end{lemma} \begin{proof} We first prove that, for any $n \in \mathbb{Z}_{\geqslant 1}$ and any squarefree divisor $d$ of $n$ \begin{equation} \mathbf{1}_d^\infty \left( \tfrac{n}{d} \right) = \begin{cases} 1, & \textrm{if\ } d = \gamma(n) \\ 0, & \textrm{otherwise.} \end{cases} \label{eq:id3} \end{equation} The result is obvious if $n=1$. Assume $n \geqslant 2$ and let $d$ be a squarefree divisor of $n$. Then, from \eqref{eq:infty} $$\mathbf{1}_d^\infty \left( \tfrac{n}{d} \right) = \sum_{\substack{\delta \mid \frac{n}{d} \\ (\delta,d)=1}} \mu(\delta) = \sum_{\delta \mid \frac{\gamma(n)}{d}} \mu(\delta) $$ implying \eqref{eq:id3}. Now, for any $n \in \mathbb{Z}_{\geqslant 1}$ $$\sum_{\substack{d \mid n \\ (n/d) \mid d^\infty \\ q \mid d}} \mu(d) = \sum_{\substack{d \mid n \\ d = \gamma(n) \\ q \mid d}} \mu(d) = \begin{cases} \mu \left( \gamma(n) \right), & \textrm{if\ } q \mid \gamma(n) \\ 0, & \textrm{otherwise} \end{cases} = \begin{cases} (-1)^{\omega(n)}, & \textrm{if\ } q \mid \gamma(n) \\ 0, & \textrm{otherwise.} \end{cases}$$ The asserted result then follows from $$\sum_{\substack{n \leqslant x \\ q \mid \gamma(n)}} (-1)^{\omega(n)} = \sum_{n \leqslant x} \sum_{\substack{d \mid n \\ (n/d) \mid d^\infty \\ q \mid d}} \mu(d) = \sum_{\substack{d \leqslant x \\ q \mid d}} \mu(d) \sum_{\substack{k \leqslant x/d \\ k \mid d^\infty}} 1$$ as required. \end{proof} \begin{lemma} \label{le6} For any $k \in \mathbb{Q}^*$ and $n \in \mathbb{Z}_{\geqslant 1}$ $$\sum_{d \mid n} \mu(d)^2 k^{\omega(d)} = (k+1)^{\omega(n)}.$$ \end{lemma} \begin{proof} This is well-known. For instance, this can be checked for prime powers and then extended to all integers by multiplicativity. \end{proof} \begin{lemma} \label{le7} Let $q \in \mathbb{Z}_{\geqslant 1}$ be squarefree. For any $k \in \mathbb{Z}_{\geqslant 1}$ $$\sum_{n_1 \dotsb n_k \mid q} \mu \left( n_1 \right)^2 \dotsb \mu \left( n_k \right)^2 = (k+1)^{\omega(q)}.$$ \end{lemma} \begin{proof} Let $S_k(q)$ be the sum on the left-hand side. We proceed by induction on $k$, the case $k=1$ being Lemma~\ref{le6} applied with $k=1$. Suppose that the result is true for some $k \geqslant 1$. Then, using the induction hypothesis and Lemma~\ref{le6}, we get \begin{eqnarray*} S_{k+1}(q) &=& \sum_{n \mid q} \mu \left( n \right)^2 S_k \left( \frac{q}{n} \right) = \sum_{n \mid q} \mu \left( n \right)^2 \left( k+1 \right)^{\omega(q/n)} \\ &=& (k+1)^{\omega(q)}\sum_{n \mid q} \mu \left( n \right)^2 \left( k+1 \right)^{-\omega \left( n \right) } \\ &=& (k+1)^{\omega(q)} \left( \tfrac{1}{k+1} + 1 \right)^{\omega(q)} = (k+2)^{\omega(q)} \end{eqnarray*} completing the proof.
\end{proof} \subsection{Proof of Theorem~\ref{th:id1}} \noindent From Lemma~\ref{le2}, we have \begin{eqnarray*} & & \sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor \\ &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-1} \leqslant x} \mu \left( n_1 \dotsb n_{r-1} \right) \sum_{\substack{n_r \leqslant x/\left( n_1 \dotsb n_{r-1} \right) \\ \left( n_r, n_1 \dotsb n_{r-1} \right) = 1}} \mu \left( n_r \right) \left \lfloor \frac{x/\left( n_1 \dotsb n_{r-1} \right)}{n_r} \right \rfloor \\ &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-1} \leqslant x} \mu \left( n_1 \dotsb n_{r-1} \right) \sum_{\substack{n_r \leqslant x/\left( n_1 \dotsb n_{r-1} \right) \\ n_r \mid \left( n_1 \dotsb n_{r-1} \right)^\infty}} 1. \end{eqnarray*} \noindent The change of variable $m=n_1 \dotsb n_{r-1}$ yields \begin{eqnarray*} \sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x} \sum_{\substack{m \leqslant x \\ n_1 \dotsb n_{r-2} \mid m}} \mu(m) \sum_{\substack{n_r \leqslant x/m \\ n_r \mid m^\infty}} 1 \\ &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x} \sum_{\substack{m \leqslant x \\ n_1 \dotsb n_{r-2} \mid \gamma(m)}} (-1)^{\omega(m)} \end{eqnarray*} where we used Lemma~\ref{le5} with $q=n_1 \dotsb n_{r-2}$. Now, since $\gamma(m)$ is squarefree, the condition $n_1 \dotsb n_{r-2} \mid \gamma(m)$ is equivalent to having both $n_1 \dotsb n_{r-2} \mid \gamma(m)$ and $\mu \left (n_1 \right )^2 = \dotsb = \mu \left (n_{r-2} \right )^2 = 1$, and we infer \begin{eqnarray*} \sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor &=& \sum_{m \leqslant x} (-1)^{\omega(m)} \sum_{\substack{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x \\ n_1 \dotsb n_{r-2} \mid \gamma(m)}} \mu \left (n_1 \right )^2 \dotsb \mu \left (n_{r-2} \right )^2 \\ &=& \sum_{m \leqslant x} (-1)^{\omega(m)} \sum_{n_1 \dotsb n_{r-2} \mid \gamma(m)} \mu \left (n_1 \right )^2 \dotsb \mu \left (n_{r-2} \right )^2. \end{eqnarray*} Lemma~\ref{le7} then gives \begin{eqnarray*} \sum_{n_1 \leqslant x, \dotsc, n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor &=& \sum_{m \leqslant x} (-1)^{\omega(m)} (r-1)^{\omega \left( \gamma(m) \right)} \\ &=& \sum_{m \leqslant x} (-1)^{\omega(m)} (r-1)^{\omega (m)} = \sum_{m \leqslant x} (1-r)^{\omega(m)} \end{eqnarray*} as asserted. \qed \subsection{Proof of Corollaries~\ref{cor:id0} and~\ref{cor:id1}} We start with an estimate which is certainly well-known, but we provide a proof for the sake of completeness. The unconditional estimate is quite similar to the usual bound given by the Selberg-Delange method (see \cite[Th\'{e}or\`{e}me~II.6.1]{ten} with $z \in \mathbb{Z}_{< 0}$). \begin{lemma} \label{lem:unitary} Let $k \in \mathbb{Z}_{\geqslant 1}$. There exist an absolute constant $c_0 >0$ and a constant $c_k \geqslant 1$, depending on $k$, such that, for any $x \geqslant c_k$ $$\sum_{n \leqslant x} (-k)^{\omega(n)} \ll_k x e^{-c_0 (\log x)^{3/5} (\log \log x)^{-1/5}}.$$ Furthermore, the Riemann Hypothesis is true if and only if, for any $\varepsilon > 0$ $$\sum_{n \leqslant x} (-k)^{\omega(n)} \ll_{k,\varepsilon} x^{1/2+\varepsilon}.$$ \end{lemma} \begin{proof} Set $f_k:=(-k)^\omega$.
If $L(s,f_k)$ is the Dirichlet series of $f_k$, then usual computations show that, for any $s = \sigma + it \in \mathbb{C}$ such that $\sigma > 1$ $$L(s,f_k) = \zeta(s)^{-k} \zeta(2s)^{-\frac{1}{2}k(k+1)} G_k(s)$$ where $G_k(s)$ is a Dirichlet series absolutely convergent in the half-plane $\sigma > \frac{1}{3}$. Set $c := \np{57.54}^{-1}$, $\kappa := 1 + \frac{1}{\log x}$, $$T := e^{c (\log x)^{3/5}(\log \log x)^{-1/5}} \ \textrm{and} \ \alpha := \alpha(T) = c (\log T)^{-2/3} (\log \log T)^{-1/3}.$$ We use Perron's summation formula in the form of \cite[Corollary~2.2]{liu}, giving \begin{eqnarray*} \sum_{n \leqslant x} (-k)^{\omega(n)} &=& \frac{1}{2 \pi i} \int_{\kappa - iT}^{\kappa + iT} \frac{G_k(s)}{\zeta(s)^{k} \zeta(2s)^{\frac{1}{2}k(k+1)}} \frac{x^s}{s} \, \textrm{d}s \\ & & {} + O \left( \sum_{x-x/\sqrt{T} < n \leqslant x+x/\sqrt{T}} \left |f_k(n) \right| + \frac{x^\kappa}{\sqrt{T}} \sum_{n=1}^\infty \frac{\left |f_k(n) \right|}{n^\kappa} \right). \end{eqnarray*} By \cite{for}, $\zeta(s)$ has no zero in the region $\sigma \geqslant 1 - c (\log |t|)^{-2/3} (\log \log |t|)^{-1/3}$ and $|t| \geqslant 3$, so that we may shift the line of integration to the left and apply Cauchy's theorem in the rectangle with vertices $\kappa \pm iT$, $1-\alpha \pm iT$. In this region $$\zeta(s)^{-1} \ll \left (\log (|t|+2) \right )^{2/3} \left( \log \log (|t|+2) \right)^{1/3}.$$ Therefore, the contribution of the horizontal sides does not exceed $$\ll xT^{-1} (\log T)^{2k/3} (\log \log T)^{k/3} $$ and the contribution of the vertical side is bounded by $$\ll x^{1- \alpha} (\log T)^{1+2k/3} (\log \log T)^{k/3}.$$ Since $T \ll x^{1-\varepsilon}$, Shiu's theorem \cite{shi} yields $$\sum_{x-x/\sqrt{T} < n \leqslant x+x/\sqrt{T}} \left |f_k(n) \right| \leqslant \sum_{x-x/\sqrt{T} < n \leqslant x+x/\sqrt{T}} \tau_k (n) \ll \frac{x}{\sqrt{T}} (\log x)^{k-1}.$$ With the choice of $\kappa$, the second error term is $$\leqslant \frac{x^\kappa}{\sqrt{T}} \sum_{n=1}^\infty \frac{\tau_k (n)}{n^\kappa} = \frac{x^\kappa}{\sqrt{T}} \zeta(\kappa)^k \ll \frac{x}{\sqrt{T}} (\log x)^k.$$ Since the path of integration surrounds neither the origin nor the poles of the integrand, Cauchy's theorem and the choice of $T$ give the asserted estimate for any $c_0 \leqslant \frac{1}{4}c$ and any real number $x$ satisfying $x \geqslant \exp \left( c_1 k^{10/3} \right)$, say, where $c_1 \geqslant 1$ is absolute. Now let $x,T \geqslant 2$, with $x$ large and $T \leqslant x^2$. If the Riemann Hypothesis is true, then by Perron's formula again $$\sum_{n \leqslant x} (-k)^{\omega(n)} = \frac{1}{2 \pi i} \int_{2 - iT}^{2 + iT} \frac{G_k(s)}{\zeta(s)^{k} \zeta(2s)^{\frac{1}{2}k(k+1)}} \frac{x^s}{s} \, \textrm{d}s + O_{k,\varepsilon} \left( \frac{x^{2+\varepsilon}}{T} \right).$$ We shift the line $\sigma = 2$ to the line $\sigma = \frac{1}{2} + \varepsilon$. In the rectangle with vertices $2 \pm iT$, $\frac{1}{2} + \varepsilon \pm iT$, the Riemann Hypothesis implies that $\zeta(s)^{-1} \ll |t|^{\varepsilon/k}$, so that a similar argument as above yields $$\sum_{n \leqslant x} (-k)^{\omega(n)} \ll_{k,\varepsilon} x^\varepsilon \left( x^2 T^{-1} + x^{1/2} \right)$$ and the choice of $T=x^2$ gives the asserted estimate. On the other hand, if $$\sum_{n \leqslant x} (-k)^{\omega(n)} \ll_{k,\varepsilon} x^{1/2+\varepsilon}$$ then the series $L(s,f_k)$ converges for $\sigma > \frac{1}{2}$, and thus defines an analytic function in this half-plane. Hence $\zeta(s)$ does not vanish in this region, and the Riemann Hypothesis holds.
\end{proof} Corollary~\ref{cor:id0} is a direct consequence of Theorem~\ref{th:id1} and Lemma~\ref{lem:unitary}. The proof of Corollary~\ref{cor:id1} uses the following combinatorial identity. \begin{lemma} \label{lem:combinatoric} Let $r \in \mathbb{Z}_{\geqslant 2}$. For any $x \in \mathbb{R}_{\geqslant 1}$ \begin{eqnarray*} & & \sum_{1 \leqslant n_1 < \dotsb < n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor = \frac{1}{r!} \sum_{n_1 ,\dotsc ,n_r \leqslant x} \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor \\ & & {} - \frac{1}{r!}\sum_{j=2}^{r-2} (-1)^j (j-1) {r \choose j} \sum_{n \leqslant x} (1-r+j)^{\omega(n)} - \frac{(-1)^r \lfloor x \rfloor}{r(r-2)!} + \frac{(-1)^{r}(r-2)}{(r-1)!} . \end{eqnarray*} \end{lemma} \begin{proof} Set $u\left( n_1 ,\dotsc ,n_r \right) := \mu \left( n_1 \dotsb n_r \right) \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor$. Since $u$ is symmetric with respect to the $r$ variables $n_1 ,\dotsc ,n_r$, multiplying the left-hand side by $r!$ amounts to summing in the hypercube $\left[ 1,x \right]^r$, but we must take the diagonals into account. By a sieving argument, we get \begin{eqnarray*} \sum_{n_1 ,\dotsc ,n_r \leqslant x} u\left( n_1 ,\dotsc ,n_r \right) &=& r! \sum_{1 \leqslant n_1 < \dotsb < n_r \leqslant x} u\left( n_1 ,\dotsc ,n_r\right) \\ & & {}+ \sum_{j=2}^r (-1)^{j} (j-1) {r \choose j} \sum_{n_j,\dotsc,n_r \leqslant x} u \left( n_j, \dotsc, n_j,n_{j+1}, \dotsc, n_r \right) \end{eqnarray*} where the variable $n_j$ appears $j$ times in the inner sum. Now $$u \left( n_j, \dotsc, n_j,n_{j+1}, \dotsc, n_r \right) = \mu \left( n_j^j n_{j+1} \dotsb n_r \right) \left \lfloor \frac{x}{n_j^j n_{j+1} \dotsb n_r} \right \rfloor,$$ so that $u \left( n_j, \dotsc, n_j,n_{j+1}, \dotsc, n_r \right) = 0$ as soon as $n_j > 1$ (recall that $j \geqslant 2$), and we infer that the inner sum is $$= \left\lbrace \begin{array}{rcll} \displaystyle \sum_{n_{j+1}, \dotsc, n_r \leqslant x} \mu \left( n_{j+1} \dotsb n_r \right) \left \lfloor \frac{x}{n_{j+1} \dotsb n_r} \right \rfloor &=& \displaystyle \sum_{n \leqslant x} (1-r+j)^{\omega(n)} & \textrm{if\ } 2 \leqslant j \leqslant r-2 \\ & & & \\ \displaystyle \sum_{n_{r-1}, n_r \leqslant x} \mu \left( n_{r-1}^{r-1} n_r \right) \left \lfloor \frac{x}{n_{r-1}^{r-1} n_r } \right \rfloor &=& 1 & \textrm{if\ } j=r-1 \\ & & & \\ \displaystyle \sum_{n_r \leqslant x} \mu \left( n_r^r \right) \left \lfloor \frac{x}{n_r^r } \right \rfloor &=& \lfloor x \rfloor & \textrm{if\ } j=r \end{array} \right.$$ where we used Theorem~\ref{th:id1} when $2 \leqslant j \leqslant r-2$, implying the asserted result. \end{proof} Corollary~\ref{cor:id1} follows immediately from Theorem~\ref{th:id1}, Lemmas~\ref{lem:unitary} and~\ref{lem:combinatoric}. \qed \section{Proof of Theorem~\ref{th:id15}} \subsection{Lemmas} \begin{lemma} \label{le8} Let $q \in \mathbb{Z}_{\geqslant 1}$. For any $x \in \mathbb{R}_{\geqslant 1}$ $$\sum_{\substack{n \leqslant x \\ (n,q)=1}} \mu(n)^2 \left \lfloor \frac{x}{n} \right \rfloor = \sum_{\substack{b \leqslant x \\ b \mid q^\infty}} \sum_{\substack{a \leqslant x/b \\ (a,q)=1}} 2^{\omega(a)}.$$ \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{le2} except that we replace \eqref{eq:infty} by the convolution identity $$\sum_{\substack{d \mid n \\ (d,q)=1}} \mu(d)^2 = 2^{\omega(a)}$$ where $n=ab$, $(a,b)=(a,q)=1$ and $b \mid q^\infty$. \end{proof} Our next result is the analogue of Lemma~\ref{le5}.
\begin{lemma} \label{le9} Let $q \in \mathbb{Z}_{\geqslant 1}$. For any $x \in \mathbb{R}_{\geqslant 1}$ $$\sum_{\substack{d \leqslant x \\ q \mid d}} \mu(d)^2 \sum_{\substack{k \leqslant x/d \\ k \mid d^\infty}} \ \sum_{\substack{h \leqslant x/(kd) \\ (h,d)=1}} 2 ^{\omega(h)} = \sum_{h \leqslant x} 2 ^{\omega(h)} \sum_{\substack{n \leqslant x/h \\ q \mid \gamma(n) \\ \left (h,\gamma(n) \right ) = 1}} 1.$$ \end{lemma} \begin{proof} The sum on the left-hand side is equal to \begin{equation} \sum_{h \leqslant x} 2 ^{\omega(h)} \sum_{\substack{d \leqslant x/h \\ q \mid d \\ (d,h)=1}} \mu(d)^2 \sum_{\substack{k \leqslant x/(hd) \\ k \mid d^\infty}} 1. \label{eq:le9} \end{equation} Now as in the proof of Lemma~\ref{le5}, we derive $$\sum_{\substack{d \mid n \\ (n/d) \mid d^\infty \\ q \mid d \\ (h,d) = 1}} \mu(d)^2 = \begin{cases} 1, & \textrm{if\ } q \mid \gamma(n) \ \textrm{and\ } \left( h, \gamma(n) \right) = 1 \\ 0, & \textrm{otherwise} \end{cases}$$ so that the inner sum on the right-hand side of the lemma is $$\sum_{\substack{n \leqslant x/h \\ q \mid \gamma(n) \\ \left (h,\gamma(n) \right ) = 1}} 1 = \sum_{n \leqslant x/h} \sum_{\substack{d \mid n \\ (n/d) \mid d^\infty \\ q \mid d \\ (h,d) = 1}} \mu(d)^2 = \sum_{\substack{d \leqslant x/h \\ q \mid d \\(h,d)=1}} \mu(d)^2 \sum_{\substack{k \leqslant x/(hd) \\ k \mid d^\infty}} 1$$ and inserting this sum in \eqref{eq:le9} gives the asserted result. \end{proof} \begin{lemma} \label{le10} Let $r \in \mathbb{Z}_{\geqslant 1}$. For any $x \in \mathbb{R}_{\geqslant 1}$ $$\sum_{m \leqslant x} 2^{\omega(m)} \sum_{\substack{n \leqslant x/m \\ \left( m, \gamma(n) \right) = 1}} r^{\omega(n)} = \sum_{m \leqslant x} (r+2)^{\omega(m)}.$$ \end{lemma} \begin{proof} Define $$\varphi_r(n) := \sum_{\substack{d \mid n \\ \left( d,\gamma(n/d) \right) = 1}} 2^{\omega(d)} r^{\omega(n/d)}$$ so that the sum on the left-hand side is $$\sum_{n \leqslant x} \varphi_r(n).$$ The lemma follows by noticing that the function $\varphi_r$ is multiplicative and that $\varphi_r \left( p^\alpha \right) = r+2$ for all prime powers $p^\alpha$. \end{proof} \subsection{Proof of Theorem~\ref{th:id15}} Let $S_r(x)$ denote the sum in the theorem. By Lemma~\ref{le8} and the change of variable $m=n_1 \dotsb n_{r-1}$, we get \begin{eqnarray*} S_r(x) &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-1} \leqslant x} \mu \left( n_1 \dotsb n_{r-1} \right)^2 \sum_{\substack{n_r \leqslant x /(n_1 \dotsb n_{r-1}) \\ \left( n_r, n_1 \dotsb n_{r-1} \right) = 1}} \mu \left( n_r \right)^2 \left \lfloor \frac{x/(n_1 \dotsb n_{r-1})}{n_r} \right \rfloor \\ &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-1} \leqslant x} \mu \left( n_1 \dotsb n_{r-1} \right)^2 \sum_{\substack{n_r \leqslant x /(n_1 \dotsb n_{r-1}) \\ n_r \mid \left( n_1 \dotsb n_{r-1} \right)^\infty}} \ \sum_{\substack{n_{r+1} \leqslant x / (n_1 \dotsc n_r) \\ \left( n_{r+1}, n_1 \dotsb n_{r-1} \right) = 1}} 2^{\omega \left( n_{r+1} \right)} \\ &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x} \sum_{\substack{m \leqslant x \\ n_1 \dotsb n_{r-2} \mid m}} \mu(m)^2 \sum_{\substack{n_r \leqslant x/m \\ n_r \mid m^\infty}} \quad \sum_{\substack{n_{r+1} \leqslant x/(m n_r) \\ \left( n_{r+1}, m \right) = 1}} 2^{\omega \left( n_{r+1} \right)}.
\end{eqnarray*} Now Lemma~\ref{le9} yields \begin{eqnarray*} S_r(x) &=& \sum_{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x} \ \sum_{m \leqslant x} 2^{\omega(m)} \ \sum_{\substack{n \leqslant x/m \\ n_1 \dotsb n_{r-2} \mid \gamma(n) \\ \left( m, \gamma(n) \right) = 1}} 1 \\ &=& \sum_{m \leqslant x} 2^{\omega(m)} \ \sum_{\substack{n \leqslant x/m \\ \left( m, \gamma(n) \right) = 1}} \ \sum_{\substack{n_1 \leqslant x, \dotsc, n_{r-2} \leqslant x \\ n_1 \dotsb n_{r-2} \mid \gamma(n)}} 1 \\ &=& \sum_{m \leqslant x} 2^{\omega(m)} \ \sum_{\substack{n \leqslant x/m \\ \left( m, \gamma(n) \right) = 1}} \ \sum_{n_1 \dotsb n_{r-2} \mid \gamma(n)} \mu \left( n_1 \right)^2 \dotsb \mu \left( n_{r-2} \right)^2 \\ \end{eqnarray*} and Lemmas~\ref{le7} and~\ref{le10} imply that \begin{eqnarray*} S_r(x) &=& \sum_{m \leqslant x} 2^{\omega(m)} \ \sum_{\substack{n \leqslant x/m \\ \left( m, \gamma(n) \right) = 1}} (r-1)^{\omega \left( \gamma(n) \right)} \\ &=& \sum_{m \leqslant x} 2^{\omega(m)} \ \sum_{\substack{n \leqslant x/m \\ \left( m, \gamma(n) \right) = 1}} (r-1)^{\omega \left( n \right)} \\ &=& \sum_{m \leqslant x} (r+1)^{\omega(m)} \end{eqnarray*} as required. \qed \section{Proof of Theorem~\ref{th:id2}} \noindent This identity is a consequence of the following more general result. \begin{theorem} \label{th:csq-id2} Let $f : \mathbb{Z}_{\geqslant 1} \longrightarrow \mathbb{C}$ be an arithmetic function and set $$S_{r}(x)=\sum_{n_{1}\leqslant x,...,n_{r}\leqslant x}\left( f \left( n_1 \right) + \dotsb + f \left( n_r \right) \right) \left \lfloor \frac{x}{n_1 \dotsb n_r}\right \rfloor \quad \left( x \geqslant 1 \right).$$ Let $(T_r(x))$ be the sequence recursively defined by $$T_{1}(x)=\sum_{n \leqslant x} \left \lfloor \frac{x}{n} \right \rfloor \left(f \star \mathbf{1}\right) (n) \quad \text{and} \quad T_{r}(x) = \sum_{n \leqslant x} T_{r-1} \left( \frac{x}{n} \right) \quad \left( r \in \mathbb{Z}_{\geqslant 2} \right).$$ Then, for any $r \in \mathbb{Z}_{\geqslant 2}$ and $x \in \mathbb{R}_{\geqslant 1}$ $$S_{r}(x)=r T_{r-1}(x).$$ \end{theorem} \begin{proof} An easy induction shows that, for any $r \in \mathbb{Z}_{\geqslant 2}$ \begin{equation} T_{r-1}(x) = \sum_{n_1 \leqslant x} \sum_{n_2 \leqslant x/n_1} \dotsb \sum_{n_{r-2} \leqslant \frac{x}{n_1 \dotsb n_{r-3}}} T_1 \left( \frac{x}{n_1 \dotsb n_{r-2}} \right). \label{eq:id2} \end{equation} Now \begin{eqnarray*} S_r(x) &=& r \sum_{n_1 \leqslant x} \sum_{n_2 \leqslant x/n_1} \dotsb \sum_{n_{r-2} \leqslant \frac{x}{n_1 \dotsb n_{r-3}}} \sum_{n_{r-1} \leqslant \frac{x}{n_1 \dotsb n_{r-2}}} f \left( n_{r-1} \right) \sum_{n_r \leqslant \frac{x}{n_1 \dotsb n_{r-1}}} \left \lfloor \frac{x}{n_1 \dotsb n_r} \right \rfloor \\ &=& r \sum_{n_1 \leqslant x} \sum_{n_2 \leqslant x/n_1} \dotsb \sum_{n_{r-2} \leqslant \frac{x}{n_1 \dotsb n_{r-3}}} \sum_{n_{r-1} \leqslant \frac{x}{n_1 \dotsb n_{r-2}}} \left \lfloor \frac{x}{n_1 \dotsb n_{r-1}} \right \rfloor \sum_{n_r \mid n_{r-1}} f \left( n_{r} \right) \\ &=& r \sum_{n_1 \leqslant x} \sum_{n_2 \leqslant x/n_1} \dotsb \sum_{n_{r-2} \leqslant \frac{x}{n_1 \dotsb n_{r-3}}} \sum_{n_{r-1} \leqslant \frac{x}{n_1 \dotsb n_{r-2}}} \left \lfloor \frac{x}{n_1 \dotsb n_{r-1}} \right \rfloor \left( f \star \mathbf{1} \right) \left( n_{r-1} \right) \\ &=& r \sum_{n_1 \leqslant x} \sum_{n_2 \leqslant x/n_1} \dotsb \sum_{n_{r-2} \leqslant \frac{x}{n_1 \dotsb n_{r-3}}} T_1 \left( \frac{x}{n_1 \dotsb n_{r-2}} \right) \\ &=& r T_{r-1}(x) \end{eqnarray*} by \eqref{eq:id2}, as required. 
\end{proof} When $f = \mu$, we have $\mu \star \mathbf{1} = \delta$, where $\delta$ is the identity element of the Dirichlet convolution product, i.e. $\delta(n) = 1$ if $n=1$ and $\delta(n) = 0$ otherwise, and hence $T_r (x) = \sum_{n \leqslant x}\tau_r(n)$ by induction. \section{Proof of Theorem~\ref{th:id3}} \noindent For any $s \in \mathbb{R}_{\geqslant 0}$, define $$F(s) := \sum_{d \mid n} \frac{\mu(d)^e f(d)}{d^s} = \prod_{p \mid n} \left( 1 + \frac{(-1)^e f(p)}{p^s} \right).$$ Then $$\sum_{d \mid n} \mu(d)^e f(d) \log d = - F^{\, \prime} (0)$$ with $$- \frac{F^{\, \prime}}{F} (s) = \sum_{p \mid n} \frac{f(p) \log p}{f(p) + (-1)^e p^s}$$ and we complete the proof by noticing that $F(0) = \prod_{p \mid n} \left( 1 + (-1)^e f(p) \right)$. \qed
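The identities proved above are easily tested numerically. The following brute-force Python sketch is purely illustrative (and in no way optimal); it verifies Theorems~\ref{th:id1}, \ref{th:id15} and~\ref{th:id2} for small values of $x$ and $r$:
\begin{verbatim}
from itertools import product
from math import prod

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu(n):      # Moebius function
    f = factor(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def omega(n):   # number of distinct prime factors
    return len(factor(n))

def tau(k, n):  # number of ways to write n as an ordered product of k factors
    if k == 1:
        return 1
    return sum(tau(k - 1, n // d) for d in range(1, n + 1) if n % d == 0)

def check_th_id1(x, r):
    lhs = sum(mu(prod(t)) * (x // prod(t))
              for t in product(range(1, x + 1), repeat=r))
    return lhs == sum((1 - r) ** omega(n) for n in range(1, x + 1))

def check_th_id15(x, r):
    lhs = sum(mu(prod(t)) ** 2 * (x // prod(t))
              for t in product(range(1, x + 1), repeat=r))
    return lhs == sum((1 + r) ** omega(n) for n in range(1, x + 1))

def check_th_id2(x, r):
    lhs = sum(sum(mu(ni) for ni in t) * (x // prod(t))
              for t in product(range(1, x + 1), repeat=r))
    return lhs == r * sum(tau(r - 1, n) for n in range(1, x + 1))

assert all(check_th_id1(x, r) and check_th_id15(x, r) and check_th_id2(x, r)
           for x in (1, 5, 16) for r in (2, 3))
\end{verbatim}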
{ "timestamp": "2018-04-18T02:06:37", "yymm": "1804", "arxiv_id": "1804.05332", "language": "en", "url": "https://arxiv.org/abs/1804.05332" }
\section{Introduction} Topologically ordered phases, which appear e.g. in fractional quantum Hall systems~\cite{PhysRevB.40.7387,PhysRevB.41.9377} and in quantum spin liquids~\cite{Kitaev2003a,PhysRevB.71.045110}, are quantum phases in gapped systems which go beyond the conventional paradigm of symmetry-breaking. Systems in topologically ordered phases have several distinct features: topology-dependent ground state degeneracy, locally indistinguishable ground states which cannot be created by a constant-depth local circuit, and anyonic excitations. These characteristic properties are robust against local perturbations, and such phases are considered a candidate platform for fault-tolerant quantum information processing. In recent decades, studying entanglement in quantum states has proven to be a powerful tool to characterize topologically ordered phases. One distinctive aspect of entanglement in ground states of gapped systems (gapped ground states) is that it satisfies an area law: the entanglement entropy scales only as the perimeter of a region instead of its volume, the latter being the typical behavior of Haar-random states~\cite{Hayden2006}. In particular, the area law of ground states in topologically ordered phases contains a characteristic term called the topological entanglement entropy (TEE)~\cite{PhysRevLett.96.110404, PhysRevLett.96.110405}. TEE only depends on the type of the phase and has been used as a probe of topological order~\cite{Furukawa2007,Haque2007,Isakov2011,Depenbrock2012}. Topological entanglement entropy has been linked to several other aspects of topological order. If TEE is zero, then the state can be created by a constant-depth local circuit, and is thus in a topologically trivial phase~\cite{Kitaevt13,PhysRevB.94.155125,2016arXiv160907877B}. Also, TEE upper bounds the logarithm of the topological degeneracy of the model~\cite{PhysRevLett.111.080503}. Finally, TEE has also been argued to give the logarithm of the total quantum dimension of the anyonic excitations of the system~\cite{PhysRevLett.96.110404, PhysRevLett.96.110405}. The entanglement entropy of a region $R$ is a function of the eigenvalues of the reduced state $\rho_R$ on $R$. It is interesting to explore which information might be encoded in the whole spectrum of $\rho_R$ (i.e. all its eigenvalues). Since $\rho_R$ is positive semi-definite, we can write $\rho_R = e^{-H_R}$ for a Hermitian operator $H_R$. The operator $H_R$ is called the {\it entanglement Hamiltonian} (or modular Hamiltonian) and its eigenvalues are called the {\it entanglement spectrum}. Starting with the work of Li and Haldane \cite{PhysRevLett.101.010504}, the behavior of the entanglement spectrum of two-dimensional systems has been extensively studied. Based on numerical calculations~\cite{PhysRevB.83.245134,PhysRevLett.111.090501}, it was observed that for gapped systems with no topological order one could equate the entanglement spectrum to the spectrum of a one-dimensional quasi-local Hamiltonian acting on the boundary of the region $R$, while for topologically ordered systems a universal non-local interaction emerges. However, so far it has been a challenge to give a more general argument for the locality of the entanglement spectrum, except for some exact renormalization fixed-points in the tensor network formalism~\cite{Cirac17}. A natural question is whether these two aspects of entanglement in topological order are related.
In this paper we explicitly construct a quantitative relation between TEE and the entanglement Hamiltonian by showing that the TEE equals (half) the minimum relative entropy of the reduced state on an annular region (which we call the edge state) to the set of Gibbs states $e^{-H}$ with local Hamiltonian $H$. Using this result, we will give a general argument for the locality of the entanglement spectrum of certain regions and its relation to the TEE. Our approach will be information-theoretic. In particular we will derive our results from the strong subadditivity property of the von Neumann entropy and a recent strengthening thereof~\cite{Fawzi2015}. Furthermore, our result provides an information-theoretic interpretation of TEE as the number of bits of information needed to describe the non-local properties of the edge state of the system. \vspace{0.1 cm} \section{Assumption: uniform area law} In this work, we consider quantum systems on two-dimensional spin lattices with local dimension $d$. $|R|$ denotes the number of sites in region $R$ of a lattice, and $|\partial R|$ denotes its perimeter length. We will be concerned with pure states $\rho=|\psi\>\<\psi|$ on the lattice satisfying an area law: for every simply connected contractible region $R$, the von Neumann entropy $S(R)_\rho = - \tr(\rho_R \log \rho_R)$ (with $\rho_R$ the reduced density matrix of the state in region $R$) obeys \begin{equation}\label{arealaw} S(R)_\rho = \alpha |\partial R | - \gamma + c + \varepsilon, \end{equation} for constants $\alpha,c, \gamma,\varepsilon \geq 0$ ($\gamma$ is replaced by $n_R\gamma$ when $R$ has $n_R$ distinct boundaries). The constant term $\gamma$ is the topological entanglement entropy (TEE)~\cite{PhysRevLett.96.110404, PhysRevLett.96.110405}. TEE is related to the theory of anyon models via \begin{eqnarray} \gamma=\log\sqrt{\sum_ad_a^2}\,, \end{eqnarray} where $d_a\geq1$ is the quantum dimension associated to the anyonic charge $a$. In topologically trivial systems, there is only the vacuum charge ``$1$'' with $d_1=1$, and thus $\gamma=0$. The term $c$ gives the contribution from the corners of the region to the entanglement entropy and has the form: \begin{equation} c=\beta\sum_i\nu(\theta_i)\,, \end{equation} for a constant $\beta$ and a function $\nu$. The sum is over all corners of the region, each with angle $\theta_i$. The last term $\varepsilon$ stands for sub-leading terms in ${ o}(1)$ which go to zero when the minimum length of the region grows. In particular, throughout this work we require that the area law is \textit{uniform}, in the sense that the parameter $\alpha$ is independent of the choice of the region $R$. We further require $\varepsilon = \exp(- l / \xi)$, with $l$ the minimum length of the region and $\xi$ a constant (which can however be much larger than the correlation length of the system), which we expect to hold for generic gapped ground states; see Appendix~\ref{ap4}. Note that our result still holds if $\varepsilon$ decays polynomially but sufficiently fast. \vspace{0.1 cm} \section{Definition of edge states and the main formula} Consider a region $R$ with a boundary region $X$ as in Fig.~\ref{ring}. $X$ is composed of $m$ regions $X_i$, each with length scale $l$. We can regard $X=X_1X_2...X_m$ as a one-dimensional spin system in which each $X_i$ has local dimension $d^{|X_i|}$. We say that $\rho_X$, the reduced density matrix of $|\psi\>$ on the boundary $X$, is the {\it edge state} of the region $R$.
We could take $R$ as large as the whole lattice, in which case $X$ would indeed be the physical edge of the system. However, our result also holds when $R$ is a subregion of the entire lattice (in this case $X$ corresponds to the entanglement cut between $R$ and $R'$). \begin{figure}[htbp] \begin{center} \hspace{0mm} \includegraphics[width=6.0cm]{Figure1.pdf} \vspace{-5mm} \end{center} \caption{Region $R$, its boundary region $X$ and the complement $R'$. The size of each region $X_i$ is specified by $l$. } \label{ring} \end{figure} An important quantity in our approach is the conditional mutual information, defined for tripartite states $\rho_{ABC}$ as \begin{align} I(A:C|B)_{\rho}:=S(AB)_{\rho} + S(BC)_{\rho} - S(ABC)_{\rho} - S(B)_{\rho}. \nonumber \end{align} It is a measure of the correlations between $A$ and $C$ conditioned on the information in $B$. The strong subadditivity inequality of von Neumann entropy~\cite{SSA73} reads $I(A:C|B)_{\rho} \geq 0$. As observed in \cite{PhysRevLett.96.110405}, the uniform area law~\eqref{arealaw} implies that for every (connected) triple $ABC$ with $A$ and $C$ disconnected, the conditional mutual information has a dichotomy of values: $I(A:C|B) \approx 0$ if $ABC$ is topologically trivial, while $I(A:C|B) \approx 2 \gamma$ if it is a topologically non-trivial annulus. The main formula of this paper is a new characterization of TEE in terms of the relative entropy distance between the edge state and the set of thermal states of local models. Define the set of Gibbs states of short-range Hamiltonians with interaction strength $K$ as \begin{equation} E^K_{nn}:=\left\{e^{-H}\left|\; H=\sum_ih_{X_iX_{i+1}},\; \|h_{X_iX_{i+1}}\|\leq K\right.\right\}\,. \end{equation} Note that here we include the normalization factor in the Hamiltonian so that $\tr(e^{-H})=1$. Then, we can show that \begin{equation} \label{TEErelent} \gamma \approx \frac{1}{2} \min_{e^{-H} \in E^K_{nn}} S \left( \rho_X \left\Vert e^{-H} \right. \right)\, \end{equation} for $K= \Theta(N)$, where $\approx$ means the equality holds up to ${\cal O}(e^{- \Theta(l)})$ if we choose $l=\log|X|$~\footnote{$x$ is in $ \Theta(l)$ if there exists $l_0>0$ such that there exist two constants $c,C>0$ satisfying $cl\leq x\leq Cl$ for any $l\geq l_0$.}. For $\varepsilon=0$, in which case we have exact equality, the formula was proven before by one of us in Ref.~\cite{PhysRevA.93.022317}. Each term of $H$ in $E_{nn}^K$ acts on at most $O(\log(|X|))$ sites and thus $H$ is a (quasi-)local Hamiltonian. Note that numerical results of Refs. \cite{PhysRevB.83.245134,PhysRevLett.111.090501} suggest that one might be able to improve Eq.~\eqref{TEErelent} to Hamiltonians with exponentially-decaying interactions whose locality is independent of the system size. Which entanglement Hamiltonian achieves the minimum in Eq.~\eqref{TEErelent}? Although we do not know the answer, \begin{equation} H_X:=-\sum_i\left(\ln\rho_{X_iX_{i+1}}-\ln\rho_{X_i}\right) \end{equation} could be a natural guess. Actually, one can show that the unnormalized Gibbs state $e^{-H_X}$ has distance close to $2\gamma$ (see Appendix~\ref{ap1} and Ref.~\cite{PhysRevB.87.155120}). Notably, this (possibly unbounded) local Hamiltonian can be calculated from the local reduced states alone. Equation (\ref{TEErelent}) also provides an information-theoretic interpretation for TEE. Let us recall a result of Ref.~\cite{anshu2017quantum}. Consider two parties, Alice and Bob. Alice (Bob) has a classical description of the density matrix $\rho$ ($\sigma$).
They also share unlimited entanglement. Then Alice can send $S(\rho \Vert \sigma)/2$ qubits to Bob such that, after a decoding operation by Bob, he holds a quantum state which is close to $\rho$ (the error goes to zero in the asymptotic regime, where one considers the protocol applied to $\rho^{\otimes n}$ and $\sigma^{\otimes n}$ for very large $n$). Moreover, there is no protocol with a lower rate \cite{anshu2017quantum}. Therefore the relative entropy $S(\rho \Vert \sigma)$ has the interpretation of the number of qubits which are contained in $\rho$ in addition to the information contained in $\sigma$. Applied to our setting, Eq. (\ref{TEErelent}) can then be interpreted as saying that TEE gives the number of qubits which are contained in the edge state in addition to any local model; it counts the number of topological qubits of the model. \vspace{0.1 cm} \section{Entanglement spectrum on a cylinder} For a pure bipartite state $|\psi\>_{AB}$, consider the Schmidt decomposition: \begin{equation}\label{Schdeco} |\psi\>_{AB}=\sum_i\sqrt{\lambda_i}|i\>_A|i\>_B\,, \end{equation} where $\{|i\>_A\}$ and $\{|i\>_B\}$ are orthonormal vectors of systems $A$ and $B$. The coefficients $\lambda_i$ satisfying $\lambda_i>0$ and $\sum_i\lambda_i=1$ are called the Schmidt coefficients. The entanglement spectrum of $\rho_R$ is defined by $\{-\log\lambda_i\}_i$. Note that Eq.~\eqref{Schdeco} shows that the entanglement spectrum on a subsystem $R$ always matches the spectrum on the complement. Let us now turn to the application of Eq.~\eqref{TEErelent} to analyze the structure of the entanglement spectrum of the system. For concreteness, we consider the entanglement spectrum of a system defined on a cylinder. Consider a ground state of a system as depicted in Fig.~\ref{cyl}. Then the spectrum (of the reduced state) on region $YY'$ is the same as the spectrum on region $X$. Let us assume that the system has reflection symmetry, so that $\rho_Y=\rho_{Y'}$. For a ground state in a topologically trivial phase satisfying Eq.~\eqref{arealaw}, we have $I(Y:Y')\approx0$, which implies $\rho_{YY'}\approx\rho_Y^{\otimes 2}$ (this follows from the fact that the ground state is approximately generated by a constant-depth circuit). Indeed, Pinsker's inequality reads \begin{equation} I(Y : Y') \geq \frac{1}{2} \Vert \rho_{YY'} - \rho_Y \otimes \rho_{Y'} \Vert_1^2, \end{equation} with $\Vert \rho_{YY'}- \rho_Y \otimes \rho_{Y'} \Vert_1$ the trace-norm distance between $\rho_{YY'}$ and the product of its reductions $\rho_Y \otimes \rho_{Y'}$. We denote the entanglement Hamiltonian of $\rho_Y^{\otimes 2}$, which we call the double of $H_{\rho_Y}$, by $H^{(2)}_{\rho_Y}=H_{\rho_Y}\otimes I+I\otimes H_{\rho_Y}$ (where $I$ is the identity operator). We also introduce a cut-off $\Lambda$ on the spectrum of operators by \begin{equation} \lambda^\Lambda(A):=\left\{\lambda\in\lambda(A)\left|\lambda\leq\log\Lambda\right.\right\}\,. \end{equation} Then, the result on the locality of edge states~\eqref{TEErelent} implies that when $\gamma=0$, there exists a 1D nearest-neighbor Hamiltonian $H_X=\sum_ih_{X_iX_{i+1}}$ on $X=X_1...X_m$, such that for any $\Lambda>0$, \begin{equation}\label{thm1} \left\|\lambda^\Lambda\left(H_{\rho_Y}^{(2)}\right)-\lambda^\Lambda(H_X)\right\|_1\leq \Lambda e^{- \Theta(l)}\, \end{equation} (the proof is given in Appendix~\ref{ap2}). The upper bound decays exponentially in $l$ if we choose $\Lambda=\poly(l)$.
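For small systems, the information quantities used above (von Neumann entropy, conditional mutual information and relative entropy) can be computed by exact diagonalization. The following numpy sketch is purely illustrative and is not part of our argument; in particular, the three-qubit GHZ example is only a toy illustration of a nonzero value of $I(A:C|B)$, not a lattice computation of TEE:
\begin{verbatim}
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def partial_trace(rho, dims, keep):
    """Reduced state on the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(tuple(dims) * 2)
    for k in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=k, axis2=k + n)
        n -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def cmi(rho_abc, dims):
    """I(A:C|B) = S(AB) + S(BC) - S(ABC) - S(B)."""
    return (entropy(partial_trace(rho_abc, dims, [0, 1]))
            + entropy(partial_trace(rho_abc, dims, [1, 2]))
            - entropy(rho_abc)
            - entropy(partial_trace(rho_abc, dims, [1])))

def relative_entropy(rho, sigma):
    """S(rho||sigma); assumes supp(rho) inside supp(sigma)."""
    er, ur = np.linalg.eigh(rho)
    es, us = np.linalg.eigh(sigma)
    log_rho = ur @ np.diag(np.log(np.clip(er, 1e-12, None))) @ ur.conj().T
    log_sig = us @ np.diag(np.log(np.clip(es, 1e-12, None))) @ us.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sig))))

# Toy check: the three-qubit GHZ state has I(A:C|B) = log 2,
# while any product state gives I(A:C|B) = 0.
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(cmi(np.outer(ghz, ghz), [2, 2, 2]))   # ~0.693 = log 2
\end{verbatim}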
Note that there exist gapped models with a unique ground state for which $I(A:C|B)>2\gamma=0$ for a certain choice of the region $X=ABC$~\cite{BravyiCEX,PhysRevB.94.075151}. $H_X$ turns out to be non-local in these exotic examples, which do not satisfy our assumption. However, we can recover $I(A:C|B)\approx0$ by slightly changing the shape of $X$ for these counterexamples. Eq.~\eqref{TEErelent} also implies that there exists an isometry $V$ from $Y^{\ot2}$ to $X$ such that \begin{equation} V\rho_{Y}^{\otimes 2}V^\dagger=e^{-H_{\rho_X}}\approx e^{-\sum_ih_{X_iX_{i+1}}}\, \end{equation} (here $\approx$ means both sides are exponentially close with respect to $l$ in the relative entropy/the trace distance). When $\rho_Y$ has a symmetry under some unitary $U$, $U\rho_YU^\dagger=\rho_Y$, the edge state has a corresponding symmetry \begin{equation} U'\left(e^{-H_{\rho_X}}\right)U'^\dagger=e^{-U'H_{\rho_X}U'^\dagger}=e^{-H_{\rho_X}} \end{equation} for any $U'$ such that $U'V=VU$. \begin{figure}[htbp] \begin{center} \hspace{-3mm} \includegraphics[width=6.0cm]{Figure2.pdf} \vspace{-5mm} \end{center} \caption{We consider a system on a 2D cylinder. We divide it into three regions $Y$, $X$ and $Y'$ so that $X$ can be viewed as a 1D \lq\lq{}boundary\rq\rq{} of $Y$ as in Fig.~\ref{ring}. } \label{cyl} \vspace{-2.5mm} \end{figure} In topologically ordered phases, one can naturally expect that the entanglement Hamiltonian $H_{\rho_X}$ should be non-local due to a non-zero TEE. However, we have to be careful since it is known that the sub-leading term in Eq.~\eqref{arealaw} for a non-contractible region (like $X$) not only depends on the type of the phase, but also depends on the choice of the ground state~\cite{PhysRevB.85.235151,PhysRevB.94.075126}. For this reason, $I(Y:Y')\approx0$ does not hold for general ground states, and thus the previous argument should be suitably modified. Let us assume that there always exists a special orthonormal basis of the ground subspace for a gapped system such that $I(Y:Y')\approx0$ holds for each basis element. This assumption is reasonable if the ground subspace is spanned by minimally-entangled states~\cite{PhysRevB.85.235151} $\{|\psi_a\>\}_a$, which have a definite anyonic flux threading through the cylinder, labeled by a finite set ${\cal L}=\{a\}$. For such states, we expect the modified area law \begin{equation}\label{modarealaw} S(R)_\rho = \alpha |\partial R | - 2\gamma+\log d_a + c + \varepsilon\,, \end{equation} where $d_a$ is the quantum dimension of the anyon flux $a$, to hold for any non-contractible subregion $R$ on the cylinder, such as $X$ in Fig.~\ref{cyl}. Then, there exists a 1D Hamiltonian $H^a_X$ on $X=X_1...X_m$ for each $a\in{\cal L}$, such that for any $\Lambda>0$, \begin{equation}\label{eq:thm2} \left\|\lambda^\Lambda\left(H_{\rho^a_Y}^{(2)}\right)-\lambda^\Lambda(H^a_X)\right\|_1\leq \Lambda e^{- \Theta(l)}\,, \end{equation} with $\rho_Y^a=\tr_{XY'}|\psi_a\>\<\psi_a|$. Here we again assume the reflection symmetry. Importantly, here $H^a_X$ contains non-local interactions in contrast to the case of $\gamma=0$. A general ground state $|\psi\>=\sum_{a\in{\cal L}}\sqrt{p_a}|\psi_a\>$ is a superposition of states with different fluxes. Each anyonic flux $a$ can be measured by a projective measurement acting on $YY'$, and therefore the reduced states on $YY'$ with different fixed anyonic flux are orthogonal. Hence, we have a direct sum decomposition of the reduced state: \begin{equation} \rho_{YY'}=\bigoplus_{a\in{\cal L}}p_a\rho^a_{YY'}\,.
\end{equation} Using the reflection symmetry and $I(Y:Y')\approx0$ for each $a$, we have \begin{equation}\label{directsumrdm} \rho_{YY'}\approx \bigoplus_{a\in{\cal L}}p_a\rho^{a\otimes2}_{Y}\,. \end{equation} As in the case of the trivial phase, there exists an isometry $V$ from $YY'$ to $X$ such that \begin{equation} V\rho_{YY'}V^\dagger\approx\sum_ap_ae^{-\sum_ih^a_{X_iX_{i+1}}-h_X^a}\,, \end{equation} where $h_X^a$ acts on $X$ non-locally. We expect that each $h^a_X$ represents a topological constraint and is dominated by $m$-body interactions (as we discuss in Appendix~\ref{ap1}). Indeed, this has been observed before for some exactly solvable models~\cite{PhysRevB.83.245134,PhysRevLett.111.090501,PhysRevA.93.022317}. We have shown that the double of the entanglement spectrum is approximately equivalent to the spectrum of the 1D edge state, whose entanglement Hamiltonian is local if the TEE is zero. We now want to argue that under a few more assumptions, the same property also holds for the single entanglement spectrum. Let us first consider a ground state on a cylinder with a boundary (or boundaries) as in the upper part of Fig.~\ref{cyl2}. Here we choose $X$ as a region around the physical boundary. The entanglement spectrum of $Y$ is equivalent to that of $X$ since the state on $XY$ is pure. The edge state on $X$ depends on how we choose interaction terms around the boundary, but we can still apply Eq.~\eqref{TEErelent} if the edge state satisfies the area law of Eq.~\eqref{arealaw}. For instance, the toric code with a smooth boundary~\cite{Kitaev2012} satisfies the assumption. For more general situations, let us turn back to a ground state $|\psi\>$ of a system defined as in Fig.~\ref{cyl}. Recall that $\rho_{YY'}\approx\rho_Y\otimes\rho_{Y'}$ if $|\psi\>$ satisfies $I(Y:Y')\approx0$. Consider a purification $|\psi^L\>_{YX_1}\otimes|\psi^R\>_{Y'X_2}$ of $\rho_Y\otimes\rho_{Y'}$ on some ancillary system $X_1X_2$ satisfying $\psi^L_Y=\rho_Y$ and $\psi^R_{Y'}=\rho_{Y'}$. By Uhlmann's theorem~\cite{UHLMANN1976273}, there exists a unitary $U_X$ from $X$ to systems $X_1$ and $X_2$ such that \begin{equation}\label{Uhlmann} U_X|\psi\>_{YXY'}\approx|\psi^L\>_{YX_1}\otimes|\psi^R\>_{X_2Y'}\,. \end{equation} We can choose $|X_1|\sim{\cal O}(|\partial Y|)$ and interpret $YX_1$ as a new cylinder if $\rho_Y$ can be well-approximated by a low-rank state ${\tilde\rho}_Y$ with $\rank({\tilde \rho}_Y)=2^{{\cal O}(|\partial Y|)}$. This approximation has been shown to be possible for any ground state satisfying the area law~\cite{WE18} (not necessarily uniform), while the error term only decreases as ${\cal O}(1/l)$. Furthermore, the validity of the approximation is invariant under any constant-depth local circuit, since such a circuit can only increase the rank of the reduced state on $Y$ by a factor that is constant in the axial length. Therefore, all ground states in the topologically trivial phase satisfy the condition, since they can be created from a product state by such circuits. Another example is a family of gapped ground states which can be described by Matrix Product States (MPS)~\cite{MPSo08} defined in the axial direction. Suppose an (unnormalized) state $|\psi_{N}\>$ is defined on a cylinder with the axial length $N$ and the radius $r$. We obtain a 1D system by cutting the cylinder into several slices and then regarding one slice as one large subsystem.
Suppose that $|\psi_{N}\>$ can be written as \begin{align} |\psi_{N}\>=\sum_{i_1,...,i_N}\left(L\right|A^{i_1}\ldots A^{i_N}\left|R\right)|i_1i_2\ldots i_N\>\,, \end{align} where the indices $\{i_j\}$ are associated with the $j$th slice (column) of the cylinder, and $\{A^i\}_i$ are $D\times D$ matrices with a bond dimension $D\sim 2^r$. $|L)$ and $|R)$ are $D$-dimensional vectors representing the boundary condition (we used ``)'' to distinguish them from vectors in physical systems). Choose the first $m$ slices as subsystem $Y$. Then, one can show that in the generic case the reduced density matrix on $Y$ is almost independent of $N$ for sufficiently large $N$ (more details are in Appendix~\ref{ap3}). Therefore, the spectrum on $Y$ is approximately equivalent to the spectrum of the edge state defined for some fixed cylinder (Fig.~\ref{cyl2}). \begin{figure}[htbp] \begin{center} \vspace{-3mm} \includegraphics[width=6.0cm]{Figure3.pdf} \vspace{-5mm} \end{center} \caption{(Top) We choose $X$ as the region around the physical boundary (the right edge). The entanglement spectrum on $Y$ is the same as that of $X$. (Bottom) In some cases, the reduced state on $Y$ is almost independent of the length of the opposite side. Then the entanglement spectrum of $Y$ is equivalent to the spectrum of $X$ which is an edge of another cylinder with shorter length. } \label{cyl2} \end{figure} \section{Discussion} In this work we have given a new formula for TEE, connecting it to the locality of edge states. In particular, we showed that if TEE is zero, the entanglement Hamiltonian of the 1D edge state is approximately a short-range Hamiltonian, while it is a non-local Hamiltonian if the ground state has non-zero TEE. We then applied this result to the entanglement spectrum defined on half of a cylinder, and derived that the double of the spectrum matches the spectrum of a 1D Hamiltonian (which is local if TEE is zero). We have also shown that the same results hold for the single entanglement spectrum under additional but physically reasonable assumptions. Our techniques rely only on properties of ground states and are independent of the specifics of particular models. A similar connection has been observed before in the PEPS formalism, where the edge state is defined for an effective boundary on virtual degrees of freedom. In our case, the edge state is defined via the reduced state on the boundary, and therefore it acts on physical degrees of freedom. Building an explicit connection between our framework and the PEPS formalism is an interesting open question. Another interesting direction for future research is to weaken our assumptions and extend our results to more general gapped systems. In particular, it is unclear whether we can always find a suitable isometry in Eq.~\eqref{Uhlmann} such that the edge state on the new physical boundary satisfies the area law assumption (presently we can only show it for a few explicit examples, e.g., the toric code). \vspace{0.2 cm} {\bf Acknowledgements.} We thank Burak Sahinoglu for helpful discussions. We acknowledge support from the NSF. Part of this work was done when both of us were working in the QuArC group of Microsoft Research. KK thanks Advanced Leading Graduate Course for Photon Science (ALPS) and JSPS KAKENHI Grant Number JP16J05374 for financial support.
{ "timestamp": "2019-04-26T02:03:52", "yymm": "1804", "arxiv_id": "1804.05457", "language": "en", "url": "https://arxiv.org/abs/1804.05457" }
\section{Introduction} Identifying the covariance of a centred random vector using random data is of central importance in high-dimensional statistics and has been studied extensively in recent years. The hope is that by using a relatively small sample $X_1,...,X_N$ of independent random vectors distributed as $X$, one can construct a good enough approximation of the covariance of $X$, and that such an approximation would be possible under minimal assumptions. The question is finding a `right way' of generating an approximation and then estimating the resulting tradeoff between the given sample size $N$, the degree of approximation and the probability with which that degree of approximation can be guaranteed. \vskip0.4cm The random vector $X$ induces an $L_2$ norm on $\R^d$ by setting, for $v \in \R^d$, $$ \|v\|_{L_2} \equiv \|\inr{X,v}\|_{L_2} = \left(\E (\inr{X,v})^2\right)^{1/2}, $$ and the unit ball of that norm is $$ {\cal B} = \{v \in \R^d : \|v\|_{L_2} \leq 1\}=\{v \in \R^d : \inr{Tv,v}^{1/2} \leq 1\}, $$ where $T = \E (X \otimes X)$ is the covariance matrix of $X$. Throughout this note we assume without loss of generality that $T$ is invertible. Given $X_1,...,X_N$ that are independent and distributed as $X$, a natural option is to consider the empirical covariance matrix $\hat{T}=\frac{1}{N} \sum_{i=1}^N X_i \otimes X_i$ and approximate ${\cal B}$ by the random ellipsoid $$ \hat{\cal B} = \left\{ v \in \R^d : \inr{\hat{T}v,v}^{1/2} \leq 1 \right\}. $$ Note that even if one selects $\hat{\cal B}$ as the approximating set, there are various notions of approximation that one may consider. For example, if one ensures that the operator norm satisfies $\|\hat{T}-T\|_{2 \to 2} \leq \eta$, then it follows that $$ {\cal B} \subset \hat{\cal B} + \eta B_2^d \ \ \ \ {\rm and} \ \ \ \ \hat{\cal B} \subset {\cal B} + \eta B_2^d, $$ where $B_2^d$ is the Euclidean unit ball and $A+B$ is the Minkowski sum $\{a+b : a \in A, \ b \in B\}$. A different notion of approximation, which is the one that we focus on here, is equivalence between sets: \begin{Definition} \label{def:approx} The set ${\cal K} \subset \R^d$ is an $\eta$-approximation of ${\cal B}$ if \begin{equation} \label{eq:geo-approx} (1-\eta){\cal K} \subset {\cal B} \subset (1+\eta){\cal K}. \end{equation} \end{Definition} For the choice of ${\cal K}= \hat{\cal B}$ an equivalent formulation of $\eta$-approximation is that \begin{equation} \label{eq:quadratic-1} \sup_{v \in {\cal B}} \left|\frac{1}{N}\sum_{i=1}^N \inr{X_i,v}^2 - \E \inr{X,v}^2 \right| \leq \eta. \end{equation} Observe that if $T=\E (X \otimes X)$ then ${\cal B} = T^{-1/2}B_2^d$; hence, the random vector $Y=T^{-1/2}X$ is \emph{isotropic}: $\E (Y \otimes Y) = Id$, i.e., for every $v \in \R^d$, $\|\inr{Y,v}\|_{L_2} = \|v\|_2$. Moreover, denoting the Euclidean unit sphere by $S^{d-1}$, \eqref{eq:quadratic-1} becomes \begin{equation} \label{eq:quadratic-2} \sup_{v \in S^{d-1}} \left|\frac{1}{N}\sum_{i=1}^N \inr{Y_i,v}^2 - 1 \right| \leq \eta. \end{equation} The behaviour of \eqref{eq:quadratic-2}, the quadratic empirical process indexed by the unit sphere, is well understood (see e.g. \cite{MR2601042,MR3127875, MR3191978}). It characterizes the extremal singular values of the random matrix $N^{-1/2}\sum_{i=1}^N \inr{Y_i,\cdot}e_i$, and is determined by two factors: the growth of moments of linear functionals $\inr{Y,v}$, and tail estimates on the Euclidean norm $\|Y\|_2$.
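As a quick numerical aside (ours, not part of the mathematical argument): since the quadratic form in \eqref{eq:quadratic-2} is symmetric, its supremum over $S^{d-1}$ is exactly the operator norm $\|Id-\hat{T}\|_{2 \to 2}$, which is easy to simulate. In the Python sketch below the heavy-tailed sample, with coordinates distributed as a rescaled $t_3$, is just one convenient illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, N = 50, 2000

def sup_quadratic_deviation(Y):
    """sup_{v in S^{d-1}} |N^{-1} sum <Y_i,v>^2 - 1| = ||Id - hat T||."""
    T_hat = Y.T @ Y / len(Y)                     # empirical covariance
    return np.linalg.norm(np.eye(d) - T_hat, ord=2)

Y_gauss = rng.standard_normal((N, d))
Y_heavy = rng.standard_t(df=3, size=(N, d)) / np.sqrt(3.0)  # t_3 has variance 3

print(sup_quadratic_deviation(Y_gauss))   # small: of order sqrt(d/N)
print(sup_quadratic_deviation(Y_heavy))   # typically noticeably larger
\end{verbatim}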
The best known estimate on \eqref{eq:quadratic-2} in a heavy-tailed situation is due to Tikhomirov \cite{Tikh}: \begin{Theorem} \label{thm:quad-est} Let $Y$ be a centred, isotropic random vector in $\R^d$ and for $p>2$ set $L = \sup_{v \in S^{d-1}} \|\inr{Y,v}\|_{L_p}$. Let $Y_1,...,Y_N$ be independent, distributed according to $Y$. If $\hat{T}=N^{-1}\sum_{i=1}^N Y_i \otimes Y_i$ then with probability at least $1-1/d$, $$ C^{-1}\|Id-\hat{T}\|_{2 \to 2} \leq \frac{1}{N} \max_{1 \leq i \leq N} \|Y_i\|_2^2+\left(\frac{d}{N}\right)^{1-2/p} \log^4\left(\frac{eN}{d}\right)+\left(\frac{d}{N}\right)^{1-2/\min\{4,p\}}, $$ for a constant $C$ that depends only on $L$ and $p$. \end{Theorem} If one believes that Theorem \ref{thm:quad-est} is reasonably sharp, it casts a shadow on the choice of $\hat{\cal B}$ as an $\eta$-approximation of ${\cal B}$ in the sense of Definition \ref{def:approx}. Indeed, when $X$ is heavy-tailed it is likely that some of the vectors $Y_i=T^{-1/2}X_i$ will have large Euclidean norms. In Section \ref{sec:example} we give a concrete example of an isotropic random vector that satisfies an $L_4-L_2$ norm equivalence but for which, with a non-negligible probability, $\hat{\cal B}$ is still very different from ${\cal B}$. Of course, while $\hat{\cal B}$ is the natural choice for a data-dependent approximation of ${\cal B}$, it is certainly not the only choice. For one, there is no reason to restrict the approximating set to an ellipsoid, though it is not clear offhand how one may generate other approximating sets given the limited data at one's disposal. The method we present does just that. Its starting point is identifying a random property that is satisfied only by points in a set that is `close enough' to ${\cal B}$. To give an example of what we mean by a random property, assume that $X$ is the standard gaussian vector in $\R^d$. Then ${\cal B}=B_2^d$, and for each $v \in \R^d$, $\inr{X,v}$ is a centred gaussian random variable whose variance is $\|v\|_2^2$. Thus, using the values $\inr{X_1,v},...,\inr{X_N,v}$ one may identify $\|v\|_2$ rather accurately and in particular pin-point the Euclidean sphere on which $v$ is located. The difficulty lies in the fact that the accurate estimate has to hold uniformly for every $v \in \R^d$, and how that can be achieved is not obvious. Our method leads to such uniform estimates, and as examples we obtain approximation results using two different types of sets. \vskip0.4cm The first example we consider has to do with approximations generated by slabs. For $z \in \R^d$ and $\alpha>0$ set $H_{z,\alpha} = \{v \in \R^d : |\inr{z,v}| \leq \alpha\}$. Given $z_1,...,z_n \in \R^d$ and $\alpha_1,...,\alpha_n>0$, define $$ {\cal K} = \{v \in \R^d : v \in H_{z_j,\alpha_j} \ {\rm for \ at \ least \ } \beta n \ {\rm indices} \ j \}. $$ In other words, ${\cal K}$ is the union of all intersections of $\beta n$ slabs out of $(H_{z_i,\alpha_i})_{i=1}^n$. Note that ${\cal K}$ need not be a convex set though it is star-shaped around $0$: if $v \in {\cal K}$ then for any $0 \leq \theta \leq 1$, $\theta v \in {\cal K}$. This type of approximation has been studied in \cite{MR1761898}, where the authors attempted to approximate the characteristic function of the Euclidean unit ball in $\R^d$ by the characteristic function of a simple set.
It is well known that approximating the Euclidean unit ball by a polytope requires the polytope to have at least $\exp(cd)$ faces (see, e.g., \cite{MR608101,MR670396} for accurate statements), and the alternative studied in \cite{MR1761898} was to approximate $\IND_{B_2^d}$ by the output of a neural network with two hidden layers; that is, by a characteristic function of a set of the form \begin{equation} \label{eq:K1} \left\{ v \in \R^d : \sum_{i=1}^n \gamma_i \IND_{\{\inr{z_i,v} \geq \alpha_i\}} \geq k \right\}. \end{equation} It was shown in \cite{MR1761898} that one may construct such a set ${\cal K}_1$ using $n=cd^2/\eta^2$ points $z_i$, and for the right choice of $\alpha_i$ and $\gamma_i$ one has $$ (1-\eta)B_2^d \subset {\cal K}_1 \subset (1+\eta)B_2^d. $$ Unfortunately, although it is possible to derive a similar approximation for a general ellipsoid, that construction requires information on the ellipsoid's principal axes, making it unhelpful for covariance approximation. In \cite{MR2204286} the authors considered similar approximating sets (which they called `zig-zag bodies'), but their approach for choosing the points $z_i$ and thresholds $\alpha_i$ was more promising from our perspective; moreover, it also led to a better estimate on the required number of slabs. \begin{Theorem} \label{thm:zig-zag} \cite{MR2204286} There exist absolute constants $c_1$ and $c_2$ for which the following holds. Let $Z$ be distributed according to the uniform measure on $S^{d-1}$ and let $Z_1,...,Z_N$ be independent, distributed as $Z$. Set \begin{equation} \label{eq:zig-zag-body} {\cal K}_2 = \left\{v \in \R^d : |\inr{v,Z_i}| \leq \alpha_d \ \ {\rm for \ at \ least \ } N/2 \ {\rm indices}\right\}, \end{equation} where $\alpha_d$ is the median of $|\inr{Z,v}|$ for $v \in S^{d-1}$. If $0<\eta<1$ and $N = c_1 d\eta^{-2}\log(2/\eta)$ then with probability at least $1-2\exp(-c_2d)$, $$ (1-\eta)B_2^d \subset {\cal K}_2 \subset (1+\eta)B_2^d. $$ \end{Theorem} In other words, the Euclidean ball (which, up to a normalization factor of $c_d \sqrt{d}$ with $\lim_{d \to \infty} c_d=1$, is the covariance unit ball endowed by $Z$) can be approximated by the union of intersections generated by $c(\eta)d$ slabs, and this approximation holds with very high (exponential) probability. \begin{Remark} Note that ${\cal K}_2$ belongs to the family of sets \eqref{eq:K1}. Indeed, this is evident because $$ {\cal K}_2 = \left\{v \in \R^d : \sum_{i=1}^N \IND_{\{|\inr{v,Z_i}| \leq \alpha_d\}} \geq \frac{N}{2} \right\}, $$ and for $\alpha>0$, $\IND_{\{|\inr{v,z}| \leq \alpha\}} = \IND_{\{\inr{v,z} \geq - \alpha\}}-\IND_{\{\inr{v,z} \geq \alpha\}}$. \end{Remark} \vskip0.4cm The proof of Theorem \ref{thm:zig-zag} relies heavily on the fact that $Z_1,...,Z_N$ are distributed according to the uniform measure on the sphere. However, it still opens the door to a possible way of addressing the problem at hand: one may try to select ${\cal K}$ randomly, in a similar way to \eqref{eq:zig-zag-body}. We will show that indeed Theorem \ref{thm:zig-zag} can be extended---with some necessary modifications---to an almost arbitrary centred random vector. The proof is based on a random property that allows one to check accurately whether $v \in \R^d$ actually belongs to ${\cal B}$ or not. As we explain in what follows, that property is reflected by the `frequency' with which the $X_i$'s belong to an appropriate slab defined by $v$ (see Section \ref{sec:small-ball} for details).
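Theorem \ref{thm:zig-zag} is also easy to simulate. In the sketch below, the sample size and the estimation of $\alpha_d$ by an empirical median are illustrative shortcuts rather than part of the theorem; the membership test for the zig-zag body \eqref{eq:zig-zag-body} is simply a count of slabs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, eta = 30, 0.1
N = int(5 * d / eta**2 * np.log(2 / eta))  # stand-in for c_1 d eta^{-2} log(2/eta)

# Z_1,...,Z_N uniform on S^{d-1}: normalized gaussian vectors.
Z = rng.standard_normal((N, d))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

# By rotation invariance the median of |<Z,v>| is the same for every v in
# S^{d-1}; estimate it from the first coordinate.
alpha_d = np.median(np.abs(Z[:, 0]))

def in_K2(v):
    # Membership in K_2: |<Z_i,v>| <= alpha_d for at least N/2 indices.
    return np.sum(np.abs(Z @ v) <= alpha_d) >= N / 2

v = rng.standard_normal(d)
v /= np.linalg.norm(v)
print(in_K2((1 - eta) * v), in_K2((1 + eta) * v))  # typically: True False
\end{verbatim}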
\vskip0.4cm To formulate our main results we need to introduce some additional notation. Throughout, absolute constants are denoted by $c,c_0,c_1,...$; their values may change from line to line. $a \lesssim b$ means that there is an absolute constant $c$ such that $a \leq cb$, and $a \sim b$ implies that $ca \leq b \leq Ca$ for absolute constants $c$ and $C$. Finally, $a \sim_L b$ denotes that $ca \leq b \leq Ca$ for constants $c$ and $C$ that depend only on $L$. Given integers $m$ and $n$ set $N=nm$. Let $\{X_{i,j} : 1 \leq i \leq m, 1 \leq j \leq n\}$ be $N$ independent copies of $X$ and for $1 \leq j \leq n$ put $$ Z_j = \frac{1}{\sqrt{m}} \sum_{i=1}^m X_{i,j}. $$ Also, denote by $g$ the standard gaussian random variable and set $\alpha$ to be the median of $|g|$. For $\eta>0$ define the random set $$ {\cal K}_\eta = \left\{v \in \R^d : |\inr{Z_j,v}| \leq \alpha + \eta \ {\rm for \ at \ least \ } \left(\frac{1}{2}-\eta\right)n \ {\rm indices} \ j \right\}. $$ \begin{Theorem} \label{thm:main-intro} Let $0<\eta<1/10$ and $L \geq 1$. Assume that for every $v \in \R^d$, $\|\inr{X,v}\|_{L_q} \leq L\|\inr{X,v}\|_{L_2}$ for some $q>2$, set $m \geq c_0(\eta,L)$ and let $n \geq c_1(\eta)d$. Then, with probability at least $1-2\exp(-c_2 \eta^2 n)$, $$ {\cal B} \subset {\cal K}_\eta \subset (1+c_3\eta){\cal B}, $$ for absolute constants $c_2$ and $c_3$. Moreover, if $q \geq 3$ one may take $$ c_0 \sim_L \eta^{-2} \ \ {\rm and} \ \ c_1 \sim \eta^{-2}\log(2/\eta), $$ implying that $N =c(L)d \eta^{-4}\log(2/\eta)$ points suffice. \end{Theorem} As it happens, the superfluous factor of $\log(2/\eta)$ can be removed from Theorem \ref{thm:main-intro} if one employs a different method of proof. However, the required argument is rather specific and holds only for approximation by slabs as in Theorem \ref{thm:main-intro}. Because the main point of this note is to advocate our method of constructing approximations, we chose to present the general argument and only outline the alternative proof of Theorem \ref{thm:main-intro} (see Section \ref{sec:improve}). \begin{Remark} As we explain in what follows, if $X$ is a `nice' random vector (and among these `nice' random vectors are the standard gaussian vector or the vector distributed uniformly on the Euclidean unit sphere) then one may take $m=1$ and $n \sim d \eta^{-2}\log(2/\eta)$ (or $n \sim d \eta^{-2}$ using the alternative proof). In particular, Theorem \ref{thm:main-intro} improves Theorem \ref{thm:zig-zag}. \end{Remark} In the other example we present we construct a more complex approximating set: it is the union of intersections of ellipsoids rather than the union of intersections of slabs. On the other hand, the required sample size is smaller and all that one needs is the following weak assumption on $X$: \begin{Assumption} \label{ass:ellipsoids} Assume that for every $\eta>0$ there is some $m=m_0(\eta)$ for which the following holds: if $\|v\|_{L_2}=1$ then $$ Pr\left( \left| \frac{1}{m}\sum_{i=1}^m \inr{X_i,v}^2 - 1\right| \geq \frac{\eta}{10} \right) \leq 0.01. $$ \end{Assumption} To see that Assumption \ref{ass:ellipsoids} is rather minimal, note that under an $L_4-L_2$ norm equivalence (i.e., that for every $v \in \R^d$, $\|\inr{X,v}\|_{L_4} \leq L\|\inr{X,v}\|_{L_2}$), one has $m_0(\eta) \leq c(L)/\eta^2$. Naturally, nontrivial estimates on $m_0(\eta)$ are possible in more general situations than an $L_4-L_2$ norm equivalence. 
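Before turning to the second construction, here is a minimal numerical sketch of the slab-based set ${\cal K}_\eta$ of Theorem \ref{thm:main-intro}; the choice of distribution and the values of $m$ and $n$ are illustrative stand-ins for $c_0(\eta,L)$ and $c_1(\eta)d$, not the constants of the theorem.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, eta = 10, 0.1
m, n = 100, 2000                # illustrative stand-ins for c_0(eta,L), c_1(eta)d
alpha = 0.6745                  # the median of |g| for a standard gaussian g

# n*m independent copies of X; here X has standardized t_5 coordinates, so
# its covariance is the identity and ||v||_{L_2} = ||v||_2.
X = rng.standard_t(df=5, size=(n, m, d)) / np.sqrt(5 / 3)
Z = X.sum(axis=1) / np.sqrt(m)  # block sums Z_j: approximately gaussian marginals

def in_K_eta(v):
    # Membership in K_eta: |<Z_j,v>| <= alpha + eta for >= (1/2 - eta)n indices j.
    return np.sum(np.abs(Z @ v) <= alpha + eta) >= (0.5 - eta) * n

v = rng.standard_normal(d)
v /= np.linalg.norm(v)          # a point on the L_2 unit sphere
print(in_K_eta(v), in_K_eta(2 * v))  # typically: True False
\end{verbatim}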
The `ellipsoid approximation' estimate is as follows: \begin{Theorem} \label{thm:ellipsoids} There exist absolute constants $c_0,c_1$ and $c_2$ for which the following holds. For $0<\eta<1/4$ let $m=m_0(\eta)$ and $n \geq c_0\max\{d\log(m/\eta),m\}$. Put $N = nm$ and set $(X_{i,j}), \ 1 \leq i \leq m, \ 1 \leq j \leq n$ to be independent, distributed according to $X$. If $$ {\cal D}_\eta = \left\{v \in \R^d : \frac{1}{m}\sum_{i=1}^m \inr{X_{i,j},v}^2 \leq 1+\eta \ {\rm for \ at \ least \ } 0.9n \ {\rm indices} \ j \right\}, $$ then with probability at least $1-2\exp(-c_1 n/m)$, $$ {\cal B} \subset {\cal D}_\eta \subset (1+c_2\eta){\cal B}. $$ \end{Theorem} To put the outcome of Theorem \ref{thm:ellipsoids} in some perspective, under an $L_4-L_2$ norm equivalence one has that $m_0(\eta) \leq c(L)/\eta^2$, implying that $n=c^\prime \max\{d \log(L/\eta),\eta^{-2}\}$ suffices, and the resulting required sample size of $N \sim d \eta^{-2} \log(2/\eta)$ is better than the outcome of Theorem \ref{thm:main-intro} by a factor of $1/\eta^2$ as long as $\eta \geq 1/d^{1/2}$. \vskip0.4cm In the next section we describe the general method and explain how it is used in the proofs of Theorem \ref{thm:main-intro} and Theorem \ref{thm:ellipsoids}. The argument is actually a variant of the small-ball method introduced in \cite{MenACM}. The proofs of Theorem \ref{thm:main-intro} and Theorem \ref{thm:ellipsoids} are presented in Section \ref{sec:proofs}. \section{The small-ball method} \label{sec:small-ball} Let us begin by describing the argument used in the proof of Theorem \ref{thm:zig-zag}. It is based on three crucial observations: \begin{description} \item{$\bullet$} \emph{All the points on a centred sphere behave in the same way:} By rotation invariance, if $Z$ is distributed according to the uniform measure on $S^{d-1}$ then all the random variables $\inr{Z,v/\|v\|_2}$ have the same distribution; therefore $|\inr{Z,v/\|v\|_2}|$ all have the same quantiles, and in particular, the same median. \item{$\bullet$} \emph{Quantiles can be used to `separate' between different spheres:} If $\|u\|_2 \not = \|v\|_2$, that fact is reflected in a difference between $Pr(|\inr{Z,v}| \leq \alpha)$ and $Pr(|\inr{Z,u}| \leq \alpha)$. \item{$\bullet$} \emph{Separation is visible through sampling:} For every $v \in \R^d$, the sum of independent indicators $$ \frac{1}{N} \sum_{i=1}^N \IND_{\{|\inr{Z,v}| \leq \alpha\}} $$ exhibits sharp concentration around $Pr(|\inr{Z,v}| \leq \alpha)$. \end{description} It follows that for every $v \in S^{d-1}$, the median $\alpha_d$ of $|\inr{Z,v}|$ is the same (and happens to be $c_d/\sqrt{d}$ with $\lim_{d \to \infty} c_d =1$). Moreover, given $Z_1,...,Z_N$ that are independent and distributed according to $Z$, $|\{j: |\inr{Z_j,v}| \leq \alpha_d\}|$ is highly concentrated around $N/2$. The heart of the proof is to show that a similar bound is true \emph{uniformly} on $S^{d-1}$; that is, with high probability, \begin{equation} \label{eq:zig-zag-concentration} \sup_{v \in S^{d-1}} \left| \left| \left\{j : |\inr{Z_j,v}| \leq \alpha_d \right\} \right| -\frac{N}{2} \right| \end{equation} is small provided that $N$ is large enough. To establish \eqref{eq:zig-zag-concentration}, note that the high probability estimate that holds for every individual $v$ allows one to obtain uniform control on a fine enough net in $S^{d-1}$. 
And, if $\pi u$ denotes the best approximation to $u$ in the net, $|\inr{Z_j,u}|$ cannot be different from $|\inr{Z_j,\pi u}|$ by much; indeed, $|\inr{Z,u-\pi u}| \leq \|Z\|_2 \|u-\pi u\|_2 = \|u-\pi u\|_2$ because $Z$ is supported on $S^{d-1}$. Once \eqref{eq:zig-zag-concentration} is established, the outcome of Theorem \ref{thm:zig-zag} follows immediately: the set $$ {\cal K}_2=\left\{v \in \R^d : |\inr{v,Z_i}| \leq \alpha_d \ \ {\rm for \ at \ least \ } N/2 \ {\rm indices}\right\} $$ contains $(1-\eta)S^{d-1}$, but does not contain any point on $(1+\eta)S^{d-1}$. Therefore, since ${\cal K}_2$ is star-shaped around $0$, $(1-\eta)B_2^d \subset {\cal K}_2 \subset (1+\eta)B_2^d$. \vskip0.4cm It is clear that when dealing with a general random vector, most of the features used in the proof of Theorem \ref{thm:zig-zag} are simply not true: quantiles $Pr(|\inr{X,v}| \leq \alpha)$ may change on the $L_2$ unit sphere $$ {\cal S}=\{v \in \R^d : \E|\inr{X,v}|^2=1\}; $$ they need not `separate' between two $L_2$ spheres; and `oscillations' $|\inr{X_i,u-\pi u}|$ can be large, especially when $X$ is heavy-tailed rather than being bounded as in Theorem \ref{thm:zig-zag}. \vskip0.4cm The analysis required for addressing these difficulties is based on the \emph{small-ball method}, which was introduced in \cite{MenACM} to deal precisely with this sort of problem: obtaining high probability, uniform estimates in heavy-tailed situations. The path we take follows the main ideas of the method: \begin{description} \item{$(a)$} Identify a property ${\cal P}$ that allows one to check whether a fixed $v \in \R^d$ belongs to ${\cal B}$ or not, using only the probability with which the property holds. Moreover, ${\cal P}$ should be defined using only a relatively small number of the independent copies of $X$ at one's disposal. For example, one may consider the functionals $$ \frac{1}{\sqrt{m}} \sum_{i=1}^m \inr{X_i,v} \ \ \ \ \ {\rm and} \ \ \ \ \ \frac{1}{m}\sum_{i=1}^m \inr{X_i,v}^2 $$ where $m$ is relatively small. The former is close to a centred gaussian variable whose variance is $\E \inr{X,v}^2=\|v\|_{L_2}^2$ while the latter concentrates around $\|v\|_{L_2}^2$. Therefore, if the goal is to check whether $\|v\|_{L_2} \leq 1$ one may define \begin{equation} \label{eq:property-P} {\cal P}_1 = \left\{\left|\frac{1}{\sqrt{m}} \sum_{i=1}^m \inr{X_i,v} \right| \leq \alpha+\eta \right\} \ \ {\rm and} \ \ {\cal P}_2 = \left\{\frac{1}{m}\sum_{i=1}^m \inr{X_i,v}^2 \leq 1+ \frac{\eta}{10}\right\} \end{equation} respectively, where $\alpha$ appearing in ${\cal P}_1$ is the median of $|g|$, the absolute value of a standard gaussian, and $\eta$ is small. In both cases the probabilities of the events in question are determined by $\|v\|_{L_2}$: the probability of ${\cal P}_1$ will be very close to $1/2$ if and only if $\|v\|_{L_2} =1$, whereas ${\cal P}_2$ holds with probability that is close to $1$ if $\|v\|_{L_2} \leq 1+\eta$ and with probability that is close to $0$ if $\|v\|_{L_2}$ is much larger. \end{description} In general, the idea in $(a)$ is that the identity of $\|v\|_{L_2}$ is reflected by the probability with which ${\cal P}$ holds. The next step is to `detect' that probability with very high confidence. \begin{description} \item{$(b)$} Split $\{1,...,N\}$ into $n$ coordinate blocks $I_j$, each one of cardinality $m$, and set $W_j(v)=\IND_{\{v \ {\rm satisfies \ } {\cal P}\}}(X_i, \ i \in I_j)$.
It is evident that $W(v)=n^{-1}\sum_{j=1}^n W_j(v)$ concentrates around its mean, i.e., the probability with which ${\cal P}$ holds. Therefore, the cardinality $|\{j : W_j(v)=1\}|$ leads to a very good estimate of that probability, and in particular of $\|v\|_{L_2}$. Moreover, the resulting estimate is valid with confidence that is exponential in $n=N/m$, say $1-2\exp(-cn)$. \item{$(c)$} Use $(b)$ to define the random approximating set ${\cal K}$: \emph{$v$ belongs to the set if $W_j(v)=1$ for the `right number' of indices $j$}. \end{description} Now one needs to verify that the resulting set ${\cal K}$ is truly close to ${\cal B}$. If ${\cal K}$ happens to be star-shaped around $0$, it suffices to ensure that ${\cal S} \subset {\cal K}$, and at the same time that $\{ v : \|v\|_{L_2} = 1+\eta\} \subset {\cal K}^c$. As a result, one has to obtain a uniform estimate on the cardinality $|\{j : W_j(v)=1\}|$ for $v$'s that belong to the two centred $L_2$ spheres: the unit one, and the one of radius $1+\eta$: \begin{description} \item{$(d)$} The high probability estimate with which $(b)$ holds allows one to control a large collection of $v$'s uniformly. The obvious choice of such a set $V$ is an appropriate $L_2$-net in the sphere in question. This leads to an estimate that holds with high confidence but only for points in $V$ rather than for the entire sphere. \item{$(e)$} Finally, to pass from $V$ to the entire sphere one must control the oscillations: show that if $u$ is `close' to $v$, then the number of indices $j$ on which $W_j(u)=1$ is very close to the number of indices on which $W_j(v)=1$. \end{description} Clearly, the key step is $(e)$: obtaining the required uniform control on random oscillations, a task that is nontrivial in heavy-tailed situations. \vskip0.4cm As this description indicates, the method is rather general and can be employed for a wide variety of choices of ${\cal P}$. One may consider other alternatives beyond the two examples we present in what follows, and those would result in different approximating sets. The crucial point is that as long as ${\cal P}$ is well chosen, those sets would all be good approximations of the covariance ellipsoid. \section{Proofs} \label{sec:proofs} Before we present the proofs of Theorem \ref{thm:main-intro} and Theorem \ref{thm:ellipsoids} we need the following standard observation: \begin{Lemma} \label{lemma:Bernoulli} Let $X$ be a centred random vector in $\R^d$ and let $X_1,...,X_k$ be independent copies of $X$. Then $$ \E \sup_{v \in {\cal B}} \left|\sum_{i=1}^k \eps_i \inr{X_i,v} \right| \leq \sqrt{k} \sqrt{d}, $$ where $(\eps_i)_{i=1}^k$ are independent, symmetric, $\{-1,1\}$-valued random variables that are independent of $X_1,...,X_k$. \end{Lemma} \proof Let $T = \E (X \otimes X)$, and recall that ${\cal B} = T^{-1/2}B_2^d$ and that $T^{-1/2}X$ is isotropic. Note that for an isotropic vector $Y$, $$ \E \|Y\|_2^2 = \E \sum_{i=1}^d \inr{Y,e_i}^2 = d. $$ Therefore, \begin{align*} & \E \sup_{v \in {\cal B}} \left|\sum_{i=1}^k \eps_i \inr{X_i,v} \right| = \E \sup_{w \in B_2^d} \left|\sum_{i=1}^k \eps_i \inr{X_i,T^{-1/2}w} \right| \\ & = \E \sup_{w \in B_2^d} \left|\sum_{i=1}^k \eps_i \inr{T^{-1/2}X_i,w} \right| \leq \E_X \left(\sum_{i=1}^k \|T^{-1/2}X_i\|_2^2 \right)^{1/2}, \end{align*} and the claim follows from Jensen's inequality and the fact that $T^{-1/2}X$ is isotropic.
\endproof \subsection{Approximation by slabs} \label{sec:approx-slab} Recall that ${\cal S} \subset \R^d$ is the $L_2$ unit sphere; that is, ${\cal S}=\{v \in \R^d : \|\inr{X,v}\|_{L_2} =1 \}$. As a starting point, let $Z$ be a random vector that has the same covariance as $X$, and therefore endows the same $L_2$ structure on $\R^d$---in particular, $Z$ endows the same unit ball ${\cal B}$ and unit sphere ${\cal S}$. Assume that there are $\alpha>0$, $0<\beta<1$, $\eta < \beta/4$, $\eps_0 < \alpha/2$ and $\gamma>6\eta/\alpha$ such that for every $v \in {\cal S}$ and every $\eps_0<\eps<\alpha/2$, \begin{description} \item{$(1)$} $|Pr(|\inr{Z,v}| \leq \alpha) -\beta| \leq \eta$, and \item{$(2)$} $Pr(|\inr{Z,v}| \in [\alpha-\eps,\alpha]) \geq \gamma \eps$. \end{description} To explain this condition, one should think of $\eta$ as a small number (measuring the wanted degree of approximation) and of $\alpha$ and $\beta$ as constants; thus, Condition $(1)$ means that the function $\phi(v)=Pr(|\inr{Z,v}| \leq \alpha)$ is roughly a constant on the sphere ${\cal S}$. Condition $(2)$ means that the (marginal) mass of a small interval that ends at $\alpha$ is nontrivial; in other words, there is a noticeable difference between $Pr(|\inr{Z,v}| \leq \alpha)$ and $Pr(|\inr{Z,v}| \leq \alpha-\eps)$ for every $v \in {\cal S}$; the lower bound on $\gamma$ is there to ensure that the difference between the two is indeed noticeable. \vskip0.4cm Note that $G$, the standard gaussian vector in $\R^d$, satisfies $(1)$ and $(2)$: ${\cal S}=S^{d-1}$; for every $v \in S^{d-1}$, $\inr{G,v}$ is distributed as a standard gaussian variable; and one may set $1/10 \leq \alpha \leq 10$, $\beta=Pr(|g| \leq \alpha)$, $\gamma$ that is an absolute constant and $\eps_0=0$. A similar argument shows that the uniform measure on $S^{d-1}$ also satisfies $(1)$ and $(2)$ for the right choice of constants. As we explain in what follows, in general situations our choice of $Z$ will only have approximately gaussian one-dimensional marginals, and that would suffice to ensure that both $(1)$ and $(2)$ hold for $\alpha,\beta$ and $\gamma$ that are absolute constants. \vskip0.4cm The main component in the proof of Theorem \ref{thm:main-intro} is the next fact: \begin{Theorem} \label{thm:main-1} There exist constants $c_1,c_2$ and $c_3$ that depend only on $\alpha,\beta$ and $\gamma$ for which the following holds. Let $Z$ satisfy $(1)$ and $(2)$ for some $\eps_0 \leq (3/\gamma)\eta$. Let $Z_1,...,Z_n$ be independent copies of $Z$ and set $$ {\cal K} = \{v \in \R^d : |\inr{Z_j,v}| \leq \alpha + \eta \ {\rm for \ at \ least \ } (\beta-\eta)n \ {\rm indices} \ j\}. $$ If $n \geq c_1 d\eta^{-2}\log(2/\eta)$ then with probability at least $1-2\exp(-c_2 n \eta^2)$, $$ {\cal B} \subset {\cal K} \subset (1+c_3\eta){\cal B}. $$ \end{Theorem} \proof We follow the path outlined in Section \ref{sec:small-ball}. Thanks to $(1)$ and $(2)$ we have the wanted property using a single copy of $Z$. Indeed, as a preliminary step observe that $\{|\inr{Z,v}|\leq \alpha\}$ holds with probability that does not change much on ${\cal S}$. At the same time, by the lower bound on $\gamma$, $\alpha/2 \leq \alpha-(3/\gamma) \eta < \alpha$, so we may fix $1<\rho \leq 2$ such that $\alpha/\rho = \alpha-(3/\gamma) \eta$. Since $\eps_0 \leq (3/\gamma)\eta$ it follows that $(2)$ holds for $\eps=(3/\gamma)\eta$ and one has $$ Pr(|\inr{Z,v}| \leq \alpha/\rho) \leq \beta - 3\eta. $$
Thus, there is a noticeable difference between $Pr(|\inr{Z,v}| \leq \alpha)$ and $Pr(|\inr{Z,\rho v}| \leq \alpha)$. By Bernstein's inequality, it follows that with probability at least $1-2\exp(-c_0(\beta)n\eta^2)$, $$ \left|\frac{1}{n}\sum_{j=1}^n \IND_{\{|\inr{Z_j,v}| \leq \alpha\}} - Pr(|\inr{Z,v}| \leq \alpha)\right| \leq \eta/2. $$ Therefore, on that event, \begin{equation} \label{eq:main-single-cond-1} |\{j : |\inr{Z_j,v}| \leq \alpha\}| \geq n(\beta-\eta/2). \end{equation} Applying Bernstein's inequality again, with probability at least $1-2\exp(-c_0(\beta)\eta^2n)$, \begin{equation} \label{eq:main-single-cond-2} \left|\left\{j: |\inr{Z_j,v}| \leq \frac{\alpha}{\rho}\right\}\right| \leq \left(\beta - 2\eta\right)n. \end{equation} The heart of the proof is to show that slightly modified versions of \eqref{eq:main-single-cond-1} and \eqref{eq:main-single-cond-2} hold uniformly on ${\cal S}$; that is, with high probability, for every $v \in {\cal S}$, \begin{equation} \label{eq:main-single-cond-1-m} |\{j : |\inr{Z_j,v}| \leq \alpha+\eta\}| \geq n(\beta-\eta), \end{equation} and \begin{equation} \label{eq:main-single-cond-2-m} \left|\left\{j: |\inr{Z_j,v}| \leq \frac{\alpha+\eta}{\rho}\right\}\right| < n(\beta-\eta). \end{equation} Let $c_1=c_0/2$ and let $V \subset {\cal S}$ be a maximal $r$-separated subset of ${\cal S}$ with respect to the $L_2$ norm and of cardinality at most $\exp(c_1\eta^2 n)$. There is an event ${\cal A}_1$ of probability at least $1-4\exp(-c_1 \eta^2 n)$ on which \eqref{eq:main-single-cond-1} and \eqref{eq:main-single-cond-2} hold for every $v \in V$. Also, because ${\cal B}$ is a convex, centrally-symmetric subset of $\R^d$, a standard volumetric estimate shows that \begin{equation} \label{eq:r} r \leq 5\exp(-c_1\eta^2 n/d). \end{equation} For every $u \in {\cal S}$ let $\pi u$ be the nearest point in $V$ to $u$ with respect to the $L_2$ norm. Set $$ W = \sup_{u \in {\cal S}} \sum_{j=1}^n \IND_{\{|\inr{Z_j,u-\pi u}| \geq t\}} $$ for $t=\eta/2$ (which is smaller than $\eta/\rho$). Our aim is to ensure that with high probability $W \leq n\eta/2$, and to that end we first estimate $\E W$. Observe that $$ W \leq \sup_{u \in {\cal S}} \frac{1}{t}\sum_{j=1}^n |\inr{Z_j,u-\pi u}|; $$ by the Gin\'{e}-Zinn symmetrization theorem \cite{MR757767} followed by the contraction inequality for Bernoulli processes \cite{LeTa91}, \begin{align*} \E W \leq & \frac{2}{t} \left(\E \sup_{u \in {\cal S}} \left|\sum_{j=1}^n \eps_j \inr{Z_j,u-\pi u} \right| + n \sup_{u \in {\cal S}} \E |\inr{Z,u-\pi u}|\right) \\ \leq & \frac{2r}{t} \left(\E \sup_{u \in {\cal S}} \sum_{j=1}^n \eps_j \inr{Z_j,u} + n\right), \end{align*} where we have used the fact that $\|u-\pi u\|_{L_1} \leq \|u-\pi u\|_{L_2} \leq r$. Moreover, by Lemma \ref{lemma:Bernoulli}, $\E \sup_{u \in {\cal S}} \sum_{j=1}^n \eps_j \inr{Z_j,u} \leq \sqrt{n}\sqrt{d}$, implying that if $n \geq d$ then $$ \E W \leq \frac{c_2n}{t} \exp(-c_1 \eta^2n/d)= \frac{2c_2 n}{\eta} \exp(-c_1 \eta^2n/d), $$ thanks to the estimate on $r$ from \eqref{eq:r} and by the choice of $t$. Now, by the bounded differences inequality (see, e.g., \cite{BoLuMa13}), we have that for every $x>0$, $Pr(W \geq \E W +x) \leq \exp(-c_3x^2/n)$. Setting $x=n\eta/4$, there is an event ${\cal A}_2$ of probability at least $1-2\exp(-c_4 \eta^2n)$ on which $$ W \leq n \left(\frac{2c_2}{\eta} \exp(-c_1\eta^2 n/d) + \frac{\eta}{4}\right) \leq \frac{\eta}{2}n, $$ where the last inequality holds if we set $$ n \gtrsim \frac{d}{\eta^2}\log\left(\frac{2}{\eta}\right). $$
Combining the two estimates, on the event ${\cal A}_1 \cap {\cal A}_2$ one has that for any $u \in {\cal S}$ both \eqref{eq:main-single-cond-1-m} and \eqref{eq:main-single-cond-2-m} hold. Indeed, for every $u \in {\cal S}$ we have \begin{description} \item{$\bullet$} $|\inr{Z_j,\pi u}| \leq \alpha$ for at least $n(\beta-\eta/2)$ indices $j$; and \item{$\bullet$} $|\inr{Z_j,u-\pi u}| \geq \eta$ for at most $n\eta/2$ indices $j$. \end{description} Therefore, there is a set of indices of cardinality at least $n(\beta-\eta)$ such that both $|\inr{Z_j,\pi u}| \leq \alpha$ and $|\inr{Z_j,u-\pi u}| \leq \eta$, and for those indices, $$ |\inr{Z_j,u}| \leq |\inr{Z_j,\pi u}|+|\inr{Z_j,u-\pi u}| \leq \alpha+\eta, $$ verifying \eqref{eq:main-single-cond-1-m}. A similar argument may be used to confirm \eqref{eq:main-single-cond-2-m}. Setting $$ {\cal K} = \{v \in \R^d : |\{j : |\inr{Z_j,v}| \leq \alpha+\eta\}| \geq (\beta-\eta)n\}, $$ it follows from \eqref{eq:main-single-cond-1-m} that ${\cal S} \subset {\cal K}$; and, since ${\cal K}$ is star-shaped around $0$, ${\cal B} \subset {\cal K}$ as well. On the other hand, recalling that $\eta \leq \alpha \gamma/6$, $$ \rho = 1+\frac{3\eta}{\alpha\gamma -3\eta} \leq 1+c_5\eta, $$ where $c_5 \sim 1/\alpha \gamma$. Thus, if $\|u\|_{L_2} =\rho > 1$, then $$ \{j : |\inr{Z_j,u}| \leq \alpha+\eta\} = \left\{j: |\inr{Z_j,v}| \leq \frac{\alpha+\eta}{\rho} \right\} $$ for $v = u/\rho \in {\cal S}$. Hence, by \eqref{eq:main-single-cond-2-m}, $$ |\{j : |\inr{Z_j,u}| \leq \alpha+\eta\}| < (\beta-\eta)n, $$ and $u \not \in {\cal K}$. It follows that $\{v : \|v\|_{L_2}=\rho\} \subset {\cal K}^c$ and, since ${\cal K}$ is star-shaped around $0$, $(\rho {\cal B})^c \subset {\cal K}^c$, as required. \endproof Once Theorem \ref{thm:main-1} is established, one may apply it to random vectors that satisfy $(1)$ and $(2)$ --- for example, the standard gaussian vector or the vector distributed uniformly on $S^{d-1}$. It follows that for any $\eta \leq c_0$ and given more than $c_1d\eta^{-2} \log (2/\eta)$ random points, the random set ${\cal K}$ is a $c_2\eta$-approximation of ${\cal B}$ for an absolute constant $c_2$. In particular, Theorem \ref{thm:zig-zag} follows from Theorem \ref{thm:main-1}. \vskip0.4cm Clearly, since a general random vector $X$ need not satisfy $(1)$ and $(2)$, the proof of Theorem \ref{thm:main-intro} requires an additional step. To that end one may invoke the Berry-Esseen Theorem (see, e.g., \cite{MR2722836}) to `smooth' $X$ and construct a random vector $Z$ that does satisfy $(1)$ and $(2)$. \begin{Theorem} \label{thm:Berry-Esseen} Let $W$ be a mean-zero random variable and let $W_1,...,W_m$ be independent copies of $W$. If $$ Y=\frac{1}{\sqrt{m}\|W\|_{L_2}} \sum_{i=1}^m W_i, $$ then $$ \sup_{t \in \R} \left|Pr(Y > t) - Pr(g>t) \right| \leq \psi(m), $$ where $\psi(m)=C (\|W\|_{L_3}^3/\|W\|_{L_2}^3)m^{-1/2}$. In particular, if $\|W\|_{L_3} \leq L \|W\|_{L_2}$ then $\psi(m)=c(L)/\sqrt{m}$. \end{Theorem} \begin{Remark} There are other versions of the Berry-Esseen Theorem with different conditions on $W$. For example, one may obtain nontrivial estimates on $\psi(m)$ as soon as $\|W\|_{L_q} \leq L \|W\|_{L_2}$ for some $q>2$, although if $2<q<3$ then $\psi(m)$ tends to $0$ at a slower (polynomial) rate than $1/\sqrt{m}$ (see \cite{MR2630040}). Alternatively, if $Y \in L_{\psi_\alpha}$, one has better estimates on $\psi(m)$ (see, e.g., \cite{MR1353441}).
\end{Remark} For an integer $m \leq N$, set \begin{equation} \label{eq:Z} Z=\frac{1}{\sqrt{m}} \sum_{i=1}^m X_i, \end{equation} and thus one has access to $n=N/m$ independent copies of $Z$. Clearly, $Z$ is centred and has the same covariance structure as $X$. Also, for any $v \in {\cal S}$, $$ \sup_{t \in \R} \left|Pr(|\inr{Z,v}| \leq t) - Pr(|g| \leq t) \right| \leq 2\psi(m). $$ Therefore, if we set $\alpha$ to be the median of $|g|$, then for every $v \in {\cal S}$, \begin{equation} \label{eq:(1)-general} \left|Pr(|\inr{Z,v}| \leq \alpha)-\frac{1}{2} \right| \leq 2\psi(m). \end{equation} Moreover, if $\eps \leq \alpha/2$, there is an absolute constant $c$ for which \begin{align*} & Pr(|\inr{Z,v}| \leq \alpha-\eps) \leq Pr(|g| \leq \alpha-\eps) + 2\psi(m) \leq Pr(|g| \leq \alpha) - c\eps +2\psi(m) \\ \leq & Pr(|\inr{Z,v}| \leq \alpha) -c\eps + 4\psi(m); \end{align*} hence, if $\eps \geq 8\psi(m)/c$, it follows that $$ Pr(|\inr{Z,v}| \in [\alpha-\eps,\alpha]) \geq c^\prime \eps $$ for an absolute constant $c^\prime$. Thus, Condition $(2)$ holds for $\eps_0 =8\psi(m)/c$ and $\eps$ that satisfies $\eps_0 \leq \eps \leq 1/8$; clearly, $\eps_0$ can be made arbitrarily small by taking a large enough $m$. \vskip0.4cm \noindent{\bf Proof of Theorem \ref{thm:main-intro}.} Given the wanted accuracy parameter $\eta$, let $m$ be large enough to ensure that $8\psi(m)/c \leq \eta$ (recall that $\eta < 1/10 < 1/8$). By Theorem \ref{thm:Berry-Esseen}, if $q \geq 3$ and $\sup_{v \in {\cal S}} \|\inr{X,v}\|_{L_q} \leq L$ then one may take $m=c(L)/\eta^2$, whereas by \cite{MR2630040}, if $2<q<3$ one may take $m=c(L){\rm poly}(1/\eta)$. Define $Z$ as in \eqref{eq:Z} and take $Z_1,...,Z_n$ to be $n$ independent copies of $Z$ for $n \geq c_1 \eta^{-2}\log(2/\eta)d$. Set $\alpha$ to be the median of $|g|$; by Theorem \ref{thm:main-1}, with probability at least $1-2\exp(-c_2\eta^2 n)$, the random set ${\cal K}$ satisfies $$ {\cal B} \subset {\cal K} \subset (1+c_3\eta){\cal B}, $$ as required. \endproof \subsubsection{Isomorphic approximation} If one is interested in an isomorphic approximation, i.e., that $c{\cal B} \subset {\cal K} \subset C{\cal B}$ for constants $c$ and $C$ that need not be close to $1$, the assumption required in Theorem \ref{thm:main-intro} can be relaxed from norm equivalence to a small-ball condition: that there are $0<\lambda,\delta<1$ such that for every $v \in \R^d$, \begin{equation} \label{eq:small-ball-cond} Pr(|\inr{X,v}| \geq \lambda \|v\|_{L_2}) \geq \delta. \end{equation} By a similar argument to the one used in the proof of Theorem \ref{thm:main-intro} it follows that for $$ N \gtrsim \max\left\{\frac{d}{\delta}\log(1/\delta \lambda), \frac{d}{\lambda^2}\right\}, $$ and setting $$ {\cal K} = \{v \in \R^d : |\inr{X_i,v}| \leq \lambda/2 \ {\rm for \ at \ least \ } (1-\delta/4) N \ {\rm indices} \ i \}, $$ with probability at least $1-2\exp(-c\delta N)$, $$ c^\prime \lambda \sqrt{\delta} {\cal B} \subset {\cal K} \subset {\cal B}. $$ The inclusion ${\cal K} \subset {\cal B}$ stems from the small-ball condition: for every $v \in {\cal S}$, with probability at least $1-2\exp(-cN)$, at least $\delta N/2$ of the values $|\inr{X_i,v}|$ are larger than $\lambda$. The reason behind the other inclusion, that $c^\prime \lambda \sqrt{\delta} {\cal B} \subset {\cal K}$, is that $Pr(|\inr{X,v}| \geq t\|v\|_{L_2}) \leq 1/t^2$; therefore, with probability at least $1-2\exp(-cN)$, most of the values $|\inr{X_i,v}|$ cannot be `too large'.
The high probability with which both properties hold allows one to control a fine enough net in the sphere, and the oscillation term is handled in a similar way to the proof of Theorem \ref{thm:main-1}. We omit the straightforward details. \subsection{Approximation using ellipsoids} \label{sec:ellipsoids} This section is devoted to the proof of Theorem \ref{thm:ellipsoids}. Let $m$ be an integer to be specified in what follows, set $n=N/m$ and let $I_1,...,I_n$ be the natural decomposition of $\{1,...,N\}$ into coordinate blocks of cardinality $m$. For $1 \leq j \leq n$ and $v \in \R^d$ set $$ Z_j(v)=\frac{1}{m}\sum_{i \in I_j} \inr{X_i,v}^2 $$ and recall that $$ {\cal D}_{\eta} = \{v \in \R^d : |\{j : Z_j(v) \leq 1+\eta \}| \geq 0.9n\}. $$ Our aim is to show that if $m$ and $n$ are chosen properly, then with high probability, $$ {\cal B} \subset {\cal D}_\eta \subset (1+c\eta){\cal B} $$ for a suitable absolute constant $c$. It is important to stress that the natural candidate for approximating ${\cal B}$, the empirical $L_2$ ball $$ \left\{ v \in \R^d : \frac{1}{N}\sum_{i=1}^N \inr{X_i,v}^2 \leq 1\right\}, $$ can be very different from ${\cal B}$ when $X$ is heavy-tailed; this will be illustrated in Section \ref{sec:example}. \vskip0.4cm Again, we follow the general path outlined in Section \ref{sec:small-ball}. The property ${\cal P}$ is given by invoking Assumption \ref{ass:ellipsoids}---that if $m=m_0(\eta)$ then for every $v \in {\cal S}$ $$ Pr\left( \left|\frac{1}{m} \sum_{i=1}^m \inr{X_i,v}^2 - 1 \right| \geq \frac{\eta}{10}\right) \leq 0.01. $$ \begin{Theorem} \label{thm:L-2-uniform} There are absolute constants $c_1$ and $c_2$ for which the following holds. If $$ n \geq c_1\max\{d \log(2m_0(\eta)/\eta),m_0(\eta)\}, $$ then with probability at least $1-2\exp(-c_2n/m_0(\eta))$, for every $v \in \R^d$ \begin{equation} \label{eq:global} |\{ j : Z_j(v) \in [(1-\eta)\E Z(v),(1+\eta)\E Z(v)] \} | \geq 0.96n. \end{equation} In particular, if $m_0(\eta) \leq C\eta^{-k}$ then $n \geq c_1(k+1)d\log(2C/\eta)$ suffices. \end{Theorem} \begin{Corollary} It is straightforward to verify that under an $L_4-L_2$ norm equivalence with constant $L$ one has that $m_0(\eta) \leq c(L)/\eta^2$. Therefore, the required sample size is $N=m_0 n$ for $$ m_0 \leq c_1(L)\eta^{-2} \ \ \ {\rm and} \ \ \ n = c^\prime(L) \max\{d \log (L/\eta),\eta^{-2}\} $$ which is a better estimate than in Theorem \ref{thm:main-intro} as long as $\eta \gtrsim 1/(d \log d)^{1/2}$. \end{Corollary} \proof Since the claim is homogeneous in $v$ it suffices to show that it holds for $v \in {\cal S}$. By a binomial estimate, there is an absolute constant $c_0$ such that each $v \in \R^d$ satisfies \begin{equation} \label{eq:single} |\{j : Z_j(v) \in [(1-\eta/10)\E Z(v) , (1+\eta/10) \E Z(v)]\}| \geq 0.98n \end{equation} with probability at least $1-2\exp(-c_0n)$. Let $V \subset {\cal S}$ be of cardinality at most $\exp(c_0n/2)$. Invoking the probability estimate with which \eqref{eq:single} holds, there is an event ${\cal A}_1$ of probability at least $1-2\exp(-c_0n/2)$ such that \eqref{eq:single} holds for every $v \in V$. As expected, our choice of $V$ is a maximal $r$-separated subset of ${\cal S}$ with respect to the $L_2$ norm; and by a volumetric estimate, $r \leq 5\exp(-c_1n/d)$ for an absolute constant $c_1$. To prove the wanted uniform estimate, for $u \in {\cal S}$ let $\pi u \in V$ be the nearest element to $u$ with respect to the $L_2$ norm.
Set $$ W = \sup_{u \in {\cal S}} |\{i : |\inr{X_i,u-\pi u}| \geq \eta/10\}|, $$ and the aim is to show that with high probability, $W \leq 0.02n$. Just as in the proof of Theorem \ref{thm:main-1}, let us first estimate $\E W$. By symmetrization and contraction, followed by the estimate on $r$ and Lemma \ref{lemma:Bernoulli}, \begin{align*} \E W \leq & \frac{10}{\eta} \E \sup_{u \in {\cal S}} \left|\sum_{i=1}^N \left(|\inr{X_i,u-\pi u}| - \E |\inr{X_i,u-\pi u}|\right)\right| + \frac{10N}{\eta}\sup_{u \in {\cal S}} \E |\inr{X,u-\pi u}| \\ \leq & \frac{20r}{\eta} \left(\E \sup_{u \in {\cal B}} \left|\sum_{i=1}^N \eps_i \inr{X_i,u} \right| + N \right) \leq c_2\frac{r}{\eta}\left(\sqrt{dN} + N \right) \leq 0.01n, \end{align*} provided that $n \geq c_3d \log(m_0(\eta)/\eta)$. Therefore, by the bounded differences inequality, $W \leq 0.02n$ with probability at least $1-2\exp(-c_4n^2/N)=1-2\exp(-c_4n/m)$ for a suitable absolute constant $c_4$. Combining the two estimates, there is an event with probability at least $1-2\exp(-c_5n/m)$ on which: \begin{description} \item{$\bullet$} For every $v \in V$, $Z_j(v) \in [1-\eta/10,1+\eta/10]$ for at least $0.98n$ indices $j$. \item{$\bullet$} For every $u \in {\cal S}$, $|\inr{X_i,u-\pi u} | \geq \eta/10$ for at most $0.02n$ indices $i$; in particular, for every $u$ there could be at most $0.02n$ of the coordinate blocks $I_j$ that are `corrupted' by a large value $|\inr{X_i,u-\pi u}| \geq \eta/10$. On all the other blocks, $\max_{i \in I_j} |\inr{X_i,u-\pi u}| \leq \eta/10$. \end{description} Therefore, by the triangle inequality, for every $u \in {\cal S}$ there are at least $0.96n$ indices $j$ for which $Z_j(u) \in [1-\eta,1+\eta]$, as required. \endproof \noindent{\bf Proof of Theorem \ref{thm:ellipsoids}.} Consider the event from Theorem \ref{thm:L-2-uniform}. If $u \in {\cal S}$ then $Z_j(u) \leq 1+\eta$ for more than $0.9n$ coordinate blocks, implying that $u \in {\cal D}_\eta$. And, since ${\cal D}_\eta$ is star-shaped around $0$, it is evident that ${\cal B} \subset {\cal D}_\eta$. At the same time, if $\|u\|_{L_2}= \rho$ then, for at least $0.96n$ indices $j$, $Z_j(u) \geq (1-\eta) \rho^2 > 1+\eta$ provided that $\rho \geq 1+c\eta$. Therefore, $(1+c\eta){\cal S} \subset ({\cal D}_\eta)^c$ and in particular, using the star-shape property again, ${\cal D}_\eta \subset (1+c\eta){\cal B}$. \endproof \subsection{Limitations of approximating using the empirical ellipsoid } \label{sec:example} Let us show that selecting ${\cal K}=\{v \in \R^d : N^{-1}\sum_{i=1}^N \inr{X_i,v}^2 \leq 1\}$ as an approximation of ${\cal B}$ is a poor choice when $X$ is heavy-tailed. To that end we construct a collection of random vectors that satisfy an $L_4-L_2$ norm equivalence and for which ${\cal B}$ is equivalent to $B_2^d$. At the same time, with a non-trivial probability there is $v \in S^{d-1}$ for which $N^{-1}\sum_{i=1}^N \inr{X_i,v}^2 \gg 1$. More accurately, for each $u \gtrsim 1/\sqrt{d}$ we construct a centred random vector $X_u$ that satisfies: \begin{description} \item{$(a)$} For every $v \in S^{d-1}$, $1 \leq \|\inr{X_u,v}\|_{L_2} \leq 2$; \item{$(b)$} $\sup_{v \in S^{d-1}} \|\inr{X_u,v}\|_{L_4} \leq L$ for an absolute constant $L$; and \item{$(c)$} $Pr(\|X_u\|_2^2 \geq ud) \geq 1/2u^2d$.
\end{description} Let $\Gamma=N^{-1/2}\sum_{i=1}^N \inr{X_i,\cdot}e_i$ and observe that \begin{equation*} \sup_{v \in S^{d-1}} \frac{1}{N}\sum_{i=1}^N \inr{X_i,v}^2 = \|\Gamma\|^2_{2 \to 2} = \|\Gamma^*\|_{2 \to 2}^2 \geq \max_{1 \leq i \leq N} \|\Gamma^* e_i\|_2^2 \geq \frac{1}{N}\max_{1 \leq i \leq N} \|X_i\|_2^2. \end{equation*} \begin{Lemma} \label{eq:lemma-X-u} Let $0<\delta<1/4$ and set $X_u$ as above for $u=(N/4d \delta)^{1/2}$. Then with probability at least $\delta$, $$ \frac{1}{N} \max_{1 \leq i \leq N} \|X_i\|_2^2 \geq \sqrt{\frac{d}{4\delta N}}. $$ \end{Lemma} In particular, with probability at least $\delta$, $B_2^d \not \subset C{\cal K}$ unless $C \geq (d/4N\delta)^{1/4}$, making even an isomorphic approximation impossible if one would like it to hold with probability $1-\delta$ for a small $\delta$ (corresponding to a large $u$), particularly taking into account that we would like $N$ to scale linearly in $d$. \vskip0.4cm \proof Recall that $Pr(\|X_u\|_2^2 \geq ud) \geq 1/2u^2d=2\delta/N \equiv \rho$. Therefore, given $N$ independent copies of $X_u$ denoted by $Y_1,...,Y_N$, $$ Pr({\rm there \ exists \ } 1 \leq i \leq N, \ \|Y_i\|_2^2 \geq ud) \geq N \rho (1-\rho)^{N-1} = 2\delta (1-2\delta/N)^{N-1} \geq \delta. $$ On that event, $$ \frac{1}{N} \max_{1 \leq i \leq N} \|Y_i\|_2^2 \geq \frac{ud}{N} = \left(\frac{d}{4N\delta}\right)^{1/2}, $$ as claimed. \endproof All that is left now is to construct the random vectors $X_u$. To that end, let $\eta_1,...,\eta_d$ be independent $\{0,1\}$-valued random variables with mean $1/u^2 d^2$ and set $\eps_1,...,\eps_d$ to be independent, symmetric $\{-1,1\}$-valued random variables that are independent of $\eta_1,...,\eta_d$. Let $z_i=\eps_i\max\{\eta_i R,1\}$ where $R = \sqrt{ud}$, and set $X_u=(z_1,...,z_d)$. Clearly, $\E z_i =0$ and $$ \E z_i^2 = \frac{R^2}{u^2 d^2} + \left(1-\frac{1}{u^2d^2}\right); $$ hence, $1 \leq \|z_i\|_{L_2} \leq 2$ if $u \geq 1/d$ as was assumed. Moreover, $$ \E z_i^4 \leq \frac{R^4}{u^2 d^2} + \left(1-\frac{1}{u^2d^2}\right) \leq 2. $$ Now, for $v \in \R^d$ we have that $\E\inr{X_u,v}^2 = \sum_{i=1}^d v_i^2 \E z_i^2$ and $(a)$ follows from the estimate on $\E z_i^2$. As for $(b)$, it is straightforward to verify that since $\E z_i^4 \leq 2$, $\|\sum_{i=1}^d v_i z_i\|_{L_4} \leq L\|v\|_2$ for an absolute constant $L$. Finally, to prove $(c)$, consider $u \gtrsim 1/\sqrt{d}$ and observe that $\|X_u\|_2^2 = \sum_{i=1}^d z_i^2$. Note that with probability at least $d \cdot (1/u^2d^2) \cdot (1-1/u^2d^2)^{d-1} \geq 1/2u^2 d$, there is at least one index $i$ for which $z_i^2 \geq R^2 = ud$; hence, on that event, $\|X_u\|_2^2 \geq ud$, as required. \endproof \subsection{Improving Theorem \ref{thm:main-intro}} \label{sec:improve} Let us sketch an alternative proof of Theorem \ref{thm:main-intro}. On the one hand, it leads to a better estimate on the required sample size; on the other, it is based on a special property of slabs. The components of the proof are well understood so we will only sketch the argument. \vskip0.4cm In what follows we consider $Z_1,...,Z_n$ that are distributed as $m^{-1/2} \sum_{i=1}^m X_i$ and satisfy \eqref{eq:(1)-general}; specifically we assume that $m$ is large enough to ensure that for $v \in {\cal S}$, \begin{equation} \label{eq:emp-proof-2} \left|Pr(|\inr{Z,v}| \leq \alpha)-\frac{1}{2} \right| \leq \frac{\eta}{2} \end{equation} where $\alpha$ is the median of $|g|$.
Here, the approximating body will be $$ {\cal K} = \left\{v \in \R^d : |\inr{Z_j,v}| \leq \alpha \ {\rm for \ at \ least \ } \left(\frac{1}{2}-\eta\right)n \ {\rm indices} \ j\right\}. $$ To show that indeed ${\cal K}$ is an $\eta$-approximation of ${\cal B}$, let us estimate the supremum of the empirical process \begin{equation} \label{eq:empirical} W=\sup_{v \in {\cal S}} \left|\frac{1}{n} \sum_{j=1}^n \IND_{\{|\inr{Z_j,v}| \leq \alpha\}} - Pr ( |\inr{Z,v}| \leq \alpha ) \right|. \end{equation} This is an empirical process indexed by a collection ${\cal U}$ of subsets of $\R^d$---the slabs $\{x \in \R^d : |\inr{x,v}| \leq \alpha\}$. It is standard to verify that the \emph{VC dimension} of ${\cal U}$ is at most $cd$: each set is generated by the intersection of two halfspaces, and the VC dimension of the collection of halfspaces in $\R^d$ is at most $c^\prime d$ (see, for example, \cite{vaWe96} for more information on VC classes). By Talagrand's concentration inequality for empirical processes indexed by a class of bounded functions (\cite{MR1258865}, see also \cite{BoLuMa13}), it follows that with probability at least $1-\exp(-t)$, $$ W \leq c_1\left(\E W + \sqrt{\frac{t}{n}} + \frac{t}{n}\right). $$ And, by a standard argument\footnote{The proof is based on symmetrization, the fact that a Bernoulli process is subgaussian with respect to the $\ell_2$ metric, a Dudley entropy integral bound and well-known estimates on the covering numbers of VC-classes.}, \begin{equation*} \E W \leq c_2\sqrt{\frac{d}{n}}. \end{equation*} Thus, with probability at least $1-\exp(-c_3\eta^2n)$, $W \leq \eta/2$ provided that $n \gtrsim d/\eta^2$. Therefore, on that event \begin{equation} \label{eq:emp-proof-1} \sup_{v \in {\cal S}} \left| \left| \left\{j : |\inr{Z_j,v}| \leq \alpha\right\} \right| - n Pr(|\inr{Z,v}| \leq \alpha) \right| \leq \frac{n\eta}{2}. \end{equation} Combining \eqref{eq:emp-proof-1} and \eqref{eq:emp-proof-2} it follows that with probability at least $1-2\exp(-c\eta^2n)$, for every $v \in {\cal S}$, \begin{equation} \label{eq:emp-proof-3} \left| \left\{j : |\inr{Z_j,v}| \leq \alpha\right\} \right| \geq n\left(\frac{1}{2}-\eta\right). \end{equation} In particular we have that ${\cal S} \subset {\cal K}$, and since ${\cal K}$ is star-shaped around $0$, ${\cal B} \subset {\cal K}$ as well. A similar estimate to \eqref{eq:emp-proof-3} leads to the fact that $(1+\eta){\cal S} \subset {\cal K}^c$ and completes the proof. \endproof \vskip0.4cm The feature that makes this proof simple is that the class of indicators one is interested in happens to be a VC class. In general, there is no reason to expect such a happy coincidence when choosing a property ${\cal P}$, and controlling the resulting empirical process can be a nontrivial problem. In contrast, the method presented here allows one to bypass this difficulty for rather general choices of ${\cal P}$, at the price of a slightly suboptimal dependency on $\eta$. \bibliographystyle{plain}
{ "timestamp": "2018-04-17T02:11:34", "yymm": "1804", "arxiv_id": "1804.05402", "language": "en", "url": "https://arxiv.org/abs/1804.05402" }
\section{Introduction} \label{sec:Introduction} Predicting the wear out of components is pivotal in various domains such as the automotive, health and aerospace industries~\cite{allred1998prognostic,ReussSAHH18,DBLP:conf/pkdd/ShekarBSSM17}. Robust and accurate predictions have a great potential for preventing unanticipated equipment failures and increasing productivity. With the recent widespread adoption of the Internet-of-Things (IoT), many sensor signals are now readily accessible for predicting the wear out of components. At Bosch, we often encounter datasets with several hundreds of sensor measurements and other calculated values from vehicles~\cite{DBLP:conf/pkdd/ShekarBSSM17}. These are used for predicting the health-state of a component. For example, in automotive applications, we can predict the wear out of an engine-coolant system using signals from different sensors such as torque, pressure, temperature and speed. Traditional approaches select a small and predictive subset of these measurements (or attributes) by evaluating their relevance to the target (health-state) prediction~\cite{ChandrashekarS14,MolinaBN02}. Several off-the-shelf algorithms, viz., Decision Trees \cite{quinlan2014c4}, Random forests \cite{breiman2001random}, Gaussian processes \cite{lazaro2010sparse} and Support Vector Machines (SVMs) \cite{libsvm}, were used on our fuel system data from different vehicles. Overall, we observed that all the aforementioned algorithms selected a similar subset of attributes as the most relevant ones. A problem arises when one or more of these selected (relevant) attributes are invalid due to malfunctioning sensors. During a malfunction, the sensor's measurements are stuck at a constant value, e.g., zero; such cases are referred to as the stuck-at-zero condition of the sensor \cite{elleithy2012innovations}. If such a malfunctioning sensor represents a relevant attribute for the target prediction, it leads to unreliable predictions. It is therefore essential to train a model that does not rely on a fixed subset of attributes. Additionally, sensors are electrical devices that are prone to noise. For example, the magnetic field generated by the ignition system of a vehicle can affect other sensors~\cite{dziubinski2016electromagnetic}. Noisy sensors generate a few distorted measurements amidst valid values. Using these distorted sensor readings can lead to erroneous predictions and raise false alarms in the wear out prediction model. Industries spend millions of dollars to remove the noise from these signals \cite{redman1997data}. However, the manual data cleansing process is laborious, time consuming and prone to errors \cite{zhu2004class}. The first challenge is to generate a prediction model that is robust to missing attributes, i.e., the stuck-at-zero condition. The second challenge is to ensure that the prediction model is robust against noisy attributes. Solving these two problems is one of the foremost challenges that Bosch faces when predicting the health-state of a vehicle's components. For the aforementioned challenges, we propose: \begin{enumerate} \item A technique for building prediction models that are robust to faulty or missing attributes. \item A strategy for handling noise in the input attributes, built upon the data augmentation technique. \end{enumerate} To enhance the robustness of the predictions in spite of faulty attributes, we propose using prediction models that do not rely on a small set of signals.
Our approach is founded upon the Dropout technique, a well-known regularization technique used in the training of Artificial Neural Networks (ANNs). Dropout randomly removes a few attributes during training. This forces the ANN to use more attributes during the training phase instead of relying on a single small subset of attributes. Moreover, randomly dropping units of the ANN during training simulates the situation of sensor failure in the real world. To address the second challenge of noisy inputs, ANNs were trained with a certain magnitude of synthetically generated noise in the training data. By replacing the values of the attributes in the training data with random values from a Gaussian distribution, we indirectly simulate the noisy behavior of the sensors. This allows the ANN to learn the contribution of each feature to the output prediction amidst distorted inputs. Bosch provided a labeled dataset related to the health-state of the fuel system. Using this automotive data, we tested the robustness of our framework in a real world scenario. \section{Related Work} As elaborated in the previous section, we first aim to perform predictions based on a large subset of attributes to avoid incorrect predictions during sensor failure. Secondly, we aim to augment the training data to enhance the network's ability to identify relevant patterns amidst noisy input data. \textit{Preprocessing} techniques for handling noisy and missing input attributes have been of great interest in the data mining community \cite{redman1997data,zhu2004class,wang1995framework,maletic2000data}. The aforementioned methods have their own strengths and weaknesses. However, in real world applications, we do not know the type of noise that can interfere with the sensor measurements. As mentioned in Section \ref{sec:Introduction}, valid sensor measurements can be stuck-at-zero \cite{elleithy2012innovations} in case of malfunction. Applying imputation techniques to extrapolate these values, as in the case of a missing value problem, is not desirable. Hence, it is not pragmatic to apply these data preprocessing techniques in real world applications \cite{zhu2004class}. \textit{Feature Selection} algorithms predominantly focus on selecting a set of attributes relevant for the prediction task \cite{MolinaBN02,ChandrashekarS14,DBLP:conf/pkdd/ShekarBSSM17}. The recent work on \textit{Relevance and Redundancy} ranking \cite{DBLP:conf/pkdd/ShekarBSSM17} is a feature ranking framework that has experimentally been shown to be robust amidst noisy target labels. However, we focus on building prediction models using a large number of attributes to enhance the robustness of the predictions. Secondly, our application scenario involves noisy input attributes and not noisy target labels. \textit{Multi-view learning} algorithms perform predictions based on multiple attribute subsets. In the case of a failed attribute in one subset, the predictions can be supported by attributes from other subsets. However, existing multi-view approaches \cite{DBLP:conf/dawak/ShekarSM17,oza1999dimensionality} do not discuss the effect of faulty input attributes. Nor are they resistant to multiple sensor failures that can occur across all of the attribute subsets. \textit{Pruning of decision trees} was introduced to avoid over-fitting to noisy training data \cite{quinlan2014c4}.
As classifiers learned from noisy data have less accuracy, pruning may have a very limited effect in enhancing the system's performance, especially when the noise level is relatively high \cite{zhu2004class}. The \textit{Dropout} technique in ANNs is similar to the idea of pruning in decision trees. The regularization technique of dropout aims to eliminate random units of the neural network to avoid over-fitting. However, in this work we use this regularization technique because performing dropout on the inputs is analogous to the real world scenario of sensor failure. The technique of \textit{adding noise} to the training data is reported to enhance the generalization of ANNs by forcing more hidden units to be used \cite{sietsma1991creating}. Hence, to address the second problem of noisy input attributes, we train the prediction model with artificially injected noise in the training data, aiming to enhance its ability to identify relevant patterns amidst noise in the real world. Hence, in contrast to the preprocessing techniques, our work aims to challenge the prediction model during the training phase by forcing it to learn relevant patterns amidst noise. \section{Problem Definition} \label{sec:problem} As explained in Section \ref{sec:Introduction}, we address the first problem of building prediction models with inputs obtained from malfunctioning sensors. Hence, we begin with the formal definition of a faulty sensor. \begin{definition}{Malfunction of sensors} \label{def:malfuntion}\newline Assume a $d$-dimensional attribute space $\mathcal{F}= \{a_1,\cdots,a_{d}\}$, where a subset of sensors $M\subset\mathcal{F}$ is defective. This means that each attribute $a\in M$ is stuck at zero and continuously generates null values. \end{definition} The second problem concerns noise in the sensor data; hence, we also formally define the behavior of a noisy sensor. \begin{definition}{Noisy sensor}\label{def:noisy}\newline Assume a subset of sensors $N\subset\mathcal{F}$ that is subject to intermittent deviations or disturbances. This means that random instances of an attribute $a\in N$ fluctuate to absurd values and deviate from the actual measurements. \end{definition} We denote the accuracy of a prediction model trained using the attribute space as $acc:\mathcal{F}\mapsto\mathbb{R}$. We focus on enhancing the robustness of the predictions such that, in the event of a sensor failure, the accuracy is at least as high as that of a prediction model with all valid measurements: $$acc(\mathcal{F}\mid|M|=0) \leq acc(\mathcal{F}\mid|M|\geq 1).$$ Similarly, in the case of a noisy sensor, $$acc(\mathcal{F}\mid|N|=0) \leq acc(\mathcal{F}\mid|N|\geq 1).$$ \section{Artificial Neural Networks} To obtain a deeper understanding of the dropout technique, it is necessary to revisit the basics of ANNs. ANNs are machine learning algorithms inspired by the biological nervous system and are capable of identifying complex non-linear relationships. Information is processed using a set of highly interconnected nodes, also referred to as neurons. A network of weighted nodes is stacked into multiple layers. At each node, an activation function combines the weighted inputs into a single value. This can effectively limit the signal propagation to the next layers. These weights, therefore, enforce or inhibit the activation of the network's nodes. This process is comparable to feature selection.
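To illustrate the computation just described, the following minimal Python sketch performs a single feed-forward pass; the layer sizes, activation functions and random weights are illustrative placeholders, not our model configuration.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # Each layer combines the weighted inputs into a single value per node;
    # the activation can inhibit propagation to the next layer.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    W, b = weights[-1], biases[-1]
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid output layer

rng = np.random.default_rng(0)
sizes = [100, 64, 32, 1]  # input, two hidden layers, output (illustrative)
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.standard_normal(100), weights, biases))
\end{verbatim}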
Additionally, ANNs require minimal attribute engineering for classification \cite{DBLP:journals/neco/Baxt90,DBLP:journals/cacm/WidrowRL94} and regression~\cite{DBLP:journals/nn/RefenesZF94} problems. This enables ANNs to autonomously identify distinct patterns in the input attributes amidst noise. Hence, with embedded feature selection and the ability to identify distinct patterns with minimal preprocessing, we chose ANNs as an ideal candidate for our experiments. The ANN architecture is typically split into three types of layers: one input layer; one or more hidden layers; and one output layer (cf. Figure \ref{fig:ANN}). The input layer consumes the data. This layer connects to the first hidden layer, which in turn connects either to the next hidden layer (and so on) or to the output layer. The output layer returns the ANN’s predictions. \begin{figure} \vspace{-3mm} \center \includegraphics[width=0.7\textwidth]{ANN.pdf} \caption{Schema of an artificial neural network. Image source: \cite{ANNimage} } \label{fig:ANN} \vspace{-4mm} \end{figure} There are two main types of ANNs based on the flow of information, referred to as Feed-forward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs)~\cite{DBLP:conf/emnlp/ChoMGBBSB14}. In FNNs, the flow of information through the hidden layers is acyclic. On the other hand, with RNNs, the flow of information in the hidden layers can be bi-directional or cyclic. FNNs have been used in many different domains, such as the prediction of medical outcomes \cite{TU19961225}, environmental problems \cite{Maier2000101}, stock market index predictions \cite{Moghaddam201689} and the wear-out of machines \cite{BenAli2015150}. Considering their wide usage in applications analogous to ours, in this work we choose to use FNNs for building our prediction framework. Using the FNN, we aim to address the first challenge defined in Section \ref{sec:Introduction}: to build prediction models that are robust to faulty attributes (cf. Definition \ref{def:malfuntion}). For this we apply the concept of dropout, which we describe next. \subsection{Dropout} \label{subsec:dropout} Dropout is proven to be an effective regularization technique for ANNs~\cite{DBLP:journals/jmlr/SrivastavaHKSS14}. Technically, it prevents the units from co-adapting too much and consequently avoids overfitting while training the network. Dropping or removing a unit implies that both the input and output connections of the neuron are disconnected. In Figure \ref{fig:dropout}, we provide an illustration of networks with fully connected and dropped-out units. The principal idea of dropout involves removing random units from a layer (both hidden and visible) by setting their activations to zero. That is, when applied on the input layer, the activations of the selected neurons are nullified. Therefore, the application of dropout on the input layer is analogous to sensor failure in a real-world scenario (cf. Definition \ref{def:malfuntion}). By training the ANNs with dropout, we indirectly aim to make the network aware of these failures. \begin{figure} \vspace{-3mm} \center \includegraphics[width=0.7\textwidth]{dropout.pdf} \caption{Example of Dropout used in ANN (Image Source: \cite{DBLP:journals/jmlr/SrivastavaHKSS14})} \label{fig:dropout} \vspace{-3mm} \end{figure} The abstract concept of Dropout sounds very similar to the ensemble technique used by Random forests \cite{breiman2001random}.
Random forests aggregate prediction results from multiple views of the data, based on a number of decision trees that use randomly selected subsets of attributes. Similarly, Dropout networks essentially train different networks on multiple subsets of the attributes. However, on a closer look into the details, there are considerable differences between the two (cf. Table \ref{table:differenceRfVsDrop}). \begin{table} \vspace{-12mm} \centering \caption{Differences between Random forest ensembles and Dropout Networks \cite{warde2013empirical,jaquescomparison}} \label{table:differenceRfVsDrop} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c} \toprule Random Forest & Dropout Network \\ \hline \begin{tabular}[c]{@{}c@{}}A large number of decision trees are trained using\\ randomly selected attribute subsets in parallel.\end{tabular} & \begin{tabular}[c]{@{}c@{}}It is an inherently serial process, where\\neurons are dropped out as each training\\sample is processed.\end{tabular} \\ \hline All data samples are used. & A single sample is used to train a model. \\ \hline Each tree has independent parameters. & \begin{tabular}[c]{@{}c@{}}The parameters are shared between networks with\\different neurons dropped.\end{tabular} \\ \hline Arithmetic mean to combine the results. & Equally weighted geometric mean to combine results. \\ \bottomrule \end{tabular} } \vspace{-8mm} \end{table} Dropping random neurons in each iteration enables every hidden unit to learn to identify relevant patterns from a randomly chosen sample of neurons of the preceding layer. This makes each hidden layer robust and drives them to create useful features on their own, without requiring the next layers to correct their mistakes~\cite{DBLP:journals/jmlr/SrivastavaHKSS14}. A recent study also shows that Dropout networks are comparatively more accurate than Random forests for multi-class classification problems \cite{jaquescomparison}. \subsection{Data Augmentation}\label{subsec:dataAugmentataion} As explained in Section \ref{sec:Introduction}, in automotive applications, exposing the sensors to harsh environmental conditions over a prolonged period of time can cause the sensor values to be distorted due to electrical or magnetic interference \cite{dziubinski2016electromagnetic}. Hence, training the machine learning models to identify relevant patterns irrespective of noisy attributes is of paramount importance. To mimic the problem of noisy sensors (cf. Definition \ref{def:noisy}) in real-world applications, we performed data augmentation on our training data. Data augmentation is a concept from the image classification literature~\cite{arandjelovic2012three}. It involves transforming the original data (e.g., rotation, zoom, rescaling and cropping) to avoid overfitting~\cite{2017arXiv171204621P}. For example, to build text-to-speech models, data is collected from unfiltered Web pages containing errors. Rather than using this large unstructured data for learning useful patterns, a small corpus of structured data is extracted, augmented, and then used to train the machine learning model. This technique has also proven to be effective on unfiltered data that contain errors \cite{2017arXiv171204621P}. We adopt the concept of data augmentation and tailor it to address our second challenge (cf. Section \ref{sec:Introduction}), i.e., noisy attributes. We replace random attributes in the dataset with noise.
That is, we deliberately introduce noise into the original training data and then train our models using this transformed dataset. In practical terms, the values of a randomly selected subset of attributes in each instance are replaced with random values obtained from a Gaussian distribution with mean zero and standard deviation of one, i.e., $\mathcal{N}(0,1)$. Hence, by training the models with certain levels of noise, we enhance their robustness against sensor failures in the real world. \section{Methodology} \label{sec:methodology} In Sections \ref{subsec:dropout} and \ref{subsec:dataAugmentataion} we justified the use of dropout and data augmentation to address the problems we are confronted with (cf. Section \ref{sec:problem}). The concepts of dropout and data augmentation emulate the real-life situations of sensor failure and noise, respectively. However, their practical application raises two major questions: \begin{enumerate} \item What magnitude of dropout should be used? \item What level of augmentation should be applied for the transformation of the training data? \end{enumerate} For this, we train multiple models with different levels of input dropout and data augmentation. These models are tested upon test data, and we observe the prediction accuracy on it as a quality measure. We explain the finer details based on the dataset we use. \subsection{Dataset} In this work, we apply the proposed methodology to an automotive dataset. We are provided with a high-dimensional attribute space $\mathcal{F}= \{a_1,\cdots,a_{149}\}$ of 149 attributes and 4 million instances. The attributes are obtained from various sensor sources present in the vehicles. The data also include signals that are calculated in the vehicle hardware using the sensor measurements. The goal is to predict the target classes that represent the health-state of an automotive fuel system. Therefore, we are provided with target labels ($Y$) of nominal values, and the dataset\footnote[3]{Code and data: \url{https://figshare.com/s/d5bcd9b4269afa642e53}} is denoted as $\mathcal{D}=\{\mathcal{F}, Y\}$. Table \ref{table:classDistribution} shows the distribution of the different classes in the dataset. As the data for each health state was obtained from different vehicles, each instance can be seen as a snapshot of the fuel system. In other words, the dataset is not a time series, and health-states are therefore not correlated in time. For such stationary datasets, FNNs are a preferable choice in comparison to RNNs. \begin{table} \vspace{-6mm} \caption{Distribution of the classes in the dataset}\label{table:classDistribution} \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{lrr} \toprule Class & Health state & Class distribution \\ \midrule Class 1 & 0\% & 9.96\% \\ Class 2 & 10\% & 13.98\% \\ Class 3 & 20\% & 3.6\% \\ Class 4 & 40\% & 4.6\% \\ Class 5 & 60\% & 12.8\% \\ Class 6 & 80\% & 47.06\%\\ Class 7 & 100\% & 7.9\%\\ \bottomrule \end{tabular} } \end{table} The dataset is split into two parts for training and testing purposes, based on the chronology of the data collection. That is, training is performed using the data collected at a specific time of the year (e.g., January) and testing is performed on a dataset collected at a different time (e.g., August). Both the train and test datasets were standardized by subtracting the mean and dividing by the standard deviation. This is also referred to as a z-score or standard score.
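A minimal sketch of this standardization step is given below (Python with NumPy; the function names are ours, and we assume here, as is common practice, that the statistics computed on the training split are reused for the test split):
\begin{verbatim}
import numpy as np

def zscore_fit(X_train):
    # Column-wise mean and standard deviation of the training data.
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0)
    sd[sd == 0.0] = 1.0  # guard against constant attributes
    return mu, sd

def zscore_apply(X, mu, sd):
    # Standard score: subtract the mean, divide by the standard deviation.
    return (X - mu) / sd
\end{verbatim}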
The training dataset is used to train 7 different networks, each with a different magnitude of input dropout. For example, $Model~D2$ denotes an ANN model with a dropout of 20 nodes in the input layer. Similarly, we instantiate multiple networks ($Model~D2, Model~D4,\ldots,Model~D14$) with varying dropout levels of $20, 40, \ldots, 140$ attributes, respectively. Given an ANN architecture and a dropout level, dropout can be applied between any two consecutive layers. Nevertheless, we aim to study the influence of dropout between the input and the first hidden layer. This implicitly means that each model is trained to predict with a different number of faulty sensors. However, a constant dropout rate of $50\%$ was still used in the hidden layers for regularization purposes. Dropping a neuron technically means setting its activation to zero. Hence, we transform the original dataset to mimic the dropout process in the input layer by setting the corresponding attribute values to zero. The reason for setting attribute values to zero, instead of using dropout in the input layer of the ANNs, is that it allows us to simulate an equivalent dropout in the test dataset as well. The corresponding test datasets are denoted as $DTest2, DTest4,\ldots,DTest14$. Moreover, this experimental setting is comparable to the problem of a failed sensor that is stuck-at-zero (cf. Definition \ref{def:malfuntion}). For simplicity we refer to the original train and test datasets as $D0$ and $DTest0$, respectively. The goal of the experiment is to identify the level of dropout that has the maximal accuracy on the unseen test data. \vspace{-1mm} \begin{algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{Algorithm for injection of noise into data} \begin{algorithmic}[1] \Require $\mathcal{F}, \alpha \in \{0,20,40,\cdots,140\}$ \State $\mathcal{I}=\{1,...,149\}$ \Comment{Set of attribute indices} \ForEach {$Instance~i$} \State $Select~random~subset~of~attribute~indices~\mathcal{I}'\subset \mathcal{I},~where~\mid\mathcal{I}'\mid=\alpha$ \State $Replace~instance~i~of~attribute~a_j\in\mathcal{F}\mid\forall j \in \mathcal{I}'~with~values~from~\mathcal{N}(0,1)$ \label{line:Replace} \EndFor \end{algorithmic} \label{alg:noiseApproach} \end{algorithm} \vspace{-3mm} In the case of augmentation, injecting noise into all instances of a single subset of attributes is not challenging for the network, because the ANN will simply neglect these attributes during training by inhibiting the corresponding network nodes. Hence, for each instance of the attribute space $\mathcal{F}$, a random attribute subset of size $\alpha \in \mathbb{Z}$ (where $0\leq\alpha<|\mathcal{F}|$) is selected and replaced with random values from a Gaussian distribution (cf. Algorithm \ref{alg:noiseApproach}). In our experiments, $V0, V2, V4,\ldots,V14$ denote different variants of the training data with $\alpha \in \{0,20,40,\cdots,140\}$, respectively. For example, $V2$ represents a dataset where, for each instance, 20 random attributes of the training data are replaced by random numbers from a Gaussian distribution. By applying this transformation, our goal is to imitate the real-world scenario of noisy sensors and analyze the influence of different noise levels in the input attributes. The corresponding transformation is also applied to the test data, denoted as $VTest0, VTest2, VTest4,\cdots,VTest14$.
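A possible NumPy realization of these transformations is sketched below; the function name and signature are ours. The mode \texttt{zero} reproduces the stuck-at-zero input-drop datasets ($D$*), \texttt{replace} implements Algorithm \ref{alg:noiseApproach} (the $V$* datasets), and the additive variant \texttt{add} anticipates the white-noise test sets introduced next:
\begin{verbatim}
import numpy as np

def perturb(X, alpha, mode="replace", rng=None):
    # X: (n_instances, n_attributes) data matrix; alpha: subset size.
    rng = np.random.default_rng() if rng is None else rng
    X_out = X.copy()
    n, d = X.shape
    for i in range(n):
        idx = rng.choice(d, size=alpha, replace=False)  # random subset I'
        if mode == "zero":               # stuck-at-zero sensors (D*)
            X_out[i, idx] = 0.0
        elif mode == "replace":          # Algorithm 1 (V*)
            X_out[i, idx] = rng.standard_normal(alpha)
        else:                            # additive white noise (W*)
            X_out[i, idx] += rng.standard_normal(alpha)
    return X_out
\end{verbatim}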
In electrical applications, white noise is also a commonly observed anomaly in the sensor measurements. Hence, we also generate test datasets with white noise, i.e., $WTest2, WTest4,\ldots,WTest14$. For the generation of data with white noise, we follow the same sequence of steps explained in Algorithm \ref{alg:noiseApproach}. However, instead of replacing (cf. Line \ref{line:Replace} in Algorithm \ref{alg:noiseApproach}), we add random values from $\mathcal{N}(0,1)$ to the valid measurements of an instance. As a rule of thumb, all experiments in the forthcoming section will use an FNN architecture with: an input layer of 149 neurons, three hidden layers of 128, 256 and 128 neurons, and an output layer of 7 neurons. \section{Experimental Results} As described in Section~\ref{sec:methodology}, we have four types of data: train data with dropout, test data with dropout, train data with noise and test data with noise. To test the influence of dropout and noisy attributes on the test data accuracy, we begin with an individual analysis of each technique. \subsection{Input drop} \label{subsec:inputdropExp} In this section, we experiment using ANNs trained with different levels of dropout. In the first experiment, we trained multiple networks with the datasets $D0, D2, D4,\ldots,D14$. Each of these models was then evaluated on all test datasets that were subjected to the same input drop process, denoted as $DTest0, DTest2,\ldots, DTest14$, respectively. The results are illustrated in Figure~\ref{img:inputdrop_inputdrop}. The network trained with the original data, i.e., $Model~D0$, is accurate when tested on datasets with low or no dropout, i.e., $DTest0$ and $DTest2$. From this point onwards, its accuracy declines steeply with an increasing number of dropped inputs in the test dataset, until it reaches an accuracy of 0.5 for $DTest14$. Interestingly, we observe that the models which were trained on datasets with a larger number of dropped inputs are comparatively more robust to test data with a large number of dropped inputs. \begin{figure} \vspace{-6mm} \centering \includegraphics[trim=0 0 0 0.58cm, clip, scale=0.7]{ann_Dropout_Dropout.pdf} \vspace{-2mm} \caption{Accuracy (y axis) of different models trained using input drop data. The accuracy was calculated for each test data (x axis) with different levels of dropout ($DTest0,...,DTest14$).} \label{img:inputdrop_inputdrop} \vspace{-6mm} \end{figure} Moreover, they also maintain a high accuracy on test datasets that have more dropped inputs than the one used for training. From the experimental analysis, we observe that the average of all test data accuracies using $Model~D8$ is higher than that of the other models. It is therefore much more robust than $Model~D0$ with no dropped units. Let us assume $Model~D8$ is used in a real-world scenario to predict the health of the fuel system. In spite of the failure of 100 sensors ($DTest10$) that are used as input attributes for the prediction model, the predictions will still have an approximate accuracy of 0.85. Hence, the idea of dropout helps us to tackle the problem of failed sensors in real-world prediction systems (cf. Section \ref{sec:problem}). \subsection{Input noise} \label{subsec:inputnoise} The above dropout experiment does not solve our problem completely because a noisy sensor will not be seen as missing data. Instead, it will give us a wrong measurement. For this reason, we performed a second experiment, where we tested the input dropout models, i.e., $Model~D0, Model~D2,\ldots,Model~D14$, on scenarios where the data has faulty measurements.
That is, we tested the dropout models on test data obtained from the input noise approach, viz., $VTest0, VTest2, \ldots, VTest14$. The behavior of the models is visually represented in Figure~\ref{img:inputdrop_noisedrop}. In comparison to the previous experiment (cf. Figure \ref{img:inputdrop_inputdrop}), all the models perform worse, as the decline in accuracy happens much earlier in Figure~\ref{img:inputdrop_noisedrop}. This is not surprising, because the training was performed with the dropout technique and without noise, while the testing was performed with noisy data. Hence, the network is unaware of the noise in the test data. Nevertheless, by comparing the behavior of $Model~D0$ with $Model~D8$ and $Model~D10$, we observe that training models with input drop helps them to be more robust to noisy measurements, with $Model~D8$ having the best performance in terms of accuracy. \begin{figure} \vspace{-4mm} \centering \includegraphics[trim=0 0 0 0.58cm, clip, scale=0.7]{ann_Dropout_Dropnoise.pdf} \vspace{-2mm} \caption{Accuracy (y axis) of different models trained on input drop data. The accuracy was measured for each test data (x axis) with different levels of noise ($VTest0,...,VTest14$).} \label{img:inputdrop_noisedrop} \vspace{-0.5cm} \end{figure} To make the network aware of noisy attributes, we perform a third experiment. In the third experiment, we trained our models with the augmented dataset variants that include different levels of noise in the input data, i.e., $V0, V2,\ldots,V14$. The corresponding networks trained using these datasets are denoted as $Model~V0, Model~V2,\ldots,Model~V14$. These models were validated on the test data $VTest0, VTest2,\cdots,VTest14$ that underwent a similar transformation (cf. Algorithm \ref{alg:noiseApproach}). The results are plotted in Figure~\ref{img:noisedrop_noisedrop}. \begin{figure} \vspace{-6mm} \centering \includegraphics[trim=0 0 0 0.58cm, clip, scale=0.7]{ann_Dropnoise_Dropnoise.pdf} \vspace{-2mm} \caption{Accuracy (y axis) of different models trained on input noise data. The accuracy was measured for each test data (x axis) transformed with the same input noise approach, with different levels of noise ($VTest0,...,VTest14$).} \label{img:noisedrop_noisedrop} \vspace{-4mm} \end{figure} In Figure~\ref{img:noisedrop_noisedrop} we observe that $Model~V6$ and $Model~V8$ have very similar behaviors. For example, $Model~V8$ is able to predict with an accuracy of 0.88 even when 40 sensor measurements are noisy. This represents around $25\%$ of the entire set of inputs. On the other hand, on test datasets with higher levels of noise, like $VTest14$, $Model~V6$ and $Model~V8$ are unable to predict with high accuracy. Moreover, when comparing Figures~\ref{img:inputdrop_noisedrop} and \ref{img:noisedrop_noisedrop}, the results indicate that the best way to deal with noisy sensors is to train the ANN with reasonable levels of noise. This makes the models more robust to defective sensor data in the real world. \begin{figure}[H] \vspace{-2mm} \centering \includegraphics[trim=0 0 0 0.58cm, clip, scale=0.7]{ann_Dropnoise_Dropout.pdf} \vspace{-2mm} \caption{Accuracy (y axis) of networks trained using various levels of noise in training data and tested on datasets with varying levels of input dropout.} \label{img:ann_Dropnoise_Dropout} \end{figure} Practically, our idea of injecting noise involves replacing instances of the attribute space with random values from a Gaussian distribution, and these random values can also be (close to) zero.
For this reason, the noise models trained on the data $V2,\ldots,V14$ also perform with a high accuracy on test datasets with input dropouts (cf. Figure \ref{img:ann_Dropnoise_Dropout}). Here too, we observe that $Model~V8$ and $Model~V10$ perform best in comparison to the model trained with no random noise ($Model~V0$). Similarly, these models were robust on test data with white noise. For example, in Figure \ref{img:ann_Dropnoise_DropWhiteNoise}, for test data with extreme levels of white noise, i.e., $WTest14$, the accuracy of the models trained with our random noise (e.g., $Model~V8$) is better in comparison to the model trained using the original data ($Model~V0$). \begin{figure} \vspace{-2mm} \centering \includegraphics[trim=0 0 0 0.58cm, clip, scale=0.7]{ann_Dropnoise_DropWhiteNoise.pdf} \vspace{-2mm} \caption{Accuracy (y axis) of networks trained using various levels of random noise in training data and tested on datasets with varying levels of white noise.} \label{img:ann_Dropnoise_DropWhiteNoise} \end{figure} Overall, our observation is that our proposed idea of injecting random noise into the instances of random features (cf. Algorithm \ref{alg:noiseApproach}) enhances the robustness of the prediction model with malfunctioning and noisy sensors as inputs. \section{Conclusions and Future Work} Bosch faces the challenge of generating prediction models with noisy and defective input attributes for applications such as predictive diagnostics. The models initially developed by Bosch using different classification algorithms produced very accurate results. However, a closer analysis showed that all these different prediction models relied on the same set of sensor data. Performing predictions with a single set of relevant sensors was not robust in the presence of faulty sensor data. Hence, we proposed and tested two approaches to tackle this problem. One approach (Input drop) uses the Dropout technique from ANNs in the input layer to make the model more robust against defective sensors. The second approach (Input noise) introduces noise into the training datasets, which can be seen as a way of simulating noisy sensors. Based on our observations, the best level of dropout is between 60 and 80 attributes (i.e., between $40\%$ and $50\%$ of the attributes). As for the right level of augmentation, the results indicate that model $V6$ (i.e., around $40\%$ of the attributes) is ideal in terms of noisy and missing sensor data. While the major advantages of ANNs are the effective and efficient modeling of complex non-linear systems, one downside is that training a model usually incurs high computational and storage costs. On the other hand, once an ANN is trained, it requires little effort to process the data. This way, such a system could be deployed in vehicles in a simple way. As future work, we intend to study whether this approach can be generalized to other application domains where sensor data are partially missing or faulty. \section*{Acknowledgments} This research has received funding from the Electronic Components and Systems for European Leadership (ECSEL) Joint Undertaking, under the framework programme for research and innovation Horizon 2020 (2014-2020), grant agreement number \emph{662189-MANTIS-2014-1}. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research. \bibliographystyle{abbrv}
{ "timestamp": "2018-04-17T02:14:19", "yymm": "1804", "arxiv_id": "1804.05544", "language": "en", "url": "https://arxiv.org/abs/1804.05544" }
\section{Introduction} The study of ensembles of self-sustained (autonomous) oscillators is relevant for many setups in physics (Josephson junction and spin-torque oscillator arrays~\cite{Benz-Burroughs-91,*Pikovsky-13,*Turtle_etal-17}), engineering (stability of pedestrian bridges, coupled electronic oscillators, power grid networks~\cite{Dallard-01a,*Temirbayev_et_al-12,*Filatrella_etall-08}), chemistry (ensembles of electrochemical oscillators~\cite{Kiss-Zhai-Hudson-02a}), and life sciences (colonies of yeast cells, synthetic gene oscillators~\cite{Richard-Bakker-Teusink-Van-Dam-Westerhoff-96,*Prindle_etal-12}). Of major interest are collective phenomena, like synchronization, in these ensembles. Many essential properties can already be described within simple phase models valid at small coupling. In this limit, only the dynamics of the phases of the oscillators is nontrivial, while the amplitudes are enslaved. The paradigmatic model here is the Kuramoto model of sine-coupled phase oscillators~\cite{Kuramoto-75}, demonstrating a nonequilibrium transition from asynchrony to synchrony~\cite{Kuramoto-84}. Recently, remarkable progress has been achieved in the description of the dynamics of order parameters for populations of phase oscillators. Watanabe and Strogatz (WS)~\cite{Watanabe-Strogatz-93,*Watanabe-Strogatz-94} demonstrated partial integrability of a class of phase ensembles, allowing reduction of the full dynamics of a population of $N$ identical elements to a three-dimensional set of equations for certain global variables, plus $N-3$ constants of motion. In the thermodynamic limit, where the number of constants of motion tends to infinity, this integrability means invariance in time of the density of the constants. The global variables have an especially transparent form if the density of the constants is uniform---in this case one obtains a closed dynamical system for the natural order parameters of the system. These equations have been obtained by Ott and Antonsen (OA)~\cite{Ott-Antonsen-08} with another method; see Refs.~\cite{Pikovsky-Rosenblum-08,Marvel-Mirollo-Strogatz-09} for the interrelation of the two approaches. In the full state space, the OA equations are valid on a particular OA manifold, which is only neutrally stable for identical oscillators (due to WS integrability), but becomes weakly stable if one performs coarse graining for nonidentical oscillators~\cite{Ott-Antonsen-09,*Mirollo-12,*Pietras-Daffertshofer-16}. The simplicity of the OA equations has made them a popular tool in studies of many setups, like coupled ensembles~\cite{Martens_etal-09,*Komarov-Pikovsky-11,*So-Barreto-11,*Komarov-Pikovsky-13}, chimera states~\cite{Abrams-Mirollo-Strogatz-Wiley-08,Laing-09,*Bordyugov-Pikovsky-Rosenblum-10} consisting of synchronized and partially synchronous parts, and common-noise driven~\cite{Nagai-Kori-10,*Braun-etal-12}, excitable, and non-homogeneous phase oscillators~\cite{Laing-12,Laing-14,*Luke-Barreto-So-14,*Montbrio-Pazo-Roxin-15}. The main goal of this letter is to extend the OA theory to the case of noisy oscillators. For small noise, in the leading order in the noise intensity, we will derive and analyze a closed dynamical system for the two order parameters describing populations of noisy phase oscillators. Our main tool is the reformulation of the full dynamics in terms of circular cumulants, which are related to the complex order parameters in the same way that cumulants of distributions of real random variables are related to their moments.
We stress here that, because the complex order parameters are moments of a complex observable defined on the unit circle, circular cumulants have nothing in common with the Gaussian approximations sometimes used in theories of collective dynamics~\cite{Zaks-etal-03}. We start with an ensemble of phase oscillators $\varphi_k(t)$ having the same natural frequency $\Omega$, subject to a common complex-valued external force $h(t)$, and to intrinsic (not common) noise: \begin{equation} \dot\varphi_k=\Omega+\mathrm{Im}(2h(t)e^{-i\varphi_k})+{\sigma}{\xi_k(t)}\,, \label{eq101} \end{equation} where $\xi_k$ are independent white Gaussian noises: $\langle\xi_k(t)\rangle=0$, $\langle\xi_k(t) \xi_m(t')\rangle=2\delta_{km} \delta(t-t')$. In most applications the force itself depends on the phases via a mean-field coupling, but to develop the theory it is convenient to write it in a general form. In the thermodynamic limit of an infinite ensemble, its state can be described by the distribution density $w(\varphi,t)$, which obeys the Fokker-Planck equation \begin{equation} \frac{\partial w}{\partial t} +\frac{\partial}{\partial\varphi}((\Omega-ihe^{-i\varphi}+ih^\ast e^{i\varphi})w) ={\sigma^2}\frac{\partial^2 w}{\partial\varphi^2}\,. \label{eq102} \end{equation} It is convenient to introduce the Fourier modes, $w(\varphi,t)=(2\pi)^{-1}[a_0+\sum_{j=1}^{\infty}(a_je^{-ij\varphi}+c.c.)]$, with $a_0=1$ due to the normalization condition, and to write an infinite system of equations for their evolution (cf.~\cite{Laing-12}) \begin{equation} \dot{a}_{j}=ji\Omega a_j+jh{a}_{j-1}-jh^\ast{a}_{j+1} -{\sigma^2}j^2{a}_j\,,\quad j\geq 1\;. \label{eq113} \end{equation} The complex quantities $a_j=\langle e^{i j\varphi}\rangle$ are nothing but the Kuramoto-Daido order parameters~\cite{Daido-96} of the ensemble. In the case of a population of non-identical oscillators with different natural frequencies $\Omega$ (we assume that the forcing $h$ is still a common one), one can consider the Fourier modes $a_j(\Omega,t)$ as functions of $\Omega$. It is natural to introduce order parameters averaged over the frequencies $\Omega$ with the distribution $g(\Omega)$: $Z_j=\int g(\Omega)\,a_j(\Omega,t)\,d\Omega$. This averaging takes an especially simple form for the Lorentzian distribution of frequencies $g(\Omega)=\frac{\gamma}{\pi((\Omega-\Omega_0)^2+\gamma^2)}$, where $\gamma$ is the characteristic half-width of the natural frequency band around the central frequency $\Omega_0$. With the assumption that all $a_j(\Omega,t)$ are analytic in the upper half-plane of the complex $\Omega$-plane (if they are analytic at a certain time instant $t_\ast$, they will remain analytic for $t>t_\ast$; see \cite{Ott-Antonsen-08} for details), the integral can be evaluated via residues: $Z_j(t)=a_j(\Omega_0+i\gamma,t)$. As a result, we obtain an infinite system of equations for the order parameters $Z_j$: \begin{equation} \dot{Z}_{j}=j(i\Omega_0 -\gamma) Z_j+jh{Z}_{j-1}-jh^\ast{Z}_{j+1} -{\sigma^2}j^2{Z}_j\,. \label{eq124} \end{equation} We stress that the analyticity property is important only for the reduction of the sets of Fourier modes to single values calculated at a pole in the complex plane. For an ensemble of identical oscillators (which formally corresponds to $\gamma=0$) and for frequency distributions which cannot be integrated via residues, analyticity is irrelevant, and our method below does not rely on this property.
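Although the hierarchy \eqref{eq124} is infinite, it can be integrated numerically after truncation. The following minimal sketch (Python with NumPy; the truncation order, time step, and parameter values are illustrative, not those used in our figures) integrates the equations for $Z_j$ with the Kuramoto-type mean field $h=\frac{\varepsilon}{2}Z_1$ considered later in this letter, setting $Z_{M+1}=0$:
\begin{verbatim}
import numpy as np

def zdot(Z, h, omega0, gamma, sigma2):
    # Right-hand side for Z_1..Z_M; Z[j-1] stores Z_j, and Z_{M+1} = 0.
    M = Z.size
    Zminus = np.concatenate(([1.0 + 0.0j], Z[:-1]))  # Z_0, Z_1, ..., Z_{M-1}
    Zplus = np.concatenate((Z[1:], [0.0 + 0.0j]))    # Z_2, ..., Z_M, 0
    j = np.arange(1, M + 1)
    return (j * (1j * omega0 - gamma) * Z + j * h * Zminus
            - j * np.conj(h) * Zplus - sigma2 * j**2 * Z)

M, dt, eps = 200, 1e-3, 1.0
omega0, gamma, sigma2 = 0.0, 0.05, 0.01
Z = np.full(M, 1e-3 + 0.0j)            # weakly coherent initial state
for _ in range(100000):                # plain Euler stepping, up to t = 100
    h = 0.5 * eps * Z[0]               # Kuramoto mean field
    Z = Z + dt * zdot(Z, h, omega0, gamma, sigma2)
print(abs(Z[0]))                       # modulus of the main order parameter
\end{verbatim}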
In the noise-free case $\sigma=0$, the system \eqref{eq124} possesses an invariant two-dimensional manifold, called the Ott-Antonsen manifold~\cite{Ott-Antonsen-08}: $Z_n=(Z_1)^n$. The resulting equation for the main order parameter $Z_1$ has a simple form \begin{equation} \dot{Z}_1=(i\Omega_0 -\gamma) Z_1+h-h^\ast{Z}_{1}^2\;. \label{eqOA} \end{equation} On the OA manifold, all the order parameters are determined by $Z_1$, and hence $h$ is a function of $Z_1$; therefore, Eq.~\eqref{eqOA} is a closed equation, the dynamics of which is easy to analyze. This has made the OA ansatz popular in many different setups. Our goal in this letter is to develop a low-dimensional description of the dynamics of the order parameters in the presence of weak noise $\sigma\neq 0$. The main idea is to reformulate the general equations \eqref{eq124} in a cumulant form, more suitable for a perturbation approach. The order parameters $Z_n=\langle e^{in\varphi}\rangle$ can be treated as moments of the observable $e^{i\varphi}$, and they can be obtained from the moment-generating function \begin{equation} F(k,t)= \langle\exp(ke^{i\varphi})\rangle \equiv\sum_{m=0}^{\infty}Z_m(t)\frac{k^m}{m!} \label{eq201} \end{equation} as $Z_m(t)=\frac{\partial^m}{\partial k^m}F(k,t)|_{k=0}$. The partial differential equation for $F$ follows directly from Eq.~\eqref{eq124}: \begin{equation} \begin{aligned} \frac{\partial F}{\partial t}=&(i\Omega_0-\gamma) k\frac{\partial}{\partial k}F +hkF -h^\ast k\frac{\partial^2}{\partial k^2}F\\& -\sigma^2k\frac{\partial}{\partial k}\left(k\frac{\partial}{\partial k}F\right). \end{aligned} \label{der01} \end{equation} As in many other situations, it appears beneficial to introduce \textit{circular cumulants} $\varkappa_m$ via the power series of the cumulant-generating function defined as~\cite{footnote1} \begin{equation} \varPsi(k,t)= k\frac{\partial}{\partial k} \ln F(k,t)= \frac{k}{F}\frac{\partial F}{\partial k} \equiv \sum_{m=1}^{\infty}\varkappa_{m}(t)\,k^m\,. \label{eq:cce} \end{equation} For example, the first three circular cumulants are: \[ \varkappa_1=Z_1,\quad\varkappa_2=Z_2-Z_1^2,\quad \varkappa_3=\frac{1}{2}(Z_3-3Z_2Z_1+2Z_1^3)\;. \] The partial differential equation for $\varPsi(k,t)$ can be derived by applying the operator $\partial_t$ to \eqref{eq:cce} and employing \eqref{der01}: \begin{align} \frac{\partial\varPsi}{\partial t}& =(i\Omega_0-\gamma)k\frac{\partial\varPsi}{\partial k} +hk -h^\ast k\frac{\partial}{\partial k} \left(k\frac{\partial}{\partial k}\left(\frac{\varPsi}{k}\right) +\frac{\varPsi^2}{k}\right) \nonumber\\ & \qquad -\sigma^2k\frac{\partial}{\partial k}\left(k\frac{\partial\varPsi}{\partial k}+\varPsi^2\right). \label{der03} \end{align} The infinite system of equations for the circular cumulants can be obtained directly from (\ref{eq:cce},\ref{der03}): \begin{equation} \begin{aligned} \dot\varkappa_j&=j(i\Omega_0 -\gamma)\varkappa_{j}+h\delta_{j1} \\ & -h^\ast(j^2\varkappa_{j+1}+j\sum_{m=1}^{j}\varkappa_{j-m+1}\varkappa_{m}) \\& \quad -{\sigma^2}(j^{2}{\varkappa}_{j}+j\sum_{m=1}^{j-1}\varkappa_{j-m}\varkappa_{m})\,. \end{aligned} \label{der05} \end{equation} The advantage of the circular cumulants lies in the simple representation of the Ott-Antonsen manifold: it corresponds to the case where all the higher cumulants vanish, $\varkappa_m=0$ for $m>1$, and the only nontrivial cumulant is $\varkappa_1=Z_1$. In this case the generating functions are $\varPsi(k,t)=k Z_1(t)$ and $F(k,t)=\exp[k Z_1]$.
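The moment-to-cumulant transformation implied by the definition \eqref{eq:cce} is easy to implement. The sketch below (Python; the function name is ours) uses the recursion $\varkappa_m = Z_m/(m-1)! - \sum_{j=1}^{m-1}\varkappa_j Z_{m-j}/(m-j)!$, which follows from $k\,\partial_k F=\varPsi F$, and verifies numerically that on the OA manifold $Z_m=(Z_1)^m$ all the cumulants above the first one indeed vanish:
\begin{verbatim}
import numpy as np
from math import factorial

def circular_cumulants(Z):
    # Cumulants kappa_1..kappa_n from the moments Z_1..Z_n (Z_0 = 1),
    # via kappa_m = Z_m/(m-1)! - sum_j kappa_j Z_{m-j}/(m-j)!.
    n = len(Z)
    Zfull = [1.0 + 0.0j] + list(Z)
    kappa = []
    for m in range(1, n + 1):
        s = Zfull[m] / factorial(m - 1)
        for j in range(1, m):
            s -= kappa[j - 1] * Zfull[m - j] / factorial(m - j)
        kappa.append(s)
    return np.array(kappa)

Z1 = 0.4 + 0.3j
print(circular_cumulants([Z1**m for m in range(1, 6)]))
# -> [Z1, ~0, ~0, ~0, ~0]
\end{verbatim}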
One can easily check that these generating functions are invariant solutions of Eqs.~\eqref{der01} and \eqref{der03} for vanishing noise $\sigma=0$, provided $Z_1$ evolves according to Eq.~\eqref{eqOA}. For non-vanishing noise, generally all the cumulants are non-zero. However, for small noise one can expect the cumulants of orders higher than one to be small. To reveal the hierarchy of this smallness, it is instructive to write explicitly the equations for the cumulants $\varkappa_2$ and $\varkappa_3$: \begin{align} \dot\varkappa_2&=2(i\Omega_0-\gamma)\varkappa_2-4 h^\ast \varkappa_3-4 h^\ast \varkappa_1\varkappa_2\nonumber\\ &\quad-4\sigma^2 \varkappa_2 -2\sigma^2\varkappa_1^2\;, \label{eq:c2}\\ \dot\varkappa_3&=3(i\Omega_0-\gamma)\varkappa_3-9h^\ast \varkappa_4- h^\ast [6\varkappa_1\varkappa_3+3\varkappa_2^2]\nonumber\\ &\quad-9\sigma^2 \varkappa_3 -6\sigma^2 \varkappa_1\varkappa_2\;.\label{eq:c3} \end{align} On the r.h.s.\ of Eq.~\eqref{eq:c2} there are ``homogeneous'' terms $\propto\varkappa_2$ and ``driving'' terms $\sim\varkappa_3$ and $\sim\sigma^2\varkappa_1^2$. If we assume that the higher-order cumulants are smaller than the lower-order ones, then the term $\sim\sigma^2\varkappa_1^2$ determines the level of the cumulant $\varkappa_2$, which appears to be $\varkappa_2\sim\sigma^2\varkappa_1^2$. A similar inspection of Eq.~\eqref{eq:c3} yields the leading ``driving'' terms $\sim \varkappa_2^2$ and $\sim\sigma^2\varkappa_1\varkappa_2$, both of order $\sim\sigma^4$. Thus we conclude that the smallness of the third cumulant is $\sim\sigma^4$. Inspection of the full system~\eqref{der05} shows that the assumption $|\varkappa_m|\sim\sigma^{2(m-1)}$ is consistent with the dynamics in all orders. A more detailed analysis of the hierarchy of cumulants will be reported elsewhere; below we exploit the simplest approximation, where we assume all the cumulants above the second one to vanish. As follows from the above discussion, the accuracy of this approximation is $\mathcal{O}(\sigma^4)$. As a result, we obtain a closed system of equations for the first and the second cumulants (for simplicity of notation, we omit the index of $Z_1$ and denote $\kappa=\varkappa_2$): \begin{equation} \begin{aligned} \dot{Z}&=(i\Omega_0-\gamma)Z+h-h^\ast Z^2-\sigma^2 Z- h^\ast\kappa\;,\\ \dot{\kappa}&=2(i\Omega_0-\gamma)\kappa-4h^\ast Z\kappa-\sigma^2 (4\kappa+2Z^2)\;. \end{aligned} \label{eq2c} \end{equation} This system of two equations for the two complex order parameters $Z$ and $\kappa=Z_2-Z_1^2$ generalizes the Ott-Antonsen equation~\eqref{eqOA} to the case of small noise. Below we will explore it in different setups. It is instructive to examine the perturbation of the OA probability density corresponding to a single nonvanishing second circular cumulant $\kappa$. With two nonvanishing cumulants, the moment-generating function is $F(k)=\exp\big[kZ+\kappa\frac{k^2}{2}\big]$. Assuming smallness of $\kappa$, we approximate it as $F(k)\approx (1+\kappa\frac{k^2}{2})\exp[kZ]$, and obtain for the moments $Z_m=Z^m+\frac{m(m-1)}{2}\kappa Z^{m-2}$. Summation of the Fourier series with these Fourier coefficients yields $w(\varphi)=w_{OA}(\varphi)+w_C(\varphi)$, where \begin{equation} w_{OA}(\varphi)=\frac{1-|Z|^2}{2\pi|e^{i\varphi}-Z|^2}\,,\quad w_C(\varphi)=\text{Re}\!\left[\frac{\pi^{-1}\kappa e^{i\varphi}}{\left(e^{i\varphi}-Z\right)^3}\right].
\nonumber \end{equation} The perturbation $w_C$ is a function of the relative phase $\varphi-\text{arg}(Z)$ and depends on the parameter $\Theta=\text{arg}(\kappa)-2\text{arg}(Z)$. As a first application of the theory, we consider the effect of noise on the standard Kuramoto problem, where the ensemble is driven by the mean field itself, i.e.\ $h=\frac{\varepsilon}{2}Z$, where $\varepsilon$ is the coupling constant. By transforming to a reference frame rotating with frequency $\Omega_0$, we can set $\Omega_0=0$. Now the system~\eqref{eq2c} takes the form \begin{equation} \begin{aligned} \dot{Z}&=-\gamma Z+\frac{\varepsilon}{2}Z(1-|Z|^2) -\sigma^2Z-\frac{\varepsilon}{2}Z^\ast\kappa\;,\\ \dot{\kappa}&=-2\gamma\kappa-2\varepsilon|Z|^2\kappa -\sigma^2(4\kappa+2Z^2)\;. \end{aligned} \label{eq2cK} \end{equation} The dynamics of this model is simple: above the instability threshold of the asynchronous state $Z=\kappa=0$, which is $\varepsilon_c=2(\gamma+\sigma^2)$, the system evolves to a stable steady state \begin{equation} \begin{aligned} |Z|^2&=\frac{1}{2}-\frac{3(\gamma+\sigma^2)}{2\varepsilon}\\ &\quad+\frac{\sqrt{(\varepsilon-\gamma)^2+2\sigma^2(\varepsilon-3\gamma)-7\sigma^4}}{2\varepsilon}\;,\\ \kappa&=\frac{-\sigma^2 Z^2}{\gamma+2\sigma^2+\varepsilon|Z|^2} \;. \end{aligned} \label{eq:Kss} \end{equation} Comparison of this solution with the numerical solution of the full Eqs.~\eqref{eq124} in Fig.~\ref{fig1} shows that the approximation \eqref{eq2cK} indeed has accuracy $\sim\sigma^4$ in the whole range of $\gamma$, including the case of identical oscillators, $\gamma=0$. \begin{figure}[!thb] \centerline{ \includegraphics[width=0.97\columnwidth]{tyulkina-etal-rev-fig1.eps} } \caption{(color online) Error of the approximate solution~\eqref{eq:Kss} as a function of $\sigma^2$, for $\varepsilon=1$ and different values of the parameter $\gamma$: red open squares, $\gamma=0.2$; green open circles, $\gamma=0.05$; blue open triangles, $\gamma=0$. The dashed line shows the theoretical prediction $\sim\sigma^4$. We also show predictions of the simplified model where one sets $\kappa=0$, with the corresponding filled markers. The error of this approximation is $\sim\sigma^2$ (dotted line).} \label{fig1} \end{figure} In this figure we also show predictions of the simplest approximation, where one sets all the circular cumulants except the first one to zero. In this approximation, the system~\eqref{eq2c} reduces to the single equation $\dot Z=(i\Omega_0-\gamma)Z+h-h^\ast Z^2-\sigma^2 Z$. The solution of the Kuramoto model then reads $|Z|^2=(\varepsilon-2\gamma-2\sigma^2)/\varepsilon$. As follows from a comparison with solution~\eqref{eq:Kss}, the error of this approximation is of order $\sim\sigma^2$; this is confirmed by the calculations presented in Fig.~\ref{fig1}. Thus, this approximation, although it leads to very simple equations, does not capture the effect of noise in the leading order, contrary to the system~\eqref{eq2c}.
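The steady state \eqref{eq:Kss} is easily verified numerically; the minimal sketch below (Python; plain Euler stepping, with illustrative parameter values) integrates system \eqref{eq2cK} and compares the asymptotic $|Z|^2$ with the analytic expression:
\begin{verbatim}
import numpy as np

def kss(eps, gamma, sigma2):
    # Analytic steady-state |Z|^2 of the two-cumulant Kuramoto model.
    return (0.5 - 1.5 * (gamma + sigma2) / eps
            + np.sqrt((eps - gamma)**2 + 2 * sigma2 * (eps - 3 * gamma)
                      - 7 * sigma2**2) / (2 * eps))

eps, gamma, sigma2, dt = 1.0, 0.05, 0.01, 1e-2
Z, kap = 0.1 + 0.0j, 0.0j
for _ in range(50000):                     # Euler stepping up to t = 500
    dZ = (-gamma * Z + 0.5 * eps * Z * (1.0 - abs(Z)**2)
          - sigma2 * Z - 0.5 * eps * np.conj(Z) * kap)
    dk = (-2.0 * gamma * kap - 2.0 * eps * abs(Z)**2 * kap
          - sigma2 * (4.0 * kap + 2.0 * Z**2))
    Z, kap = Z + dt * dZ, kap + dt * dk
print(abs(Z)**2, kss(eps, gamma, sigma2))  # agree up to integration error
\end{verbatim}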
With the next example we illustrate that small noise acts as a factor stabilizing a vicinity of the OA manifold in systems of identical oscillators. The system suggested by Abrams et al.~\cite{Abrams-Mirollo-Strogatz-Wiley-08} consists of two symmetrically coupled populations (variables $\varphi$ and $\psi$) of phase oscillators. We write this system with additional independent white noise terms: \begin{equation} \begin{aligned} \dot\varphi_k=&\Omega+\frac{1+A}{2N}\sum _{j=1}^N\sin(\varphi_j-\varphi_k-\alpha)\\ &+\frac{1-A}{2N}\sum _{j=1}^N\sin(\psi_j-\varphi_k-\alpha) +\sigma\xi_k(t)\;,\\ \dot\psi_k=&\Omega+\frac{1+A}{2N}\sum _{j=1}^N\sin(\psi_j-\psi_k-\alpha)\\ &+\frac{1-A}{2N}\sum _{j=1}^N\sin(\varphi_j-\psi_k-\alpha) +\sigma\eta_k(t)\;. \end{aligned} \label{eq:abr1} \end{equation} Here $N$ is the size of the populations, and $\alpha$ is the phase shift in the coupling. The parameter $A$ determines the different coupling strengths of the intra- and inter-population interactions. Noise-free ($\sigma=0$) regimes have been analyzed in Refs.~\cite{Abrams-Mirollo-Strogatz-Wiley-08,Pikovsky-Rosenblum-08}; for an experimental realization of this setup see \cite{Tinsley_etal-12,Martens_etal-13}. In Ref.~\cite{Abrams-Mirollo-Strogatz-Wiley-08} it was shown, using the OA ansatz, that in a range of parameters a regime where one population fully synchronizes (i.e., $\psi_1=\ldots=\psi_N=\Psi$), while the other one is partially synchronous (i.e., its order parameter $Z=\langle e^{i\varphi}\rangle$ takes values $0<|Z|<1$), is stable. This chimera state, in the reference frame rotating with the phase of the second population $\Psi$, can be static (i.e., $Ze^{-i\Psi}=\text{const}$) or periodic (i.e., $Ze^{-i\Psi}$ is a periodic function of time). In Ref.~\cite{Pikovsky-Rosenblum-08} it was shown that the regimes studied in~\cite{Abrams-Mirollo-Strogatz-Wiley-08} are observed only if the initial conditions lie on the OA manifold for the first population. Because the OA manifold is not attractive, for more general initial conditions one more nontrivial frequency is added: one observes a periodic regime instead of a steady state, and a quasiperiodic regime instead of a periodic one. This is illustrated in Fig.~\ref{fig2}(a). The dashed green line shows a periodic solution on the OA manifold, while the solid grey line shows a quasiperiodic regime for initial conditions away from the OA manifold. The bifurcation analysis for the attracting regimes in system~\eqref{eq:abr1} with intrinsic noise, performed in the thermodynamic limit $N\to\infty$ within the framework of Eqs.~(\ref{eq113}), can also be found in~\cite{Laing-12}. We now apply to system~\eqref{eq:abr1} the small-noise theory developed above. In the presence of noise both populations are partially synchronous; thus we have to write a system of two copies of Eqs.~\eqref{eq2c}, for the two order parameters $Z$, $Y$ and the two corresponding second cumulants $\kappa$, $\nu$. Here also enter the two fields acting on the populations, $H=0.25((1+A)Z+(1-A)Y)e^{-i\alpha}$ and $F=0.25((1+A)Y+(1-A)Z)e^{-i\alpha}$ (we set $\Omega=0$, because this parameter can be eliminated in the rotating reference frame): \begin{equation} \begin{aligned} \dot{Z}&=H-H^\ast Z^2-\sigma^2 Z- H^\ast\kappa\;,\\ \dot{\kappa}&=-4H^\ast Z\kappa-\sigma^2 (4\kappa+2Z^2)\;,\\ \dot{Y}&=F-F^\ast Y^2-\sigma^2 Y- F^\ast\nu\;,\\ \dot{\nu}&=-4F^\ast Y\nu-\sigma^2 (4\nu+2Y^2)\;. \end{aligned} \label{eq:abr2} \end{equation} Solutions of this system are shown in Fig.~\ref{fig2}(b) with circles, for the same parameters as used in Fig.~\ref{fig2}(a), but with the addition of a small noise $\sigma^2=10^{-4}$. This solution practically overlaps with the solution of the full equations \eqref{eq113} (solid blue line), where the infinite system \eqref{eq113} was truncated at a large number of modes, $m=200$.
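System \eqref{eq:abr2} is a set of four complex ODEs and can be integrated directly; a minimal sketch follows (Python; plain Euler stepping, with parameter values that are merely illustrative of the chimera regime and not intended to reproduce Fig.~\ref{fig2} quantitatively):
\begin{verbatim}
import numpy as np

def rhs(Z, kap, Y, nu, A, alpha, sigma2):
    # Right-hand sides of the four coupled cumulant equations.
    e = np.exp(-1j * alpha)
    H = 0.25 * ((1 + A) * Z + (1 - A) * Y) * e
    F = 0.25 * ((1 + A) * Y + (1 - A) * Z) * e
    dZ = H - np.conj(H) * Z**2 - sigma2 * Z - np.conj(H) * kap
    dk = -4 * np.conj(H) * Z * kap - sigma2 * (4 * kap + 2 * Z**2)
    dY = F - np.conj(F) * Y**2 - sigma2 * Y - np.conj(F) * nu
    dn = -4 * np.conj(F) * Y * nu - sigma2 * (4 * nu + 2 * Y**2)
    return dZ, dk, dY, dn

A, alpha, sigma2, dt = 0.2, np.pi / 2 - 0.1, 1e-4, 1e-2
Z, kap, Y, nu = 0.6 + 0.0j, 0.0j, 0.95 + 0.0j, 0.0j
for _ in range(200000):                 # Euler stepping up to t = 2000
    dZ, dk, dY, dn = rhs(Z, kap, Y, nu, A, alpha, sigma2)
    Z, kap = Z + dt * dZ, kap + dt * dk
    Y, nu = Y + dt * dY, nu + dt * dn
print(abs(Z), abs(Y))  # chimera: one population nearly fully synchronous
\end{verbatim}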
The comparison in Fig.~\ref{fig2}(b) confirms the quality of the cumulant approximation. We also show in Fig.~\ref{fig2}(b) the OA solution for the noise-free case (the same dashed green line as in panel (a)). Importantly, the solution of system~\eqref{eq:abr2} is an attractor: we checked this by starting from different initial conditions in the (truncated, as described above) full equations \eqref{eq113} (these initial conditions cannot be tested within system~\eqref{eq:abr2}, because it is valid only for small higher cumulants and not for generic initial states with potentially large higher cumulants). We illustrate the convergence to the solution near the OA manifold described by system~\eqref{eq:abr2} in Fig.~\ref{fig2}(c). \begin{figure}[!thb] \centering \includegraphics[width=\columnwidth]{tyulkina-etal-rev-fig2ab.eps} \includegraphics[width=0.85\columnwidth]{tyulkina-etal-rev-fig2c.eps} \caption{(color online) Panel (a): Noise-free chimera dynamics. Dashed green line: periodic solution on the OA manifold; solid grey line: quasiperiodic solution out of the OA manifold. Panel (b): Dynamics in the presence of noise $\sigma^2=10^{-4}$. The solid blue line (solution of the full equations \eqref{eq113}) is overlapped by the red circles (solution of system~\eqref{eq:abr2}). The dashed green line is the same as in panel (a). Panel (c): time evolution of the order parameters in the noise-free case (bottom red line; this solution corresponds to the grey solid line in panel (a)), for $\sigma^2=4\cdot 10^{-5}$ (middle green line), and for $\sigma^2=10^{-4}$ (top blue line). The middle and top lines are shifted for better visibility. All solutions start from the same initial conditions out of the OA manifold. } \label{fig2} \end{figure} With this example we see that noise acts in a stabilizing manner on the dynamics of populations of identical oscillators. The probability density evolves toward a state close to the OA manifold. This state is well described, for small noise, by the first and the second circular cumulants. The distance to the OA manifold is visible even for small noise (cf.\ the distance between the green curve and the circles in Fig.~\ref{fig2}(b)). In conclusion, we have developed an analytic approach yielding closed equations for the collective modes of ensembles of noisy coupled phase oscillators. The equations generalize the Ott-Antonsen approach, valid in the noise-free situation, to the case of small noise. Our theory is based on the reformulation of the dynamics in terms of the circular cumulants. These new variables have a nice property: all the higher cumulants vanish on the OA manifold, thus providing a natural way to construct a perturbation procedure, using the noise intensity as a small parameter. Our equations account for the leading order in this parameter. For the Abrams et al.\ chimera model, we demonstrated that small noise makes a neighborhood of the OA manifold stable even for identical populations: a solution far from this manifold converges to a $\sigma^2$-vicinity of the OA manifold, where it can be described by the system derived in this letter. We expect this stabilizing effect of noise to be a rather generic property. However, a systematic analysis of different situations, especially of states far from the OA manifold, where a nontrivial interplay between noise and the deterministic dynamics may occur, is necessary to clarify the problem.
The method of circular cumulants can potentially be used to develop a perturbation approach for other situations where the conditions of validity of the OA theory are slightly violated. These results will be reported elsewhere. At this point it is instructive to compare the cumulant approach of this paper with the perturbation theory developed in Ref.~\cite{Vlasov-Rosenblum-Pikovsky-16}. The theory of Ref.~\cite{Vlasov-Rosenblum-Pikovsky-16} uses the Watanabe-Strogatz formalism and provides results in terms of corrections to the WS global variables. These are, however, different from the usual order parameters used in this paper; thus the equations obtained here allow for a direct interpretation. Our approach is, however, restricted to the thermodynamic limit. \acknowledgments The authors thank V.\ Vlasov, M.\ Rosenblum, R.\ Mirollo, and J.\ Engelbrecht for fruitful discussions. The work of A.P.\ on the Kuramoto model was supported by the Russian Science Foundation (grant No.\ 17-12-01534). The work of L.S.K.\ and D.S.G.\ on the general development of the cumulant approach was supported by the Russian Science Foundation (grant No.\ 14-12-00090). The paper was finalized during a visit supported by G-RISC (grant No.\ M-2018a-7).
{ "timestamp": "2018-05-29T02:15:57", "yymm": "1804", "arxiv_id": "1804.05326", "language": "en", "url": "https://arxiv.org/abs/1804.05326" }
\section{Introduction} Cooperative game theory~\cite{chalkiadakis2011} provides a rich framework for the coordination of the actions of self-interested agents. Despite the maturity of the related literature, it is usually assumed that an agent can be a member of exactly one coalition. Nevertheless, in many real-world scenarios this is simply not realistic. In environments where agents hold an amount of a divisible resource~(e.g., time, money, computational power), which they can invest to earn utility, it is natural for them to divide that resource in order to simultaneously participate in a number of \emph{overlapping coalitions} \cite{shehory1998,dang2006,chalkiadakis2010,zick2012,zick2011,zick2014,mamakos2017}, so as to maximize their profits. As real-world environments exhibit a high level of uncertainty, it is only natural to assume that agents do not have complete knowledge of the utility that can be yielded by every possible team of agents~\cite{suijs1999,kraus2003,chalkiadakis2004,ieong2008}. Moreover, coalitional value is often determined by an underlying structure defined by the \emph{relations} among the members of the coalition. These relations reflect the \emph{synergies} among the coalition members. It is natural to posit that agents do not know the exact synergies at work in their coalitions. Against this background, in our system the coalitional value depends on the amount of resources the agents invest and, crucially, on the explicit relations among coalition members. As such, we build on the idea of \emph{marginal contribution nets}~(MC-nets)~\cite{ieong2005} and introduce \emph{Relational Rules (RRs)}, a representation scheme for cooperative games with \emph{overlapping} coalitions. The RR scheme allows for the concise representation of the synergies-dependent coalition value. Now, an agent can observe the utility earned by the resource offerings of the members of a coalition, but it is a much more complex task to determine her relations with subsets of agents of that coalition. Probabilistic topic modeling (PTM)~\cite{blei2012} is a form of \emph{unsupervised learning} which is particularly suitable for unravelling information from massive sets of documents. Probabilistic topic models infer the probability with which each word of a given ``vocabulary'' is part of a \emph{topic}. Intuitively, the words that have high probability in a topic are very likely to appear together in a document that refers to this topic with high probability. Therefore, a topic, which is essentially a probability distribution over the words of a given vocabulary, reveals the underlying \emph{hidden structure}. One of the most popular PTM algorithms~\cite{blei2012} is \emph{online Latent Dirichlet Allocation} (online LDA)~\cite{hoffman2010}, which, as its name indicates, is an online version of the well-known \emph{Latent Dirichlet Allocation} (LDA)~\cite{blei2003} algorithm. LDA is a generative probabilistic model for sets of discrete data, while online LDA can handle documents that arrive in streams, enabling the continuous evolution of the topics. The method we develop employs online LDA to allow agents to learn how well they can cooperate with others. In our setting, agents \emph{repeatedly} form overlapping coalitions, as the game takes place over a number of \emph{iterations}. Thus, we utilize a simple, yet appropriate, protocol, under which in each iteration an agent is (randomly) selected in order to propose (potentially) overlapping coalitions.
Agents that use our method take decisions on which coalitions to join by exploiting the topics of the model that they have learned by employing online LDA: by interpreting formed coalitions as documents, represented over an appropriate vocabulary, agents are able to use online LDA to update their beliefs regarding the hidden collaboration structure---and thus implicitly learn rewarding synergies with others~(synergies which are, in our experiments, described by RRs). Moreover, agents are able to gain knowledge regarding coalitions that are costly and should thus be avoided. Hence, agents can, over time, pick partners with which to cooperate effectively. We have evaluated our approach against two reinforcement learning (RL) algorithms we developed for this setting, which serve as baselines. Our algorithm vastly outperforms the baselines, implying a high degree of accuracy in the beliefs of the agents, and a high quality of agent decisions. To the best of our knowledge, the recent work of \cite{mamakos2017} is the only one that has so far approached \emph{overlapping} coalition formation {\em under uncertainty}, but it is concerned with the class of Threshold Task Games~\cite{chalkiadakis2010}, which greatly differs from the more general setting we study here. Moreover, ours is the first paper that employs probabilistic topic modeling for multiagent learning: the existing literature on multiagent learning~\cite{fudenberg1998,tuyls2012}, in both non-cooperative~\cite{littman1994,hu1998,hu2003} and cooperative~\cite{kraus2003,kraus2004,chalkiadakis2004,balcan2015} game settings, is largely preoccupied with the study of RL, PAC learning, or simple belief-updating algorithms. As such, this paper introduces an entirely novel paradigm for (decentralized) learning employed by rational autonomous decision makers in multiagent settings. \section{Background and Related Work} In this section, we provide an overview of previous work on overlapping coalition formation, multiagent learning, and agent decision-making under uncertainty. Furthermore, we offer the necessary background on Probabilistic Topic Modeling, and in particular (online) Latent Dirichlet Allocation---which is employed in our proposed agent-learning method. \subsection{Overlapping Coalition Formation} Overlapping coalition formation was initially studied in~\cite{shehory1998}, which provided an approximate solution to the corresponding optimal coalition structure generation problem~\cite{rahwan2015}, in a setting where the costs of the coalitions and the capabilities of the agents are globally announced. The method proposed in~\cite{shehory1998} employs concepts from combinatorics and approximation algorithms. Though related, our approach differs in that it is \emph{decentralized}, since the (overlapping) coalitions are formed by the agents themselves, and are \emph{not} provided to the agents by an algorithm. The subsequent work of~\cite{dang2006} presented an application of overlapping coalitions in sensor networks. An approximate greedy algorithm with worst-case guarantees is introduced, and constitutes a real-world example of employing overlapping coalitions. However, in that work, the agents do not form coalitions in a completely autonomous manner, since they are, at a step of the algorithm, hardwired to agree on taking a specific action~(regarding the choice of the members of the coalition).
As illustrated by the work of~\cite{chalkiadakis2010}, which formally introduced cooperative games with overlapping coalitions (or OCF games), whenever an agent can be part of a number of coalitions simultaneously, coalition structures are much more complex than in non-OCF games---and so is the concept of deviation. We provide a further discussion of the richness of OCF games later, in Section~\ref{sec:discussion} of this paper. Furthermore, in~\cite{chalkiadakis2010} an expressive class of OCF games, \emph{threshold task games} (TTGs), is also presented. In TTGs, a coalition achieves a task and earns utility $u$ if its members manage to collect a number of resource units which exceeds a threshold $T > 0$. TTGs provide the framework of study for the work of~\cite{mamakos2017}. In that work, probability bounds for the resources contributed by members of overlapping coalitions are computed, and subsequently exploited to form (overlapping) coalitions that are deemed, with some probabilistic confidence, capable of carrying out assigned tasks, since they are believed to possess resources exceeding the required threshold. The paper uses Bayesian updating to update agent beliefs regarding partners' resources following coalition formation and task execution, but no actual machine learning technique is used in that work. In a series of works~\cite{zick2012,zick2011,zick2014} following~\cite{chalkiadakis2010}, Zick and colleagues study {\em stability}~\cite{chalkiadakis2011} with respect to the behaviour of non-deviating players towards deviators in OCF settings. Several variants of the core are developed, and the approach is based on the notion of \emph{arbitration functions}, which define the payoff of the deviators according to the attitude of the non-deviators. A class of games highly related to cooperative games with overlapping coalitions is that of \emph{fuzzy coalitional games}~\cite{aubin1981}. In a fuzzy game, an agent can be part of a coalition at various levels. Thus, the coalitional value of $C \subseteq N$ is defined by the levels at which the agents have joined $C$. There are a number of differences between overlapping coalition formation and fuzzy games, the biggest one being that in fuzzy games the core is the only acceptable outcome. Finally, coalition structure generation with overlapping coalitions is studied in~\cite{zhang2010}, where a metaheuristic is developed, based on particle swarm optimization~\cite{eberhart1995}. \subsection{Uncertainty and Learning} Stochasticity in the value of payoffs in non-overlapping cooperative games has been studied in~\cite{suijs1999}, in a setting where agents have different preferences over a set of random variables. The focus of that study is on core-stability. Bayesian coalitional games are introduced in~\cite{ieong2008}, where suitable variations of the core are also defined. In \cite{kraus2003, kraus2004}, agents have incomplete information regarding the costs that the other agents incur by performing a task within a coalition, while the formation of the coalitions takes place through information-revealing negotiations and the conduct of auctions. The formation of overlapping coalitions is not allowed in~\cite{kraus2003, kraus2004}. One of the very early attempts at learning in cooperative settings was presented in~\cite{claus1998}, where the dynamics of a set of RL algorithms~\cite{sutton1998} were studied.
A Bayesian approach to reinforcement learning for coalition formation is presented in~\cite{chalkiadakis2004}, along with the introduction of a variation of the core. The more recent work of~\cite{balcan2015} explores a PAC (probably approximately correct) model for obtaining theoretical predictions for the value of coalitions that have not been observed in the past. The link between evolutionary game theory and multiagent reinforcement learning is the topic of study in~\cite{tuyls2007, kaisers2009}. Multiagent learning in non-cooperative games~\cite{fudenberg1998} has been studied for a longer time. Much of the early seminal work~\cite{littman1994,hu1998,hu2003} focuses on the study of Q-learning algorithms and their convergence to Nash equilibria~\cite{nash1951}. In particular, the algorithm presented in~\cite{hu1998} is shown to converge to a Nash equilibrium if every state and action has been visited infinitely often and the learning rate satisfies some conditions regarding the values it takes over time. Overall, the literature on multiagent learning~\cite{tuyls2012}, in both cooperative and non-cooperative settings, is largely concerned with the study of reinforcement learning algorithms. \subsection{Probabilistic Topic Modeling} Probabilistic topic models~(PTMs) consist of statistical methods that analyze the words of documents, in order to discover the topics (or themes) to which these refer, and the ways the topics interconnect. One hugely popular and successful PTM is Latent Dirichlet Allocation (LDA)~\cite{blei2003}. \subsubsection{Latent Dirichlet Allocation} We begin by defining basic terms, following~\cite{blei2003,blei2012}: \begin{itemize} \item A \emph{word} is the basic unit of discrete data. A vocabulary consists of words and is indexed by $\{1,2,\ldots,V\}$; it is fixed and has to be known to LDA in advance. \item A \emph{document} is a series of $L$ words, denoted by $\bm{w} = (w_1, w_2, \ldots, w_L)$, where the $l^{th}$ word is denoted by $w_l$. \item A \emph{corpus} is a collection of $M$ documents, denoted by $D = \{\bm{w_1}, \bm{w_2}, \ldots, \bm{w_M}\}$. \item A \emph{topic} is a distribution over a vocabulary. \end{itemize} LDA is a Bayesian probabilistic model, the intuition behind it being that a document is a mixture of topics. For each document $\bm{w}$ in $D$, LDA assumes a generative process where a random distribution over topics is chosen; then, for each word in $\bm{w}$, a topic is chosen from that distribution over topics, and finally a word is chosen from that topic. Documents share the same set of topics, but exhibit topics in different proportions. While LDA observes only series of words, its objective is to discover the \emph{topic structure}, which is \emph{hidden}. It is thus assumed that the generative process includes latent variables. The topics are $\beta_{1:K}$, where $K$ is their number; each topic $\beta_k$ is a distribution over the vocabulary, where $k \in \{1,\ldots,K\}$; and $\beta_{kw}$ is the probability of word $w$ in topic $k$. For the $d^{th}$ document, the proportion of topic $k$ is $\theta_{dk}$, where $\theta_d$ is a distribution over the topics. The topic assignments for the $d^{th}$ document are denoted by $z_d$, with $z_{dl}$ being the topic assignment for the $l^{th}$ word of the $d^{th}$ document. Thus, $\beta, \theta$ and $z$ are the latent variables of the model, while the only observed variable is $w$, where $w_{dl}$ is the $l^{th}$ word observed in the $d^{th}$ document.
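To make the generative process concrete, the following minimal sketch samples a small corpus exactly as described above; the values of $K$, $V$, $M$, $L$, $\alpha$ and $\eta$ are illustrative, and the sketch depicts only the model's assumptions, not the inference procedure. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, V, M, L = 3, 10, 5, 8      # topics, vocabulary size, documents, words/doc
alpha, eta = 0.1, 0.01        # Dirichlet hyperparameters (illustrative)

# Each topic beta_k is a distribution over the vocabulary.
beta = rng.dirichlet(np.full(V, eta), size=K)      # shape (K, V)

corpus = []
for _ in range(M):
    theta = rng.dirichlet(np.full(K, alpha))       # topic proportions theta_d
    doc = []
    for _ in range(L):
        z = rng.choice(K, p=theta)                 # topic assignment z_dl
        w = rng.choice(V, p=beta[z])               # word w_dl drawn from topic z_dl
        doc.append(w)
    corpus.append(doc)
\end{verbatim}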
Given the documents, the posterior of the topic structure is: \begin{equation*} p(\beta_{1:K}, \theta_{1:D}, z_{1:D} \mid w_{1:D}) = \frac{p(\beta_{1:K}, \theta_{1:D}, z_{1:D}, w_{1:D})}{p(w_{1:D})} \end{equation*} where the computation of $p(w_{1:D})$, the probability of seeing the given documents under any topic structure, is intractable~\cite{blei2012}. Furthermore, LDA introduces priors, so that $\beta_k \thicksim Dirichlet(\eta)$ and $\theta_d \thicksim Dirichlet(\alpha)$. Though the posterior, and thus the topic structure as a whole, cannot be computed exactly in an efficient manner, it can be approximated~\cite{blei2003}. The two most prominent alternatives for this are Markov Chain Monte Carlo (MCMC) sampling methods~\cite{jordan1998} and variational inference~\cite{jordan1999}. In variational inference for LDA, the true posterior is approximated by a simpler distribution $q$ that depends on parameters (matrices) $\phi_{1:D}$, $\gamma_{1:D}$ and $\lambda_{1:K}$, defined as follows: \begin{gather*} \phi_{dwk} \propto exp\{E_q[log\hspace{1mm}\theta_{dk}] + E_q[log\hspace{1mm}\beta_{kw}]\} \\ \gamma_{dk} = \alpha + \sum_w n_{dw}\phi_{dwk} \\ \lambda_{kw} = \eta + \sum_d n_{dw} \phi_{dwk} \end{gather*} The variable $n_{dw}$ is the number of times that word $w$ has been observed in document $d$. Parameters $\gamma_{1:D}$ and $\lambda_{1:K}$ are associated with $n_{dw}$, while $\phi_{dwk}$ denotes the probability~(under distribution $q$) that the topic assignment of word $w$ in document $d$ is $k$~\cite{blei2003}. The variational inference algorithm minimizes the {\em Kullback-Leibler divergence} between the variational distribution and the true posterior. This is achieved via iterating between assigning values to document-level variables and updating topic-level variables. \subsubsection{Online Latent Dirichlet Allocation} In online LDA~\cite{hoffman2010}, documents can arrive in batches~(streams), and the value of $\lambda_{1:K}$ is updated through analyzing each batch of documents. The variable $\rho_t = (\tau_0 + t)^{-\kappa}$ controls the rate at which the documents of batch $t$ impact the value of $\lambda_{1:K}$. Furthermore, the algorithm (Alg. 1) requires at least an estimate of the total number of documents $D$, in case this is not known in advance. The values of $\alpha$ and $\eta$ can be assigned once and remain fixed. Essentially, the probability of word $w$ in topic $\beta_k$ can be estimated as $\beta_{kw} = \lambda_{kw} / \sum_{w'} \lambda_{kw'}$. \begin{algorithm} \label{alg:1} \SetAlgoNoEnd% Initialize $\lambda$ randomly \\ \For{t = 1 to $\infty$} { $\rho_t = (\tau_0 + t)^{-\kappa}$ \\ E step: \\ Initialize $\gamma_{tk}$ randomly\\ \Repeat{$\frac{1}{K}\sum_k |$change in $\gamma_{tk}| < \epsilon$} { Set $\phi_{twk} \propto exp\{E_q [log \hspace{0.6mm} \theta_{tk}] + E_q[log \hspace{0.5mm} \beta_{kw}]\}$ \\ Set $\gamma_{tk} = \alpha + \sum_w n_{tw} \phi_{twk}$ \\ } M step: \\ Compute $\tilde{\lambda}_{kw} = \eta + D n_{tw} \phi_{twk}$ \\ Set $\lambda = (1 - \rho_t) \lambda + \rho_t \tilde{\lambda}$ } \caption{Online Variational Inference for LDA~\protect\cite{hoffman2010}.} \end{algorithm} \section{Relational Rules} Agents have to form coalitions under what we term \emph{structural uncertainty}. This notion describes the uncertainty agents face regarding the value of \emph{synergies} among them. Such synergies are, in a non-overlapping setting, concisely described by \emph{marginal contribution nets}~(MC-nets).
In MC-nets, coalitional games are represented by a set of rules of the form $Pattern \rightarrow value$, where $Pattern$ is a conjunction of literals~(representing the participation or absence of agents), and applies to coalition $C$ if $C$ satisfies $Pattern$, with $value \in \mathbb{R}$ being added to the coalitional value of $C$. We now extend MC-nets to overlapping environments by introducing \emph{Relational Rules}~(RR), with the following form: \begin{equation*} A \rightarrow \frac {\sum_{i \in A} \pi_{i, C}} {|A|} \cdot value \end{equation*} where $A \subseteq N$~(with $N = \{1, \ldots, n\}$ being the set of agents), $value \in \mathbb{R}$; $C \subseteq N$ is a coalition such that $A \subseteq C$; $\pi_{i,C}$ is the portion of her resource that $i$ has invested in coalition $C$: i.e., $\pi_{i,C} = r_{i,C} / r_i$, where $r_i$ is the total resource quantity~(continuous or discrete) that $i$ holds and $r_{i,C}$ is the amount she has invested in $C$. Therefore, $\pi_{i,C} > 0$, since $i \in C$~($r_{i,C}$ = $0$ essentially means that $i \notin C$), and $\pi_{i,C} \leq 1$, since $i$ can offer to $C$ at most $r_i$. A rule applies to coalition $C$ if and only if $A \subseteq C$, and in that case utility $ \frac {\sum_{i \in A} \pi_{i, C}} {|A|} \cdot value $ is added to the coalitional value of $C$. Note that an agent's total resource quantity $r_i$ does {\em not} have to be communicated to $C$'s other members, since rules are applied by the environment. In non-overlapping games, RRs reduce to MC-nets rules without negative literals, as it then holds that $ \frac {\sum_{i \in A} \pi_{i, C}} {|A|} = 1 $. \begin{example} Assume that $N = \{1, 2, 3\}$, $r_1 = 10, r_2 = 8$, $r_3 = 6$, and the Relational Rules of the game are: \begin{flalign*} { \emph{(1)}}: & \hspace{2mm} \{1\} \rightarrow \pi_{1,C} \cdot 100 \\ { \emph{(2)}}: & \hspace{2mm} \{1, 2\} \rightarrow \frac{\pi_{1,C} + \pi_{2,C}}{2} \cdot (-50) \\ { \emph{(3)}}: & \hspace{2mm} \{2, 3\} \rightarrow \frac{\pi_{2,C} + \pi_{3,C}}{2} \cdot 50 \end{flalign*} Let coalition $C_1$ = $\{1, 2\}$ form, with $r_{1, C_1}$ = $5$~($\pi_{1,C_1}$ = $0.5)$ and $r_{2, C_1}$ = $8$~($\pi_{2, C_1}$ = $1)$. The value of $u_{C_1}$ will be determined by rules \emph{(1)} and \emph{(2)}, since rule $(3)$ does not apply, as agent $3 \notin C_1$. Applying rule \emph{(1)} to $C_1$ will result in value $\pi_{1,C_1} \cdot 100 = 0.5 \cdot 100 = 50$ and applying rule \emph{(2)} to $C_1$ will result in value $\frac{\pi_{1, C_1} + \pi_{2, C_1}}{2} \cdot (-50) = \frac{0.5 + 1}{2} \cdot (-50) = -37.5$. Thus, $u_{C_1} = 50 - 37.5 = 12.5$. \end{example} In our setting, the value of a coalition is determined through RRs, but agents \emph{do not know the RRs in effect}, and hence cannot determine the value of a coalition with certainty. Thus, agents do not know how well they can do with others, and cannot determine their relations just by observing a coalitional value. However, in Section 4 we show how PTMs can be exploited so that agents learn the underlying RR-described collaboration structure. \section{Learning by Interpreting Coalitions as Documents} In this section, we present how agents can employ online LDA in order to effectively learn the underlying collaboration structure. We let each agent maintain and train her own online LDA model. Thus, there are $n$ such models in the system. The agents' formation decision-making process (Section~\ref{sec:formation}) employs the learned topics.
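Throughout what follows, the utility that agents observe is generated by the (hidden) RRs of the previous section. For concreteness, a minimal sketch of that evaluation, reproducing Example 1 (the encoding of rules and coalitions is our own illustrative choice): \begin{verbatim}
# A Relational Rule is encoded as a pair (A, value); a coalition as a
# dict mapping each member i to its contribution r_{i,C}.
def coalition_value(coalition, rules, r):
    u = 0.0
    for A, value in rules:
        if A.issubset(coalition):                   # a rule applies iff A <= C
            pi = [coalition[i] / r[i] for i in A]   # pi_{i,C} = r_{i,C} / r_i
            u += sum(pi) / len(A) * value
    return u

r = {1: 10, 2: 8, 3: 6}                             # total resource quantities
rules = [({1}, 100), ({1, 2}, -50), ({2, 3}, 50)]   # the RRs of Example 1
C1 = {1: 5, 2: 8}                                   # r_{1,C1} = 5, r_{2,C1} = 8
print(coalition_value(C1, rules, r))                # 50 - 37.5 = 12.5
\end{verbatim}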
For each (possibly overlapping) coalition $C \subseteq N$ formed,\footnote{Note that to improve readability we use the set notation ``$C \subseteq N$'' to refer to coalitions that can in reality be overlapping: these are in fact vectors of the resource quantities that each agent contributes to this coalition, i.e. a coalition is a $\mbox{\boldmath $r$} = \langle r_1 , \ldots , r_n\rangle$ vector~\cite{chalkiadakis2010}.} each member $i \in C$ observes the earned utility $u_C$. The contribution $r_{i,C}$ of agent $i$ to coalition $C$ is known to each other agent $j \in C \setminus i$, once $C$ is successfully formed. However, in order to supply that information to her online LDA model, an agent must maintain a vocabulary. We define the vocabulary of an agent to include $n$ words, one for each agent~(including herself), indicating their contribution, plus two words for the utility, one representing gain and the other representing loss, since the value earned from a coalition can be either positive or negative. Therefore, the vocabulary of an agent consists of $n+2$ words. Assuming a game that proceeds in rounds, in round $t$ agent $i$ interprets the coalitional configuration regarding $C$, $i \in C$, as a document by ``writing'' in the document the word that indicates the contribution of agent $j \in C$ exactly $r_{j,C}$ times---where $r_{j,C} \in \mathbb{N}_+$ is the contribution of $j$ to $C$. The restriction of the resource contributions of agents to positive~($r_{j,C} = 0 \Rightarrow j \notin C$) natural numbers is thus necessary when LDA is used, since a word can only appear in a document a discrete number of times. Thus, agent $i$ ``writes'' in the document that corresponds to $C$ either the word that indicates gain or the one that indicates loss, as many times as the absolute value of the utility earned by the coalition.\footnote{The number of times that the word for utility is written may require scaling when its domain ranges from very low to very high values. } Since words are discrete data, $u_C$ cannot be real-valued; so, we let the actual value earned by $C$ be $\floor{u_C}$, instead of the $u_C$ computed by the application of the RRs related to $C$. The number of documents that an agent passes in an iteration~(round) to her online LDA is equal to the number of coalitions that she is member of. \begin{example} \label{ex-2} Let an agent's vocabulary include the words ``ag1'', ``ag2'', ``gain'' and ``loss'', corresponding respectively to the contributions of agents $1$ and $2$ and to positive and negative utility. Thus, for coalition $C$ = $\{1,2\}$ where $r_{1,C}$ = $3$, $r_{2,C}$ = $1$, and $u_C$ = $-3$, each agent $i \in C$ forms the document: \begin{equation*} \textbf{w} = (\text{``ag1'', ``ag1'', ``ag1'', ``ag2'', ``loss'', ``loss'', ``loss''}) \end{equation*} \end{example} \begin{example} Following Example \ref{ex-2}, let also agent $3$ participate in the game, so that an agent's vocabulary includes the words ``ag1'', ``ag2'', ``ag3'', ``gain'' and ``loss'', with the corresponding meaning of each being as defined in Example \ref{ex-2}. Now, for coalition $C$ = $\{1, 2, 3\}$ where $r_{1,C}$ = $1$, $r_{2,C}$ = $1$, $r_{3,C} = 2$, and $u_C$ = $4$, each agent $i \in C$ forms the document: \begin{equation*} \textbf{w} = (\text{``ag1'', ``ag2'', ``ag3'', ``ag3'', ``gain'', ``gain'', ``gain'', ``gain''}) \end{equation*} \end{example} Since LDA is a ``bag-of-words'' model, the order of the words in the document does not matter.
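A minimal sketch of this interpretation, reproducing Example \ref{ex-2} (the word naming follows the examples; the encoding of coalitions as dicts is our own illustrative choice): \begin{verbatim}
import math

def coalition_to_document(coalition, u_C):
    """Interpret a formed coalition as a 'document' over the vocabulary."""
    doc = []
    for j, r_jC in sorted(coalition.items()):
        doc += ["ag%d" % j] * r_jC        # contribution word, written r_{j,C} times
    u = math.floor(u_C)                   # utilities are floored to integers
    doc += (["gain"] if u >= 0 else ["loss"]) * abs(u)
    return doc

# Example 2: C = {1, 2}, r_{1,C} = 3, r_{2,C} = 1, u_C = -3
print(coalition_to_document({1: 3, 2: 1}, -3))
# ['ag1', 'ag1', 'ag1', 'ag2', 'loss', 'loss', 'loss']
\end{verbatim}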
The batch of documents the online LDA model of agent $i$ is supplied with at iteration $t$, consists of the interpreted-as-documents coalitions $i$ has joined at $t$. The intuition behind the notion of a topic is that the words that appear in it with high probability are very likely to appear together in a document that exhibits this topic with high probability. Thus, \emph{the probability with which the word corresponding to an agent's contribution appears in a topic, is correlated with the amount of her contribution}. Therefore, the meaning of a topic identified by agent $i$ is that $i$ has observed in many documents certain agents who contributed a lot, and some that contributed less; and this configuration \emph{results in gain or loss with the corresponding probabilities}. \begin{figure}[H] \centering \subfloat[A ``profitable'' learned topic.] {{ \includegraphics[width=10cm]{beta8.png} }}% \\ \subfloat[A ``non-profitable'' learned topic.] {{ \includegraphics[width=10cm]{beta9.png} }}% \caption{Typical topics, as formed by a randomly selected agent at the end of a random iteration in an experiment, where an agent's vocabulary consists of $52$ words ($n = 50$). The last two words in a topic indicate the probability of gain and loss respectively, while the rest correspond to agents' contribution. In (a), the ``profitable'' learned topic, the word for loss appears with near-zero probability; in (b), the ``non-profitable'' topic, the word for gain has near-zero probability.} \label{fig:topics}% \end{figure} Thus, the topic in Fig.~\ref{fig:topics}(a) implies that if $i$ joins a coalition with the agents that appear in the topic with high probability, then that coalition would be profitable. On the other hand, the topic in Fig.~\ref{fig:topics}(b) implies that forming a coalition with the agents that appear in it with high probability would result in loss. Note that learning a topic's profitability corresponds to acquiring information on the RRs associated with that topic. However, these RRs are not explicitly learned; what is learned is the \emph{underlying collaboration structure}~(which might, in the general case, be generated by means {\em other} than RRs). It is natural to expect that agents who appear with (relatively) high probability in a topic which has been associated with loss, like the one in Fig.~\ref{fig:topics}(b), will not appear~(as a group) with high probability in a topic that has been associated with gain, like the one in Fig.~\ref{fig:topics}(a). Such an occurrence would reflect that an agent's beliefs indicate that cooperation with a group of agents is~(paradoxically) both beneficial and harmful. Furthermore, as an agent observes documents that always include the word that corresponds to her own contribution, it is expected that her corresponding word will have a non-trivial probability in her topics. \section{Taking Formation Decisions} \label{sec:formation} We now present \texttt{OVERPRO}, a method for agent decision-making in iterated OVERlapping coalition formation games, via PRObabilistic topic modeling~(here, online LDA). \subsection{A Repeated OCF Protocol} The protocol of our game operates in $I$ iterations~(rounds). At the beginning of an iteration one agent is randomly selected, from the set of agents $N = \{1, \ldots, n\}$, as the proposer.
Then, this agent proposes a number of (overlapping) coalitions, where for each such coalition she offers an integer quantity of her resource and asks for a (possibly different) resource quantity from each agent of the coalition. Therefore, proposer $i \in N$ is asked to pass a list of tuples of the form $\langle demands_C, r_{i,C} \rangle$, where $demands_C$ is an $n$-dimensional vector whose $j^{th}$ entry denotes the (integer) resource quantity that the proposer asks from agent $j$ for joining $C$, and $r_{i,C} \in \mathbb{N}_+$ denotes the amount of resource that $i$ offers to coalition $C$. Naturally, if the proposer does not ask $j$ to participate in $C$, then the $j^{th}$ entry of $demands_C$ is $0$. By limiting the agents' resource investments to discrete quantities we disallow the formation of an infinite number of coalitions. Then, every agent $j \in N \setminus i$ is a responder and gets informed of the proposals in which she is involved, and has to respond to each such proposal by either accepting (and thus offering the requested resource quantity) or rejecting it. A (possibly overlapping) coalition $C$ forms if and only if all involved agents accept to participate in it. At the end of each round, all coalitions are dissolved and the resources of the agents are replenished. This removes the need for long-term strategic reasoning by the agents---and thus removes unnecessary distractions from the study of the effectiveness of the method used for learning the collaboration structure (which is what we focus on in this paper). The utility $u_{i,C}$ that $i \in C$ earns from coalition $C$ is \emph{proportional} to her $r_{i,C}$ contribution, i.e., $u_{i,C} = u_C \cdot r_{i,C}/\sum_{j \in C} r_{j,C}$, where $u_C$ is the total utility earned by the coalition.\footnote{The use of more elaborate reward allocation methods is interesting future work.} An agent receives information regarding partners' contributions to, and the total coalitional utility of, her own formed coalitions only. \subsection{The OVERPRO Method} The main idea behind \texttt{OVERPRO} is that an agent exploits her learned LDA topic model for profitable coalition formation. Specifically, by considering as ``profitable''~(``non-profitable'') the topics in which the probability of gain~(loss) is higher than the probability of loss~(gain), the agent can identify coalitions that will potentially result in gain (loss). Now, it might be that not all of the topics are \emph{significant}, in the sense that they are not clearly profitable or harmful. This is because some of them might not be well formed~(especially at the early iterations of the game). We define a topic to be \emph{significant} if the absolute value of the difference between the probability of the word representing gain and the probability of the word representing loss is greater than $\epsilon$. Therefore, the \emph{significant topics} of agent $i$ are: \begin{equation*} ST^i \leftarrow \{k: |\beta^i_{k,'gain'} - \beta^i_{k,'loss'}| > \epsilon\} \end{equation*} \noindent where $\beta^i_k$ is the $k^{th}$ (out of the $K$) topic of agent $i$. Furthermore, we define \emph{Good}~(profitable) topics as the ones which are significant and in which the probability of the word representing gain is greater than that of the word representing loss. \emph{Bad} topics are defined analogously.
Formally: \begin{gather*} Good^i \leftarrow \{k: \beta^i_{k,'gain'} > \beta^i_{k,'loss'} \land k \in ST^i\} \\ Bad^i \leftarrow \{k: \beta^i_{k,'gain'} < \beta^i_{k,'loss'} \land k \in ST^i\}. \end{gather*} How much probability should an agent appear with in a topic in order for the \emph{agent} to be considered significant for that topic? For instance, in Fig.~\ref{fig:topics}(a) not all agents appear in the profitable topic with similar probability values. Note that, due to the initialization of the Dirichlet distributions, each word appears in a topic with positive probability, no matter how small. We define the \emph{significant agents} of topic $k$ of agent $i$, denoted as $SA^i_k$, as those whose corresponding words in topic $k$ have probability higher than the mean value $\mu$ of the probabilities of the words corresponding to agents, plus the standard deviation $\sigma$ of those. Formally: \begin{gather*} SA^i_k \leftarrow \{j: \beta^i_{k, j} > \mu(\beta^i_{k, N}) + \sigma(\beta^i_{k, N}) \land j \in N \setminus i \} \end{gather*} Given the above, the approach of \texttt{OVERPRO} is that a proposer $i$ proposes one coalition for every topic $k \in Good^i$~(proposing just the one coalition corresponding to the most profitable topic is risky, since a single rejection would mean formation failure), with the proposed members of the coalition stemming from topic $k$ being $SA^i_k$. Then, the resource quantity $r_{i,C}$ offered to $C$ by $i$ is proportional to $\beta^i_{k,i} \cdot ( \beta^i_{k, 'gain'} - \beta^i_{k, 'loss'} )$, where $k$ is the topic in which coalition $C$ was identified as profitable. Thus, a proposer's contribution to a coalition is affected by both her effect on the corresponding topic and the profitability of this topic. The proposer $i$ asks agent $j \in C$ to offer to $C$ the quantity $r_{j,C} = r_{i,C} \cdot \beta^i_{k, j} / \beta^i_{k,i}$, since $i$ assumes that others will respond according to the topics that $i$ has observed. Now, an agent often faces the dilemma of either exploiting her best-so-far action, or exploring different options~\cite{sutton1998}. We deal with this issue by allowing agent $i$ to do both at the same time, since $r_i$ is divisible. Specifically, at iteration $t$ proposer $i$ dedicates a fraction $z_t \in (0, 1)$ of $r_i$ to exploring and $1-z_t$ to exploiting~(with the exploitation part defined as described above). Then, an agent performs exploration by proposing $\floor{r_i \cdot z_t}$ random coalitions, offering to each~(and asking from each one's participating agents) the minimum possible resource quantity $(=1)$. In each iteration, responders receive the proposals in which they are involved, and decide, for each proposal in turn, whether to accept it~(investing the resource quantity) or reject it~(offering nothing). \texttt{OVERPRO} employs a parameter $c \in (0,1)$, so that an agent rejects a proposed coalition $C$ if she identifies a non-profitable~(bad) topic in which at least a fraction $c$ of the agents in $C$ are significant. The intuition behind the employment of parameter $c$ is that it suffices to observe a certain percentage of agents of a proposed coalition in a ``non-profitable'' topic in order to reject it. Parameter $c$ can have different values at different iterations, so we refer to its value at iteration $t$ as $c_t$. As agents make more observations, and thus become more confident about their beliefs over time, they gradually become more strict about who they cooperate with, and thus the value of $c_t$ decreases with $t$.
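Putting the proposer side together, a minimal sketch of the computations defined above ($ST^i$, $Good^i$, $SA^i_k$, and the offered quantities); the layout of $\beta^i$ as a $K \times (n+2)$ matrix, the proportionality constant, and the integer rounding are our own illustrative choices: \begin{verbatim}
import numpy as np

def proposer_offers(beta, i, r_i, eps):
    """beta: (K, n+2) topic matrix of agent i; columns 0..n-1 correspond
    to agents, column n to 'gain' and column n+1 to 'loss'."""
    n = beta.shape[1] - 2
    gain, loss = beta[:, n], beta[:, n + 1]
    significant = np.abs(gain - loss) > eps          # ST^i
    good = np.where(significant & (gain > loss))[0]  # Good^i

    proposals = []
    for k in good:
        agents = beta[k, :n]
        thresh = agents.mean() + agents.std()
        SA = [j for j in range(n) if agents[j] > thresh and j != i]  # SA^i_k
        # Offer proportional to i's weight in topic k times k's profitability.
        r_iC = max(1, round(r_i * beta[k, i] * (gain[k] - loss[k])))
        demands = {j: max(1, round(r_iC * beta[k, j] / beta[k, i])) for j in SA}
        proposals.append((SA, r_iC, demands))
    return proposals
\end{verbatim}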
If a proposal to form coalition $C$ is not rejected, then it is checked whether there is a profitable~(good) topic in which at least a fraction $(1 - c_t)$ of the agents in $C$ are significant. Since responder $j$ has to split her resources among the proposals she has received, a proposal associated with a profitable topic like the one described above is an item of a \emph{KNAPSACK} problem that $j$ has to solve, where the value of the item is $r_{j, C} / \sum_{m \in C} r_{m,C} \cdot (\beta^j_{k, 'gain'} - \beta^j_{k,'loss'})$: this stands for the profit portion of $j$, multiplied by the profitability of the corresponding topic. The weight of the item is the requested quantity $r_{j, C}$, and the constraint is that responder $j$ cannot invest more than $r_j$ in total. Thus, responder $j$ accepts the coalitions which correspond to the items given by solving the \emph{KNAPSACK} problem, while she rejects the rest. Although \emph{KNAPSACK} is an \emph{NP-hard} problem, the pseudopolynomial dynamic programming algorithm~\cite{knapsackbook}, which we used in our experiments, typically runs in acceptable time, while an alternative is to use an \emph{FPTAS}. After the decisions regarding the coalitions identified by profitable topics are made, the responder replies positively~(if there is sufficient resource quantity) to an offer which has been neither accepted nor rejected~(no relevant information found) if either the requested quantity is $1$, or, if not, with probability $c_t$~(in an exploratory sense). The training of LDA, and consequently that of online LDA, takes polynomial time in the number of documents and topics~\cite{blei2003}. Despite the fact that the number of documents depends on the resource quantities of the agents, which are numeric values and thus imply pseudopolynomial complexity, in practice the number of documents formed is far from the worst case, as attested by our experimental results. The ``exploitation'' part of \texttt{OVERPRO} takes polynomial time in the number of topics and agents, while the exploration part, which is independent of \texttt{OVERPRO} and can be replaced by one's choosing, takes pseudopolynomial time, as it linearly depends on the proposer's resource quantity. Furthermore, convergence of the topics is guaranteed~\cite{hoffman2010} if $\kappa \in (0.5, 1]$~(an online LDA parameter). Therefore, assigning such a value to $\kappa$ and letting the resource portion dedicated by an agent to exploration decrease over time, we have as a corollary that the actions of the agents employing \texttt{OVERPRO} converge. \section{Reinforcement Learning for OCF} To the best of our knowledge, this is the first work on~(decentralized) multiagent learning for \emph{overlapping} coalition formation under uncertainty not restricted to the context of Threshold Task Games~\cite{mamakos2017}, and thus there is no algorithm to use as a means for comparison. To this end, we have developed a Greedy top-k algorithm and a Q-learning-style~\cite{watkins1992,claus1998} one, and use these as baselines. \subsection{Greedy top-k algorithm} An agent that uses our Greedy top-k algorithm maintains the \emph{k} most profitable coalitions she has observed, along with their values, and the resources offered by participating agents. A proposer makes \emph{k} proposals, one for each of the top-k coalitions.
The resource $r_{i,C}$ offered to coalition $C$ by proposer $i$ is proportional to the amount $i$ previously offered to $C$ and $C$'s corresponding value derived by applying the \emph{softmax function}~\cite{sutton1998} over the maintained values~(observed utilities) of the top-k coalitions; $i$ asks agent $j \in C$ for an amount equal to $r_{i,C}$, multiplied by the ratio of their previous offerings~(with $j$'s being in the numerator and $i$'s in the denominator) and a random value drawn from $U(0.9, 1.1)$. The approach to the exploitation-vs-exploration problem is exactly the same as in \texttt{OVERPRO}, and thus the $z_t$ parameter is also employed here. A responder $j$ adds a proposal to the input of a \emph{KNAPSACK} problem if at least a fraction $(1 - c_t)$ of the agents in the corresponding~(proposed) coalition appear in one of the top-k coalitions. The value of the \emph{KNAPSACK} item is equal to the sum of the values of the coalitions in which the agents of the proposed coalition were identified, multiplied by $r_{j, C} / \sum_{m \in C} r_{m,C}$. The weight of the corresponding item is, naturally, the requested quantity $r_{j, C}$. A responder accepts a proposal which was not included in the input of the \emph{KNAPSACK} instance, and for which there is sufficient remaining resource, if the requested quantity is $1$, or else with probability $c_t$. \subsection{Q-learning for OCF} An agent that uses our Q-learning algorithm employs two distinct kinds of Q-values. The first one, denoted as $Q_a$, maintains agent-level values; while the second, denoted as $Q_s$, maintains coalition size-level values. Employing two different sets of Q-values is necessary, since the alternative of maintaining a Q-value for every possible coalition requires exponential space in the number of agents~(rendering the problem practically intractable in large settings). Agent $i$ maintains for each agent $j \in N \setminus i$ a $Q^i_{a,j}$ value, and for each $h \in \{1, \ldots, n-1\}$ a $Q^i_{s,h}$ value; keeping a $Q^i_{s,h}$ value for $h = n$ is redundant since the decision-maker always includes herself in a coalition. Furthermore, a learning rate $\delta_t \in (0,1)$ is employed~\cite{sutton1998}, as is common in Q-learning, where $t$ is the game iteration. After $C \ni i$ is formed and the coalitional value $u_C$ is observed, agent $i$ updates her Q-values as follows: \begin{gather*} Q^i_{a, j} \leftarrow Q^i_{a, j} + \delta_t \cdot (u_C - Q^i_{a, j}) \hspace{2mm} \forall j \in C \setminus i \\ Q^i_{s, h} \leftarrow Q^i_{s, h} + \delta_t \cdot (u_C - Q^i_{s, h}) \hspace{2mm} \text{where } h = |C| - 1 \end{gather*} A proposer employing our Q-learning algorithm iteratively selects some quantity of her resource, at random, to offer to a coalition, until it is depleted. At each such step, the size of the coalition to propose (excluding herself) is selected using the \emph{softmax function} over the $Q^i_s$ values, and afterwards the agents to include in the coalition are selected using the \emph{softmax function} over the $Q^i_a$ values. The proposer asks from each member of $C$ the quantity she has offered to $C$, multiplied by a random value drawn from $U(0.9, 1.1)$. Exploration is employed in the same way as in the other methods. Responder $j$ has to solve a \emph{KNAPSACK} problem, where a proposal regarding coalition $C$ is given as input only if $\sum_{l \in C \setminus j} Q^j_{a,l}$ is positive, with its value being $\sum_{l \in C \setminus j} Q^j_{a,l} \cdot$ $r_{j, C} / \sum_{m \in C} r_{m,C}$ and its weight $r_{j, C}$.
If $\sum_{l \in C \setminus j} Q^j_{a,l}$ is negative, $j$ accepts (if she can afford it) joining $C$ if the requested quantity is $1$, or else with probability $c_t$. \section{Experimental Evaluation} We evaluated \texttt{OVERPRO}'s effectiveness and robustness in environments with 50 and 250 agents. Agent resource quantities were generated from $\{475, \ldots, 525\}$ uniformly at random. The number of RRs was $500$ for $n = 50$, and $20{,}000$ for $n=250$, with the value of each RR generated from $\mathcal{N} (0,100^2)$. We added stochasticity to our setting, so that with probability $5\%$ the value of a coalition, as resulting from applying the RRs, is multiplied by a factor generated from $\mathcal{N} (0,5^2)$. Every game ran for $I$ = 1000 iterations, and thus agent $i$ can observe at most $r_i \cdot 1000$ documents. Coding was in Python 3 and online LDA was implemented as in~\cite{hoffman2010}.\footnote{https://github.com/blei-lab/onlineldavb} The same exploration rate $z_t$ was set for all methods, decreasing quadratically from $1$ to $10^{-3}$. The value of $c_t$ decreases linearly, for every method as well, from $1$ to $0.5$. We tested \texttt{OVERPRO}, which requires $K$ (number of topics),\footnote{In some LDA implementations the value of $K$ is automatically derived~\cite{teh2012}, but we use the standard online LDA algorithm, which requires the value of $K$ as a parameter.} $\tau_0$ and $\kappa$ (that determine the impact \mbox{$\rho = (\tau_0 +t)^{-\kappa}$} of a batch of documents on the topics), and Q-learning, which requires $\delta_t$, for a number of different parameter values; for Greedy top-k we set the values of \emph{k} equal to the numbers of topics $K$ used for \texttt{OVERPRO}. The value of $\epsilon$ used in \texttt{OVERPRO} was equal to the inverse of the vocabulary's size, i.e., $(n+2)^{-1}$. Experiments ran on a grid~(each execution instance ran sequentially) with 4GB RAM, 2.6GHz computers. \begin{table} \caption { Results~(averages over $75$ runs) for $50$ agents and different values of $\langle K, \tau_0, \kappa \rangle$ for \texttt{OVERPRO}, \emph{k} for Greedy top-k, and $\delta_t$ for Q-learning. Participation and time are per agent per iteration $t$~(there is a unique proposer in $t$). } \centering \label{table:1} \begin{tabular}{|c|c|c|c|} \hline $n = 50$ & sw~($\cdot 10^3$) & participation & time~(sec) \\ \hline $\langle 10, 100, 0.7 \rangle$ & 95.60 & 24.91 & 0.525 \\ \hline $\langle 10, 200, 0.9 \rangle$ & 117.27 & 24.97 & 0.528 \\ \hline $\langle 15, 100, 0.7 \rangle$ & 108.59 & 25.09 & 0.540 \\ \hline $\langle 15, 200, 0.9 \rangle$ & 119.47 & 25.17 & 0.543 \\ \hline $k = 10$ & 34.54 & 37.70 & 0.363 \\ \hline $k = 15$ & 51.72 & 37.64 & 0.366 \\ \hline $\delta_t = 0.95^t$ & 14.53 & 38.15 & 0.009 \\ \hline $\delta_t = 0.99^t$ & 10.69 & 38.08 & 0.009 \\ \hline \end{tabular} \end{table} In Table~\ref{table:1} we present for $n = 50$ the average: social welfare~(total utility) earned in a game~(sw); number of coalitions in which an agent participates in a round~(participation); and game completion time per agent per iteration. As observed in Table~\ref{table:1}, \texttt{OVERPRO} vastly outperforms both Greedy top-k and Q-learning in terms of social welfare. For the best set of parameters of \texttt{OVERPRO}, $\langle K=15, \tau_0=200, \kappa=0.9 \rangle$, the average social welfare earned in a game was more than double that earned by the best alternative, Greedy top-$15$, since $119.47 / 51.72 \approx 2.3$.
Thus, we can conclude that a stochasticity probability even as low as $5\%$ can have a largely negative impact on Greedy top-k. On the other hand, this demonstrates the robustness of \texttt{OVERPRO}. Q-learning, for both values of $\delta_t$, performed very poorly, as in both cases the social welfare was not far above zero: $14.53$ for $\delta_t = 0.95^t$ and $10.69$ for $\delta_t = 0.99^t$. This suggests a deficiency in matching good agent-level Q-values to coalition size-level ones, and thus the unsuitability of Q-learning approaches when Q-values for every coalition cannot be maintained. For both $10$ and $15$ topics, the social welfare was better for $\langle \tau_0=200, \kappa=0.9 \rangle$ than for $\langle \tau_0=100, \kappa=0.7 \rangle$. Since $\tau_0$ and $\kappa$ determine the impact that a batch of documents has on the formation of the topics, $\rho_t = (\tau_0 + t)^{-\kappa}$ can be interpreted as a learning rate. Now, higher values of $\tau_0$ and $\kappa$ result in smaller values of $\rho_t$. Therefore, it can be conjectured that lower learning rates are preferred over higher ones. Despite the better social welfare performance of \texttt{OVERPRO} compared to any of the alternatives, the average agent participation per iteration is lower when \texttt{OVERPRO} is employed, as seen in Table~\ref{table:1}. In particular, an agent using \texttt{OVERPRO} joins about $25$ coalitions per iteration, while one using either Greedy top-k or Q-learning joins about $38$. By the end of a game an agent employing \texttt{OVERPRO} will have trained her online LDA with more than $24.9$k documents, since one coalition corresponds to one document, an agent participates in at least $24.91$ coalitions in a round~\big(value for $\langle K=10, \tau_0 = 100, \kappa=0.7 \rangle$\big), and $I$=$1000$. Notice that the number of coalitions in which an agent participates when \texttt{OVERPRO} is employed is much smaller than her resource quantity, which is at least $475 \gg 25.17$~(the maximum \texttt{OVERPRO} participation). Thus, the number of documents is much smaller than the maximum possible, which implies that the pseudopolynomial complexity related to the number of documents does not have an actual impact. The time taken per agent per iteration is less than $0.55$ sec for \texttt{OVERPRO} and $0.37$ sec for Greedy top-k, while it is about two orders of magnitude lower for Q-learning. Now, one cannot draw accurate conclusions regarding the real power of an agent decision-making algorithm relying solely on social welfare. For instance, when more coalitions form, this will likely have a positive impact on social welfare---but rational agents aim to maximize their own utility. Therefore, we define \emph{efficiency} as the ratio of social welfare~(total utility) to the total resource quantity invested by all agents in every coalition in a round. This efficiency metric is natural, as it measures the utility earned relative to the resources invested, rather than raw social welfare. \begin{figure}[h] \centering \includegraphics[width=10cm]{chart.png} \caption{Average efficiency defined as the ratio of social welfare to total resource quantity invested by all agents. For \texttt{OVERPRO} the values of $K$~(number of topics), and respectively for Greedy top-$k$ the values of $k$, are denoted on the left of each bar. Results are averages over all rounds over multiple runs: $75$ runs for $n=50$, and $30$ for $n=250$.
} \label{fig:uq} \end{figure} It can be observed in Fig.~\ref{fig:uq} that \texttt{OVERPRO}, for $n = 50$~(orange bars), outperforms both Greedy top-k and Q-learning in terms of efficiency. In particular, the highest efficiency value, which appears for $\langle K=10, \tau_0 = 200, \kappa=0.9 \rangle$, is more than double the best efficiency value of the alternatives, observed for Greedy top-$10$~(the former being over $0.06$, and the latter lower than $0.03$). We can thus conclude that agents employing \texttt{OVERPRO} are more efficient in terms of earning utility~(as a function of resources invested), as they focus more on coalitions identified as profitable. Now, as depicted by the blue bars in Fig.~\ref{fig:uq}, \texttt{OVERPRO} achieves even better efficiency for $n=250$ than for $n=50$, and it thus appears to effectively exploit the richer emerging collaboration structure. Moreover, we observed through experimentation that the number of topics $K$ should increase sublinearly with $n$, and we have thus used $15$ and $20$ as its values. We set the values of $k$ for Greedy top-k accordingly. We observe that for $\tau_0 = 200$ and $\kappa = 0.9$, for $K$ set to either $15$ or $20$, the achieved efficiency is over $0.1$, thus vastly outperforming the RL algorithms, whose efficiency is always lower than $0.03$; and at the same time supporting our conjecture that lower learning rates are associated with increased performance. Agents employing Q-learning performed better for $n = 250$ than for $n = 50$ in terms of efficiency, but their performance was still very far below that of the ones using \texttt{OVERPRO}. The efficiency of Greedy top-k deteriorated for $n$ = 250~(dropping even below $0.02$ for $k = 20$), as it fails to identify and exploit patterns in the collaboration structure, while it is more difficult for the method to identify profitable coalitions in this larger setting. Each agent using \texttt{OVERPRO} had trained her online LDA model with more than $33.7$k documents (coalitions) in total, at the end of a game. The \texttt{OVERPRO} running time for $n=250$ was $\sim1.15$ sec per agent per iteration. \section{Discussion} \label{sec:discussion} One of the aims of this paper is to provide insights towards new research directions. In particular, this work diverges from the standard RL/MDPs paradigm used for multiagent learning in games. As such, we expect that it will raise intriguing questions, and bears the potential for the development of exciting new theory to be applied to challenging problems. It is, for one, interesting to study the effect of adopting PTMs for multiagent learning. Taking a reverse point of view, it is of interest to examine the effect that specific multiagent environments can have on PTM properties. For instance, the convergence property of online LDA transfers ``for free'' to \texttt{OVERPRO}, but in certain environments, or under additional assumptions on the structure of the game, the convergence rate might be different. Moreover, a somewhat ``orthogonal'' contribution in this paper was the introduction of the novel concept of \emph{Relational Rules (RRs)}, which constitute a natural scheme for representing synergies in overlapping settings. As such, pursuing their further study could lead to new stability results for overlapping cooperative games.
Indeed, the concept of stability in games with overlapping coalitions is quite elaborate and different from that in games with disjoint coalitions, since it is not just the membership of an agent in a coalition that matters, but also the degree to which she participates in it; and since the number of different coalition structures cannot be enumerated in such settings~\cite{chalkiadakis2010}. Additionally, in such settings, it is not just agents or coalitions that can deviate, but entire coalition structures, since agents can withdraw just a portion of their resource from the formed coalitions. All these render the study of stability challenging in OCF domains. Though we did not pursue the study of OCF stability in this paper, it would be interesting to define and study stability concepts that take into account agent preferences regarding collaboration structures learned using our method. \section{Conclusions and Future Work} We have presented a novel approach for multiagent learning in cooperative game environments, where probabilistic topic modeling is exploited. Furthermore, this is the first work to tackle overlapping coalition formation under uncertainty, where the uncertainty is on the relations entailing synergies among the agents. To this end, we first proposed Relational Rules, a representation scheme which extends MC-nets to cooperative games with overlapping coalitions; and then showed how to use online LDA to implicitly learn the agents' synergies described by~(unknown) RRs. Simulations confirm the method's effectiveness. As immediate future work, we intend to test these ideas in non-transferable utility settings. Moreover, we would like to apply our method to non-cooperative environments. Naturally, this would require adjustments to the ``vocabulary'' used in ``documents'' representing coalitions. Finally, we intend to apply alternative PTM algorithms to this or different game-theoretic settings. \balance
{ "timestamp": "2018-04-17T02:07:04", "yymm": "1804", "arxiv_id": "1804.05235", "language": "en", "url": "https://arxiv.org/abs/1804.05235" }
\section{Introduction}\label{sec:introduction} Never-Ending Language Learning (NELL)~\cite{carlson_coupling_2009,mitchell_never-ending_2015} is an autonomous computational system that aims to learn continually and incrementally. NELL has been running for about 7 years at Carnegie Mellon University (US). Currently, NELL has collected over 50 million candidate beliefs, of which about 3.6 million have been promoted as trustworthy statements. NELL learns from the web and uses a previously created ontology to guide the learning. One of the most significant resource contributions of NELL, in addition to the millions of beliefs learned from the Web, is NELL's internal representation (or metadata) for categories, relations and concepts. Such internal representation grows in every iteration, and is used by NELL as a set of different (and constantly updated) \emph{feature vectors} to continuously retrain NELL's learning components and build its own way of understanding what it reads from the Web. \citet{zimmermann_nell2rdf:_2013} published in 2013 a solution to convert NELL's beliefs and ontology into RDF and OWL. However, NELL's internal metadata is not modeled in their work. Thus, the main contribution of this work is to extend the approach to include all the provenance metadata (NELL's internal representation) for each belief. We publish this data using five different representation models: RDF reification~\cite[Sec.~5.3]{brickley_rdf_2014}, N-Ary relations~\cite{noy_defining_2006}, Named Graphs~\cite{carroll_named_2005}, Singleton Properties~\cite{nguyen_dont_2014}, and NdFluents~\cite{gimenez-garcia_ndfluents:_2017}. In addition, we publish not only the promoted beliefs, but also the candidates. As far as we know, this dataset contains more metadata about the statements than any other available dataset in the linked data cloud. This in itself can also be interesting for researchers that seek to manage and exploit meta-knowledge. Our intention is to keep this information updated and integrate it on NELL's web page\footnote{\url{http://rtw.ml.cmu.edu/}}. The rest of the paper is organized as follows: Section \ref{sec:nell} presents NELL and the components it comprises; Section \ref{sec:nell2rdf} describes the transformation of NELL data and metadata to RDF; Section \ref{sec:dataset} presents the dataset generated in this paper and how it is published; finally, Section \ref{sec:conclusion} provides final remarks and future work. \section{The Never-Ending Language Learning System}\label{sec:nell} NELL~\cite{carlson_coupling_2009,mitchell_never-ending_2015} was built based on a new Machine Learning (ML) paradigm, Never-Ending Learning (NEL). The NEL paradigm is a semi-supervised learning~\cite{blum_combining_1998} approach focused on giving a machine learning system the ability to autonomously use what it has previously learned in order to continuously become a better learner. NELL is based on a number of coupled components working in parallel. These components read the web and use different approaches not only to infer new knowledge in the form of beliefs, but also to infer new ways of internally representing the learned beliefs and their properties. Beliefs are divided into candidate and promoted beliefs. In order to be promoted, a belief needs to have a confidence score of at least 0.9. \begin{enumerate} \item \textbf{AliasMatcher} finds relations between entities and their Wikipedia URL on Freebase. It was run only once and is currently not active.
\item \textbf{CML} \textit{(Coupled Morphologic Learner)} \cite{carlson_toward_2010} is responsible for identifying morphological regularities (such as that words ending in \texttt{burg} could be cities). It makes use of orthographic features of noun phrases (\eg length and number of words, capitalization, prefixes and suffixes). \textit{CMC} is the previous version of this component. \item \textbf{CPL} \textit{(Coupled Pattern Learner)} \cite{carlson_toward_2010} is the component that learns Named Entities (NE) and Textual Patterns (TP) from text in web pages. Internally, a different implementation was used between 2010 and 2013 that could learn categories and relations together. After that, CPL was split into CPL1 and CPL2, the former learning categories and the latter relations, but the distinction is not made in the knowledge base. All the knowledge from CPL1 is promoted only if CPL2 agrees. \ie CPL will extract TPs for categories (\texttt{\_ is a city}, \texttt{city such as \_}, \etc) and for relations (\texttt{arg1 is a city located in arg2}, \texttt{arg1 is the capital of arg2}, \etc). Then, using those TPs, CPL will extract NEs for categories (e.g. \texttt{city(Paris)}, \texttt{city(Annecy)}, \etc) and NE pairs for relations (\texttt{locatedIn(Paris, France)}, \texttt{locatedIn(Annecy, France)}, \etc). \item \textbf{KbManipulation} is used to correct some old bugs in NELL's internal indexing knowledge. Several of these bugs should be removed automatically, but NELL does not have an automated process for this task yet. \item \textbf{LatLong} matches the literal string of Named Entities against a fixed geolocation database. \item \textbf{LE (Learned Embeddings)} \cite{yang_joint_2016} predicts new categories or relations of entities based on Event and Named Entity extraction. It creates a feature space where each dimension is a single NELL predicate, and NELL's learned NEs (or NE pairs for relations) are used as training examples. LE's process predicts categories or relations for NEs (or NE pairs) that were not related in the training set. \item \textbf{MBL}, also known as \textit{ErrorBasedIntegrator} and \textit{Knowledge Integrator}, is the component responsible for making the promotion decision based on the contributions of the other components. \textit{EntityResolverCleanup} is the name used for the same MBL process applied during a big alteration of NELL's knowledge base. In 2010 a big change was made in NELL's KB structure to make it possible for two words to have different meanings (e.g. apple the fruit and Apple the company) and, conversely, for a concept to use different words (e.g. Google and Google Inc.). \item \textbf{OE} \textit{(Open Eval)} \cite{samadi_openeval:_2013} queries the web and extracts small texts using predicate instances. OE calculates the score based on the text distance between the instances in a relation. \item \textbf{OntologyModifier} is used for any ontology alteration. This component appears in the knowledge base when a new seed or an ontology extension is manually introduced. \item \textbf{PRA} \textit{(Path Ranking Algorithm)} \cite{gardner_incorporating_2014} is based on Random Walk Inference. PRA analyzes the connections between instances of the two categories which are the arguments of a relation. This component replaced the old \textit{Rule Learner} component. \item \textbf{RL} \textit{(Rule Learner)} \cite{lao_random_2011} extracts new knowledge using Horn Clauses based on the ontology.
Its implementation was based on FOIL \cite{quinlan_foil:_1993}. It can be found in NELL's KB, but its execution stopped when NELL started to deal with polysemy resolution. \item \textbf{SEAL} \textit{(Coupled Set Expander for Any Language)} \cite{wang_language-independent_2007} is the component responsible for extracting knowledge from HTML patterns. It works in a similar way to CPL, but using HTML patterns instead of textual patterns. In the past it was called \textit{CSEAL}, but after some improvements in its performance its name was changed to SEAL. \item \textbf{Semparse} \cite{krishnamurthy_joint_2014} combines syntactic parsing from CCGbank (a conversion of the Penn Treebank corpus of trees \cite{marcus_penn_1994}) and distant supervision. \item \textbf{SpreadsheetEdits} provides modifications to NELL's knowledge base using human feedback. \end{enumerate} Each of these components, with the exception of \texttt{LE}, outputs provenance information regarding its execution. In the next sections we present how this metadata is modeled in RDF. \section{Converting NELL to RDF}\label{sec:nell2rdf} In this section we describe how NELL data and metadata are transformed into RDF. The first subsection presents how NELL's ontology and beliefs are converted, following the work by \citet{zimmermann_nell2rdf:_2013}; the second subsection describes how we convert the provenance metadata associated with each belief. The NELL knowledge bases used in this paper for the promoted and candidate beliefs correspond to iterations 1075\footnote{\url{http://rtw.ml.cmu.edu/resources/results/08m/NELL.08m.1075.esv.csv.gz}} and 1070\footnote{\url{http://rtw.ml.cmu.edu/resources/results/08m/NELL.08m.1070.cesv.csv.gz}}, respectively. The code is publicly available on GitHub\footnote{\url{https://github.com/WDAqua/nell2rdf}}. \subsection{Converting NELL's beliefs to RDF}\label{subsec:nell2rdf.data} NELL's ontology is published as a file with three tab-separated values per line, where each line expresses a relationship between categories and other categories, relations, or values used by NELL processes. In order to convert NELL's ontology to RDF, each line is transformed into a triple as per \citet{zimmermann_nell2rdf:_2013}. In short, the first and third values are a pair of categories or relations, or either a category or relation in the first field and a value in the third. The second field is a predicate that indicates the relationship between the two elements. The transformations can be seen in Table~\ref{tab:ontology}.
\begin{table} \centering \caption{NELL's ontology predicates and their translation into RDFS / OWL (from \cite{zimmermann_nell2rdf:_2013})} \label{tab:ontology} \begin{tabular}{l|l} \textbf{NELL predicate} & Translation to RDFS / OWL \\ \hline antireflexive & rdf:type owl:IrreflexiveProperty \\ antisymmetric & antisymmetric Literal(?object,xsd:boolean) \\ description & rdfs:comment Literal(?object,@en) \\ domain & rdfs:domain Class(?object) \\ domainwithinrange & domainWithinRange Literal(?object,xsd:boolean) \\ generalizations & rdfs:subClassOf Class(?object) \\ humanformat & humanFormat Literal(?object,xsd:string) \\ instancetype & instanceType IRI(?object) \\ inverse & owl:inverseOf ?object \\ memberofsets & \textit{if} ?object \textit{is} rtwcategory \textit{then} rdf:type rdfs:Class \\ & \textit{else} ?object \textit{is} rtwrelation \textit{then} rdf:type rdf:Property \\ mutexpredicates & \textit{if} ?subject \textit{is a} class \textit{then} owl:disjointWith ?object \\ & \textit{else} ?subject \textit{is a} property \textit{then} owl:propertyDisjointWith ?object \\ nrofvalues & \textit{if} ?object \textit{is} 1 \textit{then} rdf:type owl:FunctionalProperty \\ populate & populate Literal(?object,xsd:boolean) \\ range & rdfs:range ?object \\ rangewithindomain & rangeWithinDomain Literal(?object,xsd:boolean) \\ visible & visible Literal(?object,xsd:boolean) \\ \end{tabular} \end{table} NELL's beliefs are also published in tab-separated format, where each line contains a number of fields expressing the belief and the associated metadata, such as the iteration of promotion, the confidence score, or the activity of the components that inferred the belief. All the fields except 4, 5, 6, and 13 are used to convert the beliefs into RDF statements. Table \ref{tab:fields} shows the meaning of each field. Fields 1, 2, and 3 are converted into the subject, predicate, and object of an RDF statement; the content of fields 7 and 8 creates new statements using the \texttt{rdfs:label} property; fields 9 and 10 create new triples with the property \texttt{skos:prefLabel}; finally, fields 11 and 12 are used to create triples indicating the types of the subject and the object. For a more detailed description of this step, refer to \citet{zimmermann_nell2rdf:_2013}. \begin{table} \centering \caption{Description of NELL's belief fields} \label{tab:fields} \begin{tabular}{r|l|l} \textbf{\#} & \textbf{Field} & \textbf{Description} \\ \hline 1 & Entity & Subject of the belief \\ 2 & Relation & Predicate of the belief \\ 3 & Value & Object of the belief \\ 4 & Iteration & Iteration when the belief was promoted, or a list of iterations \\ & & when the components generated the belief \\ 5 & Probability & Confidence score of the belief \\ 6 & Source & MBL activity to promote the belief \\ 7 & Entity literalStrings & Labels of the subject \\ 8 & Value literalStrings & Labels of the object \\ 9 & Best Entity literalString & Preferred label of the subject \\ 10 & Best Value literalString & Preferred label of the object \\ 11 & Categories for Entity & Classes of the subject \\ 12 & Categories for Value & Classes of the object \\ 13 & Candidate Source & Activity of the components that generated the belief\\ \end{tabular} \end{table} \subsection{Converting NELL metadata to RDF}\label{subsec:nell2rdf.metadata} Fields 4, 5, 6, and 13 of each NELL belief are used to extract the metadata. Each belief is represented by a resource, to which we attach the provenance information.
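Before turning to these fields, the belief-conversion step just described can be sketched as follows; we assume the rdflib library, an illustrative base namespace, and pre-tokenized label and category fields (the exact IRI scheme of NELL2RDF may differ): \begin{verbatim}
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, SKOS

NELL = Namespace("http://example.org/nell/")   # illustrative base IRI

def belief_to_rdf(f, g):
    """f: the values of one belief line, indexed as in the fields table
    (f[0] is unused); label/category fields are assumed pre-tokenized."""
    s, p, o = NELL[f[1]], NELL[f[2]], NELL[f[3]]
    g.add((s, p, o))                           # fields 1-3: the statement itself
    for label in f[7]:                         # fields 7-8: labels
        g.add((s, RDFS.label, Literal(label)))
    for label in f[8]:
        g.add((o, RDFS.label, Literal(label)))
    g.add((s, SKOS.prefLabel, Literal(f[9])))  # fields 9-10: preferred labels
    g.add((o, SKOS.prefLabel, Literal(f[10])))
    for cls in f[11]:                          # fields 11-12: types
        g.add((s, RDF.type, NELL[cls]))
    for cls in f[12]:
        g.add((o, RDF.type, NELL[cls]))
\end{verbatim}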
In the promoted beliefs process, field 4 is used to extract the iteration when the belief was promoted, while field 5 gives a confidence score about it. On the other hand, in the candidate beliefs process, fields 4 and 5 contain the iterations when each component generated information about the belief, and the confidence score provided by each of them. Field 6 contains summary information about the activity of MBL when processing the promoted belief. The information in field 6 is a summary of field 13; for that reason, we only process field 13. Finally, in field 13 every activity that took part in generating the statement is parsed. The ontology can be seen in Figure \ref{fig:metadata_ontology}. We make use of the PROV-O ontology \cite{lebo_prov-o:_2013} to describe the provenance. Each \texttt{Belief} can be related to one or more \texttt{ComponentExecution} resources that, in turn, are performed by a \texttt{Component}. If the belief is a \texttt{PromotedBelief}, it has attached its \texttt{iterationOfPromotion} and \texttt{probabilityOfBelief}. The \texttt{ComponentExecution} is related to information about the process: the \texttt{iteration}, \texttt{probability}, \texttt{Token}, \texttt{source} and \texttt{atTime} (the date and time it was processed). The \texttt{Token} expresses the concepts that the \texttt{Component} is relating. Those concepts can be a pair of entities for a \texttt{RelationToken}, or an entity and a class for a \texttt{GeneralizationToken} (note that the \texttt{LatLong} component has a different token, \texttt{GeoToken}, further described later). Finally, each component execution has a \texttt{source} string describing the component's process for the belief. This string is then further analyzed and translated into a different set of IRIs for each type of component in the subsections below. The classes of the ontology are described in Table \ref{tab:classes} and the properties of the ontology are described in Table \ref{tab:properties}. The classes and properties specific to each component are described below.
\begin{sidewaysfigure} \centering \includegraphics[width=1\textwidth]{img/nellrdf_ontology.eps} \caption{NELL2RDF metadata ontology} \label{fig:metadata_ontology} \end{sidewaysfigure}

\begin{table} \centering \caption{Description of NELL metadata classes} \label{tab:classes} \begin{tabular}{l|l|l} \textbf{Class} & \textbf{rdfs:subClassOf} & \textbf{Description} \\ \hline \texttt{Belief} & \texttt{prov:Entity} & A belief \\ \texttt{PromotedBelief} & \texttt{Belief} & A promoted belief \\ \texttt{CandidateBelief} & \texttt{Belief} & A candidate belief \\ \texttt{ComponentExecution} & \texttt{prov:Activity} & The activity of a component in an iteration \\ \texttt{Component} & \texttt{prov:SoftwareAgent} & A component \\ \texttt{Token} & \texttt{owl:Thing} & The tuple that was inferred by the activity \\ \texttt{RelationToken} & \texttt{Token} & The tuple \textless Entity,Entity\textgreater~that was \\ & & inferred for a relation \\ \texttt{GeneralizationToken} & \texttt{Token} & The tuple \textless Entity,Category\textgreater~that was \\ & & inferred for a generalization \\ \texttt{GeoToken} & \texttt{Token} & The tuple \textless Entity,Longitude,Latitude\textgreater \\ & & that was inferred for a geographical belief \\ \end{tabular} \end{table}

\begin{table} \centering \caption{Description of NELL metadata properties} \label{tab:properties} \begin{tabular}{l|lll} \textbf{Property} & \textbf{rdf:type / rdfs:subPropertyOf} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{3}{|l}{\textbf{Description}} \\ \hline \texttt{generatedBy} & \texttt{prov:wasGeneratedBy} & \texttt{Belief} & \texttt{ComponentExecution} \\ & \multicolumn{3}{|l}{The belief was generated by the execution of the component} \\ \texttt{associatedWith} & \texttt{prov:wasAssociatedWith} & \texttt{ComponentExecution} & \texttt{Component} \\ & \multicolumn{3}{|l}{The execution was performed by the component} \\ \texttt{iterationOfPromotion} & \texttt{owl:DatatypeProperty} & \texttt{PromotedBelief} & \texttt{xsd:integer} \\ & \multicolumn{3}{|l}{Iteration in which the belief was promoted} \\ \texttt{probabilityOfBelief} & \texttt{owl:DatatypeProperty} & \texttt{PromotedBelief} & \texttt{xsd:decimal} \\ & \multicolumn{3}{|l}{Confidence score of the belief} \\ \texttt{iteration} & \texttt{owl:DatatypeProperty} & \texttt{ComponentExecution} & \texttt{xsd:integer} \\ & \multicolumn{3}{|l}{Iteration in which a component performed the activity} \\ \texttt{probability} & \texttt{owl:DatatypeProperty} & \texttt{ComponentExecution} & \texttt{xsd:decimal} \\ & \multicolumn{3}{|l}{Confidence score given by the component} \\ \texttt{hasToken} & \texttt{owl:ObjectProperty} & \texttt{ComponentExecution} & \texttt{Token} \\ & \multicolumn{3}{|l}{The concepts that the component is relating} \\ \texttt{source} & \texttt{owl:DatatypeProperty} & \texttt{ComponentExecution} & \texttt{xsd:string} \\ & \multicolumn{3}{|l}{Data that was used by the component in the activity} \\ \texttt{atTime} & \texttt{owl:DatatypeProperty} & \texttt{ComponentExecution} & \texttt{xsd:dateTime} \\ & \multicolumn{3}{|l}{Date and time when the component execution was performed} \\ \texttt{tokenEntity} & \texttt{owl:DatatypeProperty} & \texttt{Token} & \texttt{xsd:string} \\ & \multicolumn{3}{|l}{Entity on which the data was inferred} \\ \texttt{relationValue} & \texttt{owl:DatatypeProperty} & \texttt{RelationToken} & \texttt{xsd:string} \\ & \multicolumn{3}{|l}{Entity related to the entity denoted by \texttt{tokenEntity}} \\ \texttt{generalizationValue} & \texttt{owl:DatatypeProperty} & \texttt{GeneralizationToken} & \texttt{xsd:string} \\ & \multicolumn{3}{|l}{Class of the entity denoted by \texttt{tokenEntity}} \\ \end{tabular} \end{table}
\paragraph{AliasMatcher} execution is denoted by a resource of class \texttt{AliasMatcherExecution}, and includes the date when the data was extracted from Freebase, given by the property \texttt{freebaseDate}. The added ontology can be seen in Figure~\ref{fig:aliasmatcherexecution}. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{img/AliasMatcherExecution.eps} \caption{AliasMatcherExecution metadata ontology} \label{fig:aliasmatcherexecution} \end{figure} \paragraph{CMC} execution is denoted by a resource of class \texttt{CMCExecution}. A number of morphological patterns \texttt{MorphologicalPatternScoreTriple} are attached to it, each one containing a name, a value, and a confidence score. The properties used can be seen in Table~\ref{tab:properties.cmc}, while the ontology diagram is shown in Figure~\ref{fig:cmcexecution}. \begin{table} \centering \caption{Description of CMC metadata properties} \label{tab:properties.cmc} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{morphologicalPattern} & \texttt{CMCExecution} & \texttt{MorphologicalPatternScoreTriple} \\ & \multicolumn{2}{|l}{One of the morphological patterns used by \texttt{CMC}} \\ \hline \texttt{morphologicalPatternName} & \texttt{MorphologicalPatternScoreTriple} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Name of the morphological pattern (\ie prefix, suffix, etc.)} \\ \hline \texttt{morphologicalPatternValue} & \texttt{MorphologicalPatternScoreTriple} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Value of the morphological pattern (\ie prefix = Saint and suffix = burgh)} \\ \hline \texttt{morphologicalPatternScore} & \texttt{MorphologicalPatternScoreTriple} & \texttt{xsd:decimal} \\ & \multicolumn{2}{|l}{Score of the morphological pattern} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.50\linewidth]{img/CMCExecution.eps} \caption{CMC metadata ontology} \label{fig:cmcexecution} \end{figure} \paragraph{CPL} execution is denoted by a resource of class \texttt{CPLExecution}. It contains a series of textual patterns \texttt{patternOccurrences}, each one with a literal that describes the pattern and the number of times it has occurred in NELL's data source. The properties used are described in Table~\ref{tab:properties.cpl}, and the ontology diagram is shown in Figure~\ref{fig:cplexecution}.
\begin{table} \centering \caption{Description of CPL metadata properties} \label{tab:properties.cpl} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{patternOccurrences} & \texttt{CPLExecution} & \texttt{PatternNbOfOccurrencesPair} \\ & \multicolumn{2}{|l}{One of the textual patterns used by \texttt{CPL}} \\ \hline \texttt{textualPattern} & \texttt{PatternNbOfOccurrencesPair} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Textual pattern in the form of a sentence} \\ \hline \texttt{nbOfOccurrences} & \texttt{PatternNbOfOccurrencesPair} & \texttt{xsd:nonNegativeInteger} \\ & \multicolumn{2}{|l}{Number of times it has occurred in NELL's source data} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.50\linewidth]{img/CPLExecution.eps} \caption{CPL metadata ontology} \label{fig:cplexecution} \end{figure} \paragraph{KbManipulation} execution is denoted by a resource of class \texttt{KbManipulationExecution}. It contains the bug \texttt{oldBug} that was manually fixed. It is shown in Figure~\ref{fig:kbmanipulationexecution}. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{img/KbManipulationExecution.eps} \caption{KbManipulation metadata ontology} \label{fig:kbmanipulationexecution} \end{figure} \paragraph{LatLong} execution is denoted by a resource of class \texttt{LatLongExecution}. It contains a list of locations \texttt{NameLatLongTriple} that were used to infer the belief, each one containing the \texttt{name} and the latitude and longitude values. This execution also has its own token, \texttt{GeoToken}, which reuses the same properties for the latitude and longitude values. The properties are detailed in Table~\ref{tab:properties.latlong}, and the ontology diagram is shown in Figure~\ref{fig:latlongexecution}. \begin{table} \centering \caption{Description of LatLong metadata properties} \label{tab:properties.latlong} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{location} & \texttt{LatLongExecution} & \texttt{NameLatLongTriple} \\ & \multicolumn{2}{|l}{One of the locations used by \texttt{LatLong}} \\ \texttt{name} & \texttt{NameLatLongTriple} & \texttt{rdf:langString} \\ & \multicolumn{2}{|l}{Name of the location} \\ \texttt{latitudeValue} & \texttt{NameLatLongTriple} & \texttt{xsd:decimal} \\ & \multicolumn{2}{|l}{Latitude of the location} \\ \texttt{longitudeValue} & \texttt{NameLatLongTriple} & \texttt{xsd:decimal} \\ & \multicolumn{2}{|l}{Longitude of the location} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.50\linewidth]{img/LatLongExecution.eps} \caption{LatLong metadata ontology} \label{fig:latlongexecution} \end{figure} \paragraph{LE} execution is denoted by a resource of class \texttt{LEExecution}. It does not contain any additional triples. \paragraph{MBL} execution is denoted by a resource of class \texttt{MBLExecution}. It contains the entities and the categories of the other belief that was used to promote this one. The properties used are described in Table~\ref{tab:properties.mbl}, and the ontology diagram is shown in Figure~\ref{fig:mblexecution}.
\begin{table} \centering \caption{Description of MBL metadata properties} \label{tab:properties.mbl} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{promotedEntity} & \texttt{MBLExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Entity of a belief previously promoted} \\ \hline \texttt{promotedEntityCategory} & \texttt{MBLExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Category of the entity of the promoted belief} \\ \hline \texttt{promotedRelation} & \texttt{MBLExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Relation of the promoted belief} \\ \hline \texttt{promotedValue} & \texttt{MBLExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Value of the promoted belief} \\ \hline \texttt{promotedValueCategory} & \texttt{MBLExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Category of the value of the promoted belief, if applicable} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.40\linewidth]{img/MBLExecution.eps} \caption{MBL metadata ontology} \label{fig:mblexecution} \end{figure} \paragraph{OE} execution is denoted by a resource of class \texttt{OEExecution}. It contains a set of pairs \texttt{TextUrlPair}, each one including the sentence that was used to infer the belief and the URL from which it was extracted. The properties used can be found in Table~\ref{tab:properties.oe}, and the ontology diagram in Figure~\ref{fig:oeexecution}. \begin{table} \centering \caption{Description of OE metadata properties} \label{tab:properties.oe} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{textUrl} & \texttt{OEExecution} & \texttt{TextUrlPair} \\ & \multicolumn{2}{|l}{One of the pairs \textless text, url\textgreater~used by \texttt{OE}} \\ \texttt{text} & \texttt{TextUrlPair} & \texttt{rdf:langString} \\ & \multicolumn{2}{|l}{Text extracted from the web} \\ \texttt{url} & \texttt{TextUrlPair} & \texttt{xsd:anyURI} \\ & \multicolumn{2}{|l}{Web page from which the text was extracted} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.50\linewidth]{img/OEExecution.eps} \caption{OE metadata ontology} \label{fig:oeexecution} \end{figure} \paragraph{OntologyModifier} execution is denoted by a resource of class \texttt{OntologyModifierExecution}. It contains the \texttt{ontologyModification}, which can be either a modification of a category or a modification of a relation. The ontology diagram can be seen in Figure~\ref{fig:ontologymodifierexecution}. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{img/OntologyModifierExecution.eps} \caption{OntologyModifier metadata ontology} \label{fig:ontologymodifierexecution} \end{figure} \paragraph{PRA} execution is denoted by a resource of class \texttt{PRAExecution}. It includes a series of \texttt{Path} resources describing the path followed in the NELL dataset to infer the belief. Each \texttt{Path} includes its direction and a confidence score, along with a list of relations followed. The properties used can be seen in Table~\ref{tab:properties.pra}, while the ontology diagram is shown in Figure~\ref{fig:praexecution}.
\begin{table} \centering \caption{Description of PRA metadata properties} \label{tab:properties.pra} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{relationPath} & \texttt{PRAExecution} & \texttt{Path} \\ & \multicolumn{2}{|l}{Relation path that entails the belief} \\ \texttt{direction} & \texttt{Path} & \texttt{DirectionOfPath} \\ & \multicolumn{2}{|l}{Direction of the path} \\ \texttt{score} & \texttt{Path} & \texttt{xsd:decimal} \\ & \multicolumn{2}{|l}{Score assigned to the entailment} \\ \texttt{listOfRelations} & \texttt{Path} & \texttt{rdf:List} \\ & \multicolumn{2}{|l}{Ordered list of relations in the path} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.50\linewidth]{img/PRAExecution.eps} \caption{PRA metadata ontology} \label{fig:praexecution} \end{figure} \paragraph{RL} execution is denoted by a resource of class \texttt{RLExecution}. It contains a resource \texttt{RuleScoresTuple} that holds the \texttt{Rule} together with a set of scores: the confidence, and the numbers of beliefs inferred with that rule that are estimated to be correct, incorrect, or of unknown correctness. The rule itself contains the variables and their values, and the predicates that are part of it. Each \texttt{Predicate} includes the name of the predicate and the two variables it uses. The complete list of properties can be found in Table~\ref{tab:properties.rl}. The ontology diagram is presented in Figure~\ref{fig:rlexecution}. \begin{table} \centering \caption{Description of RL metadata properties} \label{tab:properties.rl} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{ruleScores} & \texttt{RLExecution} & \texttt{RuleScoresTuple} \\ & \multicolumn{2}{|l}{The rule and set of scores used by \texttt{RL}} \\ \texttt{rule} & \texttt{RuleScoresTuple} & \texttt{Rule} \\ & \multicolumn{2}{|l}{The rule \texttt{RL} used to infer the belief, in the form of Horn clauses} \\ \texttt{accuracy} & \texttt{RuleScoresTuple} & \texttt{xsd:decimal} \\ & \multicolumn{2}{|l}{Estimated accuracy of the rule in NELL} \\ \texttt{nbCorrect} & \texttt{RuleScoresTuple} & \texttt{xsd:nonNegativeInteger} \\ & \multicolumn{2}{|l}{Estimated number of correct beliefs created by the rule} \\ \texttt{nbIncorrect} & \texttt{RuleScoresTuple} & \texttt{xsd:nonNegativeInteger} \\ & \multicolumn{2}{|l}{Estimated number of incorrect beliefs created by the rule} \\ \texttt{nbUnknown} & \texttt{RuleScoresTuple} & \texttt{xsd:nonNegativeInteger} \\ & \multicolumn{2}{|l}{Number of beliefs created by the rule whose correctness is unknown} \\ \texttt{variable} & \texttt{Rule} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{One of the variables that appear in the rule} \\ \texttt{valueOfVariable} & \texttt{Rule} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Value of the variable inferred by the rule} \\ \texttt{predicate} & \texttt{Rule} & \texttt{Predicate} \\ & \multicolumn{2}{|l}{One of the predicates that appear in the rule} \\ \texttt{predicateName} & \texttt{Predicate} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Name of the predicate} \\ \texttt{firstVariable} & \texttt{Predicate} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{First variable of the predicate} \\ \texttt{secondVariable} & \texttt{Predicate} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Second variable of the predicate} \\ \end{tabular} \end{table}
\begin{figure} \centering \includegraphics[width=1\linewidth]{img/RLExecution.eps} \caption{RL metadata ontology} \label{fig:rlexecution} \end{figure} \paragraph{SEAL} execution is denoted by a resource of class \texttt{SEALExecution}. It includes the URL it used, given by the property \texttt{url}. The ontology diagram can be seen in Figure~\ref{fig:sealexecution}. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{img/SEALExecution.eps} \caption{SEAL metadata ontology} \label{fig:sealexecution} \end{figure} \paragraph{Semparse} execution is denoted by a resource of class \texttt{SemparseExecution}. It includes a literal with the sentence used during the execution, given by the property \texttt{sentence}. The ontology diagram can be seen in Figure~\ref{fig:semparseexecution}. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{img/SemparseExecution.eps} \caption{Semparse metadata ontology} \label{fig:semparseexecution} \end{figure} \paragraph{SpreadsheetEdits} execution is denoted by a resource of class \texttt{SpreadsheetEditsExecution}. It contains a set of literals describing the user who made the modification, the file used as input, the action made, and the modified entity, relation, and value. The list of properties can be seen in Table~\ref{tab:properties.spreadsheetedits}, while the ontology diagram is shown in Figure~\ref{fig:spreadsheeteditsexecution}. \begin{table} \centering \caption{Description of SpreadsheetEdits metadata properties} \label{tab:properties.spreadsheetedits} \begin{tabular}{l|ll} \textbf{Property} & \textbf{rdfs:domain} & \textbf{rdfs:range} \\ & \multicolumn{2}{|l}{\textbf{Description}} \\ \hline \texttt{user} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{User that made the modification} \\ \texttt{entity} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Entity of the belief affected by the modification} \\ \texttt{relation} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Relation of the belief affected by the modification} \\ \texttt{value} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Value of the belief affected by the modification} \\ \texttt{action} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{Action made in the modification} \\ \texttt{file} & \texttt{SpreadsheetEditsExecution} & \texttt{xsd:string} \\ & \multicolumn{2}{|l}{File where the modification was saved and then read by \texttt{SpreadsheetEdits}} \\ \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=0.40\linewidth]{img/SpreadsheetEditsExecution.eps} \caption{SpreadsheetEdits metadata ontology} \label{fig:spreadsheeteditsexecution} \end{figure} \section{The NELL2RDF Dataset}\label{sec:dataset} The current version of NELL2RDF updates the promoted beliefs to the latest version, adding the provenance triples about them. It also adds the candidate beliefs and their corresponding provenance triples. We provide the dumps for the promoted beliefs\footnote{\url{https://w3id.org/nellrdf/nellrdf.promoted.n3.gz}} and the candidate beliefs\footnote{\url{https://w3id.org/nellrdf/nellrdf.candidates.n3.gz}}.
The ontologies for the beliefs\footnote{\url{https://w3id.org/nellrdf/ontology/nellrdf.ontology.n3}} and the provenance metadata\footnote{\url{https://w3id.org/nellrdf/provenance/ontology/nellrdf.ontology.n3}} are common to both dumps. Metadata about the dataset\footnote{\url{https://w3id.org/nellrdf/metadata/nellrdf.metadata.n3}} is modeled using the VoID and DCAT vocabularies. In order to attach the metadata to each belief, we need to reify the statement into a resource. We follow five different models, described below; a minimal code sketch contrasting two of them follows the list. A graphical representation of the models is shown in Figure~\ref{fig:6figures}. A summary of the triples and resources of each model can be seen in Table~\ref{tab:models}. \begin{itemize} \item \emph{RDF Reification}~\cite[Sec.~5.3]{brickley_rdf_2014} represents the statement using a resource, and then creates triples to indicate the subject, predicate, and object of the statement. \item \emph{N-Ary Relations}~\cite{noy_defining_2006}: This model creates a new resource that identifies the relation and connects subject and object using different design patterns. Wikidata\footnote{\url{https://www.wikidata.org}} makes use of this model of annotation. \item \emph{Named Graphs}~\cite{carroll_named_2005}: A fourth element is added to each triple, which can be used to identify a triple or a set of triples later on. This model is used by Nano-publications~\cite{mons_nano-publication_2009}. \item \emph{The Singleton Property}~\cite{nguyen_dont_2014} creates a unique property for each triple, related to the original one. It defines its own semantics that extend RDF and RDFS. \item \emph{NdFluents}~\cite{gimenez-garcia_ndfluents:_2017} creates a unique version of the subject and the object (when it is not a literal) of the triple, and attaches them to the original resources and the context of the statement. \end{itemize}
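As an illustration of how the same belief looks under two of these models, the following sketch states one belief with RDF reification and with named graphs using Python's \texttt{rdflib}. All IRIs and values here are illustrative assumptions, not the exact ones used in the dumps.

\begin{verbatim}
from rdflib import Dataset, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Illustrative IRIs; not the exact scheme used by NELL2RDF.
EX = Namespace("http://example.org/nellrdf/")

ds = Dataset()

# --- RDF reification: the statement itself becomes a resource,
#     and provenance triples hang off that resource. ---
g = ds.default_context
belief = EX["belief_1"]
g.add((belief, RDF.type, RDF.Statement))
g.add((belief, RDF.subject, EX["bob"]))
g.add((belief, RDF.predicate, EX["playsInstrument"]))
g.add((belief, RDF.object, EX["guitar"]))
g.add((belief, EX.probabilityOfBelief, Literal(0.93, datatype=XSD.decimal)))

# --- Named graphs: the triple is stated once, inside a graph whose
#     IRI identifies it; provenance attaches to that graph IRI. ---
ng = ds.graph(EX["belief_1_graph"])
ng.add((EX["bob"], EX["playsInstrument"], EX["guitar"]))
g.add((EX["belief_1_graph"], EX.probabilityOfBelief,
       Literal(0.93, datatype=XSD.decimal)))

print(ds.serialize(format="trig"))
\end{verbatim}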
\begin{sidewaysfigure} \centering \subfigure[Original Triple]{ \centering \includegraphics[width=0.25\linewidth]{img/Belief.eps} \label{subfig:belief} } \subfigure[RDF Reification]{ \centering \includegraphics[width=0.40\linewidth]{img/Reification.eps} \label{subfig:reification} } \subfigure[Singleton Property]{ \centering \includegraphics[width=0.25\linewidth]{img/SingletonProperty.eps} \label{subfig:sp} } \subfigure[Named Graphs]{ \centering \includegraphics[width=0.25\linewidth]{img/NamedGraph.eps} \label{subfig:namedgraph} } \subfigure[N-Ary Properties]{ \centering \includegraphics[width=0.40\linewidth]{img/N-Ary.eps} \label{subfig:nary} } \subfigure[NdFluents]{ \centering \includegraphics[width=0.25\linewidth]{img/NdFluents.eps} \label{subfig:ndfluents} } \caption{Reification models} \label{fig:6figures} \end{sidewaysfigure} \begin{table} \centering \caption{Summary of dataset statistics for each model} \label{tab:models} \begin{tabular}{r|rr|rr|rr|} & \multicolumn{2}{c|}{\textbf{Promoted}} & \multicolumn{2}{c|}{\textbf{Candidates}} & \multicolumn{2}{c|}{\textbf{Total}} \\ \textbf{Model} & \textbf{Size} & \textbf{Triples} & \textbf{Size} & \textbf{Triples} & \textbf{Size} & \textbf{Triples} \\ \hline \textbf{W/O metadata} & 2.99GB & 0.02B & 162GB & 1.45B & 165GB & 1.48B \\ \textbf{RDF Reification} & 50.9GB & 0.24B & 776GB & 4.50B & 827GB & 4.74B \\ \textbf{N-Ary Relations} & 50.7GB & 0.24B & 770GB & 4.50B & 821GB & 4.74B \\ \textbf{Named Graphs} & 49.8GB & 0.24B & 727GB & 4.24B & 777GB & 4.48B \\ \textbf{Singleton Property} & 49.8GB & 0.24B & xxxGB & x.xxB & xxxGB & x.xxB \\ \textbf{NdFluents} & 51.3GB & 0.25B & xxxGB & x.xxB & xxxGB & x.xxB \\ \end{tabular} \end{table}

\section{Discussion and Future Work}\label{sec:conclusion} In this work we present the conversion of both data and metadata from NELL into RDF. The resulting dataset contains a thesaurus of entities and binary relations between them, as well as a number of lexicalizations for each entity. It also includes detailed provenance metadata along with confidence scores, encoded using five different reification approaches. Our goals for this dataset are twofold. First, we want to improve the WDAqua-core0~\cite{diefenbach_wdaqua-core0:_2017} question answering system, providing it with more relations and lexicalizations, along with confidence scores that can give hints about how trustworthy an answer is. Second, given that the dataset contains a large proportion of metadata statements, we want to use it as a testbed to compare how the different metadata representations behave in current triplestores. While we currently only publish the dumps of the datasets, we plan to provide a SPARQL endpoint and fully dereferenceable URIs. In addition, NELL is starting to be explored in languages other than English, such as Portuguese~\cite{hruschka_jr._coupling_2013,duarte_how_2014} and French~\cite{duarte_vers_2017}. Our intention is to convert those datasets to RDF as they become available to the public, since the system and knowledge base are exactly the same as those used for English. \paragraph{Acknowledgements:} This work is supported by funding from the EU H2020 research and innovation program under the Marie Sk\l{}odowska-Curie grant No 642795. We would like to thank Bryan Kisiel from NELL's CMU team for technical support regarding NELL's components. \bibliographystyle{mysplncsnat}
{ "timestamp": "2018-04-17T02:16:29", "yymm": "1804", "arxiv_id": "1804.05639", "language": "en", "url": "https://arxiv.org/abs/1804.05639" }
\section{Introduction} The North Polar Spur (NPS) is the second largest structure in the sky, extending from Galactic longitude $l \approx 20^\circ$ to $-30^\circ$ and Galactic latitude $b \approx 10^\circ-70^\circ$ in the form of an arc with a thickness of $\sim 15^\circ$. It is encircled by another structure, called the Loop-I feature, that extends $\sim 10^\circ$ beyond the NPS in almost all directions. Both structures are visible in X-rays, and Loop-I also in $\gamma$-rays, in the northern Galactic hemisphere towards the centre of our Galaxy \citep{Berkhuijsen1971, Sofue1979, Snowden1997, Sofue2000}. Although there are faint indications of southern counterparts \citep[see Fig 13 of][]{Ackermann2014}, the absence of prominent signatures in the southern hemisphere has obscured the origin of these structures. Despite several claims by \cite{Sofue1977a, Sofue1984, Sofue1994, Sofue2000, Sofue2003, BlandHawthorn2003, Kataoka2013, Sarkar2015b, Sofue2016, Kataoka2018} that the NPS is a `Galactic centre' phenomenon, the origin of the NPS remains debated even half a century after the discovery of these structures. The main reasons are the absence of significant counterparts in the southern hemisphere and the superposition of a nearby ($\sim 200$ pc) OB association, Sco-Cen, along the line of sight. This has led a part of the scientific community to believe that the NPS/Loop-I are compressed shells collectively driven by several supernovae (SNe) from the Sco-Cen OB association \citep{Berkhuijsen1971, Egger1995}. In this model, the apparent lack of X-ray emission interior to the NPS is attributed to absorption by a local HI shell around the local bubble. Although this HI shell would be sufficient to explain the absorption in the lower energy band ($0.1-0.4$ keV), it cannot be the only reason for the lack of X-rays in the $0.5-2.0$ keV band, which indicates a true lack of X-ray emission in this region \citep{Egger1995}. Models describing the NPS and Loop-I as emission arising from an interaction between two shells have also been proposed \citep{Wolleben2007}. The success of this model was to explain the polarised radio emission from the NPS/Loop-I and a `new loop' in the southern hemisphere. This new loop is also recognised in more recent observations by \cite{PlanckCollaborationXXV2016} as the `South Polar Spur'. The problem, however, lies in explaining the energetics of such shells. As noted by \cite{Shchekinov2018}, the energy required to expand the HI shell generated at the interaction zone between these shells is equivalent to $\sim 60$ SNe (including the effect of cooling in the ISM). This is almost an order of magnitude larger than the expected number of SNe inside the Sco-Cen association over the last $\approx 10$ Myr. On the other hand, there is growing evidence that the NPS is not of a `local origin' and that its distance correlates well with the `Galactic centre origin' scenario. X-ray observations by \cite{Kataoka2013, Lallement2016} indicate that the NPS is highly absorbed, with a hydrogen column density up to $N_{\rm HI} \sim 4 \times 10^{21}$ cm$^{-2}$ towards the Galactic disc, indicating a distance $\gg 200$ pc. Although most of the volume within $150$ pc is occupied by the local bubble \citep{Egger1995}, it is, in principle, possible to achieve such a high column density within $\lesssim 200$ pc provided there is a compressed wall of high-density gas in the $15-60$ pc region between the local bubble and the NPS \citep{Willingale2003a}.
Although observations by \cite{Lallement2014} indicate the presence of a high density shell towards the NPS, the required column density still falls short. This indicates the NPS to be beyond $\sim 4$ kpc \citep[see Fig 11 of][]{Lallement2016}. By analysing the \textsc{O viii} Ly-$\alpha$, Ly-$\beta$ and other Lyman series lines from \textit{Suzaku} and XMM-$Newton$ spectra, \cite{Gu2016} also found that the lines are well explained if they are absorbed by a $0.17-0.20$ keV ionised medium with a hydrogen column density $N_{\rm H} \sim 5 \times 10^{19}$ cm$^{-2}$. This value is much larger than what the local bubble could provide ($\sim 5\times 10^{-3}$ cm$^{-3} \times 200$ pc $\approx 3 \times 10^{18}$ cm$^{-2}$). Moreover, the temperature of the local bubble ($\sim 10^6$ K; \citealt{Egger1995}) is also lower than the required value. On the other hand, such a temperature and column density for the absorption are easily achievable if the NPS is $\sim 8-10$ kpc into the CGM, assuming $T_{\rm CGM} \approx 0.2$ keV and density $\sim 10^{-3}$ $m_{\rm p}$ cm$^{-3}$ \citep{Henley2010a, Miller2015}. Another factor that argues against a local origin for the NPS is its metallicity. Fitting of the X-ray spectrum shows that the metallicity of the NPS is $\approx 0.3-0.7$ Z$_\odot$ \citep{Kataoka2013, Lallement2016}, which is closer to the CGM value \citep[$\approx 0.5$ Z$_\odot$;][]{Miller2015, Faerman2017} than to that of the local interstellar medium \citep[$\approx$ Z$_\odot$;][]{Maciel2010}. The estimated density ($\approx 2\times 10^{-3}$ $m_{\rm p}$ cm$^{-3}$), temperature ($\approx 0.25-0.35$ keV) and metallicity ($\approx 0.3-0.7$ Z$_\odot$) of the NPS are suggestive of a structure in the Galactic CGM compressed by a Mach $\sim 1.5$ shock which could have originated from the Galactic centre \citep{Kataoka2013}. This particular conclusion has far-reaching implications for understanding the origin of the Fermi Bubbles (FBs), as it directly constrains the energetics and thus the age of these bubbles and rules out many existing models. Since the discovery of the FBs \citep{Su2010} and further studies \citep{Ackermann2014, Keshet2016, Keshet2017}, there have been numerous arguments regarding the origin of these bubbles. The arguments can be classified into three main categories: (i) a high luminosity ($\sim 10^{42-44}$ erg s$^{-1}$) wind driven by the central black hole \citep{Zubovas2011, Guo2012b, Zubovas2012, Yang2012, Yang2017}, requiring the age of the FBs to be $t_{\rm age} \sim$ a few Myr; (ii) a low luminosity ($\sim 2\times 10^{41}$ erg s$^{-1}$) wind driven by the accretion disc around the central black hole, with $t_{\rm age} \approx 12$ Myr \citep{Mou2014, Mou2015}; and (iii) a star formation driven wind (star formation rate $\approx 0.1-0.3$ M$_\odot$ yr$^{-1}$) with an estimated age of $\approx 25-300$ Myr \citep{Crocker2015, Sarkar2015b}. There are also other constraints from the \textsc{O viii} to \textsc{O vii} line ratio towards the FBs that favour options (ii) and (iii) \citep{Miller2016b, Sarkar2017}. Such a variety of arguments crucially depends on whether or not one considers the NPS, Loop-I and FBs to have a common origin, and it would collapse to a small parameter space if one could answer the very origin of the NPS and Loop-I. Despite a number of arguments that the NPS/Loop-I are a Galactic centre (GC) phenomenon, their origin is still questioned and revolves around the fact that these structures are asymmetric across the Galactic disc.
Interestingly, \cite{Kataoka2018} speculate that such an asymmetry could have originated from an asymmetric density in the southern hemisphere. However, the fact that the northern and southern FBs are of almost the same size led them to conclude that the NPS and Loop-I are probably the result of a previous star-burst episode of the GC. In this paper, I show that a common origin for the NPS, Loop-I and FBs is possible, and that the asymmetric nature of the NPS and Loop-I can be obtained by a local asymmetry in the CGM density without affecting the symmetry of the FBs. The arguments presented in this paper crucially depend on the projection effects of the large scale structures. It has been shown by \cite[][hereafter SNS15]{Sarkar2015b} that the NPS/Loop-I trace the outer shock (OS) of a star formation driven wind that started $\approx 27$ Myr ago at the GC and has reached a distance of $\approx 8$ kpc. Since we are $\approx 8.5$ kpc away from the GC, the projection effects put this OS at $b \sim 70^\circ$ and $l \sim 60^\circ$. Now, if the CGM density in the southern hemisphere is slightly lower, then the OS in that hemisphere has just run past the Solar system and, therefore, does not appear to have a clear shock signature. The FBs, if considered to be the contact discontinuity (CD), then do not have to be very asymmetric. This would solve the tension between an asymmetric NPS/Loop-I and symmetric FBs as feared by \cite{Kataoka2018}. The rest of the paper provides full details of the above arguments and presents numerical studies in a realistic Galactic environment, generating X-ray, $\gamma$-ray and radio sky maps that can be compared with the actual observations from \textsc{rosat} and the \textit{Fermi} Gamma-ray Space Telescope.

\section{Numerical set up} \label{sec:numerical-set-up} This problem is studied by performing hydrodynamical simulations, without magnetic fields and cosmic rays (CR). The simulations are performed using \textsc{pluto-v4.0} \citep{Mignone2007}. Since the shock structure crucially depends on the exact density distribution, we pay careful attention to the initial numerical set up. This set up is exactly the same as that presented in \cite[][hereafter SNS17]{Sarkar2017} (which was adapted from \cite{Sarkar2015a} to represent our Galaxy) except for a few modifications. \begin{figure*} \centering \includegraphics[width=\textwidth]{density-evo.eps} \caption{Evolution of the density contours for $f_h = 1/3$. The Solar location is shown by the white circle at $R, z = 8.5, 0.0$ kpc. The arrows in the third panel represent the apparent height of the contact discontinuity and therefore the height of the FBs at the present moment. SW: shocked wind, SH: shocked halo.} \label{fig:density-evo} \end{figure*} \subsection{Initial condition} \label{subsec:initial-condition} In SNS17, we considered that the CGM ($T_{\rm CGM} = 2\times 10^6$ K) is in hydrostatic equilibrium with the background gravity of the dark matter, stellar disc and bulge. The parameters for the gravity and the CGM temperature were fixed to best match the observed values. The resultant density distribution of the CGM was found to mimic the density distribution inferred from the \textsc{O viii} and \textsc{O vii} line emissions. I have, however, introduced some modifications to the SNS17 set up to make it suitable for the present study. A warm ($\approx 5\times 10^4$ K) and dense ($\sim 1$ $m_{\rm p}$ cm$^{-3}$) gaseous disc has now been introduced into the initial density distribution.
The disc gas is assumed to rotate at $97.5\%$ of the rotation curve; the rest of the support against gravity is provided by the thermal pressure. I also introduce a rotation to the hot CGM to comply with the observations of \cite{Miller2016a}. The speed of rotation, however, is assumed to be only a fraction ($f_h = 1/3$) of the Galactic disc rotation at that cylindrical radius ($R$). Although \cite{Miller2016a} find that the CGM rotation speed is $\sim 180$ \kmps, their assumption of a spherical gaseous distribution makes this value uncertain. A proper estimation of the CGM rotation would require a self-consistent consideration of the flattening of the CGM arising due to rotation. Since that is not the main focus of this paper, I consider $f_h$ to take different values ($1/3, 1/2$ or $2/3$) to make up for this caveat. The exact pressure, and hence the density distribution, is then obtained by assuming that the disc gas and the CGM are both in steady state equilibrium with the background gravity \citep[see][for details]{Sarkar2015a}. I also switched off radiative cooling for $|z| \leq 1$ kpc to avoid artificial radiative cooling in the disc. Active cooling in this region would cause the numerical disc to collapse into a thin layer of cold gas. In reality, turbulence generated by SN activity and infalling gas is responsible for maintaining a fluffy disc \citep{Krumholz2017a}. Since the current set up does not contain any of these processes, switching off the cooling in the disc is a way around this issue. As mentioned earlier, the NPS/Loop-I and FBs are structures in the CGM; therefore, this implementation is not expected to affect the results. Relaxing this assumption would lead to a large amount of the injected energy being lost via radiation within the first $2-3$ Myr. This radiation loss would, however, reduce sharply as the shock breaks out of the ISM and starts propagating into the CGM. Therefore, the required energy would be $\sim 10\%$ ($\sim 2$ Myr$/28$ Myr) higher than the injected value to achieve similar dynamics. To achieve the purpose of this work, I assume that the CGM density in the southern hemisphere is $20\%$ lower than in the northern counterpart. Since this introduces a pressure imbalance across the Galactic disc, I further set the temperature of the southern CGM $20\%$ higher than the northern one. This temperature asymmetry is not very realistic. It makes the southern Galactic hemisphere (SGH) appear brighter than it would be without such a temperature asymmetry. However, the actual brightness difference between the north and south (without the FBs) depends on the size of the asymmetric region and is discussed more in Section \ref{sec:hint-observation}. The density asymmetry in the CGM can be caused either by the motion of our Galaxy through the local group, which exerts an asymmetric ram pressure on the CGM, or by some previous star formation driven wind activity. Although the motion of our Galaxy towards the centre of the local group (in a direction somewhat similar to that of the Andromeda galaxy) is almost parallel to the Galactic disc \citep{VanDerMarel2012a}, a local density asymmetry of size $\sim 10$ kpc can still be present in the CGM (see Section \ref{sec:hint-observation}). \subsection{Grid and energy injection} \label{subsec:grid-energy} The computational box is chosen in 2D spherical coordinates which, by definition, assumes axisymmetry. The box extends to $15$ kpc in the radial direction and from $0$ to $\pi$ in the $\theta$-direction.
A total of $1024\times 512$ grid points are set uniformly in the radial and $\theta$-directions. The resolution of the box is, therefore, $\approx 15\times 6$ pc$^2$ at $r = 1$ kpc and $\approx 15\times 61$ pc$^2$ at $r = 10$ kpc. Both boundaries in the $r$-direction are set to be outflowing, whereas the $\theta$ boundaries are set to be axisymmetric (i.e. only $v_\theta$ and $v_\phi$ are reversed). Supernova energy is added within the central $100$ pc\footnote{This value is somewhat arbitrary and is typical for star forming regions. This particular choice, however, does not have much influence on the size of the OS or the contact discontinuity. It slightly affects the shape of the FBs, as can be seen in Figure A1 of \cite{Sarkar2017}.} in the form of thermal energy. A constant mechanical luminosity is provided assuming a constant star formation rate (SFR) based on a Kroupa/Chabrier IMF and the \textit{starburst99} \citep{Leitherer1999} recipe. The mass and energy injection rates are, thereafter, given by \begin{eqnarray} \dot{M}_{\rm inj} &=& 0.1\,\, {\rm SFR} \nonumber \\ \mathcal{L} &=& 10^{41} \times \frac{\rm SFR}{\mbox{M$_\odot$ yr$^{-1}$}} \,\,\, \mbox{erg s$^{-1}$} \label{eq:mech-lumn-sfr} \end{eqnarray} where only $30\%$ of the SN energy is assumed to survive the interstellar radiation loss in the initial SN expansion phase and become available for driving a large scale wind. In all the simulations presented here, I assume a mechanical luminosity $\mathcal{L} = 4.5 \times 10^{40}$ erg s$^{-1}$, which was found to match the observed X-ray and $\gamma$-ray signatures of the NPS and the FBs in SNS15. If converted directly to an SFR, this luminosity would imply SFR $\approx 0.45$ M$_\odot$ yr$^{-1}$. We should, however, keep in mind that non-thermal components like magnetic fields and cosmic rays can contribute a large fraction of this energy and, therefore, the required star formation rate would decrease further from this value, as noted in SNS15.

\section{Results and Discussion} The evolution of the density for $f_h = 1/3$ is shown in Figure \ref{fig:density-evo}. As can be seen, the structure of the outflowing gas is similar to a wind driven shock as studied by \cite{Castor1975, Weaver1977}. The inner part contains a free wind region which shortly passes through a reverse shock. The wind material extends till the contact discontinuity (CD), beyond which the shocked CGM continues till the OS. Since the mass injected by the SNe driven wind is very small, the region inside the CD contains low density gas (compared to the background CGM), which makes it suitable for hosting an X-ray cavity. Note that the $\gamma$-ray and radio emission, on the other hand, depend on the CR energy density, which in turn depends on the presence of shocks and turbulence. Given that there is a reverse shock (Mach $\sim 10$) and a turbulent medium inside, it is likely that this region hosts high energy cosmic ray electrons and, therefore, produces the observed FBs and the microwave haze. These arguments were used by \citet{Mertsch2011} and SNS15 to assume that the FBs can be represented by the inner bubble extending all the way till the CD. Due to the lack of cosmic ray physics in the current simulations, I also follow the same arguments. While this argument is persuasive and likely true, a better understanding should, in any case, be built by performing numerical simulations including both CR physics and magnetic fields.
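As a quick numerical check of the scalings in Eq.~(\ref{eq:mech-lumn-sfr}), the following minimal Python sketch inverts the luminosity-SFR relation; the constants are those quoted in the text, and the function names are illustrative only.

\begin{verbatim}
# Minimal check of the injection-rate scalings of Eq. (mech-lumn-sfr);
# the numbers are those used in the text.

L_INJECTED = 4.5e40          # mechanical luminosity [erg/s] used in the runs

def sfr_from_luminosity(lum_erg_s):
    """Invert L = 1e41 * SFR, with SFR in Msun/yr."""
    return lum_erg_s / 1e41

def mass_injection_rate(sfr):
    """Mdot_inj = 0.1 * SFR [Msun/yr]."""
    return 0.1 * sfr

sfr = sfr_from_luminosity(L_INJECTED)
print(f"SFR  = {sfr:.2f} Msun/yr")           # -> 0.45, as quoted above
print(f"Mdot = {mass_injection_rate(sfr):.3f} Msun/yr")
\end{verbatim}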
\begin{figure} \centering \includegraphics[width=0.45\textwidth, angle=-90, clip=true, trim={0 3.5cm 0 4cm}]{bubbles.eps} \caption{The outer edge of the FBs taken from \protect\cite{Su2010}. The southern bubble has been inverted in latitude to compare it with the northern one. There is a clear signature that the southern bubble is $\approx 5^\circ$ bigger than the northern one.} \label{fig:bubbles} \end{figure} Based on the above arguments, the age of the FBs is the time when the CD reaches $\approx 50^\circ$, i.e. $t_{\rm age} \approx 28$ Myr, as can be seen in the third panel of Figure \ref{fig:density-evo}. Note that $t_{\rm age}$ is taken as the time when the northern FB reaches $50^\circ$ (the observed size of the northern FB). The southern bubble, however, appears to be $\approx 7^\circ$ bigger in latitude. It is indeed interesting to note that although both the observed FBs are considered to be of similar size, a careful look at these bubbles reveals that the southern bubble is $\approx 5^\circ$ bigger than the northern one. In Figure \ref{fig:bubbles}, I re-plot the outer edge of the FBs (taken from \citealt{Su2010}) to establish this point. The figure shows a consistently larger southern bubble in all directions except in the bottom left part, which may be due to local density variation. \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim={6cm 2cm 5cm 4cm}]{xrmk_28_rbox50.eps} \caption{Soft X-ray ($0.5-2.0$ keV) sky map in Aitoff projection generated at $t_{\rm age} = 28$ Myr. NPS- and Loop-I-like features are clearly visible in the northern hemisphere compared to the southern hemisphere. The surface brightness compares well with the observed values in the \textsc{rosat} R6R7 band. The outer edges of the FBs have been over-plotted (blue solid lines) for visual comparison. A uniform background of $5 \times 10^{-9}$ erg s$^{-1}$ cm$^{-2}$ Sr$^{-1}$ (comparable to $\approx 10^{-4}$ counts s$^{-1}$ arcmin$^{-2}$ in the \textsc{rosat} R6R7 band) has been added to account for the observed diffuse X-ray background/foreground.} \label{fig:x-ray-map} \end{figure} \subsection{X-ray sky map} \label{subsec:X-ray-map} As mentioned in the earlier discussion, projection effects are very important while comparing simulations with observations of large structures in our Galaxy. I have made use of the module \textsc{pass}\footnote{Projection Analysis Software for Simulations (\textsc{pass}) described in SNS17. This code is freely available at \url{https://github.com/kcsarkar/}.} to produce proper projection effects at the Solar location. Figure \ref{fig:x-ray-map} shows a $0.5-2.0$ keV X-ray sky map generated at $t_{\rm age} = 28$ Myr from the simulations\footnote{An extended box of $200$ kpc is also included to account for the emission beyond the computational box. The density asymmetry in the SGH, however, is considered only up to $50$ kpc.} with a CGM rotation of $f_h = 1/3$\footnote{See appendix for maps with CGM rotation of $f_h = 1/2$ and $2/3$.}. It shows the presence of features very similar to the NPS and Loop-I in the northern hemisphere, along with the absence of these features in the southern part. A lower density in the southern hemisphere affects the surface brightness in two ways. Firstly, a $20\%$ lower density means a $\sim 40\%$ drop in X-ray brightness, since the emissivity is $\propto n^2$ ($0.8^2 = 0.64$, i.e. a $36\%$ drop). Secondly, due to the lower density the shock runs faster in the southern part, and at $t = t_{\rm age}$ the OS has just crossed us while the northern shock is still in front of us.
Once we are inside the shock, the projection effect makes it hard for us to detect any such shock in the southern hemisphere. The NPS, as seen in the current simulations, is not simply the shell that extends from the CD to the OS (in contrast to what was seen in SNS15). As can be noticed in Figure \ref{fig:density-evo}, there are a few shocks present between the CD and the OS. Although the presence of these shocks is not expected from simple analytical considerations, they arise due to the presence of an inhomogeneous and anisotropic medium and a low luminosity wind. For a typical wind scenario where the luminosity is very high, the wind is able to overcome the effect of the disc pressure and thus follows a standard wind structure. However, for a low luminosity wind, where the oblique ram pressure of the wind is only just larger than the disc pressure, the free wind gets nudged at certain moments and thus produces a variable luminosity wind. The shocks between the CD and the OS are generated by such nudging. Note that this is also a channel by which the disc material gets entrained by the free wind and can produce high velocity warm clouds \citep{Sarkar2015a}. The NPS is, therefore, the projection of one such shock close to the CD and does not have to extend till the Loop-I (see the third panel of Fig. \ref{fig:density-evo}). While such a shock follows the CD, it does not necessarily follow the outer edge of the $\gamma$-ray emission (as can be understood from Figs. \ref{fig:x-ray-map} and \ref{fig:g-ray-map}). We speculate that such secondary shocks may also be the origin of the \textit{inner arc} and the \textit{outer arc}. It is also possible that the shock edge detected in \cite{Keshet2017} could be one of these secondary shocks. \subsection{$\gamma$-ray sky map} \label{subsec:gamma-map} To generate the $\gamma$-ray map, I follow SNS15 and assume that the main source of the $\gamma$-ray emission is inverse Compton scattering of the cosmic microwave background by high energy CR electrons (CRe)\footnote{It was shown in SNS15 that a hadronic process is ineffective in producing enough surface brightness for the FBs.}, and that the total CR energy density is assumed to be $15\%$ (of which only $0.0075\%$ is in the CRe) of the local thermal energy density at any grid location. The CRe spectrum inside the FBs (in this case, the CD) is assumed to be $dN/dE \propto E^{-2.2}$ \citep{Su2010, Ackermann2014}, which is also the electron spectrum required for explaining the microwave haze \citep{PlanckCollaboration2013}. It is, therefore, generally believed that both the radio and the $\gamma$-ray emission originate from the same population of CRe. Since such high energy CRe are expected to cool down via inverse Compton and synchrotron emission, a break at Lorentz factor $\Gamma = 2\times 10^6$ is also assumed, after which the CRe spectrum follows $dN/dE \propto E^{-3.2}$. Outside the CD and inside the OS, a softer CRe spectrum, $dN/dE \propto E^{-2.4}$, with a break at $\Gamma = 2\times 10^6$ is considered. This spectrum is consistent with the estimated value for the Loop-I \citep{Su2010}, although the break location and the cut-off frequency are somewhat uncertain. A softer spectrum is indeed expected at the OS as it is much weaker (Mach $\sim 1.5$) than the reverse shock inside the FBs. Note that the above prescribed assumptions for obtaining the $\gamma$-ray emission are very simplistic. A better approach would require a self-consistent implementation of the evolution of the CR spectra in real time.
Our focus in this section is, however, only to show the size and shape of the FBs and Loop-I. Figure \ref{fig:g-ray-map} shows the $\gamma$-ray sky map at $5$ GeV, generated at $t = 28$ Myr for a CGM rotation of $f_h = 1/3$. It shows a good match to the size and shape of the FBs, although the surface brightness inside the FBs is not as uniform as observed. This can be attributed to the simple assumption that the CR energy density is a constant fraction of the thermal energy density. In reality, the CR behaves as a relativistic fluid (adiabatic index $= 4/3$) and, therefore, does not exactly follow the Newtonian plasma (adiabatic index $= 5/3$). Besides, CR diffusion and the effect of the magnetic field on CR propagation are also not taken into account in the current numerical simulations. Diffusion can make the CR energy density more uniform than the thermal pressure, while the inclusion of magnetic fields can make the outer edge of the simulated FBs much smoother, in line with observations. \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim={6cm 2cm 5cm 4cm}]{gl_0028_5GeV.eps} \caption{$\gamma$-ray sky map at $5$ GeV generated at $t_{\rm age} = 28$ Myr. The shapes and sizes of the north and south FBs are consistent with the observed ones (blue solid lines). A clear shock structure is seen in the northern hemisphere but not in the southern part. A constant background/foreground of $1$ keV s$^{-1}$ cm$^{-2}$ Sr$^{-1}$ \citep{Su2010} is added in order to account for the observed diffuse emission. Also, regions with $|z|\leq 700$ pc are not included in the map to avoid any disc emission.} \label{fig:g-ray-map} \end{figure} Similar to the X-rays, a larger structure beyond the FBs is also noticed. This may correspond to the Loop-I structure seen in the northern sky. The surface brightness and the contrast with the background seem to match quite consistently. However, an excess emission beyond the southern FB can be noticed, although no shock structure is clearly identifiable. This excess emission is in contrast with the observations. However, we should remember that, in the southern hemisphere, we are inside the shock and, therefore, the observable CRe spectrum is not the same as in the northern Loop-I; it can be steeper due to the lack of further re-acceleration of the CRe behind the OS. This would mean that there is less excess surface brightness in the southern hemisphere. For example, the $\gamma$-ray emissivity for a $\propto E^{-2.45}$ CRe spectrum can be only $60\%$ of the emissivity for a $\propto E^{-2.4}$ spectrum, everything else being the same. Therefore, it is possible that the excess brightness in the southern part is not distinguishable from the background. At this point, it should be noted that although there is no clear signature of a southern counterpart of Loop-I, two rising $\gamma$-ray horns are clearly visible in the observations by \cite{Ackermann2014} (see their figure 13). \subsection{Note on the East-West asymmetry} \label{subsec:east-west-asym} Along with the North-South asymmetry, we also see an East-West asymmetry in the NPS/Loop-I and also in the Fermi Bubbles. While the NPS and the Loop-I are clearly seen to be bent towards the West, the bend in the FBs is marginal but can be noticed in Fig. \ref{fig:bubbles} (also see \citealt{Keshet2017}). In a shock propagation scenario, such a deformation is an indication of an enhanced density towards the East.
This also coincides with the fact that our `collision course' towards M31 is towards the East ($l \approx 121^\circ, b\approx 21^\circ$). The increased ram pressure from the intra-group medium can cause such a density enhancement towards the East and, therefore, induce the observed East-West asymmetry. The fact that the NPS, Loop-I and the FBs are simultaneously bent towards the West (both in the north and the south for the FBs) is another indication that these structures could be related to each other. \begin{figure} \centering \includegraphics[width=0.5\textwidth, clip=true, trim={30cm 15cm 5cm 20cm}]{sync_23GHz_28_lm.eps} \caption{Brightness temperature map at 23 GHz. A possible location of the apparent southern counterpart of the Loop-I, called `Loop-Ic' here, is represented by the red dots to guide the eye. A possible location of the SPS is also shown in the southern hemisphere. The appearance of the Loop-Ic is discussed in Section \ref{subsec:radio-map}, and the corresponding density map is shown in Figure \ref{fig:density-evo-cases}.} \label{fig:radio-map} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{rosat_asym.eps} \caption{Estimating the asymmetry in X-ray sky brightness. Left panel: \textsc{rosat} R67 band sky brightness towards the north Galactic pole, presented in an Aitoff-Hammer equal area projection. Middle panel: R67 band sky brightness towards the south Galactic pole. The white hashed regions represent the masks applied to avoid the areas contaminated by the large scale structures, like the NPS \citep{Snowden1997}. Right panel: Expected asymmetry in the diffuse sky brightness towards the Galactic poles in the \textsc{rosat}-R67 band. The blue curve shows the case where the pressure was kept constant inside the cavity, and the red curve the case where the temperature inside the cavity was kept constant. Data values above $10^{-3}$ counts s$^{-1}$ arcmin$^{-2}$ are removed from the plot to avoid the very bright foreground.} \label{fig:rosat-comp} \end{figure*} \subsection{Radio sky map} \label{subsec:radio-map} It has been argued several times that the Loop-I is a close-by ($\sim 200$ pc) feature and that there is a structure in the southern hemisphere that makes the Loop-I a complete loop in radio \citep{PlanckCollaborationXXV2016, Dickinson2018, Liu2018}. However, it should be noted that there is no unique way to draw such a loop, as it is mostly driven by eye. One important feature, in this context, is the `South Polar Spur' (SPS) \cite[as named by][]{PlanckCollaborationXXV2016} or the `new loop' \cite[as named by][]{Wolleben2007}. Although it was initially thought to be part of a bigger loop, called Loop II, this was later ruled out, as the curvature of the SPS is in the opposite direction to that of Loop II. Interestingly, the curvature of the SPS is also bent towards the west, much like the Loop-I; moreover, the magnetic field (MF) is also almost parallel to the structure \citep[see Fig 20 of][]{PlanckCollaborationXXV2016}. Since the simulations presented in this paper do not include MFs, it is not possible to show the orientation of the MF. It can only be speculated that such fields would be compressed in the outer shock and would be parallel to the shocked shell (much like a SN shell in the ISM). Approximate radio intensity maps can, however, be obtained with the help of some assumptions regarding the CR and MF energy densities.
Following SNS15, I assume that the CR and MF energy densities are only a fraction ($\epsilon_{\rm cr}$ and $\epsilon_B$, respectively) of the thermal energy density ($u_{\rm th}$) at that location. Assuming that the synchrotron emission comes from an electron distribution $n(E)\, dE = \kappa\, E^{-2.4}\, dE$ (as observed in Loop-I), the synchrotron emissivity per unit solid angle can be written as \citep{Longair1981} \begin{equation} J_\nu = 2.2\times 10^{-19}\, \epsilon_{\rm cr}\,\epsilon_B^{0.85}\, u_{\rm th}^{1.85}\, \nu_{\rm GHz}^{-0.7}\,\,\,\,\, \mbox{erg s$^{-1}$ cm$^{-3}$ Hz$^{-1}$ Sr$^{-1}$} \end{equation} Fig. \ref{fig:radio-map} shows the obtained $23$ GHz brightness temperature (in excess of the CMB) assuming $\epsilon_{\rm cr} = 0.15$ and $\epsilon_{B} = 0.4$ (as obtained in SNS15). The map clearly shows the Loop-I structure, as well as many other radio structures that have arcs similar to the Loop-I and seem to originate from the centre, but lie at different longitudes. Similar but fainter signatures of such arcs are also seen in the southern hemisphere. Interestingly, it is possible to identify arcs in the southern hemisphere that are bent towards the west and are similar to the SPS in nature. It is also possible to draw an arc which could correspond to the southern part of the Loop-I but would be limited in extent. Such a loop is shown as Loop-Ic in Fig. \ref{fig:radio-map}. This arc is part of one of the secondary shocks present in the simulation, as mentioned in Section \ref{subsec:X-ray-map}, and has also been marked in Fig. \ref{fig:density-evo-cases} for convenience. It, therefore, appears that such a secondary loop can easily be mistaken for the southern counterpart of the Loop-I, which would then lead to a shorter distance estimate for the structure. Note that the \textit{WMAP} Haze is not visible in the radio map, as the assumed electron spectral index ($x = 2.4$) is steeper than the spectral index in the Haze ($x = 2.2$).

\section{Hints from the observations} \label{sec:hint-observation} \subsection{The data} \label{subsec:the-data} Although one can easily notice the asymmetry of the sky brightness across the northern and southern hemispheres in the Fermi maps \citep[Fig 13 of][]{Ackermann2014}, in the \textsc{rosat} maps \citep{Snowden1997}, as well as in the emission measure along the FBs \citep{Kataoka2015}, it is hard to compare the theoretical maps with the observations, as the size of the simulated NPS differs from the observed one. Moreover, due to the axisymmetric nature of the simulation, a mirrored NPS is also seen at $l\approx 330^\circ$. Also, there are a few arcs, like the one from $b \approx 0 - 45^\circ$ at $l \approx 330^\circ$, that are present in the observed map of the SGH but not in the theoretical maps. Therefore, it is hard to find out if there is truly any density asymmetry in the CGM by looking at the sky towards the Galactic centre ($270^\circ \lesssim l \lesssim 90^\circ$). To find out if the initial density distribution was asymmetric, one needs to look at the region where there is no contamination from these structures, i.e. towards $270^\circ \gtrsim l \gtrsim 90^\circ$. In an attempt to estimate the quantitative value of the proposed asymmetry, I consider the \textsc{rosat}-R67 data towards the north and south Galactic poles \citep[Fig 4a,b of][]{Snowden1997}, which are corrected for point sources and exposure time, and normalised to an effective `on-axis' response of the XRT/PSPC.
\section{Hints from the observations} \label{sec:hint-observation} \subsection{The data} \label{subsec:the-data} Although one can easily notice the asymmetry of sky brightness across the northern and southern hemispheres in the Fermi maps \citep[Fig 13 of ][]{Ackermann2014}, the \textsc{rosat} maps \citep{Snowden1997}, as well as in the emission measure along the FBs \citep[][]{Kataoka2015}, it is hard to compare the theoretical maps with the observations as the size of the simulated NPS differs from the observed one. Moreover, due to the axisymmetric nature of the simulation, the mirrored NPS is also seen at $l\approx 330^\circ$. Also, there are a few arcs, like the one from $b \approx 0 - 45^\circ$ at $l \approx 330^\circ$, that are present in the observed map of the SGH but not in the theoretical maps. Therefore, it is hard to find out if there is truly any density asymmetry in the CGM by looking at the sky towards the Galactic centre ($270^\circ \lesssim l \lesssim 90^\circ$). To find out if the initial density distribution was asymmetric, one needs to look at the region where there is no contamination from these structures, i.e. towards $270^\circ \gtrsim l \gtrsim 90^\circ$. In an attempt to estimate the quantitative value of the proposed asymmetry, I consider the \textsc{rosat}-R67 data towards the north and south Galactic poles \citep[Fig 4a,b of][]{Snowden1997}, which are corrected for point sources and exposure time, and normalised to an effective `on-axis' response of the XRT/PSPC. Additionally, the expected absorption by neutral hydrogen is minimal in this region of the sky and in this band ($0.73-2.04$ keV). \subsection{Masks applied} \label{subsec:masks} I mask out regions within $270^\circ \leq l \leq 90^\circ$ to avoid any emission from extended structures that could have been generated by the forward shock, as seen in Fig \ref{fig:x-ray-map}. The considered region by default excludes regions of $|b| \lesssim 30^\circ$ where the disc emission could be important. I also mask out a small region within $ 180^\circ \lesssim l \lesssim 210^\circ $ and $-45^\circ \lesssim b \lesssim -30^\circ$ where the emission seems to be related to an infrared-bright ($\sim 50 - 120 \mu$m in IRIS maps) arm extending from the disc. Moreover, its relatively sharp boundary makes it unsuitable for studying the background CGM emission. The sky-maps and the masks applied for this estimation are shown in the left and middle panels of Fig \ref{fig:rosat-comp} by the white hatched region. Each of these panels shows a $102.4^\circ \times 102.4^\circ$ patch of the sky with a pixel size of $12'\times 12'$. The maps are shown in an Aitoff-Hammer equal area projection so that every pixel has the same area irrespective of its position in the sky. I also remove some pixels ($3189$ in the north and $12511$ in the south) where i) the brightness is more than $10^{-3}$ counts s$^{-1}$ arcmin$^{-2}$ (to avoid any bright foreground emission), or ii) the data is missing, or iii) the hardness ratio (R67/R45) is more than $3.0$ (to avoid any non-thermal emission). \subsection{Results} \label{subsec:obs-results} After applying the above filters, the average brightnesses of the northern and southern patches are estimated to be $(11.7 \pm 6.7) \times 10^{-5}$ count s$^{-1}$ arcmin$^{-2}$ and $(11.2 \pm 7.3) \times 10^{-5}$ count s$^{-1}$ arcmin$^{-2}$, respectively. Although the ratio between the mean values indicates a $\sim 4\%$ deficiency in the southern hemisphere compared to the north, the error in estimating the ratio is $\sim 100\%$ (obtained from the error maps provided by \cite{Snowden1997}). Clearly, this result is unsuitable for putting any kind of constraint on the size of the asymmetric region. To obtain more reliable data, one needs to model and subtract the effect of solar flares and extragalactic sources from the data, in addition to having better sky maps from other X-ray missions. This is out of the scope of this paper.
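For concreteness, the quoted ratio can be reproduced with straightforward error propagation; the sketch below (illustrative only) treats the quoted spreads as the $1\sigma$ uncertainties of the patch means, which already yields a relative error of order unity (the $\sim 100\%$ figure above comes from the \textsc{rosat} error maps themselves):
\begin{verbatim}
# Sketch: south/north brightness ratio and its propagated uncertainty,
# using the patch averages quoted above.
import math

north, d_north = 11.7e-5, 6.7e-5   # counts s^-1 arcmin^-2
south, d_south = 11.2e-5, 7.3e-5

ratio = south / north
# standard propagation for a ratio of independent quantities
d_ratio = ratio * math.sqrt((d_north/north)**2 + (d_south/south)**2)

print(f"S/N = {ratio:.3f} +/- {d_ratio:.3f}")         # ~0.96 +/- 0.83
print(f"southern deficiency ~ {(1-ratio)*100:.1f}%")  # ~4%
\end{verbatim}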
In case a better modelling of the non-CGM emission in the \textsc{rosat} data is able to minimise the error, the results can be compared to the right panel of Fig. \ref{fig:rosat-comp}. Here, I consider the initial set-up (with $f_h = 1/3$) as described in Section \ref{subsec:initial-condition} with different sizes of the asymmetric region ($r_{\rm asym}$). The temperature inside the cavity is assumed to be such that either a) there is no pressure asymmetry or b) there is no temperature asymmetry across the north and south. The ratios between the average northern and southern sky brightness in the R67 band ($0.73-2.04$ keV) in the region of $90^\circ \leq l \leq 270^\circ$ and $|b| \geq 30^\circ$ are shown by the blue (equal pressure) and red (equal temperature) lines in this figure. The difference between these two cases arises due to the difference in emissivities at the assumed temperatures inside $r_{\rm asym}$. For the equal pressure case, since the temperature inside the asymmetric region is assumed to be $20\%$ higher than in the northern part, the emissivity inside $r_{\rm asym}$ is higher than in the case where the cavity temperature is assumed to be the same as the background. Note also that the ratio becomes $1$ for $r_{\rm asym} \simeq 9$ kpc. This is due to the fact that the effect of such a small cavity would be visible only within $270^\circ \leq l \leq 90^\circ$, which is by default excluded from the estimation. In any case, with the current analysis of the \textsc{rosat} data, it is hard to put any constraint on the size of the asymmetric region. A better constraint on the size would require better data and also an estimate of the temperature inside the proposed cavity. A similar exercise can also be done for the Fermi data to look for such an asymmetry. However, it involves very accurate modelling of the Galactic foreground and point sources in the $\gamma$-rays and is also out of the scope of this paper. \section{Conclusion} In this paper, I demonstrated the feasibility of the idea that the NPS, Loop-I and the FBs can have a common origin despite the asymmetry of the NPS/Loop-I across the Galactic disc and the apparent symmetry between the FBs. I show that a density asymmetry in the southern hemisphere, as small as $20 \%$, can produce sizes, shapes and surface brightnesses in X-rays and $\gamma$-rays, along with the asymmetric signatures of the NPS and Loop-I, strikingly similar to the observed ones. This asymmetry requires the southern FB to be only $\approx 7^\circ$ bigger than the northern one, which is consistent with the observation that the southern bubble is $\approx 5^\circ$ bigger than the northern one. Note that this particular value of $20\%$ is only a choice to prove the feasibility of the idea. At this point, it is not very clear how such an asymmetry could have originated. The best guesses are either the motion of our Galaxy in the Local Group, which caused an asymmetric ram pressure on the CGM, or a previous episode of an asymmetric SNe-driven wind. Since the motion of the Milky Way towards M31 is almost in a straight line (towards $l \approx 121^\circ,\,b\approx 21^\circ$), it is more likely to produce an east-west asymmetry (as discussed in Section \ref{subsec:east-west-asym}) than a north-south asymmetry. In such a case, the north-south density asymmetry is more likely to be generated by a previous episode of star formation that released more energy in the south than in the north. This caveat of the proposed model should, however, be kept in mind. It is interesting to note that the same projection effects would also appear if the energy injection happens slightly below the Galactic mid-plane. In this case, the OS in the southern hemisphere would be given a head start compared to the northern part. Even then, the southern shell would still be visible in X-rays as the density of the shell is the same as in the northern part. Besides, the observations already put the star forming region at the Galactic centre roughly at the mid-plane. Also, note that such a projection model works only in the case of an SNe-driven wind and not in AGN-driven winds. As can be seen in Fig 4 of SNS17, the AGN-driven bubbles are more vertical and, therefore, are not expected to show such a projection effect at $t = t_{\rm age}$ as presented in this paper.
One concern is that the simulations are performed only at one mechanical luminosity, $4.5\times 10^{40}$ erg s$^{-1}$, which corresponds to an SFR of $\approx 0.45$ M$_\odot$ yr$^{-1}$. This conversion, however, may change depending on the effect of non-thermal pressures, like the cosmic ray and the magnetic pressure. As seen in SNS15, the total non-thermal contribution is almost $50\%$ of the thermal contribution. This makes the required star formation rate $\approx 0.3$ M$_\odot$ yr$^{-1}$ for producing the above mechanical luminosity. This value is almost a factor of $2-3$ higher than the observed value of $\approx 0.1$ M$_\odot$ yr$^{-1}$ \citep{Yusef-Zadeh2009, Immer2012, Koepferl2015}. Moreover, the conversion from mechanical luminosity to SFR also depends on the assumed thermalisation efficiency of the SNe. In Eq. \ref{eq:mech-lumn-sfr}, I assume this efficiency to be $0.3$ \citep{Gupta2016}. However, such efficiencies are calculated for a bubble expanding in a uniform medium; the efficiency can, in principle, be higher than $0.3$ in case the bubble breaks out of the disc and expands in a low density medium where the radiation loss is negligible. Therefore, a higher thermalisation efficiency would mean a lower SFR to maintain the same mechanical luminosity.
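This bookkeeping is easy to reproduce. The sketch below is an illustration only; since Eq. \ref{eq:mech-lumn-sfr} is not reproduced here, it uses the standard assumptions of $\sim 10^{51}$ erg per supernova and one supernova per $\sim 100$ M$_\odot$ of stars formed, which recover the numbers quoted above:
\begin{verbatim}
# Sketch of the mechanical-luminosity <-> SFR conversion, assuming
# ~1e51 erg per SN and ~1 SN per 100 Msun of stars formed.
YEAR = 3.156e7            # s
E_SN = 1.0e51             # erg per supernova
M_PER_SN = 100.0          # Msun formed per supernova

def mech_luminosity(sfr, efficiency):
    # erg/s for an SFR in Msun/yr and a given thermalisation efficiency
    return efficiency * sfr * (E_SN / M_PER_SN) / YEAR

print(f"{mech_luminosity(0.45, 0.3):.2e} erg/s")    # ~4.3e40, cf. 4.5e40
# if non-thermal pressure adds ~50% on top of the thermal part, only
# ~2/3 of the budget must be thermal, giving SFR ~ 0.3 Msun/yr:
print(f"{(4.5e40/1.5) / mech_luminosity(1.0, 0.3):.2f} Msun/yr")
\end{verbatim}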
{ "timestamp": "2018-10-30T01:26:30", "yymm": "1804", "arxiv_id": "1804.05634", "language": "en", "url": "https://arxiv.org/abs/1804.05634" }
\section{Introduction} We consider a family of Hamiltonians on $\mathcal{H}=L^2(\mathbf{R})$ given as \begin{equation}\label{model} H_0^{\varepsilon}=-\frac{d^2}{dx^2}+\varepsilon x, \quad H^{\varepsilon}=H_0^{\varepsilon}+V, \quad \varepsilon\in[0,\infty), \end{equation} where $V$ is a bounded self-adjoint operator. Under suitable assumptions on $V$ one can define resonances of $H^{\varepsilon}$ as poles of matrix elements $\ip{u}{(H^{\varepsilon}-\zeta)^{-1}v}$ continued analytically from the upper half-plane across $(0,\infty)$ to the lower half-plane. Consider the following situation. Suppose that $H^0$ has a resonance $\zeta_0$ in the lower half-plane close to the positive real axis. Suppose that there exists a sequence $\varepsilon_n\downarrow0$ for $n\to\infty$ such that each $H^{\varepsilon_n}$ has a resonance $\zeta_n$ in the lower half-plane. We then ask: Is it possible that $\zeta_n\to\zeta_0$ as $n\to\infty$? The main result here is that under suitable conditions on $V$ the answer is \textbf{no}. One example is a rank one operator $V=c\ket{\psi}\bra{\psi}$ such that $\psi\in L^2(\mathbf{R})$ is a real-valued function with compact support. The instability of pre-existing resonances was first considered in \cite{HR}. They obtained results for two different models: a Friedrichs model, and a model of the form \eqref{model} with $V$ a rank one perturbation. An explicit construction of a dilation analytic rank one perturbation leading to a resonance of $H^0$ close to the real axis is given. Then as $\varepsilon\downarrow0$ all resonances of $H^{\varepsilon}$ converge to the real axis, i.e., they do not converge to a pre-existing resonance; see~\cite[Theorem 1.13]{HR}. Their proofs rely on a detailed study of the behavior of the resolvent. We obtain results for a class of perturbations different from the one in \cite{HR}. We use techniques from abstract analytic scattering theory. Stationary scattering theory for Stark Hamiltonians was first obtained in~\cite{Y79} and results on analytic scattering theory for Stark Hamiltonians were obtained in~\cite{Y81}. An abstract analytic scattering theory was given in~\cite{AJ}. In particular, the identity between poles of the analytically continued matrix elements of the resolvent and poles of the analytically continued scattering matrix was shown. This result was obtained in~\cite{Y81} for Stark Hamiltonians. Our main results are stated in Theorem~\ref{rank-one} for rank one perturbations, and in Theorem~\ref{rankN} for a rank $N$ perturbation under the assumptions that it is given by compactly supported, real, and even functions. The proofs rely on the connection between poles of analytically continued matrix elements of the resolvent and poles of the analytically continued scattering matrix, and on a detailed analysis of the asymptotics of the Airy function. \section{Notation and framework}\label{sect2} We consider the following family of Hamiltonians on $\mathcal{H}=L^2(\mathbf{R})$: \begin{equation} H_0^{\varepsilon}=-\frac{d^2}{dx^2}+\varepsilon x, \quad H^{\varepsilon}=H_0^{\varepsilon}+V, \quad \varepsilon\in[0,\infty). \end{equation} The perturbation $V$ is assumed to be a bounded self-adjoint operator on $\mathcal{H}$ which is factored as $V=B^{\ast}A=A^{\ast}B$. Here $A,B\colon \mathcal{H}\to\mathcal{K}$ are bounded operators and $\mathcal{K}$ is an auxiliary Hilbert space. Further assumptions on $A$ and $B$ will be stated later.
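For orientation, the family \eqref{model} with a rank one perturbation is easy to realize numerically. The following minimal sketch (purely illustrative and not used in the proofs; it assumes the \texttt{numpy} library and a standard three-point finite-difference Laplacian on a truncated interval) builds the discretized $H^{\varepsilon}$ for a compactly supported, real, even $\psi$ of the type considered below:
\begin{verbatim}
# Illustrative discretization of H^eps = -d^2/dx^2 + eps*x + c|psi><psi|.
import numpy as np

N, Lbox = 1000, 40.0
x, h = np.linspace(-Lbox, Lbox, N, retstep=True)

def hamiltonian(eps, c, psi):
    # -d^2/dx^2 via the standard 3-point stencil
    lap = (np.diag(-2.0*np.ones(N)) + np.diag(np.ones(N-1), 1)
           + np.diag(np.ones(N-1), -1)) / h**2
    H0 = -lap + np.diag(eps * x)
    V = c * h * np.outer(psi, psi)   # rank one |psi><psi| (L^2 weight h)
    return H0 + V

# a compactly supported, real, even bump
psi = np.where(np.abs(x) < 1.0, np.cos(np.pi*x/2.0)**2, 0.0)
psi /= np.sqrt(h * np.sum(psi**2))   # normalize in L^2

H = hamiltonian(eps=0.1, c=1.0, psi=psi)
print(np.allclose(H, H.T))           # the discretization is symmetric
\end{verbatim}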
We start by recalling a variant of the notation used in the Kuroda approach to scattering theory~\cite{Kuroda}. We define \begin{equation} R_0^{\varepsilon}(\zeta)=(H_0^{\varepsilon}-\zeta)^{-1},\quad R^{\varepsilon}(\zeta)=(H^{\varepsilon}-\zeta)^{-1}, \quad \im\zeta\neq0. \end{equation} We also define \begin{equation}\label{def-G} Q_0^{\varepsilon}(\zeta)=BR_0^{\varepsilon}(\zeta)A^{\ast},\quad G_0^{\varepsilon}(\zeta)=1+Q_0^{\varepsilon}(\zeta). \end{equation} We have that $G_0^{\varepsilon}(\zeta)$ is invertible for $\im\zeta\neq0$. The second resolvent equation can be written as \begin{equation}\label{resolvent-eq} R^{\varepsilon}(\zeta)=R_0^{\varepsilon}(\zeta)-R_0^{\varepsilon}(\zeta)A^{\ast}G_0^{\varepsilon}(\zeta)^{-1}B R_0^{\varepsilon}(\zeta). \end{equation} We need the spectral representation for $H_0^{\varepsilon}$. Since the spectral multiplicity is $2$ for $\varepsilon=0$ and $1$ for $\varepsilon>0$, we split into these two cases. For $\varepsilon=0$ the spectral representation is $F^0\colon L^2(\mathbf{R})\to L^2((0,\infty);\mathbf{C}^2)$ defined as \begin{equation} (F^0u)(\lambda)=T^0(\lambda)u=\begin{bmatrix} T_0^+(\lambda)u \\ T_0^-(\lambda)u \end{bmatrix}=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} \widehat{u}(\sqrt{\lambda})\\ \widehat{u}(-\sqrt{\lambda}) \end{bmatrix},\quad \lambda>0. \end{equation} The operator of multiplication by $\lambda$ on $L^2((0,\infty);\mathbf{C}^2)$ is denoted by $M_{\lambda}$. Then $F^0$ is unitary and we have $F^0H_0^0(F^0)^{\ast}=M_{\lambda}$. For the case $\varepsilon>0$ we define \begin{align} (V(\varepsilon)u)(x)&=\frac{1}{\sqrt{\varepsilon}}u(\frac{1}{\varepsilon}x),\\ U(\varepsilon)&=V(\varepsilon) \mathcal{F}^{\ast}M_{\exp(-ip^3/(3\varepsilon))}\mathcal{F}.\label{U-def} \end{align} Then $F^{\varepsilon}$ given by $(F^{\varepsilon}u)(\lambda)=(U(\varepsilon)u)(\lambda)$ is unitary, and we have $F^{\varepsilon}H_0^{\varepsilon}(F^{\varepsilon})^{\ast}=M_{\lambda}$; see \cite{Y79}. The trace operators used in the Kuroda approach are defined as follows for $v\in\mathcal{K}$: \begin{align} T^{\varepsilon}(\lambda;A)v&=(F^{\varepsilon}A^{\ast}v)(\lambda),\label{Teps-def}\\ T^{\varepsilon}(\lambda;B)v&=(F^{\varepsilon}B^{\ast}v)(\lambda). \end{align} We will assume that there exists $\Omega\subseteq\mathbf{C}$ satisfying $\overline{\Omega}=\Omega$ such that $\Omega\cap\mathbf{R}=I$ is an open interval satisfying $I\subseteq(0,\infty)$. We assume that $T^{\varepsilon}(\lambda;A)$ and $T^{\varepsilon}(\lambda;B)$ have analytic extensions to $\Omega$ with values in $\mathcal{B}(\mathcal{K},\mathbf{C}^2)$ for $\varepsilon=0$ and in $\mathcal{B}(\mathcal{K},\mathbf{C})$ for $\varepsilon>0$. Let $\mathbf{C}^{\pm}=\{\zeta\,|\,\pm\im\zeta>0\}$. We define \begin{equation} \Omega^{\pm}=\{\zeta\in\Omega\,|\,\pm\im\zeta>0\} \end{equation} and then \begin{equation} Q^{\varepsilon}_{0,\pm}(\zeta)=Q_0^{\varepsilon}(\zeta),\quad \zeta\in\mathbf{C}^{\pm}. \end{equation} We recall \begin{proposition}[{\cite[Proposition 3.1]{AJ}}]\label{prop21} We have the following results: \begin{enumerate} \item $Q^{\varepsilon}_{0,+}(\zeta)$ has an analytic continuation from $\mathbf{C}^+$ to $\mathbf{C}^+\cup I\cup\Omega^-$, which we denote by $\widetilde{Q}^{\varepsilon}_{0,+}(\zeta)$. \item $Q^{\varepsilon}_{0,-}(\zeta)$ has an analytic continuation from $\mathbf{C}^-$ to $\mathbf{C}^-\cup I\cup\Omega^+$, which we denote by $\widetilde{Q}^{\varepsilon}_{0,-}(\zeta)$.
\item We have for $\zeta\in\Omega$ \begin{equation} \widetilde{Q}^{\varepsilon}_{0,+}(\zeta)-\widetilde{Q}^{\varepsilon}_{0,-}(\zeta) =2\pi i T^{\varepsilon}(\overline{\zeta};B)^{\ast}T^{\varepsilon}(\zeta;A). \end{equation} \end{enumerate} \end{proposition} We use the notation \begin{equation} \widetilde{G}_{0,\pm}^{\varepsilon}(\zeta)=1+\widetilde{Q}^{\varepsilon}_{0,\pm}(\zeta), \quad \zeta\in\Omega. \end{equation} We impose assumptions on $A$ and $B$ such that $\widetilde{Q}^{\varepsilon}_{0,\pm}(\zeta)$ is compact for $\zeta\in\Omega$. We can then use the analytic Fredholm theorem to obtain the following result. \begin{proposition}[{\cite[Proposition 3.2]{AJ}}] There exist discrete sets $e_{\pm}^{\varepsilon}\subset I$ with the end points of $I$ as the only possible points of accumulation, and discrete sets $r_{\pm}^{\varepsilon}\subset\Omega^{\mp}$ with $\partial\Omega^{\mp}\setminus I$ as the only possible points of accumulation. Then $\widetilde{G}_{0,\pm}^{\varepsilon}(\zeta)$ are invertible for $\zeta\in(\mathbf{C}^{\pm}\cup \Omega^{\mp}\cup I)\setminus(e_{\pm}^{\varepsilon}\cup r_{\pm}^{\varepsilon})$. The continued inverse $(\widetilde{G}_{0,\pm}^{\varepsilon}(\zeta))^{-1}$ has poles contained in the set $e_{\pm}^{\varepsilon}\cup r_{\pm}^{\varepsilon}$. \end{proposition} We define $\mathbf{C}^{(\varepsilon)}$ such that $\mathbf{C}^{(0)}=\mathbf{C}^2$ and $\mathbf{C}^{(\varepsilon)}=\mathbf{C}$ for $\varepsilon>0$. We introduce the dense subsets \begin{multline} \mathcal{R}_0^{\varepsilon}=\{f\in L^2(I;\mathbf{C}^{(\varepsilon)})\,|\, f\colon I\to \mathbf{C}^{(\varepsilon)}\\ \text{has an analytic continuation to $\Omega$ with values in $\mathbf{C}^{(\varepsilon)}$}\}. \end{multline} We then have the result that for $f,g\in (F^{\varepsilon})^{-1}\mathcal{R}_0^{\varepsilon}$ the matrix element $\ip{f}{R_0^{\varepsilon}(\zeta)g}$ has an analytic continuation from $\mathbf{C}^{\pm}$ to $\mathbf{C}^{\pm}\cup I\cup\Omega^{\mp}$; see \cite[Proposition 3.6]{AJ}. Using \eqref{resolvent-eq} we can get a meromorphic continuation of the matrix elements of the full resolvent. For each $\varepsilon\geq0$ we have that $e_+^{\varepsilon}=e_-^{\varepsilon}=e^{\varepsilon}=I\cap\sigma_p(H^{\varepsilon})$; see \cite[Theorem 3.9]{AJ}. In the sequel we will only consider $r_+^{\varepsilon}$. These points are the possible locations of poles of the meromorphically continued full resolvent matrix elements in the lower half-plane, and are called the resonances. We now recall the results from \cite{AJ} identifying these with poles of the meromorphically continued scattering matrix. For $\zeta\in(\mathbf{C}^+\cup I\cup \Omega^{-})\setminus(e_{+}^{\varepsilon}\cup r_{+}^{\varepsilon})$ we introduce the notation $\widetilde{G}_+^{\varepsilon}(\zeta)=\widetilde{G}_{0,+}^{\varepsilon}(\zeta)^{-1}$. We define $\widetilde{G}_-^{\varepsilon}(\zeta)$ analogously. We have the following formulas for the scattering matrix and its inverse; see \cite[Theorem 3.11]{AJ}. \begin{align} S^{\varepsilon}(\lambda)&=1-2\pi i T^{\varepsilon}(\lambda;A) \widetilde{G}_+^{\varepsilon}(\lambda) T^{\varepsilon}(\overline{\lambda};B)^{\ast},\\ S^{\varepsilon}(\lambda)^{-1}&=1+2\pi i T^{\varepsilon}(\lambda;A) \widetilde{G}_-^{\varepsilon}(\lambda) T^{\varepsilon}(\overline{\lambda};B)^{\ast}.\label{Sinv} \end{align} We have a meromorphic extension of $S^{\varepsilon}(\lambda)$ to $\Omega$ with poles at most in $r_+^{\varepsilon}$. Note that the singularities in $e^{\varepsilon}$ are removable.
Analogously, $S^{\varepsilon}(\lambda)^{-1}$ has a meromorphic extension to $\Omega$ with poles at most in $r_-^{\varepsilon}$. The main result in \cite{AJ} is the following theorem. \begin{theorem}[{\cite[Theorem 3.12]{AJ}}]\label{main} The set of poles of $S^{\varepsilon}(\zeta)$ in $\Omega$ is equal to the set $r_+^{\varepsilon}$. For a given $\varepsilon\geq0$ and $\zeta_0\in r_+^{\varepsilon}$ we have that $\Ker(\widetilde{G}_{0,+}^{\varepsilon}(\zeta_0))$ is isomorphic to $\Ker(S^{\varepsilon}(\zeta_0)^{-1})$. \end{theorem} The relation between the existence of a resonance and the existence of a non-zero solution to $S^{\varepsilon}(\zeta_0)^{-1}u=0$ given by \eqref{Sinv} will be used to study the stability or instability of resonances for a sequence $\varepsilon_n$ with $\varepsilon_n\to0$ as $n\to\infty$. \section{Rank one perturbation}\label{sect3} We consider the case of $V$ a rank one perturbation. We assume $V=c\ket{\psi}\bra{\psi}$ for some vector $\psi\in L^2(\mathbf{R})$, $\psi\neq0$, and $c$ real, $c\neq0$. We take $\mathcal{K}=\mathbf{C}$ and $A=\bra{\psi}$, $B=c\bra{\psi}$. Consider first the case $\varepsilon=0$. We have for $z\in\mathcal{K}$ \begin{equation}\label{T0} T^0(\lambda;A)z=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} \widehat{\psi}(\sqrt{\lambda})\\ \widehat{\psi}(-\sqrt{\lambda}) \end{bmatrix}z. \end{equation} The determination of $\sqrt{\lambda}$ is the one with $\sqrt{\lambda}>0$ for $\lambda>0$ and the cut along $(-\infty,0]$. We need to be able to continue this operator analytically in $\lambda$. We introduce the following assumption, where $L^2_{\rm comp}(\mathbf{R})$ denotes the compactly supported functions in $L^2$. \begin{assumption}\label{assump31} Assume $\psi\in L^2_{\rm comp}(\mathbf{R})$. \end{assumption} It follows from this assumption that $\widehat{\psi}$ has an analytic continuation from $\mathbf{R}$ to the complex plane $\mathbf{C}$. We take $\Omega=\mathbf{C}\setminus(-\infty,0]$. Then it follows from \eqref{T0} and Assumption~\ref{assump31} that $T^0(\lambda;A)$ can be continued analytically to $\Omega$. We have the same result for $T^0(\lambda;B)=cT^0(\lambda;A)$. We now consider $\zeta$ with $\re\zeta>0$ and $\im\zeta<0$. We continue \eqref{T0} to these $\zeta$. We also have \begin{equation} T^0(\overline{\zeta};B)^{\ast}= \frac{c}{\sqrt{2}\zeta^{1/4}}\begin{bmatrix} \widehat{\overline{\psi}}(-\sqrt{\zeta}) & \widehat{\overline{\psi}}(\sqrt{\zeta}) \end{bmatrix}, \end{equation} since $\overline{\widehat{\psi}(\sqrt{\overline{\zeta}})} =\widehat{\overline{\psi}}(-\sqrt{\zeta})$. Continuing \eqref{Sinv} to $\{\zeta\,|\,\re\zeta>0,\; \im\zeta<0\}$ we get the following components of the matrix $S^0(\zeta)^{-1}$. \begin{align} (S^0(\zeta)^{-1})_{11}&=1+\frac{\pi i c}{\sqrt{\zeta} }G_-^0(\zeta)\widehat{\psi}(\sqrt{\zeta}) \widehat{\overline{\psi}}(-\sqrt{\zeta}),\label{S11}\\ (S^0(\zeta)^{-1})_{12}&=\frac{\pi i c}{\sqrt{\zeta} }G_-^0(\zeta)\widehat{\psi}(\sqrt{\zeta}) \widehat{\overline{\psi}}(\sqrt{\zeta}),\label{S12}\\ (S^0(\zeta)^{-1})_{21}&=\frac{\pi i c}{\sqrt{\zeta} }G_-^0(\zeta)\widehat{\psi}(-\sqrt{\zeta}) \widehat{\overline{\psi}}(-\sqrt{\zeta}),\label{S21}\\ (S^0(\zeta)^{-1})_{22}&=1+\frac{\pi i c}{\sqrt{\zeta} }G_-^0(\zeta)\widehat{\psi}(-\sqrt{\zeta}) \widehat{\overline{\psi}}(\sqrt{\zeta})\label{S22}. \end{align} Next we consider $\varepsilon>0$.
Using \eqref{U-def} and \eqref{Teps-def} we have for $z\in\mathcal{K}=\mathbf{C}$ \begin{equation}\label{Teps} T^{\varepsilon}(\lambda;A)z=\frac{1}{\sqrt{2\pi}}\frac{1}{\sqrt{\varepsilon}} \int_{-\infty}^{\infty} e^{i\lambda p/\varepsilon} e^{-ip^3/(3\varepsilon)}\widehat{\psi}(p)dp \cdot z. \end{equation} We want to continue analytically in $\lambda$ into $\Omega$. To this end we study the integral in \eqref{Teps}. Assume $u\in\mathcal{S}(\mathbf{R})$ and define \begin{equation} \Gamma^{\varepsilon}(\lambda)u=\frac{1}{2\pi\sqrt{\varepsilon}}\int_{-\infty}^{\infty}\Bigl( \int_{-\infty}^{\infty} e^{ip(\lambda/\varepsilon)-ip^3/(3\varepsilon)-ixp}u(x)dx\Bigr)dp, \end{equation} where the successive integrals converge absolutely. Thus, it can also be represented as the limit of the double integral \begin{equation}\label{eq4} \Gamma^{\varepsilon}(\lambda)u=\lim_{\delta\downarrow0}\frac{1}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{ip(\lambda/\varepsilon)-ip^3/(3\varepsilon)-ixp-\delta p^2}u(x)dxdp. \end{equation} We note that $\Gamma^{\varepsilon}(\lambda)u=(U(\varepsilon)u)(\lambda)$. From \eqref{eq4} we see that for $z\in\mathcal{K}=\mathbf{C}$ \begin{equation}\label{eq5} (\Gamma^{\varepsilon}(\lambda))^{\ast}z=\Bigl(\lim_{\delta\downarrow0}\frac{1}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty} e^{-ip(\lambda/\varepsilon)+ip^3/(3\varepsilon)+ixp-\delta p^2}dp\Bigr)z. \end{equation} We continue the function inside the parentheses in \eqref{eq5} from $\lambda\in\mathbf{R}$ to $\zeta\in\mathbf{C}$ and define \begin{equation}\label{eq6} \mathcal{G}^{\varepsilon}(\zeta,x)=\lim_{\delta\downarrow0}\frac{1}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty} e^{-ip(\zeta/\varepsilon)+ip^3/(3\varepsilon)+ixp-\delta p^2}dp. \end{equation} For $\zeta\in\mathbf{R}$ and $x\in\mathbf{R}$ we have that $\mathcal{G}^{\varepsilon}(\zeta,x)\in\mathbf{R}$, since then the imaginary part of the integrand in \eqref{eq6} is an odd function of $p$. \begin{lemma}\label{lemma1} We have the following results: \begin{itemize} \item[\rm(1)] The limit in \eqref{eq6} is uniform in compact subsets of $\mathbf{C}\times\mathbf{R}$. \item[\rm(2)] For any $\eta>0$ we can write \begin{equation}\label{eq312} \mathcal{G}^{\varepsilon}(\zeta,x)=\frac{e^{\eta(\zeta/\varepsilon-x)+\eta^3/(3\varepsilon)}}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty} e^{-p^2\eta/\varepsilon-i(\zeta p-p^3/3+p\eta^2)/\varepsilon+ixp}dp. \end{equation} \item[\rm(3)] $\mathcal{G}^{\varepsilon}(\zeta,x)$ can be extended to an entire function of $(\zeta,x)\in\mathbf{C}\times\mathbf{C}$. For all $(\zeta,x)\in\mathbf{C}\times\mathbf{R}$ we have $\overline{\mathcal{G}^{\varepsilon}(\overline{\zeta},x)}=\mathcal{G}^{\varepsilon}(\zeta,x)$. \item[\rm(4)] $\mathcal{G}^{\varepsilon}(\zeta,x)$ satisfies \begin{equation} \bigl(-\frac{d^2}{dx^2}+\varepsilon x -\zeta\bigr)\mathcal{G}^{\varepsilon}(\zeta,x)=0,\quad (\zeta,x)\in\mathbf{C}\times\mathbf{R}. \end{equation} \end{itemize} \end{lemma} \begin{proof} Let $(\zeta,x)\in\mathbf{R}\times\mathbf{R}$, $\varepsilon>0$, and $\ell>0$ be fixed. 
Then there exists a constant $C_0$ such that for $0\leq\eta\leq\ell$ and $p\in\mathbf{R}$ we have \begin{align} \re\bigl(-i(p+i\eta) (\zeta/\varepsilon)&+i(p+i\eta)^3/(3\varepsilon)+ix(p+i\eta)-\delta(p+i\eta)^2\bigr)\notag\\ &=-p^2\eta/\varepsilon-\delta p^2 + \eta^3/(3\varepsilon)-x\eta+\eta^2\delta+\eta\zeta/\varepsilon\notag\\ &\leq -(\delta+\eta/\varepsilon)p^2+C_0.\label{eq7} \end{align} Then using Cauchy's theorem we can change the integration contour to $\im p=i\eta$ for any $\eta>0$ such that \begin{equation} \mathcal{G}^{\varepsilon}(\zeta,x)=\lim_{\delta\downarrow0}\frac{1}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty} e^{-i(p+i\eta) (\zeta/\varepsilon)+i(p+i\eta)^3/(3\varepsilon)+ix(p+i\eta)-\delta(p+i\eta)^2}dp.\label{eq8} \end{equation} Then using \eqref{eq7} we conclude that for any $\eta>0$ the limit in \eqref{eq8} (hence also the limit in \eqref{eq6}) exists uniformly with respect to $(\zeta,x)$ in a compact subset of $\mathbf{R}\times\mathbf{R}$ along with all derivatives. We obtain \begin{equation*} \mathcal{G}^{\varepsilon}(\zeta,x)=\frac{1}{2\pi\sqrt{\varepsilon}} \int_{-\infty}^{\infty}e^{-i(p+i\eta)(\zeta/\varepsilon)+i(p+i\eta)^3/(3\varepsilon)+ix(p+i\eta)}dp, \end{equation*} which may be written in the form \eqref{eq312}. It follows that $\mathcal{G}^{\varepsilon}(\zeta,x)$ can be extended to an entire function of $(\zeta,x)\in\mathbf{C}\times\mathbf{C}$. For $x\in\mathbf{R}$ we have that $\mathcal{G}^{\varepsilon}(\zeta,x)$ is real for $\zeta\in\mathbf{R}$. The Schwarz reflection principle then implies that $\overline{\mathcal{G}^{\varepsilon}(\overline{\zeta},x)}=\mathcal{G}^{\varepsilon}(\zeta,x)$. We leave the proof of part (4) to the reader. \end{proof} We study the behavior of $\mathcal{G}^{\varepsilon}(\zeta,x)$ as $\varepsilon\downarrow0$ in the sector $\{\zeta\in\mathbf{C}\,|\, -\pi/3<\arg \zeta <0\}$. We first represent it using the Airy function. We write $K\Subset\mathbf{R}$ for a compact subset. \begin{lemma} Let $K\Subset\mathbf{R}$ and let $M\Subset\{\zeta\in\mathbf{C}\,|\, -\pi/3<\arg \zeta <0\}$. Then we have for $x\in K$ and $\zeta\in M$ \begin{equation}\label{Ai-G} \mathcal{G}^{\varepsilon}(\zeta,x)=\frac{1}{\varepsilon^{\frac16}}\Ai(\omega), \quad \omega=\varepsilon^{\frac13}x-\varepsilon^{-\frac23}\zeta, \end{equation} where $\Ai(\omega)$ denotes the Airy function \begin{equation} \Ai(\omega)=\frac{1}{2\pi i}\int_{\infty e^{-i\pi/3}}^{\infty e^{i\pi/3}} \exp({t^3}/{3}-\omega t)dt. \end{equation} Here the integral is computed over the half-lines $\infty e^{-i\pi/3}\to0 \to\infty e^{i\pi/3}$. \end{lemma} \begin{proof} We first make the change of variables $q= -ip$ or $p=iq$ in the integral in \eqref{eq6} and then $q=\varepsilon^{\frac13}t$, and write it as the line integral in the complex plane \begin{align} \mathcal{G}^{\varepsilon}(\zeta,x)&=\lim_{\delta\downarrow0}\frac{1}{2i\pi\sqrt{\varepsilon}} \int_{-i\infty}^{i\infty}e^{q(\zeta/\varepsilon)+q^3/(3\varepsilon)-xq+\delta q^2}dq\\ &=\lim_{\delta\downarrow0}\frac{1}{2i\pi\varepsilon^{\frac16}} \int_{-i\infty}^{i\infty}e^{t^3/3-t\omega+\delta\varepsilon^{\frac23}t^2}dt, \end{align} where $\omega= \varepsilon^{\frac13}x-\varepsilon^{-\frac23}\zeta$. We now want to deform the contour. We note the following implications \begin{gather*} -\pi/2 \leq \arg t \leq -\pi/3 \Rightarrow -3\pi/2 \leq \arg t^3 \leq -\pi, \quad -\pi \leq \arg t^2 \leq -2\pi/3; \\ \pi/3 \leq \arg t \leq \pi/2 \Rightarrow \pi \leq \arg t^3 \leq 3\pi/2, \quad 2\pi/3 \leq \arg t^2 \leq \pi.
\end{gather*} Thus for $t \in \{-\pi/2 \leq \arg t \leq -\pi/3\} \cup \{\pi/3 \leq \arg t \leq \pi/2\}$, we have $\re t^3 \leq 0$ and $\re t^2 \leq0$, and we may deform the contour of integration to the broken line $e^{-i\pi/3}\infty \to 0 \to e^{i\pi/3}\infty$, on which $\arg t^3 = \pi$ or $\arg t^3=-\pi$, i.e. $t^3<0$. Thus the limit $\delta \to 0$ may be taken inside the integral sign and we obtain the desired expression \eqref{Ai-G}. \end{proof}
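The representation \eqref{Ai-G} also makes part (4) of Lemma~\ref{lemma1}, whose proof was left to the reader, easy to confirm numerically. The following minimal sketch (an illustration only, not part of the proofs; it assumes the \texttt{mpmath} library) evaluates the residual $(-\frac{d^2}{dx^2}+\varepsilon x-\zeta)\mathcal{G}^{\varepsilon}(\zeta,x)$ with a central finite difference:
\begin{verbatim}
# Check that eps^(-1/6)*Ai(eps^(1/3)x - eps^(-2/3)zeta) solves the
# Stark equation (-d^2/dx^2 + eps*x - zeta) G = 0.
import mpmath as mp

mp.mp.dps = 30                       # high working precision

def G(zeta, x, eps):
    omega = mp.power(eps, mp.mpf(1)/3)*x - mp.power(eps, -mp.mpf(2)/3)*zeta
    return mp.power(eps, -mp.mpf(1)/6) * mp.airyai(omega)

def residual(zeta, x, eps, h=mp.mpf('1e-6')):
    # central difference for d^2/dx^2, exact up to O(h^2)
    d2 = (G(zeta, x+h, eps) - 2*G(zeta, x, eps) + G(zeta, x-h, eps))/h**2
    return -d2 + (eps*x - zeta)*G(zeta, x, eps)

zeta = mp.mpc('0.9', '-0.4')         # a point with -pi/3 < arg(zeta) < 0
for eps in (mp.mpf('0.5'), mp.mpf('0.1')):
    for x in (mp.mpf(-1), mp.mpf(2)):
        r = abs(residual(zeta, x, eps)) / abs(G(zeta, x, eps))
        print(eps, x, mp.nstr(r, 3))  # relative residual of order h^2
\end{verbatim}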
Now we define $\rho=-\zeta$ for $\zeta \in M$, so that $2\pi/3+\kappa<\arg \rho<\pi-\kappa$ for some $\kappa>0$, and \begin{equation*} \omega= \varepsilon^{\frac13}x-\varepsilon^{-\frac23}\zeta = \varepsilon^{-\frac23}\rho(1 + \varepsilon (x/\rho)). \end{equation*} Then for sufficiently small $\varepsilon_0>0$ there exists another constant $\kappa'>0$ such that for any $0<\varepsilon<\varepsilon_0$, $x\in K$ and $\zeta\in M$, $1+\varepsilon(x/\rho)$ is a small perturbation of $1$ and $2\pi/3+\kappa' < \arg \omega < \pi-\kappa'$. It follows from (9.5.4) and (9.7.5) in~\cite{DLMF} that $\Ai(\omega)$ has the following asymptotic expansion as $\abs{\omega}\to \infty$, i.e., as $\varepsilon \downarrow 0$: \begin{equation*} \Ai(\omega) \sim \frac{e^{-\xi}}{2\sqrt{\pi}\omega^{1/4}} \sum_{k=0}^\infty (-1)^k \frac{u_k}{\xi^k}, \end{equation*} where $\xi$ is defined in~\cite[(9.7.1)]{DLMF} (the notation there is $\zeta$) as the principal branch of \begin{equation*} \xi=\frac23 \omega^{\frac32}, \end{equation*} and $u_0=1$ and $u_1, \dots$ are constants defined in~\cite[(9.7.2)]{DLMF}. Note that $\pi <\arg \xi < 3\pi/2$ and $0< \arg(-\xi)< \pi/2$, so that $\re(-\xi)>0$ and $\Ai(\omega)$ blows up as $\varepsilon \downarrow 0$. Using the binomial formula $(1+\tau)^{3/2}=1+\frac32 \tau+\frac38 \tau^2+O(\tau^3)$, we have as $\varepsilon \downarrow 0$, uniformly with respect to $x\in K$ and $\zeta\in M$, that \begin{align*} \xi& = \frac{2}{3}\frac{{\rho}^\frac32}{\varepsilon} \Bigl(1+\frac{\varepsilon x}{\rho}\Bigr)^{\frac32} = \frac{2}{3}\frac{{\rho}^\frac32}{\varepsilon} \Bigl\{1+ \frac{3\varepsilon}{2}\frac{x}{\rho} + \frac{3\varepsilon^2}{8}\Bigl(\frac{x}{\rho}\Bigr)^2 + O\Bigl(\frac{\varepsilon x}{\rho}\Bigr)^3\Bigr\} \\ & = \frac{2\rho^\frac32}{3\varepsilon} + x\rho^\frac12 + \frac{\varepsilon}{4}\frac{x^2}{\rho^{\frac12}} + O(\varepsilon^2), \end{align*} hence \begin{equation*} e^{-\xi}= \exp\Bigl( -\frac{2\rho^\frac32}{3\varepsilon} - x\rho^\frac12 \Bigr)\cdot \Bigl(1- \frac{\varepsilon}{4}\frac{x^2}{\rho^\frac12} + O(\varepsilon^2)\Bigr). \end{equation*} Applying the binomial formula to $\omega^{-\frac14}= \varepsilon^{\frac16}\rho^{-\frac14} (1 + \varepsilon (x/\rho))^{-\frac14}$, we obtain \begin{equation*} \frac{1}{\varepsilon^\frac16\omega^\frac14} = \rho^{-\frac14} \bigl(1 -\frac14 \frac{\varepsilon x}{\rho} + O(\varepsilon^2)\bigr). \end{equation*} Combining these products with $u_1=5/72$ we get \begin{align} \mathcal{G}^\varepsilon(\zeta,x)& = \frac{1}{\varepsilon^\frac16}\Ai(\omega) = \frac{1}{2\sqrt{\pi}\rho^\frac14} \exp\Bigl( -\frac{2\rho^\frac32}{3\varepsilon} - x\rho^\frac12 \Bigr) \notag \\ & \times \Bigl( 1- \frac{\varepsilon}{4} \frac{x^2}{\rho^\frac12} + O(\varepsilon^2) \Bigr) \Bigl( 1 -\frac{\varepsilon}{4} \frac{x}{\rho} + O(\varepsilon^2) \Bigr) \Bigl( 1- \frac{3\varepsilon}{2} \frac{u_1}{\rho^\frac32} + O(\varepsilon^2) \Bigr) \notag \\ & = \frac1{2\sqrt{\pi}\rho^\frac14} \exp\Bigl( -\frac{2\rho^\frac32}{3\varepsilon} - x\rho^\frac12 \Bigr) \Bigl\{ 1-\frac{\varepsilon}{4} \Bigl( \frac{x^2}{\rho^\frac12} + \frac{x}{\rho}+6\frac{u_1}{\rho^\frac32} \Bigr) + O(\varepsilon^2) \Bigr\}. \label{asymp-result} \end{align} This leads to the following lemma. \begin{lemma} \label{asymp-G} We have \begin{equation} \label{asymp} \lim_{\varepsilon \downarrow 0} \mathcal{G}^\varepsilon(\zeta,x)\exp \Bigl(\frac{2\rho^\frac32}{3\varepsilon}\Bigr) = \frac{e^{i\frac{\pi}4}} {2\sqrt{\pi}\zeta^{\frac14}} e^{- ix\sqrt{\zeta}}, \end{equation} uniformly with respect to $\zeta\in M\Subset \{\zeta\in \mathbf{C}\,|\, -\pi/3<\arg\zeta<0\}$ and $x \in K\Subset \mathbf{R}$. \end{lemma} \begin{proof} Due to \eqref{asymp-result} we have \eqref{asymp} with the right hand side \begin{equation*} \frac{1}{2\sqrt{\pi}(-\zeta)^\frac14}\exp \bigl(- x(-\zeta)^\frac12\bigr), \end{equation*} and we only need to fix the branch. We have $(-\zeta)^\frac14 = \zeta^\frac14 e^{i\frac{\pi}4}$ and $(-\zeta)^\frac12 = i \zeta^\frac12$. Thus the result follows. \end{proof} We now have all the results needed to continue $T^{\varepsilon}(\lambda;A)$ and $T^{\varepsilon}(\lambda;B)$ analytically to $\mathbf{C}$, thus in particular to $\Omega$. Since $B=cA$, we omit the statements for $T^{\varepsilon}(\zeta;B)$ and its adjoint. Let $\varepsilon>0$. We have that \begin{equation}\label{Tstar} T^{\varepsilon}(\overline{\zeta};A)^{\ast}=\int_{-\infty}^{\infty}\mathcal{G}^{\varepsilon}(\zeta,x) \overline{\psi}(x)dx \end{equation} and \begin{equation}\label{T} T^{\varepsilon}(\zeta;A)=\int_{-\infty}^{\infty}\mathcal{G}^{\varepsilon}(\zeta,x)\psi(x)dx. \end{equation} The integrals are absolutely convergent due to Assumption~\ref{assump31} and Lemma~\ref{lemma1}. The analytic continuation follows from Lemma~\ref{lemma1}(3). Since we have analytic continuations of $T^{\varepsilon}(\lambda;A)$ and $T^{\varepsilon}(\lambda;B)$ for all $\varepsilon\geq0$, the results on the continuation of resolvents and scattering matrices, and the results on resonances, are available from Section~\ref{sect2}. We will use them in the next sections to obtain our results. \section{A result for rank one perturbations}\label{sect4} We now formulate and prove the main result for rank one perturbations. We need the following well-known result, cf.~\cite{Y79}. Recall the definition of $G_{0}^{\varepsilon}(\zeta)$ from \eqref{def-G}. \begin{lemma}\label{lemma41} Let $K\Subset\mathbf{C}^-$. Then $G_{0}^{\varepsilon}(\zeta)^{-1}$ converges strongly to $G_{0}^{0}(\zeta)^{-1}$ as $\varepsilon\downarrow0$, uniformly with respect to $\zeta\in K$. \end{lemma} \begin{theorem}\label{rank-one} Let $\psi$ satisfy Assumption~\ref{assump31}. Assume furthermore that $\psi$ is real-valued. Let $V=c\ket{\psi}\bra{\psi}$, $c\in\mathbf{R}$, $c\neq0$. Let $H^{\varepsilon}=H_0^{\varepsilon}+V$, $\varepsilon\geq0$.
Assume that there exists a sequence $\varepsilon_n\downarrow0$ as $n\to\infty$, such that each $H^{\varepsilon_n}$ has a resonance $\zeta_n$, $-\pi/3<\arg\zeta_n<0$. Assume $\zeta_n\to\zeta_0$ as $n\to\infty$ and $-\pi/3<\arg\zeta_0<0$. Then $\zeta_0$ is not a resonance of $H^0$. \end{theorem} \begin{proof} Let the assumptions of the theorem be satisfied. It follows from Theorem~\ref{main} that $S^{\varepsilon_n}(\zeta_n)^{-1}=0$ for all $n\geq1$, since for $\varepsilon>0$ the scattering matrix is multiplication by a scalar. Thus we have from \eqref{Sinv} that \begin{equation}\label{S-n} 1+2\pi i T^{\varepsilon_n}(\zeta_n;A) \widetilde{G}_-^{\varepsilon_n}(\zeta_n) T^{\varepsilon_n}(\overline{\zeta_n};B)^{\ast}=0\quad\text{for all $n\geq1$}. \end{equation} Since $\im\zeta_n<0$ we can write $G_-^{\varepsilon_n}(\zeta_n)$ instead of $\widetilde{G}_-^{\varepsilon_n}(\zeta_n)$. We can then use Lemma~\ref{lemma41} to conclude that $G_-^{\varepsilon_n}(\zeta_n)\to G_-^{0}(\zeta_0)$ as $n\to\infty$. Next we look at the limit of $T^{\varepsilon_n}(\zeta_n;A)$ as $n\to\infty$. Let $K=\supp \psi$. We can determine a set $M\Subset \{\zeta\in \mathbf{C}\,|\, -\pi/3<\arg\zeta<0\}$ such that $\zeta_n\in M$ for all $n$. We recall from Section~\ref{sect3} the notation $\rho_n=-\zeta_n$. Since $\zeta_n\in M$, we can determine $\kappa>0$ such that for all $n$ we have $\frac23\pi+\kappa<\arg\rho_n<\pi-\kappa$. This implies that there exists $\delta>0$ such that $\re\rho_n^{\frac32}<-\delta$. Thus we have that \begin{equation*} \exp((4\rho_n^{\frac32})/(3\varepsilon_n))\to 0\quad\text{as}\quad n\to\infty. \end{equation*} Multiplying both sides of \eqref{S-n} by $\exp((4\rho_n^{\frac32})/(3\varepsilon_n))$ and taking the limit $n\to\infty$, using \eqref{Tstar}, \eqref{T}, Lemma~\ref{asymp-G}, Lemma~\ref{lemma41}, and dominated convergence, we get that \begin{equation} \widehat{\psi}(\sqrt{\zeta_0})G_-^0(\zeta_0)\widehat{\psi}(\sqrt{\zeta_0})=0, \end{equation} since $\psi$ is assumed to be real. Since $G_-^0(\zeta_0)\neq0$ we conclude that $\widehat{\psi}(\sqrt{\zeta_0})=0$. We now use the formulas \eqref{S11}--\eqref{S22} and the assumption that $\psi$ is real to get \begin{equation} S^0(\zeta_0)^{-1}= \begin{bmatrix} 1 & 0\\ a & 1 \end{bmatrix}, \end{equation} where $a=(S^0(\zeta_0)^{-1})_{21}$. This matrix is obviously invertible. Theorem~\ref{main} implies that $\zeta_0$ is not a resonance of $H^0$. \end{proof} \section{A result for rank $N$ perturbations} We outline an extension to a rank $N$ perturbation in this section. We assume that \begin{equation}\label{rank-N} V=\sum_{k=1}^Nc_k\ket{\psi_k}\bra{\psi_k}. \end{equation} We introduce the following assumption. \begin{assumption}\label{assumpN} Let $V$ be given by \eqref{rank-N}. Assume that $c_k\in\mathbf{R}\setminus\{0\}$, $k=1,\ldots,N$, and that $\psi_1,\ldots,\psi_N\in L^2_{\rm comp}(\mathbf{R})$ are linearly independent real-valued functions. Assume that each $\psi_k$ is an even function. \end{assumption} The factorization $V=B^{\ast}A$ is given with $\mathcal{K}=\mathbf{C}^N$ by the operators \begin{equation} Af=\begin{bmatrix} \ip{\psi_1}{f}\\ \vdots\\ \ip{\psi_N}{f} \end{bmatrix} \quad\text{and}\quad Bf=\begin{bmatrix} c_1\ip{\psi_1}{f}\\ \vdots\\ c_N\ip{\psi_N}{f} \end{bmatrix}. \end{equation} The operator $Q^{\varepsilon}_0(\zeta)=BR_0^{\varepsilon}(\zeta)A^{\ast}$ is an $N\times N$ matrix with matrix elements \begin{equation} Q^{\varepsilon}_0(\zeta)_{k\ell}=c_k\ip{\psi_k}{R_0^{\varepsilon}(\zeta)\psi_{\ell}},\quad k,\ell=1,\ldots,N.
\end{equation} The operator $T^0(\lambda;A)\colon\mathbf{C}^N\to\mathbf{C}^2$ is given by the following matrix \begin{equation} T^0(\lambda;A)=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} \widehat{\psi}_1(\sqrt{\lambda}) & \widehat{\psi}_2(\sqrt{\lambda}) & \cdots& \widehat{\psi}_N(\sqrt{\lambda})\\ \widehat{\psi}_1(-\sqrt{\lambda}) & \widehat{\psi}_2(-\sqrt{\lambda}) & \cdots& \widehat{\psi}_N(-\sqrt{\lambda}) \end{bmatrix}. \end{equation} The operator $T^0(\overline{\lambda};B)^{\ast} \colon\mathbf{C}^2\to\mathbf{C}^N$ is given by the following matrix \begin{equation} T^0(\overline{\lambda};B)^{\ast}=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} c_1\widehat{\overline{\psi}}_1(-\sqrt{\lambda}) & c_1\widehat{\overline{\psi}}_1(\sqrt{\lambda})\\ c_2\widehat{\overline{\psi}}_2(-\sqrt{\lambda}) & c_2\widehat{\overline{\psi}}_2(\sqrt{\lambda})\\ \vdots & \vdots\\ c_N\widehat{\overline{\psi}}_N(-\sqrt{\lambda}) & c_N\widehat{\overline{\psi}}_N(\sqrt{\lambda})\\ \end{bmatrix}. \end{equation} We introduce a shorthand notation for these two matrices. We write \begin{equation} T^0(\lambda;A)=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} \mathsf{r}_1\\ \mathsf{r}_2 \end{bmatrix} \quad\text{and}\quad T^0(\overline{\lambda};B)^{\ast}=\frac{1}{\sqrt{2}\lambda^{1/4}}\begin{bmatrix} \mathsf{s}_1 & \mathsf{s}_2 \end{bmatrix}. \end{equation} This leads to the result that \begin{equation} T^0(\lambda;A)\widetilde{G}_-^0(\lambda)T^0(\overline{\lambda};B)^{\ast} =\frac{1}{2\sqrt{\lambda}} \begin{bmatrix} \mathsf{r}_1\widetilde{G}_-^0(\lambda)\mathsf{s}_1 & \mathsf{r}_1\widetilde{G}_-^0(\lambda)\mathsf{s}_2\\ \mathsf{r}_2\widetilde{G}_-^0(\lambda)\mathsf{s}_1 & \mathsf{r}_2\widetilde{G}_-^0(\lambda)\mathsf{s}_2 \end{bmatrix}. \end{equation} As in Section~\ref{sect3} we can continue into the lower half-plane, such that for $\im\zeta<0$ we get from \eqref{Sinv} the expression \begin{equation}\label{S0-formula} S^0(\zeta)^{-1}=\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix} + \frac{\pi i}{\sqrt{\zeta}} \begin{bmatrix} \mathsf{r}_1G_-^0(\zeta)\mathsf{s}_1 & \mathsf{r}_1G_-^0(\zeta)\mathsf{s}_2\\ \mathsf{r}_2G_-^0(\zeta)\mathsf{s}_1 & \mathsf{r}_2G_-^0(\zeta)\mathsf{s}_2 \end{bmatrix}. \end{equation} Note that we write $G_-^0$ instead of $\widetilde{G}_-^0$ since we are not using a continuation. Now we look at the expression for $S^{\varepsilon}(\zeta)^{-1}$ in the case $\varepsilon>0$. Define \begin{equation} \Phi^{\varepsilon}_k(\zeta)= \int_{-\infty}^{\infty}\mathcal{G}^{\varepsilon}(\zeta,x)\psi_k(x)dx,\quad k=1,\ldots,N. \end{equation} Define the matrices \begin{equation} \mathsf{u}=\begin{bmatrix} \Phi^{\varepsilon}_1(\zeta)& \Phi^{\varepsilon}_2(\zeta) &\cdots & \Phi^{\varepsilon}_N(\zeta) \end{bmatrix} \quad\text{and}\quad \mathsf{v}=\begin{bmatrix} c_1\overline{\Phi^{\varepsilon}_1(\overline{\zeta})}\\ c_2\overline{\Phi^{\varepsilon}_2(\overline{\zeta})}\\ \vdots\\ c_N\overline{\Phi^{\varepsilon}_N(\overline{\zeta})} \end{bmatrix}. \end{equation} Then we have for $\im\zeta<0$ \begin{equation} S^{\varepsilon}(\zeta)^{-1}=1+2\pi i\, \mathsf{u} G_-^{\varepsilon}(\zeta)\mathsf{v}. \end{equation} We can now state the following result. \begin{theorem}\label{rankN} Let $V$ satisfy Assumption~\ref{assumpN}. Let $H^{\varepsilon}=H_0^{\varepsilon}+V$, $\varepsilon\geq0$. Assume that there exists a sequence $\varepsilon_n\downarrow0$ as $n\to\infty$, such that each $H^{\varepsilon_n}$ has a resonance $\zeta_n$, $-\pi/3<\arg\zeta_n<0$. Assume $\zeta_n\to\zeta_0$ as $n\to\infty$ and $-\pi/3<\arg\zeta_0<0$. Then $\zeta_0$ is not a resonance of $H^0$.
\end{theorem} \begin{proof} We sketch the main steps in the proof. We have by assumption and Theorem~\ref{main} that $S^{\varepsilon_n}(\zeta_n)^{-1}=0$ for all $n\geq1$. Repeating the convergence argument in the proof of Theorem~\ref{rank-one} we can conclude that \begin{equation}\label{eqr1s2} \mathsf{r}_1 G_-^0(\zeta_0)\mathsf{s}_2=0. \end{equation} Now since $\psi_k$ is assumed to be even, we also have that $\widehat{\psi}_k$ is even. Furthermore, $\psi_k$ is assumed to be real. Thus we have \begin{equation} \widehat{\psi}_k(\sqrt{\zeta_0})= \widehat{\psi}_k(-\sqrt{\zeta_0})= \widehat{\overline{\psi}}_k(\sqrt{\zeta_0})= \widehat{\overline{\psi}}_k(-\sqrt{\zeta_0}), \quad k=1,2,\ldots,N. \end{equation} This result implies $\mathsf{r}_1=\mathsf{r}_2$ and $\mathsf{s}_1=\mathsf{s}_2$. From \eqref{S0-formula} and \eqref{eqr1s2} we conclude that \begin{equation} S^0(\zeta_0)^{-1}=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}, \end{equation} such that by Theorem~\ref{main} $\zeta_0$ is not a resonance of $H^0$. \end{proof} \subsection*{Acknowledgements} KY thanks Ira Herbst for asking him about the instability of resonances under Stark perturbations. KY is supported by JSPS Grant-in-Aid for Scientific Research No. 16K05242. AJ acknowledges support from the Danish Council for Independent Research $|$ Natural Sciences, Grant DFF4181-00042.
{ "timestamp": "2018-04-17T02:16:08", "yymm": "1804", "arxiv_id": "1804.05620", "language": "en", "url": "https://arxiv.org/abs/1804.05620" }
\section{Introduction} The low solar atmosphere hosts numerous small-scale events related to magnetic reconnection \cite{2015ApJ...798L..11Y, 2016NatCo...711837X, 2017ApJ...836...52Z}. These events have been observed by both ground-based solar telescopes [e.g., the \emph{Swedish Solar Telescope} (SST), the \emph{New Vacuum Solar Telescope} (NVST), the \emph{Goode Solar Telescope} (GST)] and space-based observatories [e.g., the \emph{Solar Dynamics Observatory} (\emph{SDO}), the \emph{Interface Region Imaging Spectrograph} (\emph{IRIS})]. The most interesting and famous activities are the Ellerman bomb type events \cite{2014Sci...346C.315P, 2015ApJ...812...11V, 2016A&A...593A..32G, 2016ApJ...824...96T, 2018APJ}, which are usually called IRIS bombs or UV bursts. The traditional Ellerman bomb (EB) is a transient and prominent enhancement in brightness of the far wings of spectral lines, in particular the Balmer H${\alpha}$ line at the wavelength of 6563\r{A}. An EB is usually observed in solar active regions. In the original work of Ellerman \cite{1917ApJ....46..298E}, EBs were described as ``a very brilliant and very narrow band extending four or five angstroms on either side of the H${\alpha}$ line, but not crossing it,'' gradually disappearing in a few minutes. Ellerman (1917) originally named them ``solar hydrogen bombs''; nowadays, this brightening phenomenon is named after him. The EB takes place exclusively in regions of newly emerging flux, and manifests a characteristic elongated shape in high-resolution images. Usually, the temperature of the region where an EB occurs is below $10^4$~K \cite{2006ApJ...643.1325F, 2014ApJ...792...13H, 2015ApJ...798...19N}. The EB-type events have been observed both in the wings of H${\alpha}$ and in the IRIS Si IV passbands, and they share some characteristics with traditional EBs [e.g., similar locations ($250-750$~km above the solar surface), similar lifetimes (about $3-5$ min) and sizes (about $0.3^{\prime \prime}$-$0.8^{\prime \prime}$)]. Both are considered to be formed in magnetic reconnection processes. However, the emission in the Si IV 139.3 nm line requires a temperature of at least $2\times10^4$~K in the dense photosphere. Therefore, the maximum temperatures inside these EB-type events are much higher than in traditional EBs. Most recently, Tian et al. (2018) discovered prevalent inverted Y-shaped jets from sunspot light bridges \cite{2018APJ} by using multi-wavelength observations from both GST and IRIS. The transient brightenings with significant heating and bi-directional flows at the footpoints of these jets indicated that magnetic reconnection drove the formation of the inverted Y-shaped jets. These transient brightenings are also EB-type events, where the presence of Fe II and Ni II absorption lines indicated that the hot reconnection region was located below the cooler chromosphere. Additionally, the O IV 1401.16\r{A} and 1399.77\r{A} forbidden lines were almost absent during these compact brightenings, which demonstrated that the reconnection site was in a region of very high plasma density \cite{2014Sci...346C.315P}. Previous numerical simulations have studied the formation of EBs through magnetic reconnection by using one-fluid MHD models \cite{2001ChJAA...1..176C, 2007ApJ...657L..53I, 2009A&A...508.1469A, 2011RAA....11..225X}.
The maximum temperature increase observed in these simulations, which modeled the conditions between the photosphere and the low solar chromosphere, was no more than several thousand kelvin. However, the resolutions in these simulations were low, the magnetic diffusion in the reconnection region was not realistic, and the important interactions between ions and neutrals were ignored. The high-resolution simulations with a more realistic magnetic diffusion in Ni et al. (2016) showed that the plasma can be heated from $4200$~K to above $8 \times10^4$~K inside the multiple magnetic islands of a reconnection process with strong magnetic fields ($500$~G) in the temperature minimum region (TMR) of the solar atmosphere (about $500$~km above the solar surface). Ambipolar diffusion, temperature-dependent magnetic diffusion, heat conduction, and optically thin radiative cooling were all included \cite{2016ApJ...832..195N}. However, in that work the plasma was assumed to be in a steady ionization equilibrium state. Most recently, one-fluid 3D MHD simulations with radiative transport studied EBs and flares at the surface and in the lower atmosphere of the Sun \cite{2017ApJ...839...22H}. In these simulations, the plasma temperature was observed to remain below $10^4$~K during the EB formation process in the photosphere. However, non-equilibrium ionization effects were not considered in their model, and the artificial hyper-diffusivity operator that was included to prevent the collapse of the current sheets leaves open the possibility of smaller scale and hotter structures at spatial scales not covered in that simulation. The background plasma near the TMR is weakly ionized and the plasma density is high. Thus, realistic simulations of magnetic reconnection in this region of the solar atmosphere must account for the interactions between ions and neutrals as well as radiative cooling. It has been shown that ambipolar diffusion, which results from collisions between ions and neutrals, makes the current sheet thin rapidly when no guide field is present \cite{1994ApJ...427L..91B, 1995ApJ...448..734B, 1999ApJ...511..193V, 2015ApJ...799...79N}. Previous 1D analytical work also studied magnetic reconnection in weakly ionized plasmas \cite{1999ApJ...511..193V, 2003ApJ...583..229H, 2004ApJ...603..180L}. They found that an excess of ions can build up in the reconnection region if the ions pulled in by the reconnecting magnetic field are decoupled from the neutrals. High recombination can then produce a loss of ions in the reconnection region that prevents the ion pressure from building up further, which leads to faster magnetic reconnection independent of the magnetic diffusivity. Leake et al. (2012, 2013) used the reactive multi-fluid plasma-neutral module within the HiFi modeling framework to study null-point magnetic reconnection in the solar chromosphere \cite{2012ApJ...760..109L, 2013PhPl...20f1202L}. They showed that strong ion recombination in the reconnection region, combined with Alfv\'enic outflows, leads to faster reconnection within a two-dimensional (2D) numerical model. When strong radiative losses cool the plasma, the pressure inside the reconnection region decreases, which also results in the thinning of the current sheet when no guide field is present \cite{1995ApJ...449..777D, 2011PhPl...18d2105U}. Recently, Alvarez Laguna et al. (2017) studied how radiative cooling affects magnetic reconnection in the solar chromosphere by using the same reactive multi-fluid plasma-neutral model as Leake et al.
(2012) but with a different code \cite{2017ApJ...842..117A}. They found that the radiative losses strongly decreased the ion density and plasma pressure in a case with a high initial ionization fraction, which resulted in the rapid thinning of the current sheet and an enhancement of the reconnection rate \cite{2017ApJ...842..117A}. In all the previous reactive multi-fluid plasma-neutral simulations \cite{2012ApJ...760..109L, 2013PhPl...20f1202L, 2015ApJ...805..134M, 2017ApJ...842..117A}, the background neutral density ($3\times10^{18}-8\times10^{18}$~m$^{-3}$) is representative of a plasma above the middle chromosphere ($1000-1500$~km above the solar surface) according to the VAL-C solar atmosphere model \cite{1981ApJS...45..635V}. The magnetic field strength is only around $10$~G, and the plasma $\beta$ in each of these simulations is above 1. However, in order to study magnetic reconnection in EB-type events, the background plasma and neutral densities have to be chosen to be characteristic of the TMR, i.e., approximately two orders of magnitude higher than those used in the previous papers. Further, the EB-type events described by Tian et al. (2018) were observed in a sunspot region with maximum bi-directional flow speeds as high as $200$~km\,s$^{-1}$ \cite{2018APJ}. The reconnection magnetic field in these events may therefore be higher than $1000$~G. Our recent numerical simulations \cite{2018ApJ...852...95N} studied magnetic reconnection in strongly magnetized regions around the solar TMR. In that paper, we presented the first reactive multi-fluid simulations of magnetic reconnection in low $\beta$ plasmas with a guide field. The simulation results were significantly different from the previous high $\beta$ simulations with zero guide field. We found that the neutrals and ions were well-coupled throughout the reconnection region for the low $\beta$ plasma. The neutral and ionized fluid components decoupled upstream of the reconnection site only when the plasma $\beta$ was sufficiently high. The rate of ionization of the neutral component of the plasma was always faster than recombination within the current sheet region, and the initially weakly ionized plasmas could become fully ionized within the reconnection region when the plasma $\beta$ was low enough. The current sheet could be strongly heated to high temperatures (above $2.5 \times10^4$~K) only when the reconnecting magnetic field was in excess of a kilogauss and the plasma inside became fully ionized. However, only a simple radiative cooling model \cite{2012ApJ...760..109L, 2013PhPl...20f1202L, 2018ApJ...852...95N} was applied in Ni et al. (2018). In particular, this simple model neglects the presence of minority species in the low solar atmosphere and can significantly underestimate radiative losses in a hydrogen-dominated plasma approaching full ionization. In this work, we use a more realistic radiative cooling model \cite{2017ApJ...842..117A} to simulate magnetic reconnection around the solar TMR. This stronger radiative cooling model may be expected to result in faster recombination than ionization and to significantly impact the magnetic reconnection process, as shown in Alvarez Laguna et al. (2017) \cite{2017ApJ...842..117A}.
Including such a strong radiative cooling model may also be expected to reduce the temperature increases during the reconnection process, such that the high temperature plasma (above $2.5 \times10^4$~K) observed in our previous work would not appear even for magnetic fields in excess of a kilogauss. Therefore, the numerical results and conclusions in this work were expected to be significantly different from those reported in Ni et al. (2018) \cite{2018ApJ...852...95N}. Section II describes our numerical model and simulation setup. We present our numerical results and compare them with our previous work \cite{2018ApJ...852...95N} in Section III. A summary and discussion are given in Section IV. \section{Normalization and Initial Conditions} Our simulations are performed by using the reactive multi-fluid plasma-neutral module of the HiFi modeling framework \cite{2015ApJ...805..134M, 2018ApJ...852...95N}. Here, we normalize the equations by using the characteristic plasma density and magnetic field around the solar TMR. The characteristic values set in our simulations are exactly the same as in Ni et al. (2018): the characteristic plasma number density is $n_{\star}=10^{21}$~m$^{-3}$, the characteristic magnetic field is $B_{\star}=0.05$~T$\,=500$~G, and the characteristic length is $L_{\star}=100$~m. We have also derived the additional normalizing values $V_{\star}\equiv B_{\star}/\sqrt{\mu_0m_pn_{\star}}=34.613$~km\,s$^{-1}$, $t_{\star}\equiv L_{\star}/V_{\star}=t_A=0.0029$~s and $T_{\star} \equiv B_{\star}^2/(k_B\mu_0n_{\star})=1.441\times10^5$~K. The initial ionized and neutral fluid densities are set to be uniform with a neutral particle number density of $n_{n0}=0.5n_{\star}=0.5\times10^{21}$~m$^{-3}$, and the initial ionization degree is $f_{i0}=n_{i0}/(n_{i0}+n_{n0})=0.01\%$. Thus, the neutral-ion collisional mean free path of the background plasma is $\lambda_{ni0}=23.74$~m, assuming a cross-section of $\Sigma_{ni}=5\times10^{-19}$ m$^{2}$. The ion inertial length is $d_{i0}=0.99$~m. The initial temperatures of the ionized and neutral fluids are set to be uniform at $T_{i0}=T_{n0}=8400$~K to keep the ionization degree the same as that around the solar TMR. The initial dimensionless magnetic flux in the $z$ direction is given by \begin{equation} A_{z0}(y)=-b_p\lambda_{\psi} \mathrm{ln} \left[\mathrm{cosh}\left(\frac{y}{\lambda_{\psi}}\right)\right], \end{equation} where $b_p$ is the strength of the magnetic field and $\lambda_{\psi}$ is the initial thickness of the current sheet. The initial magnetic field in the $z$-direction is \begin{equation} B_{z0}(y)=b_p \bigg/ \left[ \mathrm{cosh}\left(\frac{y}{\lambda_{\psi}}\right) \right]. \end{equation} In our previous paper \cite{2018ApJ...852...95N}, the numerical results demonstrated that the collisions between electrons and neutrals are not important for magnetic reconnection in low $\beta$ plasmas. In order to compare with the corresponding cases in Ni et al. (2018), we also ignore the collisions between electrons and neutrals in this work. The dimensionless magnetic diffusivity is \begin{equation} \eta=\eta_{ei}=\eta_{ei\star}T_e^{-1.5}, \end{equation} where $\eta_{ei\star}=7.457\times10^{-6}$ is a normalization constant derived from the characteristic values $n_{\star}$, $B_{\star}$, and $L_\star$, and $T_e$ is the dimensionless electron temperature.
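These normalization constants follow directly from $n_{\star}$, $B_{\star}$ and $L_{\star}$; a minimal sketch for recomputing them (illustrative only; small differences from the quoted values reflect the precision of the physical constants used):
\begin{verbatim}
# Recompute the characteristic values quoted above from n*, B*, L*.
import math

mu0 = 4.0e-7 * math.pi    # vacuum permeability [H/m]
m_p = 1.6726e-27          # proton mass [kg]
k_B = 1.3807e-23          # Boltzmann constant [J/K]

n_star = 1.0e21           # m^-3
B_star = 0.05             # T (= 500 G)
L_star = 100.0            # m

V_star = B_star / math.sqrt(mu0 * m_p * n_star)  # Alfven speed
t_star = L_star / V_star
T_star = B_star**2 / (k_B * mu0 * n_star)

print(f"V_star = {V_star/1e3:.3f} km/s")  # ~34.5 km/s (text: 34.613)
print(f"t_star = {t_star:.4f} s")         # ~0.0029 s
print(f"T_star = {T_star:.3e} K")         # ~1.441e5 K
\end{verbatim}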
Since the electrons and ions are assumed to be coupled together and only hydrogen gas is considered in our model, we assume $T_i=T_e$, $n_i=n_e$, and the pressure of the ionized component is twice the ion (or electron) pressure, $P_p = P_e + P_i = 2P_i$. The only difference between the model in this work and that in Ni et al. (2018) is the radiative cooling function. In our previous work \cite{2018ApJ...852...95N}, the simple radiative cooling model represents the radiative losses that are due primarily to radiative recombination, with a very crude approximation for radiation due to the presence of excited states of neutral hydrogen. Further, no account is taken of the presence of minority ion species in the TMR. This simple model is given by \begin{equation} L_{rad} = \Gamma_i^{ion}\phi_{eff}, \end{equation} where $\phi_{eff}=33$~eV$=5.28\times10^{-18}$~J. The ionization rate $\Gamma_i^{ion}$ is defined as \begin{equation} \Gamma_i^{ion}= \frac{n_n n_e A}{X+\phi_{\mathrm{ion}}/T_e^{\ast}}\left(\frac{\phi_{\mathrm{ion}}}{T_e^{\ast}}\right)^{K} \exp\left(-\frac{\phi_{\mathrm{ion}}}{T_e^{\ast}}\right), \end{equation} using the values $A=2.91\times10^{-14}$~m$^{3}$\,s$^{-1}$, $K=0.39$, $X=0.232$, and the hydrogen ionization potential $\phi_{\mathrm{ion}}=13.6$~eV. The neutral and electron number densities are both in m$^{-3}$, and $T_e^{\ast}$ is the electron temperature in eV. The unit for $\Gamma_i^{ion}$ is then m$^{-3}$\,s$^{-1}$, and the unit for $L_{rad}$ is J\,m$^{-3}$\,s$^{-1}$. One finds that the above radiative cooling function approaches zero when a plasma is fully ionized. As shown in our previous work \cite{2018ApJ...852...95N}, the plasmas will be fully ionized if the reconnection magnetic fields are strong enough. This simple radiative cooling model becomes invalid in this situation. In this work, we use a radiative cooling model computed using the OPACITY project and the CHIANTI databases \cite{2012ApJ...751...75G}. This radiative cooling model is considered to be a more realistic cooling model for plasmas in the solar chromosphere with temperatures below $1.5\times10^4$~K \cite{2017ApJ...842..117A}. The expression for this radiative model, in units of J\,m$^{-3}$\,s$^{-1}$, is \begin{equation} L_{r} = C_E n_e (n_n+n_i)\, 8.63\times10^{-6}\,T^{-1/2} \sum_{i=1}^2 E_i \Upsilon_i \exp(-eE_i/k_BT), \end{equation} where $C_E=1.6022\times10^{-25}$~J\,eV$^{-1}$, the coefficient $8.63\times10^{-6}$ carries units of m$^3$\,K$^{1/2}$\,s$^{-1}$, $E_1=3.54$~eV and $E_2=8.28$~eV, and $\Upsilon_1=0.15\times10^{-3}$ and $\Upsilon_2=0.065$. A three-level hydrogen atom with two excited levels is assumed in this function, and $E_1$ and $E_2$ are the excited-level energies. In Eq. (6), the temperature $T$ is specified in Kelvin and the number densities are in m$^{-3}$. $k_B=1.3806\times10^{-23}$~J\,K$^{-1}$ is the Boltzmann constant. In the expression for the exponent, $eE_1\simeq 3.54\times1.602\times10^{-19}$~J$\simeq5.671\times10^{-19}$~J and $eE_2\simeq8.28\times1.602\times10^{-19}$~J$\simeq1.326\times10^{-18}$~J. The radiative cooling function for the solar atmosphere with $T\geq2\times10^4$~K can simply take the form \cite{2014masu.book.....P} \begin{equation} L_{r1} = n_e n_H Q(T), \end{equation} where $Q(T) = 10^{-32} T^{-1/2}$~W\,m$^3$\,K$^{1/2}$ is a reasonable approximation that is useful for analytical modeling over the whole temperature range $2\times10^4$~K$<T<10^7$~K.
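For concreteness, the following Python sketch transcribes the cooling functions and the ionization rate exactly as written in Eqs.~(4)--(7), in SI units; it is an illustration for orientation only, not an excerpt from the simulation code:
\begin{verbatim}
# Illustrative SI implementations of Eqs. (4)-(7).
import numpy as np

k_B = 1.3807e-23   # Boltzmann constant [J/K]
e   = 1.6022e-19   # eV -> J conversion factor
C_E = 1.6022e-25   # prefactor in Eq. (6) [J/eV]

def gamma_ion(n_n, n_e, T_eV, A=2.91e-14, K=0.39, X=0.232, phi=13.6):
    """Ionization rate of Eq. (5) [m^-3 s^-1]; T_eV in eV."""
    u = phi / T_eV
    return n_n * n_e * A / (X + u) * u**K * np.exp(-u)

def L_rad(n_n, n_e, T_eV, phi_eff=33.0 * 1.6022e-19):
    """Simple cooling model of Eq. (4) [J m^-3 s^-1]."""
    return gamma_ion(n_n, n_e, T_eV) * phi_eff

def L_r(n_e, n_n, n_i, T):
    """Three-level hydrogen cooling of Eq. (6) [J m^-3 s^-1]; T in K."""
    E = np.array([3.54, 8.28])        # excited-level energies [eV]
    Y = np.array([0.15e-3, 0.065])    # Upsilon_1, Upsilon_2
    s = np.sum(E * Y * np.exp(-e * E / (k_B * T)))
    return C_E * n_e * (n_n + n_i) * 8.63e-6 * T**-0.5 * s

def L_r1(n_e, n_H, T):
    """High-temperature approximation of Eq. (7) [J m^-3 s^-1]."""
    return n_e * n_H * 1.0e-32 * T**-0.5
\end{verbatim}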
We have calculated the values of the radiative cooling by using both $L_{r}$ and $L_{r1}$ for $2\times10^4$~K$<T<10^7$~K; the values calculated from the two functions are close for each fixed temperature and plasma density. Therefore, we have used the radiative model $L_r$ provided by Eq. (6) for all the simulations in this work. A constant background heating is also included to balance the initial radiative cooling. The heating function $H_{0}$ is equal to $L_{r0}$, i.e., $L_r$ with $n_{e0}$, $n_{n0}$, $n_{i0}$ and $T_{i0}$ set to the initial values shown above. In our simulations, we normalize Eq. (6) by using the characteristic values presented above. We have simulated three cases in this work: Case~ALr, Case~CLr and Case~ELr. As shown in our previous simulations \cite{2018ApJ...852...95N}, the Hall term and electron-neutral collisions are not important for magnetic reconnection in our model, so we have ignored the Hall effect and the electron-neutral collisions in all three cases in this work. The only difference among Case~ALr, Case~CLr and Case~ELr is the strength of the initial magnetic field: $b_p=1$ in Case~ALr, $b_p=0.2$ in Case~CLr, and $b_p=3$ in Case~ELr. Therefore, one can calculate the initial plasma $\beta$ in each case: $\beta_0=0.058$ in Case~ALr, $\beta_0=1.46$ in Case~CLr, and $\beta_0=0.0064$ in Case~ELr. Except for the radiative function and the background heating, Case~ALr, Case~CLr and Case~ELr in this work are the same as Case~A, Case~C and Case~E in our previous work \cite{2018ApJ...852...95N}, respectively. The reconnection processes are also symmetric in both the $x$ and $y$ directions in Cases~ALr, CLr and ELr. Therefore, we only simulate one quarter of the domain ($0<x<2$, $0<y<1$) in the three cases. We also use the same outer boundary conditions at $|y| = 1$ and the same initial electric field perturbations to initiate magnetic reconnection in this work. The perturbation electric field is applied for $0\leq t \leq 1$. The perturbation magnitude is proportional to the value of $b_p$ in each of the cases, with an amplitude of $\delta E=10^{-3}b_p$. Periodicity of the physical system is imposed in the $x$-direction at $|x| = 2$. The highest number of grid elements in Cases~ALr, CLr, and ELr is $m_x=96$ elements in the $x$-direction and $m_y=96$ elements in the $y$-direction. We use sixth order basis functions for all simulations, resulting in an effective total grid size of $(M_x, M_y)=6(m_x,m_y)$. Grid packing is used to concentrate mesh in the reconnection region; the mesh packing along the $y$-direction is therefore concentrated in a thin region near $y=0$. The quantities shown in the figures in this work are in dimensionless units except for temperatures and velocities.
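Before turning to the results, the quoted initial plasma $\beta$ values can be cross-checked from the setup above. The sketch below (illustrative, not simulation code) takes the total thermal pressure to be $(n_{n0}+n_{i0})k_BT_0$, the electron contribution being negligible at $f_{i0}=0.01\%$, and divides by the magnetic pressure $B^2/2\mu_0$:
\begin{verbatim}
# Illustrative check of the quoted initial plasma beta values.
import numpy as np

mu0, k_B = 4.0e-7 * np.pi, 1.3807e-23
n_tot = 0.5e21 / (1.0 - 1e-4)   # n_n0 + n_i0 for f_i0 = 0.01%
T0, B_star = 8400.0, 0.05       # [K], [T]

for case, b_p in [("ALr", 1.0), ("CLr", 0.2), ("ELr", 3.0)]:
    B = b_p * B_star
    beta0 = n_tot * k_B * T0 / (B**2 / (2.0 * mu0))
    print(case, beta0)
# prints ~0.058, ~1.46 and ~0.0065, consistent with the quoted
# values 0.058, 1.46 and 0.0064.
\end{verbatim}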
\section{Numerical Results} In this section, we present the numerical results of simulating magnetic reconnection in the solar TMR with different strengths of the initial magnetic field. We compare the results with our previous work \cite{2018ApJ...852...95N} and show how the more realistic radiative cooling model affects the results. The important variables in this work and in our previous paper are listed in Table~1. A more in-depth discussion of magnetic reconnection in initially weakly ionized plasmas with different plasma $\beta$ is also presented. Fig.~1 shows the current density $J_z$ and ionization fraction $f_i$ in one quarter of the domain at three different times in Case~ALr, Case~CLr and Case~ELr, respectively. The snapshot times are chosen such that the current sheet lengths match across the three cases: the lengths in Case~ALr at $t=6.897$, in Case~CLr at $t=21.948$ and in Case~ELr at $t=5.032$ are the same, as are those shown in Fig.~1(b), (e) and (h), and those shown in Fig.~1(c), (f) and (i). As expected, the ionization fraction strongly increases with time inside the current sheet in the cases with low $\beta$ and strong magnetic fields (Case~ALr and Case~ELr), whereas it increases only slowly with time in the high $\beta$ case (Case~CLr). The highest ionization fraction is $72\%$ in Case~ELr, $12\%$ in Case~ALr, and only $0.8\%$ in Case~CLr. Therefore, a lower plasma $\beta$ and a higher reconnection magnetic field lead to higher ionization fractions inside the current sheet. However, the ionization fractions in Case~ALr, Case~CLr and Case~ELr are respectively much lower than those in Case~A, Case~C and Case~E in our previous work \cite{2018ApJ...852...95N}; the stronger radiative cooling in this work results in these lower ionization fractions. The neutral fluids in Case~ELr do not become fully ionized, unlike in Case~E with the same plasma $\beta$ in our previous work. Fig.~2 shows the profiles of ion and neutral temperatures across the current sheet in Cases~ALr, CLr and ELr at the same three sets of times as in Fig.~1. The maximum temperatures within the current sheets in Case~ALr, Case~CLr and Case~ELr are $1.95\times10^4$~K, $1.6\times10^4$~K and $2.3\times10^4$~K, respectively. Therefore, stronger reconnection magnetic fields and lower plasma $\beta$ result in a higher maximum temperature inside the current sheet. The most significant difference in plasma temperature between this work and the previous one concerns the maximum temperatures in Case~E and Case~ELr. In the previous work \cite{2018ApJ...852...95N}, the plasma within the narrow current sheet was heated above $4\times10^4$~K in Case~E after the neutral fluids were fully ionized and the simple radiative cooling function was turned off. However, the neutral fluids are not fully ionized in Case~ELr during the reconnection process in this work; moreover, the strong radiative cooling in Case~ELr persists even if the plasma becomes fully ionized. Therefore, the maximum temperature does not reach above $4\times10^4$~K in Case~ELr. In this work, the ion and neutral temperatures are also nearly equal throughout the evolution, due to rapid thermal exchange between the plasma components, in all three cases. In the previous work \cite{2012ApJ...760..109L, 2013PhPl...20f1202L, 2015ApJ...805..134M}, the high $\beta$ simulations showed that the neutral and ionized fluid components decouple upstream of the reconnection site on scales smaller than the neutral-ion mean free path $\lambda_{ni}$. As shown in Fig.~3, the decoupling of neutral and ionized fluids is most obvious in Case~CLr, while the neutral and ion inflows are well coupled in the reconnection phase in Case~ELr. Fig.~3(b) shows that the decoupling of neutral and ion inflows also appears during the later reconnection stage in Case~ALr, which is different from our previous result in Case~A with the same plasma $\beta$ \cite{2018ApJ...852...95N}; the neutral and ion inflows are better coupled in Case~A. The reason for this difference is that the ionized fluid density in Case~ALr is much lower than that in Case~A, and a higher density of ionized components results in a shorter neutral-ion mean free path.
Thus, the decoupling of neutral and ion inflows is more obvious in the more weakly ionized plasmas with a longer neutral-ion mean free path. In Fig.~3(a), (d) and (g), one can also see that the ion inflow $V_{iy}$ is higher in a lower $\beta$ case. Our simulation results also show that the ionized and neutral fluids are well coupled in the reconnection outflow regions, which is consistent with the previous results \cite{2012ApJ...760..109L, 2013PhPl...20f1202L, 2015ApJ...805..134M, 2018ApJ...852...95N}. Panels (c), (f) and (i) of Fig.~3 show the outflow plasma velocity $V_{ix}$ at three different times in Cases~ALr, CLr and ELr, respectively. The outflow velocities increase with time during the magnetic reconnection process, and are higher in the lower $\beta$ cases. The maximum reconnection outflow velocity in Case~ELr is above $50$~km\,s$^{-1}$. Fig.~4(a), (b) and (c) show the four time-dependent components contributing to $\partial n_i/\partial t$ in Cases~ALr, CLr and ELr. As in our previous work \cite{2018ApJ...852...95N}, the values of the four corresponding components are the average values inside the current sheet domain at each time. The ionization rate $\Gamma_i^{ion}$ and the inflow $-\partial (n_i V_{iy})/{\partial y}$ contribute to the gain of ions, and the recombination term $-\Gamma_i^{rec}$ and the outflow $\partial (n_iV_{ix})/\partial x$ contribute to the loss of ions. These four components behave similarly in Cases~ALr, CLr and ELr and in Case~A of our previous work \cite{2018ApJ...852...95N}. However, a lower plasma $\beta$ and weaker radiative cooling make the four terms relatively higher. The ionization rate $\Gamma_i^{ion}$ is also faster than the recombination rate $\Gamma_i^{rec}$ in all three cases, which is the same as in our previous work \cite{2018ApJ...852...95N} but significantly different from the previous higher $\beta$ simulations \cite{2012ApJ...760..109L, 2013PhPl...20f1202L}. We have also tested a simulation with an initial magnetic field four times smaller than that in Case~CLr; the ionization rate is eventually smaller than the recombination rate in such a high $\beta$ case ($\beta_0=23.36$). Therefore, the plasma $\beta$ inside the current sheet region appears to be the main factor determining whether ionization or recombination dominates within the current sheet during the reconnection process. From the simulation results presented in this work, we conclude that the threshold below which the ionization rate $\Gamma_i^{ion}$ becomes faster than the recombination rate $\Gamma_i^{rec}$ lies somewhere between $\beta=1.46$ and $\beta=23.36$ inside the current sheet. The initial radiative cooling $L_r$ in Eq. (6) in this work and $L_{rad}=\Gamma_i^{ion} \phi_{eff}$ in our previous work \cite{2018ApJ...852...95N} have both been calculated; their respective dimensionless values are $L_{r0}=5.6927\times10^{-6}$ and $L_{rad0}=6.7225\times10^{-9}$. Therefore, the initial radiative cooling in this work is about three orders of magnitude higher than that in our previous work \cite{2018ApJ...852...95N}. One should notice that the background heating is also included in this work, but the radiative cooling inside the current sheet increases sharply with time and quickly becomes much greater than the background heating.
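These two dimensionless initial cooling rates can be reproduced directly from Eqs.~(4)--(6). The sketch below does so under the assumption (ours, for illustration only) that energy-loss rates are normalized by $B_\star^2/(\mu_0 t_\star)$; it recovers the quoted values to within a few percent:
\begin{verbatim}
# Illustrative reproduction of the dimensionless L_r0 and L_rad0.
import numpy as np

mu0, k_B, e = 4.0e-7 * np.pi, 1.3807e-23, 1.6022e-19
B_star, t_star = 0.05, 2.9e-3
rate_scale = B_star**2 / (mu0 * t_star)   # assumed rate unit [W/m^3]

n_n0 = 0.5e21
n_i0 = n_e0 = 1e-4 * n_n0 / (1.0 - 1e-4)  # f_i0 = 0.01%
T0 = 8400.0
T0_eV = k_B * T0 / e

# Eqs. (5) and (4): simple model of the previous work
u = 13.6 / T0_eV
gam = n_n0 * n_e0 * 2.91e-14 / (0.232 + u) * u**0.39 * np.exp(-u)
L_rad0 = gam * 5.28e-18

# Eq. (6): three-level hydrogen model used in this work
E, Y = np.array([3.54, 8.28]), np.array([0.15e-3, 0.065])
s = np.sum(E * Y * np.exp(-e * E / (k_B * T0)))
L_r0 = 1.6022e-25 * n_e0 * (n_n0 + n_i0) * 8.63e-6 * T0**-0.5 * s

print(L_r0 / rate_scale, L_rad0 / rate_scale)
# -> ~5.4e-6 and ~6.4e-9, close to the quoted 5.6927e-6 and
#    6.7225e-9; their ratio (~840) confirms the roughly
#    three-orders-of-magnitude gap stated in the text.
\end{verbatim}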
In this work, we have calculated the time evolution of the total radiated energy $Q_{rad}=\int_{0}^{1}\int_{0}^{2} L_{r}\, dxdy$, the total background heating $Q_{bh}=\int_{0}^{1}\int_{0}^{2} H_{0}\, dxdy$, the Joule heating $Q_{Joule}=\int_{0}^{1}\int_{0}^{2} \eta J^2\, dxdy$, the frictional heating between ions and neutral particles $Q_{in}$, and the viscous heating of ions and neutral particles $Q_{vis}$ in the whole simulation domain. Joule heating is several orders of magnitude higher than the other heating terms in all of our simulations (not shown). Fig.~5 shows the time evolution of $Q_{Joule}+Q_{in}+Q_{vis}$, $Q_{rad}$ and $Q_{bh}$ in Cases~ALr, CLr and ELr. Most of the total generated thermal energy $Q_{Joule}+Q_{in}+Q_{vis}$ is radiated in all three cases. Stronger reconnection magnetic fields generate more Joule heating, and the radiative cooling also becomes much greater as more neutrals inside the current sheet are ionized. Therefore, both the Joule heating and the radiative cooling in Case~ELr are the greatest of the three cases. Comparing Cases~ALr and CLr with Cases~A and C in our previous work \cite{2018ApJ...852...95N}, one finds that both the generated thermal energy and the radiated heat in Case~ALr are a little higher than those in Case~A at the same time point, and the radiated heat in Case~CLr is clearly higher than that in Case~C. The stronger radiative cooling model in this work makes the neutrals inside the current sheet harder to ionize for the same reconnection magnetic fields and plasma $\beta$. Though the initial radiative cooling in this work is about three orders of magnitude higher than that in our previous work \cite{2018ApJ...852...95N}, the values of the radiated heat in Cases~ALr, CLr and ELr are of the same order of magnitude as those in Cases~A, C and E during the reconnection process, before the plasmas are nearly fully ionized. In the reconnection process, the strong ionization rate $\Gamma_i^{ion}$ in the previous work makes the simple radiative cooling $L_{rad}=\Gamma_i^{ion} \phi_{eff}$ large enough to be comparable with the radiative cooling applied in this work for the same plasma $\beta$. Fig.~5 also shows that the background heating $Q_{bh}$ is very small compared with $Q_{rad}$ and $Q_{Joule}+Q_{in}+Q_{vis}$, and it can be ignored during the magnetic reconnection process. In the paper by Alvarez Laguna et al. (2017), the heating term was of the same order as the radiative cooling term, so including the background heating strongly affected the reconnection process in their work \cite{2017ApJ...842..117A}. Fig.~6(a) shows the half length $L_{sim}$ and half width $\delta_{sim}$ of the current sheets in Cases~ALr, CLr, and ELr. The half length of the current sheet is also about $0.5-0.6$ during the later stages of magnetic reconnection in all three cases. The half width of the current sheet eventually drops to about $0.012$ in Case~ALr, $0.017$ in Case~CLr and $0.008$ in Case~ELr. The reconnection rate is calculated as $M_{sim}=\eta^{\ast} j_{max}/(V_A^{\ast} B_{up})$, where $j_{max}$ is the maximum value of the out-of-plane current density $j_{z}$, located at $(x,y)=(0,0)$ in all the simulations in this work. $B_{up}$ is $B_x$ evaluated at $(0, \delta_{sim})$, where $\delta_{sim}$ is defined as the half-width at half-maximum of $j_{z}$. $V_{A}^{\ast}$ is the relevant Alfv\'en velocity defined using $B_{up}$ and the total number density $n^{\ast}$ at the location of $j_{max}$.
$\eta^{\ast}$ is the magnetic diffusion coefficient defined in Equation (3), evaluated at the location of $j_{max}$. The solid lines in Fig.~6(b) represent the time evolution of the reconnection rates in Cases~ALr, CLr and ELr. The reconnection rate $M_{sim}$ can reach 0.121 in Case~CLr, which is the highest of the three cases. The maximum reconnection rate in Case~ELr is only around 0.016, which is the lowest of the three cases. We have also calculated the reconnection rates $M_{SP}$ predicted by the Sweet-Parker model, $M_{SP}=1/\sqrt{S_{sim}}$. The Lundquist number $S_{sim}$ in the simulations is defined by $S_{sim}=V_{A}^{\ast}L_{sim}/\eta^{\ast}$ (these diagnostics are collected in the short sketch below). The dash-dotted lines in Fig.~6(b) represent $M_{SP}$ in the three cases. One can find that the value of $M_{SP}$ predicted by the Sweet-Parker model is about three times smaller than the realistic reconnection rate $M_{sim}$ during the later reconnection stage in Case~CLr. The values of the reconnection rate $M_{sim}$ are much closer to the values of $M_{SP}$ in Cases~ALr and ELr. As discussed above and shown in Fig.~3, the decoupling of ion and neutral inflows is obvious only in Case~CLr, with its much higher plasma $\beta$. Therefore, the decoupling of ion and neutral inflows can result in a much faster reconnection process than that predicted by the Sweet-Parker model in Case~CLr, while Sweet-Parker-type magnetic reconnection appears in Cases~ALr and ELr. Alvarez Laguna et al. (2017) concluded that strong radiative cooling produced faster reconnection than without radiation \cite{2017ApJ...842..117A}. However, the strong radiative cooling in Cases~ALr and ELr in our simulations does not result in a faster reconnection than that predicted by the Sweet-Parker model. One should notice that recombination dominated over ionization in all the high $\beta$ simulations in Alvarez Laguna et al. (2017) \cite{2017ApJ...842..117A}. The strong radiative cooling caused the ionization degrees at the reconnection X-points to decrease sharply with time in the cases with high initial ionization degrees, and the recombination effect and the decoupling of ion and neutral inflows were even more significant in the cases with strong radiative cooling than in those without radiative cooling. Though the same strong radiative cooling model is included in our simulations, the ionization degrees at the reconnection X-points increase with time and the ionization is always faster than recombination in Cases~ALr, CLr and ELr, especially in Cases~ALr and ELr with very low plasma $\beta$ and strong reconnection magnetic fields. The decoupling of ion and neutral inflows appears significantly only in Case~CLr. We can conclude that the decoupling of ion and neutral inflows is not significant and that recombination does not obviously affect magnetic reconnection in strongly magnetized regions (above 500 G) around the solar TMR. As presented in Table~1, the maximum current densities and reconnection outflow velocities in Cases~ALr and ELr are higher than those in Cases~A and E, respectively. However, these variables increase with time in our simulations, and the runs in Cases~ALr and ELr lasted longer than those in Cases~A and E. Therefore, the maximum current densities and reconnection outflow velocities could have reached higher values in Cases~A and E if the simulation runs had lasted longer. The current sheet width in Case~E reached a very small value ($0.003L_0$) and the maximum reconnection rate was higher than that in Case~A.
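The rate diagnostics above amount to a few lines of arithmetic; the following minimal sketch collects them, with made-up illustrative inputs rather than simulation output:
\begin{verbatim}
# Reconnection-rate diagnostics M_sim and M_SP (dimensionless).
import numpy as np

def recon_rates(eta_star, j_max, VA_star, B_up, L_sim):
    """M_sim = eta* j_max / (V_A* B_up); M_SP = 1/sqrt(S_sim)."""
    M_sim = eta_star * j_max / (VA_star * B_up)
    S_sim = VA_star * L_sim / eta_star     # Lundquist number
    M_SP = 1.0 / np.sqrt(S_sim)
    return M_sim, M_SP

# Hypothetical input values, for illustration only:
print(recon_rates(eta_star=1e-5, j_max=40.0, VA_star=1.0,
                  B_up=0.9, L_sim=0.55))
\end{verbatim}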
One should note, however, that the radiative cooling model in the prior work became particularly unrealistic in situations such as the latter stages of the Case~E simulation, when the hydrogen plasma in the current sheet became nearly fully ionized. \begin{table} \caption{The important variables in Cases~ALr, CLr and ELr in this work and in Cases~A, C and E in our previous work \cite{2018ApJ...852...95N}. The maximum values of the ionization fraction $f_i$, the plasma temperature $T_i$, the current density $J_z$, the difference between the ion and neutral inflows $\vert V_{iy}-V_{ny} \vert$, the outflow ion velocity $V_{ix}$, the magnetic reconnection rate $M_{sim}$ and the ion density $n_i$, together with the minimum value of the current sheet width $\delta_{sim}$, during the later reconnection stage are presented.} \label{models} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & Max ($f_i$) & Max ($T_i$) & Max ($J_z$) & Max ($\vert V_{iy}-V_{ny} \vert$) & Max ($V_{ix}$) & Max ($M_{sim}$) & $\delta_{sim}$ & Max ($n_i$) \\ \hline \hline Case ALr & $12\%$ & $1.95\times10^4$~K & 44 & 0.185 km/s & 21.1 km/s & 0.025 & 0.012 & 0.0860 \\ \cline{1-9} Case CLr & $0.8\%$ & $1.6\times10^4$~K & 6 & 0.692 km/s & 6.3 km/s & 0.121 & 0.017 & 0.0027 \\ \cline{1-9} Case ELr & $72\%$ & $2.3\times10^4$~K & 206 & 0.076 km/s & 52.3 km/s & 0.016 & 0.008 & 0.5884 \\ \cline{1-9} \hline \hline Case A & 45\% & $1.6\times10^4$~K & 29 & 0.048 km/s & 13.9 km/s & 0.030 & 0.015 & 0.2873 \\ \cline{1-9} Case C & 3\% & $1.6\times10^4$~K & 5.5 & 0.182 km/s & 6.7 km/s & 0.109 & 0.018 & 0.0094 \\ \cline{1-9} Case E & 100\% & $4.6\times10^4$~K & 192 & 0.061 km/s & 16.4 km/s & 0.035 & 0.003 & 0.5133 \\ \cline{1-9} \hline \end{tabular} \end{table} \begin{figure} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig1CASEALRJzfi.png} \includegraphics[width=0.33\textwidth, clip=]{fig1CASECLRJzfi.png} \includegraphics[width=0.33\textwidth, clip=]{fig1CASEELRJzfi.png}} \caption{(a), (b) and (c) show the current density $J_z$ (left) and ionization degree $f_i$ (right) in one quarter of the domain at $t=6.897$, $t=12.696$ and $t=26.968$ in Case~ALr; (d), (e) and (f) show the same at $t=21.948$, $t=37.09$ and $t=62.779$ in Case~CLr; (g), (h) and (i) show the same at $t=5.032$, $t=7.67$ and $t=16.092$ in Case~ELr. The black contour lines represent the out-of-plane component of the magnetic flux $A_z$ in these 2D figures. } \label{fig.1} \end{figure} \begin{figure} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig2CASEALRTcuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig2CASECLRTcuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig2CASEELRTcuty.png} } \caption{(a) shows the distributions of the ion temperature $T_i$ and neutral temperature $T_n$ in Kelvin at $x=0$ along the $y$ direction at $t=6.897$, $t=12.696$ and $t=26.968$ in Case~ALr; (b) shows the same at $t=21.948$, $t=37.09$ and $t=62.779$ in Case~CLr; (c) shows the same at $t=5.032$, $t=7.67$ and $t=16.092$ in Case~ELr.
} \label{fig.2} \end{figure} \begin{figure} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig3CASEALRvicuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASEALRvincuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASEALRvicutx.png}} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig3CASECLRvicuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASECLRvincuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASECLRvicutx.png}} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig3CASEELRvicuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASEELRvincuty.png} \includegraphics[width=0.33\textwidth, clip=]{fig3CASEELRvicutx.png}} \caption{Panels (a), (d) and (g) show the ion inflow speed $V_{iy}$ across the current sheet (in dimensional units) at $x=0$ in Cases~ALr, CLr and ELr. Panels (b), (e) and (h) show the difference in speed between the ion and neutral inflows, $V_{iy}-V_{ny}$, across the current sheet at $x=0$ in Cases~ALr, CLr and ELr. Panels (c), (f) and (i) show the ion outflow $V_{ix}$ along the current sheet at $y=0$ in Cases~ALr, CLr and ELr.} \label{fig.3} \end{figure} \begin{figure} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig4CASEALRIR.png} \includegraphics[width=0.33\textwidth, clip=]{fig4CASECLRIR.png} \includegraphics[width=0.33\textwidth, clip=]{fig4CASEELRIR.png} } \caption{Panels (a), (b) and (c) show the time-dependent contributions of four components to $\partial n_i/\partial t$ in Cases~ALr, CLr and ELr. The four contributions are the average values inside the current sheet: the loss due to recombination, $-\Gamma_i^{rec}$; the loss due to the outflow, $\partial(n_iV_{ix})/\partial x$; the gain due to the inflow, $-\partial(n_iV_{iy})/\partial y$; and the gain due to ionization, $\Gamma_i^{ion}$. } \label{fig.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=0.33\textwidth, clip=]{fig5CASEALREN.png} \includegraphics[width=0.33\textwidth, clip=]{fig5CASECLREN.png} \includegraphics[width=0.33\textwidth, clip=]{fig5CASEELREN.png} } \caption{(a), (b) and (c) show the time evolution of the thermal energy gain and loss integrated over the simulation domain in Cases~ALr, CLr and ELr, respectively. The black solid lines represent the generated thermal energy $Q_{Joule}+Q_{in}+Q_{vis}$; the red dash-dotted lines represent the radiated energy $Q_{rad}$; the blue dashed lines represent the background heating $Q_{bh}$. } \label{fig.5} \end{figure} \begin{figure} \centerline{ \includegraphics[width=0.33\textwidth, clip=]{fig6CASELdeltat.png} \includegraphics[width=0.33\textwidth, clip=]{fig6CASERCRt.png} } \caption{(a) shows the half length $L_{sim}$ and half width $\delta_{sim}$ of the current sheets in Cases~ALr, CLr and ELr. (b) shows the time-dependent reconnection rates in Cases~ALr, CLr, and ELr. } \label{fig.6} \end{figure} \section{Summary and discussion}\label{s:summary} We have used the reactive multi-fluid plasma-neutral module of the HiFi modeling framework to study magnetic reconnection in strongly magnetized regions around the solar TMR, with a more realistic radiative cooling model computed using the OPACITY project and CHIANTI databases \cite{2012ApJ...751...75G}. Numerical results with different magnetic field strengths have been presented, and we have compared the results in this work with those in our previous work \cite{2018ApJ...852...95N} that included a simpler radiative cooling model.
We summarize our results as follows: (1) The more realistic radiative cooling model does not qualitatively change the characteristics of magnetic reconnection in strongly magnetized regions around the solar TMR. In this work, the rate of ionization of the neutral component is still faster than recombination within the current sheet region even when the initial plasma $\beta$ is as high as $\beta_0=1.46$. The ionized and neutral fluid flows are also well-coupled throughout the reconnection region for the low $\beta$ plasmas; significant decoupling of ion and neutral inflows appears in the higher $\beta$ case with $\beta_0=1.46$, which leads to a reconnection rate about three times faster than predicted by the Sweet-Parker model. The reconnection process more closely resembles the Sweet-Parker model when plasma $\beta$ is lower in our low Lundquist number simulations. (2) In the cases with stronger reconnection magnetic fields and lower plasma $\beta$, more thermal energy is generated by Joule heating and more thermal energy is radiated during the magnetic reconnection process. The strong radiative cooling does not result in faster magnetic reconnection in strongly magnetized regions. Though most of the generated thermal energy is radiated, the maximum temperature inside the current sheet can still reach a higher value in a lower $\beta$ case. The maximum temperature is above $2\times10^4$~K when the reconnection magnetic field is higher than $500$~G, and the maximum reconnection outflow velocity is above $50$~km\,s$^{-1}$ when the initial reconnection magnetic field is as high as $1500$~G. (3) The more realistic radiative cooling model quantitatively changes the values of some variables in the magnetic reconnection process around the solar TMR. The maximum ionization fraction is lower than that in our previous work \cite{2018ApJ...852...95N} for the same plasma $\beta$. The generated Joule heating and the radiated thermal energy in each case in this work are higher than the corresponding ones in our previous work. Our numerical results show that the ion and neutral fluids are well-coupled as a single fluid throughout the reconnection region in strongly magnetized regions around the solar TMR. Though most of the generated thermal energy is always dissipated by strong radiative cooling in such a high density environment, the ionization is still faster than recombination, and no acceleration of the magnetic reconnection rate is observed in the low $\beta$ environment with strong magnetic fields (above $500$~G), which is significantly different from the previous high $\beta$ simulations \cite{2012ApJ...760..109L, 2013PhPl...20f1202L, 2015ApJ...805..134M, 2017ApJ...842..117A}. The stronger reconnection magnetic fields result in higher plasma temperatures inside the current sheet; the plasma can be heated above $2\times10^4$~K when the reconnection magnetic fields are above $500$~G. Since the plasmas are still not fully ionized in Case~ELr, with reconnection magnetic fields as strong as $1500$~G, the maximum temperature above $4\times10^4$~K found in our previous work \cite{2018ApJ...852...95N} does not appear in our simulations with the stronger radiative cooling model. However, we should point out that all the simulation runs end because of grid errors and limited resolution, not because the magnetic reconnection processes naturally stop.
Both the ionization fraction and the plasma temperature increase with time during the reconnection process, and could possibly reach higher values if the simulations were allowed to run for a longer time. We can still expect that the plasmas may become fully ionized and higher temperatures may be achieved during a magnetic reconnection process in the strongly magnetized solar TMR. By comparing the simulation results in this work and our previous work \cite{2018ApJ...852...95N}, we can conclude that the simple radiative cooling model in our previous work is good enough for studying the characteristics of magnetic reconnection around the solar TMR with a strong reconnection magnetic field ($>100$~G) and low plasma $\beta$ ($<1.46$) inside the current sheet. In such a reconnection process, before the plasmas are fully ionized, the strong ionization rate $\Gamma_i^{ion}$ in the previous work allows the simple radiative cooling $L_{rad}=\Gamma_i^{ion} \phi_{eff}$ to be comparable with the radiative cooling applied in this work for the same plasma $\beta$. However, the more realistic radiative cooling model applied in this work is necessary for studying the variations of the plasma temperature and ionization fraction inside the current sheet. It is also important for studying the energy conversion during the magnetic reconnection process when the hydrogen gas approaches full ionization. Since high temperature plasmas ($>2\times10^4$~K) are likely to be generated in the Ellerman Bomb-type events observed by the IRIS satellite, the hydrogen gas in the immediate neighborhood of such an event is expected to become fully ionized, necessitating the more realistic radiative cooling function for studying Ellerman Bomb-type events and their observables. We note that the simulation scale is only $100$~m in both this work and our previous work \cite{2018ApJ...852...95N}, which is much smaller than the observable scales of the brightening events in the solar atmosphere. Future work will present numerical results on larger scales and with much higher Lundquist numbers to reveal more of the physical mechanisms guiding magnetic reconnection processes and observables around the solar TMR. \begin{acknowledgments} The authors are grateful to the referee for the valuable comments and suggestions. This research is supported by NSFC Grants 11573064, 11203069, 11333007, 11303101 and 11403100; the Western Light of Chinese Academy of Sciences 2014; the Youth Innovation Promotion Association CAS 2017; Program 973 grant 2013CBA01503; NSFC-CAS Joint Grant U1631130; CAS grant QYZDJ-SSW-SLH012; and the Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (nsfc2015-460, nsfc2015-463, the second phase) under Grant No. U1501501. The authors gratefully acknowledge the computing time granted by the Yunnan Astronomical Observatories and the National Supercomputer Center in Guangzhou, provided on the facilities of the Supercomputing Platform, as well as the help from all the staff of the Platform. J.L. also thanks the National Supercomputing Center in Tianjin for its help. V.S.L. acknowledges support from the US National Science Foundation (NSF). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. N.A.M. acknowledges support from NSF SHINE grant AGS-135842 and DOE grant DE-SC0016363. \end{acknowledgments}
{ "timestamp": "2018-10-26T02:13:11", "yymm": "1804", "arxiv_id": "1804.05631", "language": "en", "url": "https://arxiv.org/abs/1804.05631" }
\section{Introduction} A remarkable theorem due to Freed, Hopkins and Teleman \cite{FHTI,FHTII,FHTIII} relates the representation theory of the loop group $LG$ of a compact Lie group $G$ to the equivariant twisted K-theory of $G$. In the special case of a connected, simply connected and simple Lie group, the theorem states that there is an isomorphism of rings $R_k(G)\simeq \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})})$. Here $R_k(G)$ is the Verlinde ring of level $k>0$ positive energy representations of the basic central extension $LG^{\tn{bas}}$ of the loop group, while $\ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})})$ is the equivariant K-homology of $G$ with twisting (Dixmier-Douady class) $k+\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$. The shift $\ensuremath{\textnormal{h}^\vee}$ is a Lie theoretic constant associated to $G$ called the dual Coxeter number. Freed, Hopkins and Teleman work with twisted K-theory, which is related to twisted K-homology by a Poincar\'e duality isomorphism \cite{TuPoincare}. In the proof, Freed, Hopkins and Teleman construct a map, at the level of cycles, \[ \left(\begin{array}{c} \text{level $k$ positive energy}\\ \text{representations of } LG^{\tn{bas}} \end{array}\right) \quad \dashrightarrow \quad \left(\begin{array}{c} \text{$(k+\ensuremath{\textnormal{h}^\vee})$-twisted}\\ \text{K-theory of } G \end{array}\right). \] The construction involves an interesting family of algebraic Dirac operators parametrized by the space of connections on a $G$-bundle over $S^1$. Computing the equivariant twisted K-theory of $G$ using techniques from algebraic topology, they are able to show that their map is an isomorphism. It is less clear how to construct a map in the opposite direction (from twisted K-homology to $R_k(G)$) \emph{at the level of cycles}. One goal of this article is to describe such a map for \emph{analytic cycles} or Fredholm modules, which are the cycles for the analytic description of twisted K-homology (cf. \cite{AtiyahKHomology, HigsonRoe, KasparovNovikov}). A special class of analytic cycles are those which come from Baum-Douglas-type \emph{geometric cycles} (cf. \cite{BaumDouglas, BaumCareyWang}), and we also study the specialization of our map to such cycles and obtain a correspondingly more explicit description. \ignore{\begin{theorem} \label{Thm1} Let $G$ be a compact, connected, simply connected, simple Lie group. Let $\ensuremath{\mathcal{A}}$ be a twisting with Dixmier-Douady class $\tn{DD}(\ensuremath{\mathcal{A}})=k+\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$, $k>0$. The map \[ \scr{I} \colon \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow R_k(G)\] defined in detail in Section \ref{sec:DefI}, is inverse to the Freed-Hopkins-Teleman isomorphism. \end{theorem}} We should remark at the outset that we do not directly build a positive energy representation from a cycle, which would be interesting and perhaps preferable. The output will instead be a formal character. Let us give an overview of the construction. The data used to describe a twist in the analytic picture is a $G$-equivariant Dixmier-Douady bundle $\ensuremath{\mathcal{A}}$ over $G$; this is a locally trivial bundle of elementary $C^\ast$ algebras over $G$, equipped with a $G$-action covering the conjugation action on the base.
Such a bundle has an invariant, the Dixmier-Douady class $\tn{DD}(\ensuremath{\mathcal{A}}) \in H^3_G(G,\ensuremath{\mathbb{Z}})\simeq \ensuremath{\mathbb{Z}}$, and we assume $\tn{DD}(\ensuremath{\mathcal{A}})=\ell>0$. Consider an analytic cycle representing a class $x$ in the twisted K-homology group $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$. Restrict $x$ from $G$ to a tubular neighborhood $U$ of a maximal torus $T$ inside $G$. Over $U$ we show that there is a Morita equivalent Dixmier-Douady bundle $\ensuremath{\mathcal{A}}_U$ which has an especially simple structure: its algebra of continuous sections can be presented as a twisted crossed product algebra $\Pi \ltimes_\tau C_0(\ensuremath{\mathcal{U}})$, where $\tau$ is the twist, $\Pi$ is the integer lattice, and $\ensuremath{\mathcal{U}}$ is a $\Pi$-covering space of $U$. Applying tools from KK-theory (a Green-Julg-type isomorphism and the analytic assembly map), we obtain an element in the K-theory of the group $C^\ast$ algebra for $T\ltimes \Pi^\tau$. There is a map from the latter K-group into the space of formal characters $R^{-\infty}(T)$ for $T$. The image of $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ under this composition is $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\,\ell}$, the subspace of formal characters that are alternating under the action of the affine Weyl group at level $\ell$. For $\ell>\ensuremath{\textnormal{h}^\vee}$, the space of such characters is canonically isomorphic to $R_k(G)$, $k=\ell-\ensuremath{\textnormal{h}^\vee}$, via the Weyl-Kac character formula. Our construction provides an elaboration of a remark made by Freed, Hopkins and Teleman in \cite[Remark 3.5]{FHTIII}. They comment that there ought to be an inverse map from the twisted K-theory of $T$ to a suitable `representation ring' for $T\ltimes \Pi^\tau$ perhaps defined using $C^\ast$ algebras, involving an analogue of `integration over $\ensuremath{\mathfrak{t}}$'. Our `integration over $\ensuremath{\mathfrak{t}}$' map is the analytic assembly map. We study the specialization of our map to `D-cycles' in the sense of Baum-Carey-Wang \cite{BaumCareyWang}, which are an analogue of Baum-Douglas cycles in geometric K-homology \cite{BaumDouglas}. A D-cycle for $\ensuremath{\textnormal{K}}_0(X,\ensuremath{\mathcal{A}})$ is a 4-tuple $(M,E,\Phi,\ensuremath{\mathcal{S}})$ consisting of a compact Riemannian manifold $M$, a Hermitian vector bundle $E$ over $M$, a continuous map $\Phi \colon M \rightarrow X$, and a Morita morphism $\ensuremath{\mathcal{S}} \colon \ensuremath{\textnormal{Cliff}}(TM) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}$. If $\ensuremath{\mathcal{A}}=\ul{\ensuremath{\mathbb{C}}}$ is trivial, $\ensuremath{\mathcal{S}}$ is equivalent to a spin-c structure on $M$, and we recover an ordinary Baum-Douglas cycle. In the case of a D-cycle $(M,E,\Phi,\ensuremath{\mathcal{S}})$ representing a class $x \in \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}})$, we prove that its image under our map is the $T$-equivariant $L^2$-index of a first-order elliptic operator on a $\Pi$-covering space of $\Phi^{-1}(U)$, where $U \supset T$ is a tubular neighborhood of the chosen maximal torus $T$ in $G$. Let us give a summary of the proof. 
Using $(M,E,\Phi,\ensuremath{\mathcal{S}})$ we construct an analytic cycle $(H,\rho,\st{D})$ representing $x$: the Hilbert space $H$ is the space of $L^2$ sections of a smooth Hilbert bundle over $M$ and $\st{D}$ is a Dirac operator acting on smooth sections. Because the bundle has infinite dimensional fibres, $\st{D}$ is not necessarily Fredholm, but the action of the $C^\ast$ algebra $C(\ensuremath{\mathcal{A}})$ (continuous sections of $\ensuremath{\mathcal{A}}$) along the fibres provides the needed analytic control to make this a cycle. After passing to the Morita equivalent Dixmier-Douady bundle over $U \supset T$, the fibres are replaced with copies of $L^2(\Pi)$ (tensored with a finite dimensional bundle); using a correspondence that is well-known (for example in the context of Atiyah's $L^2$-index theorem), the operator $\st{D}$ then has an alternate interpretation as a Dirac-type operator on a $\Pi$-covering space $\ensuremath{\mathcal{Y}}$ of $\Phi^{-1}(U)$. Applying the analytic assembly map gives the $T$-equivariant $L^2$-index of this operator. The initial motivation for this work was to understand the relationship between two approaches---one via $D$-cycles for twisted K-homology of $G$ \cite{MeinrenkenKHomology} and the other via index theory on non-compact manifolds \cite{LMSspinor,LSQuantLG}---to defining a representation-theoretic `quantization' of a Hamiltonian $LG$-space. The construction of a suitable quantization has interesting applications, for example to the Verlinde formula for moduli spaces of flat connections on Riemann surfaces, cf. \cite{LecturesGroupValued} for an overview. A corollary of our results is that the two approaches agree with each other. Indeed for $x$ represented by a D-cycle, the first-order elliptic operator on $\ensuremath{\mathcal{Y}}$ mentioned above coincides with the operator studied in \cite{LSQuantLG}. Our construction thus connects the index of this operator with the image of $x$ in $R_k(G)$ under the Freed-Hopkins-Teleman isomorphism. \ignore{ Let $(\ensuremath{\mathcal{M}},\omega_{\ensuremath{\mathcal{M}}},\Phi_{\ensuremath{\mathcal{M}}})$ be a proper Hamiltonian $LG$-space, that is, a Banach manifold $\ensuremath{\mathcal{M}}$ equipped with a smooth $LG$-action, a symplectic form, and a proper moment map $\Phi_{\ensuremath{\mathcal{M}}} \colon \ensuremath{\mathcal{M}} \rightarrow L\ensuremath{\mathfrak{g}}^\ast$ (see Section \ref{sec:HamLGSpace}). The subgroup $\Omega G \subset LG$ of based loops acts freely and properly on $L\ensuremath{\mathfrak{g}}^\ast$ and $\ensuremath{\mathcal{M}}$, and the smooth quotient $M=\ensuremath{\mathcal{M}}/\Omega G \rightarrow L\ensuremath{\mathfrak{g}}^\ast/\Omega G=G$ is an example of a \emph{quasi-Hamiltonian $G$-space} \cite{AlekseevMalkinMeinrenken}. It was shown in \cite{DDDFunctor} (\cite{LMSspinor} for an alternate proof) that quasi-Hamiltonian $G$-spaces naturally give rise to D-cycles for the twisted K-homology of $G$. Based on this, Meinrenken in \cite{MeinrenkenKHomology} defined the quantization of a (pre-quantized) quasi-Hamiltonian $G$-space in terms of a push-forward map to twisted K-homology of $G$. By the Freed-Hopkins-Teleman theorem, the result can be identified with an element of the Verlinde ring, hence seems a reasonable candidate for the quantization of the $LG$-space $\ensuremath{\mathcal{M}}$. 
In \cite{LMSspinor,LSQuantLG} we constructed a finite dimensional spin-c submanifold $\ensuremath{\mathcal{Y}} \subset \ensuremath{\mathcal{M}}$ and then studied an index pairing with the Dirac operator, with the result being a formal character of $T$ alternating under the action of the affine Weyl group. This suggests that alternatively one might define the quantization of $\ensuremath{\mathcal{M}}$ as the element of the Verlinde ring whose Weyl-Kac numerator equals this formal character. } Throughout the paper we have restricted ourselves to the special case that $G$ is connected, simply connected and simple, but the methods likely generalize. We will fairly easily be able to check that the map $\scr{I}\colon \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}}) \rightarrow R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\ell}$ that we construct is surjective. With some additional effort and a little topology, together with a known (and relatively easy) case of the Baum-Connes conjecture, we could use the construction described here to prove a weak form of the Freed-Hopkins-Teleman theorem (that $\scr{I}$ is also injective modulo torsion at primes dividing the order of the Weyl group). We hope to return to these questions in the future. There is an overlap of some of our methods with interesting work of Doman Takata on Hamiltonian $LT$-spaces \cite{Takata1,Takata2}. In particular, Takata also studies an assembly map into the K-theory of a twisted group $C^\ast$ algebra of $T \times \Pi$. Takata has built infinite dimensional analogues of several well-known objects from index theory/non-commutative geometry in the setting of Hamiltonian $LT$-spaces. It would be interesting to explore further connections with his work. Sections 2 and 3 briefly introduce twisted K-homology, loop groups, and the Freed-Hopkins-Teleman theorem. Section 4 contains some results on twisted convolution algebras and generalized fixed-point algebras. In Section 5 we prove some basic facts about the $C^\ast$ algebra of the semi-direct product $T\ltimes \Pi^\tau$ that plays a key role. In Section 6 we construct the map, denoted $\scr{I}$, from $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ to $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}}, \, \ell}$, and prove that it is inverse to the Freed-Hopkins-Teleman map. Section 7 studies the specialization of $\scr{I}$ to geometric cycles (D-cycles in the sense of Baum-Carey-Wang), and briefly describes the application to Hamiltonian loop group spaces. For the reader's convenience, we have included an appendix with proofs of a couple of standard (but not so easy to find) results used in Section 7. \vspace{0.3cm} \noindent \textbf{Acknowledgements.} I especially thank Eckhard Meinrenken and Yanli Song for interesting discussions about quantization of Hamiltonian $LG$-spaces over the past couple of years. The work described in this paper is motivated by our joint work on spinor modules for Hamiltonian $LG$-spaces \cite{LMSspinor,LSQuantLG}. I also thank Nigel Higson and Shintaro Nishikawa for helpful suggestions and for answering several questions about KK-theory. \vspace{0.3cm} \noindent \textbf{Notation.} The $C^\ast$ algebras of bounded (resp. compact) operators on a Hilbert space $H$ will be denoted $\ensuremath{\mathbb{B}}(H)$ (resp. $\ensuremath{\mathbb{K}}(H)$). 
If $(V,g)$ is a finite dimensional real Euclidean vector space, $\ensuremath{\textnormal{Cliff}}(V)$ denotes the complex Clifford algebra of $V$, the $\ensuremath{\mathbb{Z}}_2$-graded complex algebra generated in degree 1 by the elements $v \in V$ subject to the relation $v^2=\|v\|^2$. For $V$ a real Euclidean vector bundle over $M$, $\ensuremath{\textnormal{Cliff}}(V)$ denotes the bundle of algebras with fibres $\ensuremath{\textnormal{Cliff}}(V)_m=\ensuremath{\textnormal{Cliff}}(V_m)$. On a Riemannian manifold $M$, we write $\ensuremath{\textnormal{Cl}}(M)$ for the algebra of continuous sections of $\ensuremath{\textnormal{Cliff}}(TM)$ vanishing at infinity. If $K$ is a compact Lie group, $\ensuremath{\tn{Irr}}(K)$ denotes the set of isomorphism classes of irreducible representations of $K$, and $R(K)$ is the representation ring. The formal completion $R^{-\infty}(K)=\ensuremath{\mathbb{Z}}^{\ensuremath{\tn{Irr}}(K)}$ consists of formal infinite linear combinations of irreducibles $\pi \in \ensuremath{\tn{Irr}}(K)$ with coefficients in $\ensuremath{\mathbb{Z}}$. When discussing a $U(1)$ central extension $\Gamma^\tau$ of a group $\Gamma$, we use the notation $\ensuremath{\widehat{\gamma}}$ to denote some lift to $\Gamma^\tau$ of an element $\gamma \in \Gamma$. Throughout $G$ denotes a compact, connected, simply connected, simple Lie group with Lie algebra $\ensuremath{\mathfrak{g}}$. Let $T \subset G$ be a maximal torus with Lie algebra $\ensuremath{\mathfrak{t}}$. We fix a positive Weyl chamber $\ensuremath{\mathfrak{t}}_+$, and let $\ensuremath{\mathcal{R}}_+$ (resp. $\ensuremath{\mathcal{R}}_-$) denote the positive (resp. negative) roots. The half sum of the positive roots is denoted $\rho$, and $\ensuremath{\textnormal{h}^\vee}$ is the dual Coxeter number of $G$. Since $G$ is simply connected, the integer lattice $\Pi=\ensuremath{\textnormal{ker}}(\exp \colon \ensuremath{\mathfrak{t}} \rightarrow T)$ coincides with the coroot lattice. The dual $\Pi^\ast=\ensuremath{\textnormal{Hom}}(\Pi,\ensuremath{\mathbb{Z}})$ is the (real) weight lattice. There is a unique invariant inner product $B$ on $\ensuremath{\mathfrak{g}}$, the \emph{basic} inner product, with the property that the squared length of the short co-roots is $2$. We often use the basic inner product to identify $\ensuremath{\mathfrak{g}} \simeq \ensuremath{\mathfrak{g}}^\ast$, and we sometimes write $B^\flat$, $B^\sharp$ for the musical isomorphisms when we want to emphasize this. The basic inner product has the property that $B(\Pi,\Pi)\subset \ensuremath{\mathbb{Z}}$, and thus $B^\flat(\Pi)\subset \Pi^\ast$. \section{Twisted K-homology} Here we give a brief introduction to the analytic description of twisted K-homology. Our discussion is similar to \cite{MeinrenkenKHomology, MeinrenkenConjugacyClasses} where one can find further details. For further background on analytic K-homology and KK-theory, see for example \cite{HigsonRoe, HigsonPrimer, KasparovNovikov}. We also recall the definition of `D-cycles' due to Baum, Carey and Wang \cite{BaumCareyWang}, which are a version of Baum-Douglas-type geometric cycles \cite{BaumDouglas} for twisted K-homology. Let $X$ be a locally compact space. 
A \emph{Dixmier-Douady bundle} over $X$ is a locally trivial bundle of $C^\ast$ algebras $\ensuremath{\mathcal{A}} \rightarrow X$, with typical fibre isomorphic to the compact operators $\ensuremath{\mathbb{K}}(H)$ for a (separable) Hilbert space $H$, and structure group the projective unitary group $PU(H)$ with the strong operator topology. Restricting to a sufficiently small open $U \subset X$, $\ensuremath{\mathcal{A}}|_U$ is isomorphic to $\ensuremath{\mathbb{K}}(\ensuremath{\mathcal{H}})$ for a bundle of Hilbert spaces $\ensuremath{\mathcal{H}} \rightarrow U$, but this need not be true globally. The notation $\ensuremath{\mathcal{A}}^{\ensuremath{\textnormal{op}}}$ denotes the Dixmier-Douady bundle obtained by taking the opposite algebra structure on the fibres. The tensor product $\ensuremath{\mathcal{A}}_0 \otimes \ensuremath{\mathcal{A}}_1$ of Dixmier-Douady bundles is again a Dixmier-Douady bundle. A \emph{Morita morphism} $\ensuremath{\mathcal{S}} \colon \ensuremath{\mathcal{A}}_0 \dashrightarrow \ensuremath{\mathcal{A}}_1$ between Dixmier-Douady bundles over $X$ is a bundle of $\ensuremath{\mathcal{A}}_1 \otimes \ensuremath{\mathcal{A}}_0^{\ensuremath{\textnormal{op}}}$ modules $\ensuremath{\mathcal{S}} \rightarrow X$, locally modelled on the $\ensuremath{\mathbb{K}}(H_1)-\ensuremath{\mathbb{K}}(H_0)$ bimodule $\ensuremath{\mathbb{K}}(H_0,H_1)$. In the special case $\ensuremath{\mathcal{A}}_1=\underline{\ensuremath{\mathbb{C}}}$, $\ensuremath{\mathcal{S}}$ is called a \emph{Morita trivialization} of $\ensuremath{\mathcal{A}}_0$. Any two Morita morphisms $\ensuremath{\mathcal{A}}_0 \dashrightarrow \ensuremath{\mathcal{A}}_1$ are related by tensoring with a line bundle; if the line bundle is trivial, one says the Morita morphisms are \emph{2-isomorphic}. By a theorem of Dixmier and Douady \cite{DixmierDouady}, Morita isomorphism classes of Dixmier-Douady bundles are classified by a degree 3 integral cohomology class $\tn{DD}(\ensuremath{\mathcal{A}}) \in H^3(X,\ensuremath{\mathbb{Z}})$ known as the \emph{Dixmier-Douady class}. The Dixmier-Douady class satisfies \[ \tn{DD}(\ensuremath{\mathcal{A}}^{\ensuremath{\textnormal{op}}})=-\tn{DD}(\ensuremath{\mathcal{A}}), \qquad \tn{DD}(\ensuremath{\mathcal{A}}_0 \otimes \ensuremath{\mathcal{A}}_1)=\tn{DD}(\ensuremath{\mathcal{A}}_0)+\tn{DD}(\ensuremath{\mathcal{A}}_1).\] There are modest generalizations to the case where the fibres of $\ensuremath{\mathcal{A}}$ (resp. $\ensuremath{\mathcal{S}}$) carry $\ensuremath{\mathbb{Z}}_2$-gradings; in this case $\ensuremath{\mathcal{A}}$ (resp. $\ensuremath{\mathcal{S}}$) is locally modelled on $\ensuremath{\mathbb{K}}(H)$ for a $\ensuremath{\mathbb{Z}}_2$-graded Hilbert space $H$ (resp. $\ensuremath{\mathbb{K}}(H_0,H_1)$ with $H_0$, $H_1$ being $\ensuremath{\mathbb{Z}}_2$-graded Hilbert spaces), and the Dixmier-Douady class $\tn{DD}(\ensuremath{\mathcal{A}}) \in H^3(X,\ensuremath{\mathbb{Z}})\oplus H^1(X,\ensuremath{\mathbb{Z}}_2)$. If $X$ carries an action of a compact group $G$, one can define $G$-equivariant Dixmier-Douady bundles, which are classified up to $G$-equivariant Morita isomorphism by classes in the analogous equivariant cohomology groups. The $C^\ast$ algebraic definition of twisted K-theory goes back to Donovan-Karoubi \cite{DonovanKaroubi} (in the case of a torsion Dixmier-Douady class) and Rosenberg \cite{RosenbergContinuousTrace} (the general case); see also \cite{AtiyahSegalTwistedK,KaroubiOldNew}.
Let $\ensuremath{\mathcal{A}}$ be a $G$-equivariant $\ensuremath{\mathbb{Z}}_2$-graded Dixmier-Douady bundle and $C_0(\ensuremath{\mathcal{A}})$ the $\ensuremath{\mathbb{Z}}_2$-graded $G$-$C^\ast$ algebra of continuous sections of $\ensuremath{\mathcal{A}}$ vanishing at infinity. One defines the $G$-equivariant $\ensuremath{\mathcal{A}}$-\emph{twisted K-homology} of $X$ to be the $G$-equivariant analytic K-homology of this $C^\ast$ algebra: \[ \ensuremath{\textnormal{K}}_i^G(X,\ensuremath{\mathcal{A}})=\ensuremath{\textnormal{KK}}^i_G(C_0(\ensuremath{\mathcal{A}}),\ensuremath{\mathbb{C}}), \qquad i=0,1\] where $\ensuremath{\textnormal{KK}}_G^i(A,B)$ is Kasparov's KK-theory (cf. \cite{KasparovNovikov}). This definition is well known to be equivalent to Atiyah-Segal's \cite{AtiyahSegalTwistedK} description in terms of homotopy classes of continuous sections of bundles with typical fibre the Fredholm operators on a Hilbert space. \begin{remark} \label{rem:CanonicalIso} A Morita morphism $\ensuremath{\mathcal{A}}_0 \dashrightarrow \ensuremath{\mathcal{A}}_1$ defines an invertible element in the group $\ensuremath{\textnormal{KK}}^0_G(C_0(\ensuremath{\mathcal{A}}_1),C_0(\ensuremath{\mathcal{A}}_0))$, and hence an isomorphism between the corresponding twisted K-homology groups. Thus, the resulting groups depend only on the Dixmier-Douady class of $\ensuremath{\mathcal{A}}$. Note however that there may be no \emph{canonical} isomorphism; different Morita morphisms can lead to genuinely different maps. Any two Morita morphisms are related by tensoring with a $\ensuremath{\mathbb{Z}}_2$-graded line bundle, hence the set of Morita morphisms is a torsor for $H^2_G(X,\ensuremath{\mathbb{Z}})\times H^0_G(X,\ensuremath{\mathbb{Z}}_2)$. \end{remark} \begin{example} \label{ex:DeRhamDirac} An important example of a $\ensuremath{\mathbb{Z}}_2$-graded Dixmier-Douady bundle is the Clifford algebra bundle $\ensuremath{\textnormal{Cliff}}(TM)$ of a Riemannian manifold $M$. Kasparov's fundamental class $[\scr{D}]$ is the class in the twisted K-homology group $\ensuremath{\textnormal{K}}_0(M,\ensuremath{\textnormal{Cliff}}(TM))=\ensuremath{\textnormal{KK}}(\ensuremath{\textnormal{Cl}}(M),\ensuremath{\mathbb{C}})$ represented by the de Rham-Dirac operator $\scr{D}=d+d^\ast$ acting on smooth differential forms over $M$ (cf. \cite[Definition 4.2]{KasparovNovikov}). A Morita trivialization $\ensuremath{\mathcal{S}} \colon \ensuremath{\textnormal{Cliff}}(TM)\dashrightarrow \underline{\ensuremath{\mathbb{C}}}$ is the same thing as a spinor module for $\ensuremath{\textnormal{Cliff}}(TM)$. $\ensuremath{\mathcal{S}}$ defines an invertible element $[\ensuremath{\mathcal{S}}] \in \ensuremath{\textnormal{KK}}(C_0(M),\ensuremath{\textnormal{Cl}}(M))$, and the KK product $[\ensuremath{\mathcal{S}}]\otimes_{\ensuremath{\textnormal{Cl}}(M)} [\scr{D}] \in \ensuremath{\textnormal{KK}}(C_0(M),\ensuremath{\mathbb{C}})$ is the class represented by a spin-c Dirac operator for $\ensuremath{\mathcal{S}}$. More generally, twisting $\scr{D}$ by a complex vector bundle $E$, one obtains a class $[\scr{D}^E] \in \ensuremath{\textnormal{KK}}(\ensuremath{\textnormal{Cl}}(M),\ensuremath{\mathbb{C}})$, and the KK product $[\ensuremath{\mathcal{S}}]\otimes_{\ensuremath{\textnormal{Cl}}(M)}[\scr{D}^E]$ is the class represented by the Dirac operator coupled to $E$. 
\end{example} \subsection{Geometric twisted K-homology} Baum, Carey and Wang \cite{BaumCareyWang} describe a `geometric' approach to twisted K-homology, in the spirit of Baum-Douglas \emph{geometric K-homology} \cite{BaumDouglas} (see also \cite{BaumHigsonSchick}). In \cite{BaumCareyWang}, two types of cycles for twisted geometric K-homology are in fact discussed: `K-cycles' and `D-cycles'. The geometric K-homology groups defined by both types of cycles admit natural maps to the analytic K-homology group described above. In this paper we will only discuss D-cycles, and only use the even case. \begin{definition}\cite{BaumCareyWang} Let $\ensuremath{\mathcal{A}}$ be a $G$-equivariant $\ensuremath{\mathbb{Z}}_2$-graded Dixmier-Douady bundle over a locally finite $G$-CW complex $X$. An (even) \emph{D-cycle} for $(X,\ensuremath{\mathcal{A}})$ is a 4-tuple $(M,E,\Phi,\ensuremath{\mathcal{S}})$ where \begin{itemize} \item $M$ is an even-dimensional smooth closed $G$-manifold, with a $G$-invariant Riemannian metric \item $\Phi \colon M \rightarrow X$ is a $G$-equivariant continuous map \item $E$ is a $G$-equivariant Hermitian vector bundle over $M$ \item $\ensuremath{\mathcal{S}} \colon \ensuremath{\textnormal{Cliff}}(TM) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}$ is a $G$-equivariant Morita morphism. \end{itemize} \end{definition} \begin{remark} The terminology `D-cycle' comes from string theory. If $M$ is orientable, the Dixmier-Douady class of $\ensuremath{\textnormal{Cliff}}(TM)$ is the third integral Stiefel-Whitney class $W_3(M)$ (the obstruction to the existence of a spin-c structure on $M$). The existence of $\ensuremath{\mathcal{S}}$ implies \[ \Phi^\ast \tn{DD}(\ensuremath{\mathcal{A}})=W_3(M),\] which is called the `Freed-Witten anomaly cancellation condition' in string theory. \end{remark} The \emph{geometric twisted K-homology} $\ensuremath{\textnormal{K}}^G_{\tn{geo},i}(X,\ensuremath{\mathcal{A}})$ of $X$ is the set of D-cycles modulo an equivalence relation analogous to Baum-Douglas geometric K-homology (generated by suitable versions of `disjoint union equals direct sum', `bordism', and `bundle modification'), see \cite{BaumCareyWang}. There is a functorial map \begin{equation} \label{eqn:GeoAnalyticMap} \ensuremath{\textnormal{K}}^G_{\tn{geo},i}(X,\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\textnormal{K}}^G_i(X,\ensuremath{\mathcal{A}}) \end{equation} which is straightforward to describe at the level of cycles. We will only use the even case $i=0$ here; the odd case is similar. Let $[\scr{D}^E] \in \ensuremath{\textnormal{KK}}_G(\ensuremath{\textnormal{Cl}}(M),\ensuremath{\mathbb{C}})$ be the class of the de Rham-Dirac operator on $M$, coupled to the vector bundle $E$.
The pair $(\Phi,\ensuremath{\mathcal{S}})$ defines a push-forward map \[ (\Phi,\ensuremath{\mathcal{S}})_\ast \colon \ensuremath{\textnormal{KK}}_G(\ensuremath{\textnormal{Cl}}(M),\ensuremath{\mathbb{C}}) \rightarrow \ensuremath{\textnormal{KK}}_G(C_0(\ensuremath{\mathcal{A}}),\ensuremath{\mathbb{C}}) \] given as the composition of the Morita morphism $\ensuremath{\textnormal{Cliff}}(TM) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}$, with the map induced by the $^\ast$-homomorphism \[ \Phi^\ast \colon C_0(\ensuremath{\mathcal{A}}) \rightarrow C_0(\Phi^\ast \ensuremath{\mathcal{A}}).\] The image of $[(M,E,\Phi,\ensuremath{\mathcal{S}})]$ in $\ensuremath{\textnormal{K}}^G_0(X,\ensuremath{\mathcal{A}})$ is the push-forward: \begin{equation} \label{eqn:PushNotation} (\Phi,\ensuremath{\mathcal{S}})_\ast [\scr{D}^E]. \end{equation} The push-forward can alternatively be expressed as a KK product \begin{equation} \label{eqn:ProdNotation} [\ensuremath{\mathcal{S}}] \otimes [\scr{D}^E] \end{equation} where $[\ensuremath{\mathcal{S}}]\in \ensuremath{\textnormal{KK}}_G(C_0(\ensuremath{\mathcal{A}}),\ensuremath{\textnormal{Cl}}(M))$ is the class defined by the triple $(C_0(\ensuremath{\mathcal{S}}),\Phi^\ast,0)$. \begin{remark} A proof that the map \eqref{eqn:GeoAnalyticMap} is an isomorphism has been announced by Baum, Joachim, Khorami and Schick \cite{BJKS}, at least for the non-equivariant case. \end{remark} \subsection{Twisted K-homology of $G$.} Let $G$ be a compact, connected, simply connected, simple Lie group acting on itself by conjugation. Then $H^3_G(G,\ensuremath{\mathbb{Z}})\simeq \ensuremath{\mathbb{Z}}$, while $H^2_G(G,\ensuremath{\mathbb{Z}})=H^1_G(G,\ensuremath{\mathbb{Z}}_2)=H^0_G(G,\ensuremath{\mathbb{Z}}_2)=0$. There is a canonical generator of $H^3_G(G,\ensuremath{\mathbb{Z}})$; in de Rham cohomology, it is represented by the equivariant Cartan 3-form \[ \eta_G(\xi)=\eta-\tfrac{1}{2}B(\theta^L+\theta^R,\xi), \qquad \eta=\tfrac{1}{12}B(\theta^L, [\theta^L,\theta^L]),\] where $\xi \in \ensuremath{\mathfrak{g}}$ and $\theta^L$ (resp. $\theta^R$) denotes the left (resp. right) invariant Maurer-Cartan form. Thus $G$-equivariant Dixmier-Douady bundles $\ensuremath{\mathcal{A}}$ over $G$ are classified up to Morita equivalence by an integer $\ell \in \ensuremath{\mathbb{Z}}$, and moreover any two Morita morphisms are 2-isomorphic, see Remark \ref{rem:CanonicalIso}. Although we will not use it, it is known that the twisted K-homology group $\ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}})$ carries a ring structure; in this picture the ring structure originates from a canonical Morita morphism $\ensuremath{\mathcal{A}} \boxtimes \ensuremath{\mathcal{A}} \dashrightarrow \tn{Mult}^\ast \ensuremath{\mathcal{A}}$, where $\tn{Mult}\colon G \times G \rightarrow G$ is the group multiplication, cf. \cite{FHTI,MeinrenkenConjugacyClasses}. \section{Loop groups and the Freed-Hopkins-Teleman Theorem}\label{sec:loopgroup} In this section we briefly introduce the loop group $LG$ and its important class of projective positive energy representations, cf. \cite{PressleySegal}. We then recall the Freed-Hopkins-Teleman theorem, which relates loop groups to twisted K-homology. To obtain a Banach-Lie group, we take $LG$ to consist of maps $S^1=\ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}} \rightarrow G$ of some fixed Sobolev level $s>\tfrac{1}{2}$.
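(A standard justification for this choice, recorded for the reader's convenience: for $s>\tfrac{1}{2}$ the Sobolev embedding theorem gives a continuous inclusion $H^s(S^1,\ensuremath{\mathfrak{g}}) \hookrightarrow C(S^1,\ensuremath{\mathfrak{g}})$, and $H^s(S^1,\ensuremath{\mathbb{R}})$ is closed under pointwise multiplication; hence loops of Sobolev class $s$ are in particular continuous, pointwise products of such loops are again of class $s$, and $LG$ is a group as well as a Banach manifold modelled on $H^s(S^1,\ensuremath{\mathfrak{g}})$.)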
The basic inner product defines a central extension of the Lie algebra $L\ensuremath{\mathfrak{g}}$ by $\ensuremath{\mathbb{R}}$, with cocycle \begin{equation} \label{eqn:cocycleLG} c(\xi_1,\xi_2)=\int_0^1 B(\xi_1(t), \xi_2^\prime(t))\,dt. \end{equation} This extension integrates to a $U(1)$ central extension $LG^\ensuremath{\tn{bas}}$ of $LG$, that we will call the \emph{basic central extension}. For $G$ connected, simple, and simply connected, $U(1)$ central extensions of $LG$ are uniquely determined by their Lie algebra cocycle, which must be an integer multiple of the generator \eqref{eqn:cocycleLG}; thus $U(1)$ central extensions are classified by $\ensuremath{\mathbb{Z}}$, with $LG^\ensuremath{\tn{bas}}$ corresponding to the generator $1 \in \ensuremath{\mathbb{Z}}$. For later reference, note that the loop group can be written as a semi-direct product $LG=G\ltimes \Omega G$, where $\Omega G=\{\gamma \in LG \mid \gamma(0)=e\}$ is the based loop group, and $G \hookrightarrow LG$ identifies $G$ with the constant loops in $LG$. Our assumptions on $G$ imply that any $U(1)$ central extension of $G$ is trivial, hence in particular the restriction of $LG^\ensuremath{\tn{bas}}$ to the constant loops is trivial. Let $T \subset G$ be a fixed maximal torus and let $\Pi=\ensuremath{\textnormal{ker}}(\exp \colon \ensuremath{\mathfrak{t}} \rightarrow T)$ be the integral lattice. The product group $T \times \Pi$ may be viewed as a subgroup of $LG$, where $T$ is embedded as constant loops and $\Pi$ as exponential loops: $\eta \in \Pi$ corresponds to the loop $s \in \ensuremath{\mathbb{R}}/\ensuremath{\mathbb{Z}} \mapsto \exp(s \eta) \in T$. The restriction of the central extension $LG^\ensuremath{\tn{bas}}$ to $T \times \Pi$ is a central extension \[ 1 \rightarrow U(1) \rightarrow T\ltimes \Pi^\ensuremath{\tn{bas}} \rightarrow T \times \Pi \rightarrow 1.\] We discuss the subgroup $T\ltimes \Pi^\ensuremath{\tn{bas}} \subset LG^\ensuremath{\tn{bas}}$ in detail in Section \ref{sec:SemiDirect}. \subsection{Positive energy representations.} The loop group has a much-studied class of projective representations known as \emph{positive energy} representations, which have a detailed theory parallel to the theory for compact groups, cf. \cite{KacBook,PressleySegal}. Let $S^1_{\ensuremath{\tn{rot}}} \ltimes LG$ denote the semi-direct product constructed from the action of $S^1$ on $LG$ by rigid rotations. This action lifts to an action on the basic central extension. A positive energy representation is a representation of $LG^{\ensuremath{\tn{bas}}}$ which extends to a representation of the semi-direct product $S^1_{\ensuremath{\tn{rot}}}\ltimes LG^{\ensuremath{\tn{bas}}}$ such that the weights of $S^1_{\ensuremath{\tn{rot}}}$ are bounded below. One can always tensor a positive energy representation by a 1-dimensional representation of $S^1_{\ensuremath{\tn{rot}}}$, hence one often normalizes positive energy representations by requiring that the minimal $S^1_{\ensuremath{\tn{rot}}}$ weight is $0$, and we always assume this. For an irreducible positive energy representation, the central circle of $LG^{\ensuremath{\tn{bas}}}$ acts by a fixed weight $k \ge 0$ called the \emph{level}.
There are finitely many irreducible positive energy representations at any fixed level, parametrized by the `level $k$ dominant weights': weights $\lambda \in \Pi^\ast \cap \ensuremath{\mathfrak{t}}^\ast_+$ satisfying $B(\lambda,\theta) \le k$, where $\theta \in \ensuremath{\mathcal{R}}_+$ is the highest root of $\ensuremath{\mathfrak{g}}$. Equivalently, the level $k$ weights are $\Pi^\ast_k=\Pi^\ast \cap k\ensuremath{\mathfrak{a}}$, where $\ensuremath{\mathfrak{a}} \subset \ensuremath{\mathfrak{t}}_+$ is the fundamental alcove, which we identify with a subset of $\ensuremath{\mathfrak{t}}^\ast$ using the basic inner product. Let $R_k(G)$ denote the free abelian group of rank $\#(\Pi_k^\ast)$ consisting of $\ensuremath{\mathbb{Z}}$-linear combinations of the level $k$ irreducible positive energy representations. There is a canonical isomorphism (`holomorphic induction', cf. \cite{FHTI}) \[ R_k(G) \simeq R(G)/I_k(G) \] where $R(G)$ is the representation ring of $G$ and $I_k(G)$ is the \emph{Verlinde ideal} consisting of characters vanishing on the conjugacy classes of the elements \[ \exp\Big(\frac{\xi+\rho}{k+\ensuremath{\textnormal{h}^\vee}}\Big), \qquad \xi \in \Pi_k^\ast.\] In particular $R_k(G)$ is a ring, known as the level $k$ \emph{Verlinde ring}. There is an alternative description of $R_k(G)$ that will be crucial for us later on; this description plays a significant role in the proof of the Freed-Hopkins-Teleman Theorem as well. An element of $R_k(G)$ is uniquely determined by its \emph{multiplicity function}, a map \[ m \colon \Pi^\ast_k \rightarrow \ensuremath{\mathbb{Z}}.\] It is known that $\Pi^\ast_k$ is precisely the set of weights contained in the interior of the shifted, scaled alcove $(k+\ensuremath{\textnormal{h}^\vee})\ensuremath{\mathfrak{a}}-\rho$. The latter is a fundamental domain for the $\rho$-shifted level $(k+\ensuremath{\textnormal{h}^\vee})$ action of the affine Weyl group $W_{\ensuremath{\textnormal{aff}}}=W\ltimes \Pi$, given by \begin{equation} \label{eqn:ShiftedAction} w\bullet_{k+\ensuremath{\textnormal{h}^\vee}} \xi=(\ol{w},\eta) \bullet_{k+\ensuremath{\textnormal{h}^\vee}} \xi=\ol{w}(\xi+\rho)-\rho+(k+\ensuremath{\textnormal{h}^\vee})\eta, \qquad \xi \in \ensuremath{\mathfrak{t}}^\ast, \quad \ol{w} \in W, \eta \in \Pi. \end{equation} Thus, $m$ has a \emph{unique} extension to a map \[ m \colon \Pi^\ast \rightarrow \ensuremath{\mathbb{Z}} \] which is \emph{alternating} under \eqref{eqn:ShiftedAction}, i.e. \[ m(w\bullet_{k+\ensuremath{\textnormal{h}^\vee}} \xi)=(-1)^{l(w)}m(\xi),\] where $l(w)$ is the length of the affine Weyl group element $w$. The extension of $m$ vanishes on the boundary of the fundamental domain $(k+\ensuremath{\textnormal{h}^\vee})\ensuremath{\mathfrak{a}}-\rho$. This defines an isomorphism of abelian groups \begin{equation} \label{eqn:AlternatingFormal} R_k(G) \xrightarrow{\sim} R^{-\infty}(T)^{W_{\ensuremath{\textnormal{aff}}}-\ensuremath{\textnormal{anti}},\,(k+\ensuremath{\textnormal{h}^\vee})} \end{equation} where the right hand side denotes the formal characters of $T$ which are alternating under the action \eqref{eqn:ShiftedAction}. That \eqref{eqn:AlternatingFormal} is an isomorphism can be deduced more or less immediately from the Weyl-Kac character formula (cf. \cite{KacBook,PressleySegal}).
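For orientation, we sketch the standard example $G=SU(2)$ with the above normalizations. The simple root $\alpha$ satisfies $B(\alpha,\alpha)=2$, the fundamental weight is $\omega=\tfrac{1}{2}\alpha$, and $\ensuremath{\textnormal{h}^\vee}=2$, $\rho=\omega$. Then $\Pi^\ast=\ensuremath{\mathbb{Z}}\omega$, the basic inner product identifies $\Pi$ with $2\ensuremath{\mathbb{Z}}\omega \subset \Pi^\ast$, and the level $k$ weights are $\Pi^\ast_k=\{0,\omega,\ldots,k\omega\}$. Writing $\xi=n\omega$, the shifted action \eqref{eqn:ShiftedAction} is generated by \[ n \mapsto -n-2, \qquad n \mapsto n+2(k+2),\] so an alternating multiplicity function satisfies $m(-n-2)=-m(n)$ and $m(n+2(k+2))=m(n)$ (translations have even length); in particular $m$ vanishes whenever $n \equiv -1 \mod k+2$, matching the vanishing on the boundary of $(k+2)\ensuremath{\mathfrak{a}}-\rho=[-\omega,(k+1)\omega]$.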
A positive energy representation has a formal character $\chi \in R^{-\infty}(S^1_{\ensuremath{\tn{rot}}}\times T)$ given by a formula analogous to the Weyl character formula for compact Lie groups, but with the numerator and denominator both being formal infinite expressions. As in the Weyl character formula, the denominator $\Delta$ is a universal expression (the same for any $\chi \in R_k(G)$); multiplying $\chi$ by $\Delta$ and then restricting to $q=1 \in S^1_{\ensuremath{\tn{rot}}}$, one obtains an element $(\chi \cdot \Delta)|_{q=1} \in R^{-\infty}(T)^{W_{\ensuremath{\textnormal{aff}}}-\ensuremath{\textnormal{anti}},\,(k+\ensuremath{\textnormal{h}^\vee})}$, and this correspondence is one-one. \subsection{Dixmier-Douady bundles from positive energy representations.}\label{sec:DDoverG} Loop groups are closely related to twisted K-theory of $G$. One manifestation of this is that positive energy representations can be used to construct Dixmier-Douady bundles over $G$. Let $\ell>0$ and let $LG^\tau$ denote the central extension of $LG$ corresponding to $\ell$ times the basic inner product ($\ell=1$ corresponds to $LG^{\ensuremath{\tn{bas}}}$). Let $V$ be a level $\ell$ positive energy representation, or in other words, a positive energy representation of $LG^\tau$ such that the central circle acts with weight $1$. The dual space $V^\ast$ carries a negative energy representation such that the central circle acts with weight $-1$. Let $PG$ denote the space of quasi-periodic paths in $G$ of Sobolev level $s > \tfrac{1}{2}$, that is, $PG$ is the space of paths $\gamma \colon \ensuremath{\mathbb{R}} \rightarrow G$ such that $\gamma(t)\gamma(t+1)^{-1}$ is a fixed element of $G$, independent of $t \in \ensuremath{\mathbb{R}}$. The group $LG\times G$ acts on $PG$, with $LG$ acting by right multiplication, and $G$ by left multiplication (cf. \cite{LMSspinor} for further discussion). The map \[ q \colon \gamma \in PG \mapsto \gamma(t)\gamma(t+1)^{-1} \in G \] makes $PG$ into a $G$-equivariant principal $LG$-bundle over $G$. The adjoint action of $LG^\tau$ on the algebra of compact operators $\ensuremath{\mathbb{K}}(V^\ast)$ descends to an action of $LG$, and the associated bundle \begin{equation} \label{eqn:defA} \ensuremath{\mathcal{A}}=PG \times_{LG} \ensuremath{\mathbb{K}}(V^\ast) \end{equation} is a Dixmier-Douady bundle over $G$ such that $\tn{DD}(\ensuremath{\mathcal{A}})=\ell \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$, cf. \cite{MeinrenkenConjugacyClasses}. \subsection{The Freed-Hopkins-Teleman theorem.} The following is a special case (for $G$ connected, simply connected, simple) of the Freed-Hopkins-Teleman theorem. \begin{theorem}[Freed-Hopkins-Teleman \cite{FHTI,FHTII,FHTIII}] Let $k>0$, and let $\ensuremath{\textnormal{h}^\vee}$ be the dual Coxeter number of $G$. Let $\ensuremath{\mathcal{A}}$ be a $G$-equivariant Dixmier-Douady bundle over $G$ with $\tn{DD}(\ensuremath{\mathcal{A}})=k+\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$. The group $\ensuremath{\textnormal{K}}^G_1(G,\ensuremath{\mathcal{A}})=0$, and there is an isomorphism of rings \[ R_k(G) \simeq \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}}).\] \end{theorem} Let $\iota \colon \{e \} \hookrightarrow G$ be the inclusion of the identity element in $G$. Consider the model \eqref{eqn:defA} for $\ensuremath{\mathcal{A}}$. 
The Hilbert space $V^\ast$ gives a (canonical) $G$-equivariant Morita trivialization of $\iota^\ast \ensuremath{\mathcal{A}}$. Freed-Hopkins-Teleman prove that their isomorphism $R_k(G) \rightarrow \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ moreover fits into a commutative diagram \begin{equation} \label{diagram:FHT} \begin{CD} R(G) @>>> R_k(G)\\ @V \simeq VV @V \simeq VV\\ \ensuremath{\textnormal{K}}_0^G(\ensuremath{\textnormal{pt}})@>(\iota,V^\ast)>>\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \end{CD} \end{equation} where the top horizontal arrow is the quotient map and the bottom horizontal arrow is induced by the evaluation map $\iota^\ast \colon C(\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\mathcal{A}}|_{e}$ composed with the Morita trivialization $V^\ast \colon \ensuremath{\mathcal{A}}|_{e} \dashrightarrow \ensuremath{\mathbb{C}}$. \section{Crossed products and twisted K-homology} \label{sec:CrossProdDD} In this section we describe some general facts involving crossed product algebras, central extensions, and generalized fixed-point algebras. Throughout this section $\Gamma$, $S$, $N$ are locally compact, second countable topological groups equipped with left Haar measure, and $A$ is a separable $C^\ast$ algebra. \subsection{Twisted crossed-products.} \label{sec:TwistedCross} Let $\Gamma$ be a locally compact group with left invariant Haar measure, and let $\Gamma^\tau$ be a $U(1)$-central extension: \[ 1 \rightarrow U(1) \rightarrow \Gamma^\tau \rightarrow \Gamma \rightarrow 1.\] Normalize Haar measure on $\Gamma^\tau$ such that the integral of a function over $\Gamma^\tau$ is given by first averaging over $U(1)$ (using normalized Haar measure) followed by integration over $\Gamma$. A choice of section $\Gamma \rightarrow \Gamma^\tau$ is not needed. In detail, for $f \in C_c(\Gamma^\tau)$ let \begin{equation} \label{eqn:AvgU1} \bar{f}(\ensuremath{\widehat{\gamma}})=\int_{U(1)} f(z\ensuremath{\widehat{\gamma}})\, dz. \end{equation} Then $\bar{f}$ is a $U(1)$-invariant function on $\Gamma^\tau$, so it descends to a function on $\Gamma$, and \begin{equation} \label{eqn:IntGamma} \int_{\Gamma^\tau} f(\ensuremath{\widehat{\gamma}})\, d\ensuremath{\widehat{\gamma}}=\int_\Gamma \bar{f}(\gamma)\,d\gamma. \end{equation} Let $A$ be a $\Gamma$-$C^\ast$ algebra. Note that $A$ can be regarded as a $\Gamma^\tau$-$C^\ast$ algebra such that the central circle in $\Gamma^\tau$ acts trivially. The (maximal) crossed product algebra $\Gamma^\tau \ltimes A=C^\ast(\Gamma^\tau,A)$ (we use both notations interchangeably) decomposes into a direct sum of its homogeneous ideals \begin{equation} \label{eqn:Homogeneous} \Gamma^\tau \ltimes A=\bigoplus_{n \in \ensuremath{\mathbb{Z}}} (\Gamma^\tau \ltimes A)_{(n)} \end{equation} where $(\Gamma^\tau \ltimes A)_{(n)}$ denotes the norm closure (in the maximal crossed product algebra $\Gamma^\tau \ltimes A$) of the set of compactly supported functions $a \in C_c(\Gamma^\tau,A)$ satisfying \[ a(z^{-1}\wh{\gamma})=z^na(\wh{\gamma}), \qquad z \in U(1), \wh{\gamma} \in \Gamma^\tau.\] There is a $^\ast$-homomorphism from $C^\ast(U(1))$ into the multiplier algebra $M(\Gamma^\tau \ltimes A)$ (cf. \cite[II.10.3.10-12]{BlackadarCAlgebras}) making $\Gamma^\tau \ltimes A$ into a $C^\ast(U(1))=C_0(\ensuremath{\mathbb{Z}})$-algebra, and the ideals $(\Gamma^\tau \ltimes A)_{(n)}$ are the fibres. The decomposition \eqref{eqn:Homogeneous} is also not difficult to prove directly.
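To indicate the direct argument, the `Fourier coefficient' maps mentioned below can be written explicitly (a sketch): for $a \in C_c(\Gamma^\tau,A)$ set \[ a_{(n)}(\ensuremath{\widehat{\gamma}})=\int_{U(1)} z^{n}\,a(z\ensuremath{\widehat{\gamma}})\,dz.\] Then $a_{(n)}(z^{-1}\ensuremath{\widehat{\gamma}})=z^n a_{(n)}(\ensuremath{\widehat{\gamma}})$ by invariance of Haar measure, and, at least formally, $a=\sum_{n \in \ensuremath{\mathbb{Z}}} a_{(n)}$ is the Fourier expansion of $a$ along the central $U(1)$.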
A short calculation using \eqref{eqn:AvgU1} shows that the $(\Gamma^\tau \ltimes A)_{(n)}$ are 2-sided ideals, and hence one has a $^\ast$-homomorphism from the right hand side of \eqref{eqn:Homogeneous} to $\Gamma^\tau \ltimes A$. One also has a $^\ast$-homomorphism in the opposite direction, given by `taking Fourier coefficients'. For further details see for example \cite[Proposition 3.2]{LaurentTuXuDiffStacks} or \cite{Takata1}. \begin{definition} \label{def:TwistedCross} We define the $\tau$-\emph{twisted crossed product algebra} $\Gamma \ltimes_{\tau} A$ to be the ideal \[ \Gamma \ltimes_\tau A=(\Gamma^\tau \ltimes A)_{(1)}.\] The special case $A=\ensuremath{\mathbb{C}}$ gives the twisted group $C^\ast$ algebra \[ C^\ast_\tau(\Gamma)=C^\ast(\Gamma^\tau)_{(1)}.\] \end{definition} \begin{remark} \label{rem:ChooseSection} One often sees the twisted crossed-product algebra defined in terms of a cocycle for the central extension, cf. \cite{MarcolliMathaiTwistedGroup}. One can translate to this definition by choosing a section $\Gamma \rightarrow \Gamma^\tau$. One reason we take the approach above is that later on we will consider the action of a second group $S \circlearrowright \Gamma \ltimes_\tau A$, and it seems slightly awkward to describe this in terms of a section $\Gamma \rightarrow \Gamma^\tau$; for example, it is not clear to us that one can find an $S$-invariant section. \end{remark} The twisted crossed product algebra $\Gamma \ltimes_\tau A$ has the important universal property that non-degenerate $^\ast$-representations of $\Gamma \ltimes_\tau A$ are in 1-1 correspondence with \emph{covariant pairs} $(\pi_A,\pi_\Gamma^\tau)$, where $\pi_A$ is a $^\ast$-representation of $A$, $\pi_\Gamma^\tau$ is a representation of $\Gamma^\tau$ such that the central circle acts with weight $1$ (a $\tau$-projective representation of $\Gamma$), and \begin{equation} \label{eqn:covpair} \pi_\Gamma^\tau(\wh{\gamma})\pi_A(a)\pi_\Gamma^\tau(\wh{\gamma})^{-1}=\pi_A(\ensuremath{\widehat{\gamma}} \cdot a) \end{equation} for all $\ensuremath{\widehat{\gamma}} \in \Gamma^\tau$, $a \in A$. The space $L^2(\Gamma^\tau)$ splits into an $\ell^2$-direct sum of its homogeneous subspaces \begin{equation} \label{eqn:L2HomogSub} L^2(\Gamma^\tau)=\bigoplus_{n \in \ensuremath{\mathbb{Z}}} L^2(\Gamma^\tau)_{(n)}, \end{equation} where $L^2(\Gamma^\tau)_{(n)}$ denotes the subspace of $L^2(\Gamma^\tau)$ consisting of functions $f \in L^2(\Gamma^\tau)$ satisfying \[ f(z^{-1}\ensuremath{\widehat{\gamma}})=z^nf(\ensuremath{\widehat{\gamma}}), \qquad z \in U(1), \ensuremath{\widehat{\gamma}} \in \Gamma^\tau.\] Recall the left and right regular representations of $\Gamma^\tau$ on $L^2(\Gamma^\tau)$ are given, respectively, by \[ \lambda(\ensuremath{\widehat{\gamma}})f(\ensuremath{\widehat{\gamma}}_1)=f(\ensuremath{\widehat{\gamma}}^{-1}\ensuremath{\widehat{\gamma}}_1), \qquad \rho(\ensuremath{\widehat{\gamma}})f(\ensuremath{\widehat{\gamma}}_1)=f(\ensuremath{\widehat{\gamma}}_1\ensuremath{\widehat{\gamma}}).\] Both actions preserve the decomposition \eqref{eqn:L2HomogSub}. \begin{definition} \label{def:TwistedReg} The \emph{left} $\tau$-\emph{twisted regular representation} of $\Gamma$ is the restriction of the left regular representation of $\Gamma^\tau$ on $L^2(\Gamma^\tau)$ to the subspace \[ L^2_{\tau}(\Gamma):=L^2(\Gamma^\tau)_{(1)}.\] The restriction of the right regular representation to $L^2_{\tau}(\Gamma)$ is the \emph{right} $(-\tau)$-\emph{twisted regular representation} of $\Gamma$.
(Note that under the right regular representation, the central circle of $\Gamma^\tau$ acts on $L^2_{\tau}(\Gamma)$ with weight $-1$.) \end{definition} \subsection{Dixmier-Douady bundles from crossed-products.}\label{sec:DDbdleCrossProd} Let $X$ be a locally compact Hausdorff space with a continuous proper action of a locally compact group $\Gamma$. The quotient $X/\Gamma$ equipped with the quotient topology is then also a locally compact Hausdorff space. Let $\Gamma$ act on $L^2(\Gamma)$ by right translation, and on $\ensuremath{\mathbb{K}}:=\ensuremath{\mathbb{K}}(L^2(\Gamma))$ by the adjoint action. Define the algebra of sections of a field of $C^\ast$-algebras over $X/\Gamma$, suggestively denoted $C_0(X\times_\Gamma \ensuremath{\mathbb{K}})$, consisting of $\Gamma$-equivariant continuous maps $X \rightarrow \ensuremath{\mathbb{K}}$ vanishing at infinity in $X/\Gamma$. The algebra $C_0(X\times_\Gamma \ensuremath{\mathbb{K}})$ is an example of a \emph{generalized fixed-point algebra}. The following result is attributed to Rieffel (for example \cite[Proposition 4.3]{RieffelFiniteGroup}, \cite{RieffelProperActions}); see especially \cite[Corollary 2.11]{EchterhoffEmerson} for a statement formulated in the same terms used here. Another reference is \cite[Proposition 4.3]{LaurentTuXuDiffStacks} where a quite general statement appears for twisted convolution algebras of locally compact proper groupoids. \begin{proposition} \label{prop:RieffelFixedPt} Let $X$ be a locally compact Hausdorff space with a continuous proper action of a locally compact group $\Gamma$, and let $\ensuremath{\mathbb{K}}=\ensuremath{\mathbb{K}}(L^2(\Gamma))$. Then \[ \Gamma \ltimes C_0(X)\simeq C_0(X\times_\Gamma \ensuremath{\mathbb{K}}).\] \end{proposition} \begin{remark} \label{rem:RieffelMap} We mention briefly how a map $\Gamma \ltimes C_0(X) \rightarrow C_0(X\times_\Gamma \ensuremath{\mathbb{K}})$ is constructed. Using conventions as in \cite[Section 3.7]{KasparovNovikov}, a function $a \in C_c(\Gamma,C_c(X))$ is sent to the $\Gamma$-equivariant family $K_a(x) \in \ensuremath{\mathbb{K}}$, $x \in X$ of compact operators defined by the family of integral kernels \begin{equation} \label{eqn:RieffelIso} k_a(\gamma_1,\gamma_2;x)=\mu(\gamma_2)^{-1}a(\gamma_1\gamma_2^{-1};\gamma_1x) \end{equation} where $\mu \colon \Gamma \rightarrow \ensuremath{\mathbb{R}}_{>0}$ is the modular homomorphism of $\Gamma$. \end{remark} \begin{remark} \label{rem:LeftReg} Proposition \ref{prop:RieffelFixedPt} can be viewed as a generalization of the Stone-von Neumann theorem (obtained from the special case $\Gamma=\ensuremath{\mathbb{R}}$ acting on $X=\ensuremath{\mathbb{R}}$ by translations). More generally for $X=\Gamma$, Proposition \ref{prop:RieffelFixedPt} specializes to a well-known isomorphism \begin{equation} \label{eqn:SpecialCaseRieffel} \Gamma \ltimes C_0(\Gamma) \xrightarrow{\sim} \ensuremath{\mathbb{K}}(L^2(\Gamma)). \end{equation} Using equation \eqref{eqn:RieffelIso} one verifies that the induced map on multiplier algebras sends $C_0(\Gamma)$ to multiplication operators and $\Gamma$ to the \emph{left} regular representation. \end{remark} If the action of $\Gamma$ on $X$ is free, then $X \rightarrow X/\Gamma$ is a principal $\Gamma$-bundle, and the generalized fixed-point algebra is the algebra of continuous sections vanishing at infinity of the associated bundle \begin{equation} \label{eqn:BoringDD} \ensuremath{\mathcal{A}}=X \times_\Gamma \ensuremath{\mathbb{K}}. 
\end{equation} This is a Dixmier-Douady bundle, with typical fibre $\ensuremath{\mathbb{K}}(L^2(\Gamma))$. In fact $\ensuremath{\mathcal{A}}$ is Morita trivial with Morita trivialization $X \times_\Gamma L^2(\Gamma)$. To obtain something more interesting from the construction \eqref{eqn:BoringDD}, we adjust it slightly in two ways. First we consider the equivariant situation, where a second group $S$ acts on $X$ and $L^2(\Gamma)$. It may happen that the Morita trivialization $X \times_\Gamma L^2(\Gamma)$ is not $S$-equivariant. Second, we replace $\Gamma \ltimes C_0(X)$ with a twisted crossed product algebra, as in Definition \ref{def:TwistedCross}. This will be important later on, when central extensions of the loop group come into the picture. Consider a semi-direct product $S\ltimes \Gamma$, where $S$, $\Gamma$ are locally compact groups, and assume the $S$ action preserves Haar measure on $\Gamma$. Let $\Gamma^\tau$ be a $U(1)$-central extension, and assume the action of $S$ on $\Gamma$ lifts to an action on $\Gamma^\tau$, so that we have a $U(1)$-central extension \[ 1 \rightarrow U(1) \rightarrow S \ltimes \Gamma^\tau \rightarrow S \ltimes \Gamma \rightarrow 1.\] The right $(-\tau)$-twisted regular representation $(L^2_{\tau}(\Gamma),\rho)$ (Definition \ref{def:TwistedReg}) extends to a representation of $S \ltimes \Gamma^\tau$ (such that the central circle acts with weight $-1$) according to \begin{equation} \label{eqn:SGammaAction} \rho(s,\wh{\gamma})f(\ensuremath{\widehat{\gamma}}_1)=f(s^{-1}\ensuremath{\widehat{\gamma}}_1s\ensuremath{\widehat{\gamma}}). \end{equation} The adjoint action $\ensuremath{\textnormal{Ad}}(\rho)$ on $\ensuremath{\mathbb{K}}=\ensuremath{\mathbb{K}}(L^2_{\tau}(\Gamma))$ descends to an action of $S \ltimes \Gamma$. Let $X$ be a locally compact $S\ltimes \Gamma$-space, such that the action of $\Gamma$ on $X$ is proper. The generalized fixed point algebra $C_0(X \times_\Gamma \ensuremath{\mathbb{K}})$ is an $S$-$C^\ast$ algebra. \begin{remark} \label{rem:ExtendSLeft} For later reference note that the left $\tau$-twisted regular representation $(L^2_\tau(\Gamma),\lambda)$ (Definition \ref{def:TwistedReg}) also extends to a representation of $S\ltimes \Gamma^\tau$ (such that the central circle acts with weight $1$) according to \[ \lambda(s,\wh{\gamma})f(\ensuremath{\widehat{\gamma}}_1)=f(\ensuremath{\widehat{\gamma}}^{-1}s^{-1}\ensuremath{\widehat{\gamma}}_1s).\] \end{remark} If $A$ is an $(S \ltimes \Gamma)$-$C^\ast$ algebra, the twisted crossed product $\Gamma \ltimes_\tau A$ is an $S$-$C^\ast$ algebra, with the $S$ action being the continuous extension of the $S$ action on $C_c(\Gamma^\tau,A)$ given by \begin{equation} \label{eqn:HAction} (s\cdot a)(\ensuremath{\widehat{\gamma}})= s.a(s^{-1}\ensuremath{\widehat{\gamma}} s).
\end{equation} This applies in particular to the $(S \ltimes \Gamma)$-$C^\ast$ algebra $C_0(X)$, and one has the following variation of Proposition \ref{prop:RieffelFixedPt}. \begin{proposition} \label{prop:modRieffelFixedPt} Consider a semi-direct product $S\ltimes \Gamma$, where $S$, $\Gamma$ are locally compact groups, and the $S$-action preserves Haar measure on $\Gamma$. Let $\Gamma^\tau$ be a $U(1)$-central extension, and assume the action of $S$ on $\Gamma$ lifts to an action on $\Gamma^\tau$. Let $(L^2_{\tau}(\Gamma),\rho)$ be the right $(-\tau)$-twisted regular representation (Definition \ref{def:TwistedReg}), extended to a representation of $S\ltimes \Gamma^\tau$ as in equation \eqref{eqn:SGammaAction}, and let $S\ltimes \Gamma^\tau$ act on $\ensuremath{\mathbb{K}}=\ensuremath{\mathbb{K}}(L^2_{\tau}(\Gamma))$ by the adjoint action $\ensuremath{\textnormal{Ad}}(\rho)$. Let $X$ be a locally compact $S \ltimes \Gamma$-space, such that the $\Gamma$ action is proper. There is an isomorphism of $S$-$C^\ast$ algebras \[ \Gamma \ltimes_\tau C_0(X) \simeq C_0(X\times_{\Gamma} \ensuremath{\mathbb{K}}).\] \end{proposition} \begin{proof} This follows in a straightforward manner from Proposition \ref{prop:RieffelFixedPt} applied to $\Gamma^\tau$. The action of $\Gamma$ on $X$ induces a proper action of $\Gamma^\tau$ on $X$ with the central circle acting trivially. Applying Proposition \ref{prop:RieffelFixedPt} to $\Gamma^\tau$, \begin{equation} \label{eqn:ApplyRieffelFixedPt} \Gamma^\tau \ltimes C_0(X) \simeq C_0\big(X\times_{\Gamma^\tau} \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau))\big)=C_0\big(X\times_\Gamma \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau))^{U(1)}\big), \end{equation} where for the second equality we use the fact that the central circle acts trivially on $X$. The algebra on the left hand side of \eqref{eqn:ApplyRieffelFixedPt} splits into a direct sum of its homogeneous ideals \[ \Gamma^\tau \ltimes C_0(X)=\bigoplus_{n \in \ensuremath{\mathbb{Z}}} (\Gamma^\tau \ltimes C_0(X))_{(n)}.\] Decompose $L^2(\Gamma^\tau)$ into isotypic components for the action of the central circle, as in \eqref{eqn:L2HomogSub}: \begin{equation} \label{eqn:U(1)isotypic} L^2(\Gamma^\tau)=\bigoplus_{n \in \ensuremath{\mathbb{Z}}} L^2(\Gamma^\tau)_{(n)}. \end{equation} The subalgebra $\ensuremath{\mathbb{K}}(L^2(\Gamma^\tau))^{U(1)} \subset \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau))$ is the set of compact operators preserving the decomposition \eqref{eqn:U(1)isotypic}; hence \begin{equation} \label{eqn:U(1)compact} \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau))^{U(1)}=\bigoplus_{n \in \ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau)_{(n)}). \end{equation} We claim the isomorphism \eqref{eqn:ApplyRieffelFixedPt} restricts to an isomorphism \[ (\Gamma^\tau \ltimes C_0(X))_{(n)} \rightarrow C_0\big(X\times_\Gamma \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau)_{(n)})\big).\] To see this let $a \in C_c(\Gamma^\tau, C_c(X))_{(n)}$, and let $K_a$ be the corresponding family of operators defined by the integral kernels $k_a$ in \eqref{eqn:RieffelIso}. We suppress the basepoint $x \in X$ from the notation as it plays no role in the argument. The homogeneity of $a$ (and $U(1)$ invariance of $\mu$) implies (see \eqref{eqn:RieffelIso}) $k_a(\wh{\gamma}_1,z^{-1}\wh{\gamma}_2)=z^{-n}k_a(\wh{\gamma}_1,\wh{\gamma}_2)$, $z \in U(1)$.
For $f \in L^2(\Gamma^\tau)$, \[ (K_af)(\wh{\gamma}_1)=\int_{\Gamma^\tau} k_a(\wh{\gamma}_1,\wh{\gamma}_2)f(\wh{\gamma}_2)\,d\wh{\gamma}_2.\] According to \eqref{eqn:AvgU1}, \eqref{eqn:IntGamma} the integral over $\Gamma^\tau$ can be carried out by first averaging with respect to the $U(1)$ action, and then integrating over $\Gamma$. Note that \[ \int_{U(1)} k_a(\wh{\gamma}_1,z^{-1}\wh{\gamma}_2)f(z^{-1}\wh{\gamma}_2)\,dz=k_a(\wh{\gamma}_1,\wh{\gamma}_2)\int_{U(1)} z^{-n} f(z^{-1}\wh{\gamma}_2)\,dz.\] The integral over $z \in U(1)$ gives the projection to the $(n)$-isotypical component, hence $K_a$ is contained in the ideal $\ensuremath{\mathbb{K}}(L^2(\Gamma^\tau)_{(n)})$. In particular, for $n=1$, \[ \Gamma \ltimes_\tau C_0(X) \simeq C_0\big(X\times_\Gamma \ensuremath{\mathbb{K}}(L^2(\Gamma^\tau)_{(1)})\big)=C_0\big(X\times_\Gamma \ensuremath{\mathbb{K}}(L^2_\tau(\Gamma))\big).\] \end{proof} Assuming $\Gamma$ acts on $X$ freely, we can form the associated $S$-equivariant Dixmier-Douady bundle over $X/\Gamma$ \[ \ensuremath{\mathcal{A}}=X\times_\Gamma \ensuremath{\mathbb{K}},\] and $\Gamma \ltimes_\tau C_0(X) \simeq C_0(\ensuremath{\mathcal{A}})$ as $S$-$C^\ast$ algebras. \subsection{An example: a Dixmier-Douady bundle $\ensuremath{\mathcal{A}}_T$ over $T$.}\label{sec:MoritaMorphism} Let $LG^\tau$ denote a $U(1)$ central extension of the loop group, corresponding to $\ell$ times the basic central extension $LG^\ensuremath{\tn{bas}}$, for an integer $\ell>0$. Let $T\ltimes \Pi^\tau$ denote the corresponding $U(1)$ central extension of the subgroup $T \times \Pi$ (see Section \ref{sec:loopgroup}). Carrying out the construction of the previous section with $S=T$, $\Gamma^\tau=\Pi^\tau$, $X=\ensuremath{\mathfrak{t}}$ we obtain a Dixmier-Douady bundle over $T=\ensuremath{\mathfrak{t}}/\Pi$: \begin{definition} Let $\ensuremath{\mathcal{A}}_T$ be the $T$-equivariant associated bundle \begin{equation} \label{eqn:defAT} \ensuremath{\mathcal{A}}_T=\ensuremath{\mathfrak{t}} \times_{\Pi} \ensuremath{\mathbb{K}}\big(L^2_{\tau}(\Pi)\big) \rightarrow \ensuremath{\mathfrak{t}}/\Pi=T. \end{equation} $\ensuremath{\mathcal{A}}_T$ is a $T$-equivariant Dixmier-Douady bundle over $T$. \end{definition} Recall the $G$-equivariant Dixmier-Douady bundle $\ensuremath{\mathcal{A}}$ described in Section \ref{sec:DDoverG}: \[ \ensuremath{\mathcal{A}}=PG \times_{LG} \ensuremath{\mathbb{K}}(V^\ast) \rightarrow G \] where $V$ is a level $\ell$ positive energy representation. The map \[ \ensuremath{\mathfrak{t}} \hookrightarrow PG, \qquad \xi \mapsto \gamma_\xi \] where \[ \gamma_\xi(s)=\exp(s\xi), \qquad s \in \ensuremath{\mathbb{R}} \] embeds $\ensuremath{\mathfrak{t}}$ into $PG$, $\Pi$-equivariantly. Restricting to $\ensuremath{\mathfrak{t}} \subset PG$ in \eqref{eqn:defA}, we obtain a Dixmier-Douady bundle \begin{equation} \label{eqn:defA2} \ensuremath{\mathcal{A}}|_T=\ensuremath{\mathfrak{t}} \times_\Pi \ensuremath{\mathbb{K}}(V^\ast), \end{equation} over the maximal torus. The central circle in $T\ltimes \Pi^\tau$ acts on both $L^2_{\tau}(\Pi)$, $V^\ast$ with weight $-1$ (recall that for $L^2_\tau(\Pi)$ we use the \emph{right} regular representation $\rho$ in Definition \ref{def:TwistedReg}), hence it acts on $V$ with weight $+1$, and the diagonal action of $T \ltimes \Pi^\tau$ on the tensor product \[ L^2_{\tau}(\Pi)\otimes V \] descends to an action of $T\times \Pi$.
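Explicitly (a one-line check): if $z \in U(1)$ denotes a central element and $f \otimes v \in L^2_{\tau}(\Pi)\otimes V$, then \[ z\cdot (f\otimes v)=(z^{-1}f)\otimes (zv)=f\otimes v,\] so the central circle acts trivially on the tensor product, as claimed.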
Define \begin{equation} \label{eqn:MoritaMorphismT} \ensuremath{\mathcal{E}}=\ensuremath{\mathfrak{t}} \times_\Pi \big(L^2_{\tau}(\Pi)\otimes V \big), \end{equation} a bundle of Hilbert spaces over $T$. By \eqref{eqn:defAT} and \eqref{eqn:defA2}, $\ensuremath{\mathcal{E}}$ defines a $T$-equivariant Morita morphism $\ensuremath{\mathcal{A}}|_T \dashrightarrow \ensuremath{\mathcal{A}}_T$. \subsection{A Green-Julg isomorphism.}\label{sec:GreenJulg} For a compact group $K$, the Green-Julg theorem states that the $K$-equivariant K-theory of a $K$-$C^\ast$ algebra $A$ is isomorphic to the K-theory of the crossed-product algebra $K\ltimes A$. There is a `dual' version of the Green-Julg theorem (cf. \cite[Theorem 20.2.7(b)]{Blackadar}) which applies to discrete groups instead of compact groups and K-homology instead of K-theory. \begin{proposition} \label{prop:GreenJulg} Let $\Gamma$ be a discrete group, and let $A$ be a $\Gamma$-$C^\ast$ algebra. Then \[ \ensuremath{\textnormal{KK}}_\Gamma(A,\ensuremath{\mathbb{C}}) \simeq \ensuremath{\textnormal{KK}}(\Gamma \ltimes A,\ensuremath{\mathbb{C}}).\] More generally, suppose a locally compact group $S$ acts on $\Gamma$ preserving Haar measure. If $A$ is an $S\ltimes \Gamma$-$C^\ast$ algebra then \[ \ensuremath{\textnormal{KK}}_{S\ltimes \Gamma}(A,\ensuremath{\mathbb{C}}) \simeq \ensuremath{\textnormal{KK}}_S(\Gamma\ltimes A,\ensuremath{\mathbb{C}}),\] where $\Gamma \ltimes A$ is equipped with the $S$-action in equation \eqref{eqn:HAction}. \end{proposition} The isomorphism is simple to describe at the level of cycles. Let $(H,\pi,F)$ be a cycle representing a class in $\ensuremath{\textnormal{KK}}(\Gamma \ltimes A,\ensuremath{\mathbb{C}})$. We may assume $\pi$ is non-degenerate. The universal property of the crossed product $\Gamma \ltimes A$ guarantees $\pi$ comes from a covariant pair $(\pi_A,\pi_\Gamma)$. For the triple $(H,\pi_A,F)$ to represent a class in $\ensuremath{\textnormal{KK}}_\Gamma(A,\ensuremath{\mathbb{C}})$, one needs the operators \begin{equation} \label{eqn:needcompact} \pi_A(a)(1-F^2), \quad [F,\pi_A(a)], \quad \pi_A(a)(\ensuremath{\textnormal{Ad}}_{\pi_\Gamma(\gamma)}F-F) \end{equation} to be compact, for all $a \in A$, $\gamma \in \Gamma$. The assumption that $\Gamma$ is discrete means that $A$ is a \emph{sub-algebra} of $\Gamma \ltimes A$, so $\pi_A$ is simply the restriction of $\pi$ to $A$, and the first two operators in \eqref{eqn:needcompact} are compact. For the last operator, note that \begin{equation} \label{eqn:IdentityForCommutator} \pi_A(a)(\ensuremath{\textnormal{Ad}}_{\pi_\Gamma(\gamma)}F-F)=[F,\pi(a)]-[F,\pi(a)\pi_\Gamma(\gamma)]\pi_\Gamma(\gamma)^{-1}. \end{equation} The operator $\pi(a)\pi_\Gamma(\gamma) \in \pi(\Gamma \ltimes A)$, hence the compactness of both terms follows because $(H,\pi,F)$ is a cycle. The inverse map is similar: a triple $(H,\pi_A,F)$ representing a class in $\ensuremath{\textnormal{KK}}_\Gamma(A,\ensuremath{\mathbb{C}})$ is sent to the triple $(H,\pi,F)$, where $\pi\colon \Gamma \ltimes A \rightarrow \ensuremath{\mathbb{B}}(H)$ is the representation induced by the covariant pair $(\pi_A,\pi_\Gamma)$. The crossed product $\pi(\Gamma \ltimes A)$ contains a dense sub-algebra consisting of finite linear combinations of operators of the form $\pi(a)\pi_\Gamma(\gamma)=\pi_A(a)\pi_\Gamma(\gamma)$. 
The operator $\pi_A(a)\pi_\Gamma(\gamma)(F^2-1)=\pi_\Gamma(\gamma)\pi_A(\gamma^{-1} \cdot a)(F^2-1)$ is compact, while the commutator $[F,\pi_A(a)\pi_\Gamma(\gamma)]$ is compact using \eqref{eqn:IdentityForCommutator} (multiply both sides by $\pi_\Gamma(\gamma)$ on the right). The maps are well-defined on homotopy classes because one can apply the same maps to cycles for the pair $(A,C([0,1]))$ (resp. $(\Gamma \ltimes A,C([0,1]))$). \begin{definition} Let $N$ be a locally compact group with $U(1)$ central extension $N^\tau$. Let $A$, $B$ be $N$-$C^\ast$ algebras, regarded as $N^\tau$-$C^\ast$ algebras with the central $U(1)$ acting trivially. For $n \in \ensuremath{\mathbb{Z}}$ define \[ \ensuremath{\textnormal{KK}}_{N^\tau}(A,B)_{(n)} \] to be the direct summand of $\ensuremath{\textnormal{KK}}_{N^\tau}(A,B)$ generated by cycles where the central circle of $N^\tau$ acts with weight $n$. \end{definition} \begin{proposition} \label{prop:modGreenJulg} Consider a semi-direct product $N=S\ltimes \Gamma$, where $S$, $\Gamma$ are locally compact groups and $\Gamma$ is discrete. Let $\Gamma^\tau$ be a $U(1)$-central extension, and assume the action of $S$ on $\Gamma$ lifts to an action on $\Gamma^\tau$. Let $A$ be an $S \ltimes \Gamma$-$C^\ast$ algebra. The twisted crossed product $\Gamma \ltimes_\tau A$ is an $S$-$C^\ast$ algebra with action given by \eqref{eqn:HAction}, and \[ \ensuremath{\textnormal{KK}}_{S\ltimes \Gamma^\tau}(A,\ensuremath{\mathbb{C}})_{(1)} \simeq \ensuremath{\textnormal{KK}}_S(\Gamma \ltimes_\tau A,\ensuremath{\mathbb{C}}).\] \end{proposition} \begin{proof} Let $(H,\pi,F,\pi_S)$ represent a class in $\ensuremath{\textnormal{KK}}_S(\Gamma \ltimes_\tau A,\ensuremath{\mathbb{C}})$. We may assume $\pi$ is non-degenerate. The universal property of $\Gamma \ltimes_\tau A$ implies that there is a covariant pair $(\pi_A,\pi_{\Gamma^\tau})$. Define $\pi_{S\ltimes \Gamma^\tau}(s,\ensuremath{\widehat{\gamma}})=\pi_S(s)\pi_{\Gamma^\tau}(\ensuremath{\widehat{\gamma}})$. At the level of cycles, the map sends $(H,\pi,F,\pi_S)$ to $(H,\pi_A,F,\pi_{S\ltimes \Gamma^\tau})$. We first check that $\pi_{S\ltimes \Gamma^\tau}$ is indeed a representation of $S\ltimes \Gamma^\tau$. The action of $S$ on $\Gamma \ltimes_\tau A$ extends to an action on the multiplier algebra $M(\Gamma \ltimes_\tau A)$. By non-degeneracy the representation $\pi$ of $\Gamma \ltimes_\tau A$ extends to $M(\Gamma \ltimes_\tau A)$, and one obtains a covariant pair extending $(\pi,\pi_S)$. For $\ensuremath{\widehat{\gamma}} \in \Gamma^\tau$, the function \[ u_{\ensuremath{\widehat{\gamma}}}(\ensuremath{\widehat{\gamma}}^\prime)=\begin{cases} z &\text{ if } \ensuremath{\widehat{\gamma}}^\prime=z^{-1}\ensuremath{\widehat{\gamma}}, \quad z \in U(1) \\ 0 &\text{ else}\end{cases}\] lies in $M(\Gamma \ltimes_\tau A)$ and satisfies $\pi(u_{\ensuremath{\widehat{\gamma}}})=\pi_\Gamma^\tau(\ensuremath{\widehat{\gamma}})$, $u_{s\ensuremath{\widehat{\gamma}} s^{-1}}(\ensuremath{\widehat{\gamma}}^\prime)=u_{\ensuremath{\widehat{\gamma}}}(s^{-1}\ensuremath{\widehat{\gamma}}^\prime s)$.
By \eqref{eqn:HAction}, \begin{equation} \label{eqn:CommutatorGammaS} \pi_S(s)\pi_\Gamma^\tau(\wh{\gamma})\pi_S(s)^{-1}=\pi_S(s)\pi(u_{\ensuremath{\widehat{\gamma}}})\pi_S(s)^{-1}=\pi(s\cdot u_{\ensuremath{\widehat{\gamma}}})=\pi(u_{s\ensuremath{\widehat{\gamma}} s^{-1}})=\pi_\Gamma^\tau(s\wh{\gamma}s^{-1}). \end{equation} Equation \eqref{eqn:CommutatorGammaS} implies that $\pi_{S\ltimes \Gamma^\tau}$ is a representation of $S\ltimes \Gamma^\tau$. The algebra $A$ can be regarded as a \emph{sub-algebra} of $\Gamma \ltimes_\tau A$, via the embedding $a \mapsto \ti{a}$, where \[ \ti{a}(\wh{\gamma})=\begin{cases} za &\text{ if } \wh{\gamma}=z^{-1}1_{\Gamma^\tau}, \quad z \in U(1) \\ 0 & \text{ else} \end{cases}\] and $\pi_A(a)=\pi(\ti{a})$. The argument that $(H,\pi_A,F)$ represents a class in $\ensuremath{\textnormal{KK}}_{S\ltimes \Gamma^\tau}(A,\ensuremath{\mathbb{C}})$ is then similar to Proposition \ref{prop:GreenJulg}. For example, \eqref{eqn:IdentityForCommutator} now reads \begin{align*} \pi_A(a)(\ensuremath{\textnormal{Ad}}_{\pi_{S\ltimes\Gamma^\tau}(s,\wh{\gamma})}F-F)=[F,\pi(\ti{a})]&+\pi(\ti{a})\pi_\Gamma^\tau(\wh{\gamma})(\ensuremath{\textnormal{Ad}}_{\pi_S(s)}F-F)\pi_\Gamma^\tau(\wh{\gamma})^{-1}\\ &-[F,\pi(\ti{a})\pi_\Gamma^\tau(\wh{\gamma})]\pi_\Gamma^\tau(\wh{\gamma})^{-1} \end{align*} (we have used \eqref{eqn:CommutatorGammaS}). Note $\pi(\ti{a})\pi_\Gamma^\tau(\wh{\gamma}) \in \pi(\Gamma \ltimes_\tau A)$, hence compactness of all three terms follows because $(H,\pi,F)$ is a cycle. In the reverse direction, let $(H,\pi_A,F,\pi_{S\ltimes \Gamma^\tau})$ represent a class in $\ensuremath{\textnormal{KK}}_{S\ltimes \Gamma^\tau}(A,\ensuremath{\mathbb{C}})_{(1)}$, and let $\pi_\Gamma^\tau$ (resp. $\pi_S$) be the restriction of $\pi_{S\ltimes \Gamma^\tau}$ to $\Gamma^\tau$ (resp. $S$). The representations $(\pi_A,\pi_\Gamma^\tau)$ form a covariant pair as in \eqref{eqn:covpair}, and the map sends $(H,\pi_A,F,\pi_{S\ltimes \Gamma^\tau})$ to $(H,\pi,F,\pi_S)$ where $\pi$ is the representation of $\Gamma \ltimes_\tau A$ guaranteed by the universal property. One checks, as before, that the result is a cycle. The maps are well-defined on homotopy classes because one may apply the same maps to cycles for $(A,C([0,1]))$ (resp. $(\Gamma \ltimes_\tau A,C([0,1]))$). \end{proof} \subsection{The analytic assembly map.}\label{sec:Assembly} Let $X$ be a locally compact space with a proper action of a locally compact group $N$. If the action of $N$ is \emph{cocompact}, i.e.
$X/N$ is compact, then there is a map \[ \mu_N\colon \ensuremath{\textnormal{K}}_0^N(X)=\ensuremath{\textnormal{KK}}_N(C_0(X),\ensuremath{\mathbb{C}}) \rightarrow \ensuremath{\textnormal{KK}}(\ensuremath{\mathbb{C}},C^\ast(N))=\ensuremath{\textnormal{K}}_0(C^\ast(N)),\] known as the \emph{analytic assembly map}. If $N$ is compact, the analytic assembly map is just the equivariant index: \[ \mu_N([(H,\rho,F)])=[\ensuremath{\textnormal{ker}}(F^+)]-[\ensuremath{\textnormal{ker}}(F^-)] \in \ensuremath{\textnormal{K}}_0(C^\ast(N))\simeq R(N).\] For non-compact $N$, the definition of the assembly map is more involved. We give a brief description here and refer the reader to e.g. \cite{BaumConnesHigson}, \cite[Section 2]{MislinValette}, \cite[Section 4.2]{EchterhoffBaumConnesReview2017}, \cite{KasparovTransversallyElliptic} for details. Let $(H,\rho,F)$ be a cycle representing a class $[F] \in \ensuremath{\textnormal{KK}}_N(C_0(X),\ensuremath{\mathbb{C}})$. Assume the operator $F$ is \emph{properly supported}, in the sense that for any $f \in C_c(X)$ one can find an $h \in C_c(X)$ such that $\rho(h)F\rho(f)=F\rho(f)$ (this can always be achieved by perturbing $F$, cf. \cite[Section 3]{BaumConnesHigson}). To define $\mu_N$, the first step is to define a $C_c(N)$-valued inner product $(-,-)_N$ on the subspace $\rho(C_c(X))H \subset H$, by \[ (f_1,f_2)_N(n)=(f_1,n\cdot f_2)_{H}.\] Complete $\rho(C_c(X))H$ in the norm $\|f\|_N=\|(f,f)_N\|^{1/2}_{C^\ast(N)}$, where $\|-\|_{C^\ast(N)}$ denotes the norm of the $C^\ast$ algebra $C^\ast(N)$, to obtain a Hilbert $C^\ast(N)$-module $\ensuremath{\mathcal{H}}$. Then $F$ acts on $\rho(C_c(X))H$ (here we use that $F$ is properly supported) and extends to an adjointable operator $\ensuremath{\mathcal{F}}$ on $\ensuremath{\mathcal{H}}$. The pair $(\ensuremath{\mathcal{H}},\ensuremath{\mathcal{F}})$ represents a class in $\ensuremath{\textnormal{K}}_0(C^\ast(N))$, and \[ \mu_N([F])=[(\ensuremath{\mathcal{H}},\ensuremath{\mathcal{F}})] \in \ensuremath{\textnormal{K}}_0(C^\ast(N)).\] Since $\ensuremath{\mathcal{F}}$ commutes with the $C^\ast(N)$ action, $\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^{\pm})$ are $C^\ast(N)$-modules, but unfortunately in general they need not be finitely generated and projective, so that `$[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^+)]-[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^-)]$' is not a K-theory class. If the range of $\ensuremath{\mathcal{F}}$ is closed and $\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^{\pm})$ are finitely generated and projective, then indeed $\mu_N([F])=[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^+)]-[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^-)]$ (cf. \cite[Proposition 3.27]{HigsonPrimer}); more generally it is necessary to perturb $\ensuremath{\mathcal{F}}$ to obtain such a description. There is another description of the analytic assembly map due to Kasparov that we briefly recall; see for example \cite[Section 4.2]{EchterhoffBaumConnesReview2017} for a recent review, and \cite[Section 2.4]{MislinValette} for a discussion of the relation between the two descriptions of $\mu_N$ (at least for $N$ discrete).
As the action of $N$ on $X$ is cocompact, one can find a continuous compactly supported `cut-off function' $c \colon X \rightarrow [0,\infty)$ such that for all $x \in X$, \[ \int_N c(n^{-1}.x)^2\,dn=1.\] Define $p_c \colon N \times X \rightarrow [0,\infty)$ by \[ p_c(n,x)=\mu(n)^{-1/2}c(n^{-1}.x)c(x),\] where $\mu$ is the modular homomorphism of $N$. The function $p_c$ defines a self-adjoint projection in $N\ltimes C_0(X)$, and hence an element $[c] \in \ensuremath{\textnormal{KK}}(\ensuremath{\mathbb{C}},N\ltimes C_0(X))$. Kasparov's definition of the assembly map is as a Kasparov product \[ \mu_N([F])=[c]\otimes_{N\ltimes C_0(X)} j_N([F]),\] where $j_N \colon \ensuremath{\textnormal{KK}}_N(C_0(X),\ensuremath{\mathbb{C}}) \rightarrow \ensuremath{\textnormal{KK}}(N\ltimes C_0(X),C^\ast(N))$ is the descent homomorphism. \section{The group $T\ltimes \Pi^\ensuremath{\tn{bas}}$.}\label{sec:SemiDirect} In this section we collect results about the group $T\ltimes \Pi^\ensuremath{\tn{bas}}$ and the K-theory of its group $C^\ast$ algebra. For another discussion of the K-theory of $C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})$ see \cite{Takata1}. Throughout $G$ is assumed to be connected, simply connected, simple. Let $T \subset G$ be a fixed maximal torus. We identify $\ensuremath{\mathfrak{t}}$ with $\ensuremath{\mathfrak{t}}^\ast$ using the basic inner product; in particular $\Pi$ is identified with a sublattice of $\Pi^\ast$. \subsection{The group $\Pi^\ensuremath{\tn{bas}}$.} Let $\Pi^\ensuremath{\tn{bas}}$ denote the restriction to $\Pi \subset LG$ of the basic central extension $LG^{\ensuremath{\tn{bas}}}$ of the loop group. We give an explicit 2-cocycle $\sigma$ for $\Pi^\ensuremath{\tn{bas}}$. Recall that the cocycle of a $U(1)$ central extension associated to a splitting $\eta \in \Pi \mapsto \ensuremath{\widehat{\eta}} \in \Pi^\ensuremath{\tn{bas}}$ is the function $\sigma \colon \Pi \times \Pi \rightarrow U(1)$ defined by the equation \[ \ensuremath{\widehat{\eta}}_1 \ensuremath{\widehat{\eta}}_2=\sigma(\eta_1,\eta_2)\wh{\eta_1+\eta_2}.\] (We write the group operation in $\Pi$ additively, and that in $\Pi^\ensuremath{\tn{bas}}$ multiplicatively.) Let $\beta_1,...,\beta_r \in \Pi$ be a lattice basis for $\Pi$. It is known \cite[Proposition 4.8.1]{PressleySegal}, \cite[Theorem 3.2.1]{LaredoPosEnergy} that one can choose lifts $\wh{\beta}_1,...,\wh{\beta}_r \in \Pi^\ensuremath{\tn{bas}}$ such that \begin{equation} \label{eqn:CommutatorLambda} \wh{\beta}_i \wh{\beta}_j \wh{\beta}_i^{-1}\wh{\beta}_j^{-1}=(-1)^{B(\beta_i,\beta_j)}, \end{equation} where $B$ is the basic inner product. For $\eta=\sum n_i \beta_i \in \Pi$ let \begin{equation} \label{eqn:DefOfLift} \ensuremath{\widehat{\eta}}=\wh{\beta}_1^{n_1} \cdots \wh{\beta}_r^{n_r}. \end{equation} Define a bilinear map \[ \epsilon \colon \Pi \times \Pi \rightarrow \ensuremath{\mathbb{Z}} \] by \[ \epsilon(\beta_i,\beta_j)=\begin{cases} B(\beta_i,\beta_j) &\text{ if } i>j\\ 0 &\text{ if } i \le j \end{cases}\] and extend bilinearly. \begin{proposition} The cocycle associated to the splitting \eqref{eqn:DefOfLift} is \[ \sigma(\eta_1,\eta_2)=(-1)^{\epsilon(\eta_1,\eta_2)}, \qquad \eta_1,\eta_2 \in \Pi.\] \end{proposition} \begin{remark} The function $(-1)^{\epsilon}$ is the `off-diagonal' part of what Kac \cite[Section 7.8]{KacBook} calls an \emph{asymmetry function}.
\end{remark} \begin{proof} If $i>j$ then using \eqref{eqn:CommutatorLambda} we have \[ \wh{\beta}_i \wh{\beta}_j=(-1)^{B(\beta_i,\beta_j)}\wh{\beta}_j\wh{\beta}_i=(-1)^{B(\beta_i,\beta_j)}\wh{\beta_i+\beta_j},\] while if $i\le j$ then \[ \wh{\beta}_i\wh{\beta}_j=\wh{\beta_i+\beta_j}.\] This verifies \[ \sigma(\beta_i,\beta_j)=(-1)^{\epsilon(\beta_i,\beta_j)}\] for $i,j=1,...,r$. On the other hand, using the definition of the lift \eqref{eqn:DefOfLift} and the commutation relation \eqref{eqn:CommutatorLambda}, one sees that $\sigma$ is bimultiplicative: \[ \sigma(\eta_1+\eta_2,\eta)=\sigma(\eta_1,\eta)\sigma(\eta_2,\eta), \qquad \sigma(\eta,\eta_1+\eta_2)=\sigma(\eta,\eta_1)\sigma(\eta,\eta_2).\] Since $(-1)^{\epsilon}$ is bimultiplicative as well, by bilinearity of $\epsilon$, the two cocycles agree on all of $\Pi \times \Pi$. \end{proof} \subsection{The group $T\ltimes \Pi^\ensuremath{\tn{bas}}$.} Define a group homomorphism \begin{equation} \label{eqn:defkappa} \kappa \colon \Pi \rightarrow \ensuremath{\textnormal{Hom}}(T,U(1))=\Pi^\ast, \qquad \kappa_\eta(t)=t^{-B^\flat(\eta)}. \end{equation} It is known (cf. \cite[Section 2.2]{FHTII}, \cite{PressleySegal}) that in the subgroup $T\ltimes \Pi^\ensuremath{\tn{bas}} \subset LG^\ensuremath{\tn{bas}}$, elements $t \in T$ and $\ensuremath{\widehat{\eta}} \in \Pi^\ensuremath{\tn{bas}}$ with image $\eta \in \Pi$ satisfy the commutation relation \[ \ensuremath{\widehat{\eta}} t \ensuremath{\widehat{\eta}}^{-1} t^{-1}=\kappa_\eta(t).\] Moreover the data $(\sigma,\kappa)$ determine the group $T\ltimes \Pi^\ensuremath{\tn{bas}}$ (up to isomorphism). Let $T\ltimes \Pi^\ensuremath{\textnormal{triv}}$ denote the analogous group defined by the data $(1,\kappa)$, i.e. $\Pi^\ensuremath{\textnormal{triv}}=\Pi \times U(1)$ is the trivial central extension, and the commutator map for $T$, $\Pi^\ensuremath{\textnormal{triv}}$ is the same $\kappa$ defined in \eqref{eqn:defkappa}. In detail, if we use the section $\Pi \rightarrow \Pi^\ensuremath{\tn{bas}}$ defined in \eqref{eqn:DefOfLift} to view $\Pi^\ensuremath{\tn{bas}}$ (topologically) as a product $\Pi \times U(1)$, then the group multiplication in $T\ltimes \Pi^\ensuremath{\tn{bas}}$ is \begin{equation} \label{eqn:MultNonTriv} (t_1,\eta_1,z_1)(t_2,\eta_2,z_2)=(t_1t_2,\eta_1+\eta_2,\kappa_{\eta_1}(t_2)\sigma(\eta_1,\eta_2)z_1z_2) \end{equation} while in $T\ltimes \Pi^\ensuremath{\textnormal{triv}}$ the group multiplication is \begin{equation} \label{eqn:MultTriv} (t_1,\eta_1,z_1)(t_2,\eta_2,z_2)=(t_1t_2,\eta_1+\eta_2,\kappa_{\eta_1}(t_2)z_1z_2). \end{equation} As we saw above, in general $\Pi^\ensuremath{\tn{bas}}$ is not isomorphic to $\Pi^\ensuremath{\textnormal{triv}}$ ($\Pi^\ensuremath{\tn{bas}}$ need not be abelian). Perhaps surprisingly, the distinction between $\Pi^\ensuremath{\tn{bas}}$, $\Pi^\ensuremath{\textnormal{triv}}$ disappears after taking semi-direct product with $T$. \begin{proposition} \label{prop:NonCanonicalIso} The groups $T\ltimes \Pi^\ensuremath{\tn{bas}}$, $T\ltimes \Pi^\ensuremath{\textnormal{triv}}$ are (non-canonically) isomorphic.
\end{proposition} \begin{proof} We will show that the additional sign $\sigma(\eta_1,\eta_2)$ can be absorbed into the phase $\kappa_{\eta_1}(t_2)$, by choosing an appropriate identification $T\ltimes \Pi^\ensuremath{\tn{bas}} \rightarrow T\ltimes \Pi^\ensuremath{\textnormal{triv}}$. For $\eta \in \Pi$, define \[ \eta_\epsilon=\tfrac{1}{2}B^\sharp(\epsilon(-,\eta)) \in \ensuremath{\mathfrak{t}},\] where one views the contraction $\epsilon(-,\eta)$ as an element of $\ensuremath{\mathfrak{t}}^\ast$, and then uses $B^\sharp$ to convert this to an element of $\ensuremath{\mathfrak{t}}$. The image of the map $\eta \in \Pi \mapsto \eta_\epsilon \in \ensuremath{\mathfrak{t}}$ is contained in $\tfrac{1}{2}B^\sharp(\Pi^\ast)$. By construction \begin{equation} \label{eqn:AbsorbingProperty} \exp(\eta_\epsilon)^{B^\flat(\mu)}=e^{\pi \i \epsilon(\mu,\eta)}=\sigma(\mu,\eta), \qquad \eta, \mu \in \Pi. \end{equation} Define \[ \Psi \colon T\ltimes \Pi^\ensuremath{\tn{bas}} \rightarrow T\ltimes \Pi^\ensuremath{\textnormal{triv}}, \qquad \Psi(t,\eta,z)=(t\exp(\eta_\epsilon),\eta,z). \] A short calculation using \eqref{eqn:MultNonTriv}, \eqref{eqn:MultTriv} and \eqref{eqn:AbsorbingProperty} shows that $\Psi$ is a group homomorphism; the key points are that $\eta \mapsto \eta_\epsilon$ is additive, and that the factor $\kappa_{\eta_1}(\exp((\eta_2)_\epsilon))=\sigma(\eta_1,\eta_2)^{-1}=\sigma(\eta_1,\eta_2)$ produced on the $T\ltimes \Pi^\ensuremath{\textnormal{triv}}$ side matches the cocycle factor in \eqref{eqn:MultNonTriv}. \end{proof} \subsection{The $C^\ast$ algebra of $T\ltimes \Pi^\ensuremath{\tn{bas}}$.} Using Proposition \ref{prop:NonCanonicalIso}, $T\ltimes \Pi^\ensuremath{\tn{bas}} \simeq T\ltimes \Pi^\ensuremath{\textnormal{triv}}$. There is an obvious isomorphism $(t,\eta,z) \in T\ltimes \Pi^{\ensuremath{\textnormal{triv}}}\mapsto (t,z,\eta) \in T^\ensuremath{\textnormal{triv}} \rtimes \Pi$, where $T^\ensuremath{\textnormal{triv}}=T \times U(1)$ is the trivial central extension. If $G_1\ltimes G_2$ is a semi-direct product of locally compact groups, then there is an isomorphism \[ C^\ast(G_1\ltimes G_2) \simeq G_1 \ltimes C^\ast(G_2) \] induced by the natural map $C_c(G_1\times G_2) \rightarrow C_c(G_1,C_c(G_2))$, cf. \cite{WilliamsCrossedProducts}. Thus \[ C^\ast(T^\ensuremath{\textnormal{triv}} \rtimes \Pi)\simeq C^\ast(T^\ensuremath{\textnormal{triv}}) \rtimes \Pi.\] The group $T^\ensuremath{\textnormal{triv}}=T\times U(1)$ is abelian, hence $C^\ast(T^\ensuremath{\textnormal{triv}})$ is isomorphic to $C_0(\Pi^\ast \times \ensuremath{\mathbb{Z}})$ (the Pontryagin dual). Thus \[ C^\ast(T^\ensuremath{\textnormal{triv}}) \rtimes \Pi \simeq C_0(\Pi^\ast \times \ensuremath{\mathbb{Z}})\rtimes \Pi.\] If $\xi \in \Pi^\ast$ and $\ell \in \ensuremath{\mathbb{Z}}$, the isomorphism $C_0(\Pi^\ast \times \ensuremath{\mathbb{Z}}) \rightarrow C^\ast(T^\ensuremath{\textnormal{triv}})$ sends $\delta_{(\xi,\ell)}\in C_0(\Pi^\ast \times \ensuremath{\mathbb{Z}})$ to its Fourier transform \[ e_{\xi,\ell} \in C^\ast(T^\ensuremath{\textnormal{triv}}), \qquad e_{\xi,\ell}(t,z)=t^\xi z^\ell. \] Using the commutation relation in $T\ltimes \Pi^{\ensuremath{\textnormal{triv}}}\simeq T^{\ensuremath{\textnormal{triv}}}\rtimes \Pi$, the action of $\eta \in \Pi$ on $e_{\xi,\ell}$ is \[ (\eta \cdot e_{\xi,\ell})(t,z)=e_{\xi,\ell}(t,t^{\eta}z)=t^{\xi+\ell\eta}z^\ell.\] This corresponds to the action of $\Pi$ on the Pontryagin dual $\Pi^\ast \times \ensuremath{\mathbb{Z}}$ by \begin{equation} \label{eqn:LevelNAction} \eta \cdot (\xi,\ell)=(\xi+\ell\eta,\ell). \end{equation} We see that \[ C_0(\Pi^\ast \times \ensuremath{\mathbb{Z}}) \rtimes \Pi=\bigoplus_{\ell \in \ensuremath{\mathbb{Z}}} C_0(\Pi^\ast) \rtimes_\ell \Pi \] where $C_0(\Pi^\ast) \rtimes_\ell \Pi$ denotes the crossed product formed using the `level $\ell$' action \eqref{eqn:LevelNAction}.
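The following remark illustrates the `level $\ell$' action in the simplest case; it is included only for orientation, and assumes the standard normalization of the basic inner product (for which the coroot of $SU(2)$ satisfies $B(\alpha^\vee,\alpha^\vee)=2$). \begin{remark} Let $G=SU(2)$, with coroot $\alpha^\vee$ and fundamental weight $\varpi$, so that $\Pi=\ensuremath{\mathbb{Z}}\alpha^\vee$ and $\Pi^\ast=\ensuremath{\mathbb{Z}}\varpi$. Since $B^\flat(\alpha^\vee)=\alpha=2\varpi$, the identification of $\Pi$ with a sublattice of $\Pi^\ast$ sends $\Pi$ onto $2\ensuremath{\mathbb{Z}}\varpi$. Writing $\xi=m\varpi$ and $\eta=n\alpha^\vee$, the `level $\ell$' action \eqref{eqn:LevelNAction} becomes $n \cdot (m,\ell)=(m+2\ell n,\ell)$, so for $\ell>0$ the orbits of $\Pi$ on $\Pi^\ast$ are the $2\ell$ cosets of $2\ell\ensuremath{\mathbb{Z}}\varpi$. \end{remark}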
The sub-algebra $C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})_{(\ell)}$ corresponds to the $\ell^{th}$ summand. For $\ell=0$ the action of $\Pi$ on $\Pi^\ast$ is trivial, hence \[ C_0(\Pi^\ast)\rtimes_0 \Pi \simeq C_0(\Pi^\ast)\otimes C^\ast(\Pi) \simeq C_0(\Pi^\ast \times T^\vee) \] where $T^\vee=\ensuremath{\mathfrak{t}}^\ast/\Pi^\ast$ is the Pontryagin dual of $\Pi$. For $\ell \ne 0$, the algebra $C_0(\Pi^\ast)\rtimes_\ell\Pi$ is isomorphic to a direct sum of finitely many copies of the compact operators on $L^2(\Pi)$, indexed by the finite quotient $\Pi^\ast/\ell\Pi$. One can deduce this from the Takai duality theorem, but it is also not difficult to argue directly as follows. One has a faithful Schr{\"o}dinger-type representation of $C_0(\Pi^\ast)\rtimes_\ell\Pi$ on $L^2(\Pi^\ast)$, where $C_0(\Pi^\ast)$ acts by multiplication operators, and $\Pi$ acts by translations as in \eqref{eqn:LevelNAction}. The decomposition of $\Pi^\ast$ into cosets $[\xi]=\xi+\ell\Pi$ for the $\Pi$ action gives a direct sum decomposition \begin{equation} \label{eqn:DirectSumLambda} L^2(\Pi^\ast)=\bigoplus_{[\xi] \in \Pi^\ast/\ell\Pi} L^2([\xi]), \end{equation} and the action of $C_0(\Pi^\ast)\rtimes_\ell\Pi$ preserves this decomposition. For $\eta \in \Pi$ and $\mu \in \Pi^\ast$ let $\theta_{\eta,\mu} \in C_c(\Pi,C_c(\Pi^\ast))=C_c(\Pi \times \Pi^\ast)$ be the delta function at $(\eta,\mu) \in \Pi \times \Pi^\ast$. As an element of $C_0(\Pi^\ast)\rtimes_\ell \Pi$, $\theta_{\eta,\mu}$ acts on $L^2(\Pi^\ast)$ by the rank $1$ linear transformation mapping $\delta_{\mu}$ to $\delta_{\mu+\ell\eta}$. Such rank $1$ operators generate the algebra of all compact operators on $L^2(\Pi^\ast)$ that preserve the direct sum decomposition \eqref{eqn:DirectSumLambda}, and thus \[ C_0(\Pi^\ast)\rtimes_\ell \Pi \simeq \bigoplus_{[\xi] \in \Pi^\ast/\ell\Pi} \ensuremath{\mathbb{K}}(L^2([\xi])).\] We summarize these observations with a proposition. \begin{proposition} \label{prop:AlgTPi} The group $C^\ast$ algebra $C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})$ is an infinite direct sum of its homogeneous ideals $C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})_{(\ell)}$, $\ell \in \ensuremath{\mathbb{Z}}$. A choice of group isomorphism $T\ltimes \Pi^\ensuremath{\tn{bas}} \xrightarrow{\sim}T\ltimes \Pi^\ensuremath{\textnormal{triv}}$ determines isomorphisms of $C^\ast$ algebras \[ C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})_{(0)}\xrightarrow{\sim} C_0(\Pi^\ast \times T^\vee),\] and for $\ell \ne 0$ \[ C^\ast(T\ltimes \Pi^\ensuremath{\tn{bas}})_{(\ell)}\xrightarrow{\sim} \bigoplus_{[\xi] \in \Pi^\ast/\ell\Pi} \ensuremath{\mathbb{K}}(L^2([\xi])),\] where $[\xi]=\xi+\ell\Pi \subset \Pi^\ast$ is a coset for the `level $\ell$' action of $\Pi$ on $\Pi^\ast$. \end{proposition} \subsection{The map $\ensuremath{\textnormal{K}}_0(C^\ast_\tau(T\times \Pi)) \rightarrow R^{-\infty}(T)^{\ell \Pi}$.}\label{sec:KThyTPi} Let $\tau$ be an integer multiple $0\ne \ell \in \ensuremath{\mathbb{Z}}$ of the basic central extension of $LG$, and let $T\ltimes \Pi^\tau$ denote the restriction of $LG^\tau$ to the subgroup $T\times \Pi \subset LG$. For $\ell=1$ this is precisely the group $T\ltimes \Pi^\ensuremath{\tn{bas}}$ considered above. Elements $t \in T$ and $\wh{\eta} \in \Pi^\tau$ satisfy the commutation relation \[ \ensuremath{\widehat{\eta}} t \ensuremath{\widehat{\eta}}^{-1} t^{-1}=\kappa_\eta^\ell(t)=t^{-\ell B^\flat(\eta)} \in U(1),\] see equation \eqref{eqn:defkappa}.
The structure of $C^\ast_\tau(T\times \Pi)$ follows immediately from Proposition \ref{prop:AlgTPi}, and in particular its K-theory is \[ \ensuremath{\textnormal{K}}_0(C^\ast_\tau(T\times \Pi))\simeq \bigoplus_{[\xi] \in \Pi^\ast/\ell \Pi} \ensuremath{\textnormal{K}}_0\big(\ensuremath{\mathbb{K}}(L^2([\xi]))\big).\] The K-theory of $\ensuremath{\mathbb{K}}(L^2([\xi]))$ is a copy of the integers, generated by the finitely generated, projective module $L^2([\xi])$.
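Continuing the $SU(2)$ illustration from the previous subsection (again only as a sanity check, with the same normalizations): there $\Pi^\ast/\ell\Pi \simeq \ensuremath{\mathbb{Z}}/2\ell\ensuremath{\mathbb{Z}}$ for $\ell>0$, so \[ \ensuremath{\textnormal{K}}_0(C^\ast_\tau(T\times \Pi)) \simeq \ensuremath{\mathbb{Z}}^{2\ell}, \] free abelian on the classes of the modules $L^2([\xi])$, one for each of the $2\ell$ cosets $[\xi]$.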
Let $R^{-\infty}(T)^{\ell\Pi}$ denote the subspace of $R^{-\infty}(T)$ consisting of formal characters invariant under the `level $\ell$' action of $\Pi$, that is, formal sums \[ \sum_{\xi \in \Pi^\ast} a_\xi e_\xi, \qquad e_\xi(t)=t^\xi \] where the coefficients satisfy $a_{\xi+\ell \eta}=a_\xi$ for all $\eta \in \Pi$ (we identify $\Pi$ with a sublattice of $\Pi^\ast$ using the basic inner product). There is a map \[ \ensuremath{\textnormal{K}}_0\big(\ensuremath{\mathbb{K}}(L^2([\xi]))\big) \rightarrow R^{-\infty}(T)^{\ell \Pi}\] sending the generator $L^2([\xi])$ to its formal $T$-character: \[ L^2([\xi]) \mapsto \sum_{\eta \in \Pi} e_{\xi+\ell \eta}.\] Put differently this formal character has multiplicity function given by the indicator function of the coset $[\xi]$ in $\Pi^\ast$. It is clear that this map gives an isomorphism of abelian groups: \begin{equation} \label{eqn:IsoFormalChar} \ensuremath{\textnormal{K}}_0\big(C^\ast_\tau(T\times \Pi)\big) \xrightarrow{\sim} R^{-\infty}(T)^{\ell \Pi}. \end{equation} \section{The map $\scr{I}\colon \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\, \ell}$}\label{sec:defI} Let $\ensuremath{\mathcal{A}}$ be a $G$-equivariant Dixmier-Douady bundle over $G$, with Dixmier-Douady class $\ell \in \ensuremath{\mathbb{Z}}\simeq H^3_G(G,\ensuremath{\mathbb{Z}})$ and $\ell>0$. In this section we construct a map \[ \scr{I}\colon \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\, \ell} \] and show that in a suitable sense it is an inverse of the Freed-Hopkins-Teleman isomorphism. We begin by fixing a model for $\ensuremath{\mathcal{A}}$ as in Section \ref{sec:DDoverG}: \begin{equation} \label{eqn:defA2} \ensuremath{\mathcal{A}}=PG \times_{LG} \ensuremath{\mathbb{K}}(V^\ast), \end{equation} where $V$ is a level $\ell$ positive energy representation of $LG^\ensuremath{\tn{bas}}$. Let $LG^\tau$ denote the central extension of $LG$ corresponding to $\ell$ times the generator $LG^\ensuremath{\tn{bas}}$, thus $V$ is a representation of $LG^\tau$ such that the central circle acts with weight $1$.
Let $U$ be a small $N(T)$-invariant tubular neighborhood of $T$ in $G$, with projection map $\pi_T \colon U \rightarrow T$. A neighborhood $U$ can be described explicitly: for $\epsilon>0$ sufficiently small, and $\st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp)$ an $\epsilon$-ball in $\ensuremath{\mathfrak{t}}^\perp \subset \ensuremath{\mathfrak{g}}$, the map \[ T \times \st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp) \rightarrow G, \qquad (t,\xi)\mapsto t\exp(\xi),\] is an $N(T)$-equivariant diffeomorphism onto its image, which we may take to be $U$, with $\pi_T$ the projection to the first factor. The first stage in the definition of $\scr{I}$ is the restriction map \begin{equation} \label{map:restrict} \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}|_U) \end{equation} induced by the `extension by $0$' algebra homomorphism $C_0(\ensuremath{\mathcal{A}}|_U)\hookrightarrow C(\ensuremath{\mathcal{A}})$. Recall the Dixmier-Douady bundle $\ensuremath{\mathcal{A}}_T \rightarrow T$ constructed in Section \ref{sec:MoritaMorphism}. Let $\ensuremath{\mathcal{A}}_U=\pi_T^\ast \ensuremath{\mathcal{A}}_T$. By pullback of \eqref{eqn:MoritaMorphismT} we obtain a Morita equivalence $\ensuremath{\mathcal{A}}|_U \dashrightarrow \ensuremath{\mathcal{A}}_U$, and hence also an isomorphism \begin{equation} \label{eqn:MoritaMorph} \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}|_U) \xrightarrow{\sim} \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}_U).
\end{equation} There is a canonical identification $\ensuremath{\mathfrak{t}}^\perp \simeq \ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}}$. The complexification $(\ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}})_\ensuremath{\mathbb{C}} \simeq \ensuremath{\mathfrak{n}}_+\oplus \ensuremath{\mathfrak{n}}_-$, where $\ensuremath{\mathfrak{n}}_+$ (resp. $\ensuremath{\mathfrak{n}}_-$) is the direct sum of the positive (resp. negative) root spaces. We choose a complex structure on $\ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}}$ such that $(\ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}})^{1,0}=\ensuremath{\mathfrak{n}}_-$. This choice of complex structure determines a Bott-Thom isomorphism \begin{equation} \label{eqn:BottThom} \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}_U) \xrightarrow{\sim} \ensuremath{\textnormal{K}}_0^T(T,\ensuremath{\mathcal{A}}_T). \end{equation} By equation \eqref{eqn:defAT} and Proposition \ref{prop:modRieffelFixedPt}, the algebra of sections $C_0(\ensuremath{\mathcal{A}}_T)$ has an alternate description as a twisted crossed product algebra $\Pi \ltimes_\tau C_0(\ensuremath{\mathfrak{t}})$. The isomorphism $C_0(\ensuremath{\mathcal{A}}_T)\xrightarrow{\sim} \Pi\ltimes_{\tau}C_0(\ensuremath{\mathfrak{t}})$ yields an isomorphism of K-homology groups \begin{equation} \label{map:CrossedProd} \ensuremath{\textnormal{K}}_0^T(T,\ensuremath{\mathcal{A}}_T) \xrightarrow{\sim}\ensuremath{\textnormal{K}}^0_T(\Pi \ltimes_{\tau} C_0(\ensuremath{\mathfrak{t}})). \end{equation} By Proposition \ref{prop:modGreenJulg} there is a Green-Julg isomorphism \begin{equation} \label{map:GreenJulg} \ensuremath{\textnormal{K}}^0_T(\Pi \ltimes_{\tau} C_0(\ensuremath{\mathfrak{t}}))\xrightarrow{\sim} \ensuremath{\textnormal{K}}^0_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}}))_{(1)}. \end{equation} Since $\Pi$ (hence also $T\ltimes \Pi^\tau$) acts cocompactly on $\ensuremath{\mathfrak{t}}$, we can apply the analytic assembly map: \begin{equation} \label{map:assembly} \ensuremath{\textnormal{K}}^0_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}})) \rightarrow \ensuremath{\textnormal{K}}_0(C^\ast(T\ltimes \Pi^\tau)). \end{equation} Restricted to $\ensuremath{\textnormal{K}}^0_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}}))_{(1)}$, the image of the assembly map is contained in the direct summand isomorphic to $\ensuremath{\textnormal{K}}_0(C^\ast_\tau(T\times \Pi))$, and the latter is isomorphic to $R^{-\infty}(T)^{\ell \Pi}$ by \eqref{eqn:IsoFormalChar}. Composing the maps \eqref{map:restrict}--\eqref{map:assembly} completes the construction of the desired map \[ \scr{I}\colon \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow R^{-\infty}(T)^{\ell \Pi}.\] We verify in the next two subsections that the range is the subspace $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}}, \, \ell}$. \begin{remark} The vector space $\ensuremath{\mathfrak{t}}$ is a classifying space for proper actions of $T\ltimes \Pi^\tau$. The Baum-Connes conjecture says that the assembly map \eqref{map:assembly} is an isomorphism.
The conjecture has been proved for a very large class of groups including all amenable groups; the group $T\ltimes \Pi^\tau$ is amenable (we thank Shintaro Nishikawa for pointing this out). Consequently, each of the maps in the definition of $\scr{I}$ except the first \eqref{map:restrict} is an isomorphism. \end{remark} \begin{remark} The maps in the definition of $\scr{I}$ can be composed in slightly different orders, with equivalent results. For example, let $\ensuremath{\mathcal{U}} \simeq \ensuremath{\mathfrak{t}} \times \st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp)$ be the fibre product $\ensuremath{\mathfrak{t}} \times_T U$, and for $x \in \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ let $x_U$ denote the class in $\ensuremath{\textnormal{KK}}_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathcal{U}}),\ensuremath{\mathbb{C}})_{(1)}$ obtained by applying the composition \[ \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}|_U)\xrightarrow{\sim} \ensuremath{\textnormal{K}}_0^T(U,\ensuremath{\mathcal{A}}_U) \xrightarrow{\sim} \ensuremath{\textnormal{KK}}_T(\Pi\ltimes_\tau C_0(\ensuremath{\mathcal{U}}),\ensuremath{\mathbb{C}}) \xrightarrow{\sim} \ensuremath{\textnormal{KK}}_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathcal{U}}),\ensuremath{\mathbb{C}})_{(1)} \] analogous to the definition of $\scr{I}$ given above. Then, identifying $C_0(\ensuremath{\mathcal{U}})\simeq C_0(\st{B}_{\epsilon}(\ensuremath{\mathfrak{t}}^\perp))\otimes C_0(\ensuremath{\mathfrak{t}})$, we have \begin{equation} \label{eqn:DescentFormula} \scr{I}(x)=\mu_{T\ltimes \Pi^\tau}(\beta \otimes_{C_0(\st{B}_{\epsilon}(\ensuremath{\mathfrak{t}}^\perp))} x_U)=[c]\otimes_{S \ltimes C_0(\ensuremath{\mathfrak{t}})} j_S(\beta \otimes_{C_0(\st{B}_{\epsilon}(\ensuremath{\mathfrak{t}}^\perp))} x_U), \end{equation} where $\beta \in \ensuremath{\textnormal{K}}^0_T(\st{B}_{\epsilon}(\ensuremath{\mathfrak{t}}^\perp))$ is the Bott-Thom element, $S=T\ltimes \Pi^\tau$, and for the second equality we use Kasparov's description of the assembly map (Section \ref{sec:Assembly}). \end{remark} \subsection{Weyl group symmetry.} The subgroup $N(T)\subset G$ normalizes $\Pi^\tau$ inside $LG^\tau$. It follows that there is an action of $N(T)$ by conjugation on $T\ltimes \Pi^\tau$, $L^2_\tau(\Pi)$, and $\Pi \ltimes_\tau C_0(\ensuremath{\mathfrak{t}}) \simeq C_0(\ensuremath{\mathcal{A}}_T)$. Hence each of the $C^\ast$ algebras appearing in the definition of $\scr{I}$ is in a natural way an $N(T)$-$C^\ast$ algebra. There is only one aspect of the definition which is not $N(T)$-equivariant, namely the Bott-Thom map. Let $N$ be a locally compact group and let $H$ be the connected component of the identity in $N$. Assume $N$ is unimodular for simplicity. Let $A$ be an $N$-$C^\ast$ algebra, with $\alpha_A \colon N \rightarrow \tn{Aut}(A)$ the action map. For $n \in N$ we can view $\alpha_A(n)$ as an isomorphism of $H$-$C^\ast$ algebras $A \rightarrow A^{(n)}$, where $A^{(n)}$ denotes the $C^\ast$ algebra $A$ equipped with the conjugated $H$-action $\alpha_{A^{(n)}}(n^\prime):=\alpha_A(nn^\prime n^{-1})$. Thus if $A$, $B$ are $N$-$C^\ast$ algebras then any $n \in N$ induces a map \[ \ensuremath{\textnormal{KK}}_H(A,B) \rightarrow \ensuremath{\textnormal{KK}}_H(A^{(n)},B^{(n)}).
\] Composing with the `restriction homomorphism' (\cite[Definition 3.1]{KasparovNovikov}) for the automorphism $\ensuremath{\textnormal{Ad}}_n \in \ensuremath{\textnormal{Aut}}(H)$, we obtain an automorphism \[ \theta_n \colon \ensuremath{\textnormal{KK}}_H(A,B) \rightarrow \ensuremath{\textnormal{KK}}_H(A,B). \] See \cite[Appendix A]{LSQuantLG} for details (note that the notation in \cite[Appendix A]{LSQuantLG} is different). The automorphism $\theta_n$ acts trivially on elements in the image of the restriction map from $\ensuremath{\textnormal{KK}}_N(A,B)$, and only depends on the class of the element $n$ in the component group $N/H$. Let $A$ be an $N$-$C^\ast$ algebra. A group element $n \in N$ gives rise to an algebra automorphism \[ \theta_n^A \colon H \ltimes A \rightarrow H \ltimes A,\] defined on the dense subspace $C_c(H,A)$ by the formula $\theta_n^A(f)(h)=n^{-1}.f(\ensuremath{\textnormal{Ad}}_nh)$. In \cite[Appendix A]{LSQuantLG} we show that the corresponding element $\theta_n^A \in \ensuremath{\textnormal{KK}}(H\ltimes A,H\ltimes A)$ intertwines $\theta_n$ and the descent homomorphism; more precisely \begin{equation} \label{eqn:IntertwiningProperty} j_H(\theta_n(x))=\theta_n^A \otimes j_H(x) \otimes (\theta_n^B)^{-1}, \end{equation} for any $x \in \ensuremath{\textnormal{KK}}_H(A,B)$. As a special case of the above, consider $H=T \subset N(T)=N$. As the automorphism $\theta_n$ (resp. $\theta_n^A$, $\theta_n^B$) only depends on the class $w=[n] \in N(T)/T=W$, we denote it $\theta_w$ (resp. $\theta_w^A$, $\theta_w^B$). The Bott-Thom element $\beta \in \ensuremath{\textnormal{K}}^0_T(\ensuremath{\mathfrak{t}}^\perp)$ is not $N(T)$-equivariant, but instead satisfies (\cite[Proposition 4.8]{LSQuantLG}) \begin{equation} \label{eqn:BottAntisymmetry} \theta_w(\beta)=(-1)^{l(w)}\ensuremath{\mathbb{C}}_{\rho-w\rho}\otimes \beta, \end{equation} where $\rho$ is the half sum of the positive roots, and $l(w)$ is the length of the Weyl group element $w$. This is a simple consequence of the facts that (1) $\ensuremath{\textnormal{Ad}}_n|_{\ensuremath{\mathfrak{t}}^\perp}$ reverses orientation (hence grading) according to the length of $w$, and (2) the weight decomposition for $\wedge \ensuremath{\mathfrak{n}}_-$ is not symmetric under the Weyl group. To simplify notation let $S=T\ltimes \Pi^\tau$. By \eqref{eqn:DescentFormula} and using an argument similar to that given in \cite[Section 4.5]{LSQuantLG}, we have \begin{align*} \scr{I}(x) \otimes (\theta_w^{\ensuremath{\mathbb{C}}})^{-1} &= [c]\otimes j_S(\beta \otimes x_U) \otimes (\theta_w^{\ensuremath{\mathbb{C}}})^{-1}\\ &=[c]\otimes (\theta_w^{C_0(\ensuremath{\mathfrak{t}})})^{-1}\otimes \theta_w^{C_0(\ensuremath{\mathfrak{t}})}\otimes j_S(\beta \otimes x_U) \otimes (\theta_w^{\ensuremath{\mathbb{C}}})^{-1}\\ &=[c] \otimes (\theta_w^{C_0(\ensuremath{\mathfrak{t}})})^{-1} \otimes j_S(\theta_w(\beta \otimes x_U))\\ &=(-1)^{l(w)}[c]\otimes j_S(\beta \otimes x_U) \otimes \ensuremath{\mathbb{C}}_{\rho-w\rho}. \end{align*} In the third line we used \eqref{eqn:IntertwiningProperty}. In the fourth line we used \eqref{eqn:BottAntisymmetry}, the $N(T)$-equivariance of $x_U$ (it lies in the image of the restriction map from $\ensuremath{\textnormal{KK}}_{N(T)\ltimes \Pi^\tau}(C_0(\ensuremath{\mathcal{U}}),\ensuremath{\mathbb{C}})$), and the fact that the cut-off function $c \colon \ensuremath{\mathfrak{t}} \rightarrow [0,\infty)$ may be chosen to be $N(T)$-invariant, which implies $[c]\otimes (\theta_w^{C_0(\ensuremath{\mathfrak{t}})})^{-1}=[c]$.
In the last line we are also using that $\ensuremath{\textnormal{K}}_0(C^\ast(T\ltimes \Pi^\tau))$ is an $R(T)$-module. \begin{corollary} The image of $\scr{I}$ is contained in $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\, \ell}$, the space of formal characters that are alternating under the $\rho$-shifted level $\ell$ action \eqref{eqn:ShiftedAction} of the affine Weyl group. \end{corollary} \subsection{Inverse of the Freed-Hopkins-Teleman map.} The commutative diagram \eqref{diagram:FHT} in the Freed-Hopkins-Teleman theorem implies $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ has a particularly simple $\ensuremath{\mathbb{Z}}$-basis obtained by pushforward from $\ensuremath{\textnormal{K}}_G^0(\ensuremath{\textnormal{pt}})$ (together with the Morita morphism $V^\ast \colon \ensuremath{\mathcal{A}}|_e \dashrightarrow \ensuremath{\mathbb{C}}$). These elements are represented by Kasparov triples $x_\lambda$ with trivial operator $F=0$: \begin{equation} \label{eqn:GeneratorTriple} x_\lambda=[(V^\ast \otimes R_\lambda, \iota^\ast \otimes \ensuremath{\textnormal{id}}_{R_\lambda}, 0)]. \end{equation} Here $R_\lambda \in R(G)$ is the finite-dimensional irreducible representation of $G$ with highest weight $\lambda \in \Pi^\ast_k$, and $\iota^\ast \colon C_0(\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\mathcal{A}}_e \simeq \ensuremath{\mathbb{K}}(V^\ast)$ is restriction of a section of $\ensuremath{\mathcal{A}}$ to the fibre over the identity $e \in G$, so that $\iota^\ast \otimes \ensuremath{\textnormal{id}}_{R_\lambda}\colon C(\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\mathbb{B}}(V^\ast \otimes R_\lambda)$ is a representation of $C(\ensuremath{\mathcal{A}})$ on the Hilbert space $V^\ast \otimes R_\lambda$, with range contained in the compact operators. By \eqref{diagram:FHT} the corresponding element of $R_k(G)$ is the image $[R_\lambda] \in R_k(G)$ of $R_\lambda \in R(G)$ under the quotient map. Under the isomorphism \eqref{eqn:AlternatingFormal}, $[R_\lambda]$ is sent to the formal character \begin{equation} \label{eqn:ImgFHT} \sum_{w \in W_{\ensuremath{\textnormal{aff}}}} (-1)^{l(w)} e_{w\bullet_{\ell} \lambda} \in R^{-\infty}(T). \end{equation} It is easy to determine $\scr{I}(x_\lambda)$. Let $R_\lambda^T$ denote the $\ensuremath{\mathbb{Z}}_2$-graded representation of $T$ corresponding to the numerator of the Weyl character formula for $R_\lambda$; thus $R_\lambda^T$ has character \[ \sum_{\ol{w} \in W} (-1)^{l(\ol{w})}e_{\ol{w}(\lambda+\rho)-\rho}. \] By the Weyl character formula the characters $\chi(R_\lambda|_T)$, $\chi(R_\lambda^T)$ are related by \[ \chi(R_\lambda^T)=\chi(R_\lambda|_T)\cdot \chi(\wedge \ensuremath{\mathfrak{n}}_-) \] where $\wedge \ensuremath{\mathfrak{n}}_-$ denotes the $\ensuremath{\mathbb{Z}}_2$-graded representation of $T$ with character \[ \prod_{\alpha \in \ensuremath{\mathcal{R}}_-} (1-e_\alpha).\] In defining the Bott-Thom map we used a complex structure on $\ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}}$ such that $(\ensuremath{\mathfrak{g}}/\ensuremath{\mathfrak{t}})^{1,0}=\ensuremath{\mathfrak{n}}_-$. It follows that the image of $x_\lambda$ under restriction to $U \subset G$, followed by the Bott-Thom map, is \begin{equation} \label{eqn:ImgGenBott} [(V^\ast\otimes R_\lambda^T,\iota^\ast \otimes \ensuremath{\textnormal{id}}_{R_\lambda^T},0)].
\end{equation} Applying the Morita morphism $\ensuremath{\mathcal{A}}|_T \dashrightarrow \ensuremath{\mathcal{A}}_T$ to \eqref{eqn:ImgGenBott} replaces the factor $V^\ast$ with $L^2_{\tau}(\Pi)$. The Green-Julg map followed by the assembly map sends this element to the class of the $C^\ast_\tau(T\times \Pi)$-module \begin{equation} \label{eqn:ImgAssembly} L^2_{\tau}(\Pi)\otimes R_\lambda^T \end{equation} in $\ensuremath{\textnormal{K}}_0(C^\ast_\tau(T\times \Pi))$, where $T \ltimes \Pi^\tau$ acts on $L^2_{\tau}(\Pi)$ (see Remarks \ref{rem:LeftReg}, \ref{rem:ExtendSLeft}) by \[ \big((t,\ensuremath{\widehat{\eta}})\cdot f\big)(\ensuremath{\widehat{\eta}}^\prime)=\kappa_{\eta^\prime}(t)^{-1}f(\ensuremath{\widehat{\eta}}^{-1}\ensuremath{\widehat{\eta}}^\prime)=t^{\ell \eta^\prime}f(\ensuremath{\widehat{\eta}}^{-1}\ensuremath{\widehat{\eta}}^\prime).\] Since the formal character of \eqref{eqn:ImgAssembly} is exactly \eqref{eqn:ImgFHT}, we have proven the following. \begin{proposition} \label{prop:InverseFHT} Let $k>0$ and let $\ensuremath{\mathcal{A}}$ be a Dixmier-Douady bundle over $G$ with Dixmier-Douady class $\ell=k+\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$. The isomorphism \[ R_k(G) \simeq R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\,\ell}\] intertwines $\scr{I}$ with the inverse of the Freed-Hopkins-Teleman isomorphism. \end{proposition} \begin{remark} Without using the Freed-Hopkins-Teleman theorem, the arguments above show that the map $\scr{I}\colon \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}) \rightarrow R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}}, \, \ell}$ is at least surjective. \end{remark} \section{Specialization to geometric cycles}\label{sec:IndexMap} Throughout this section let $\ensuremath{\mathcal{A}}$ be a $G$-equivariant Dixmier-Douady bundle over $G$ with $\tn{DD}(\ensuremath{\mathcal{A}})=\ell=k+\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$, with $k>0$. Let $(M,E,\Phi,\ensuremath{\mathcal{S}})$ be a D-cycle representing the class $x=(\Phi,\ensuremath{\mathcal{S}})_\ast[\scr{D}^E] \in \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}})$. In this section we exhibit $\scr{I}(x)$ as the $T$-equivariant $L^2$-index of a $1^{st}$-order elliptic operator on a non-compact manifold. \subsection{A cycle for the K-homology push-forward.}\label{sec:Pushforward} As a first step we describe an analytic cycle representing $x \in \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}})$. To put this in context, one should compare the standard example \ref{ex:DeRhamDirac}. The result will be a cycle given in terms of a `Dirac operator' acting on sections of a Clifford module, except that the module will have infinite rank (since $\ensuremath{\mathcal{S}}$ has infinite rank). The action of the $C^\ast$ algebra $C(\ensuremath{\mathcal{A}})$ plays an essential role in making the result a well-defined analytic cycle. The construction works more generally, with the target space $G$ replaced by any compact Riemannian $G$-manifold $X$. The push-forward $(\Phi,\ensuremath{\mathcal{S}})_\ast [\scr{D}^E]$ is given by the $\ensuremath{\textnormal{KK}}$-product $[\ensuremath{\mathcal{S}}]\otimes [\scr{D}^E]$, see \eqref{eqn:PushNotation} and \eqref{eqn:ProdNotation}.
The Hilbert space of the $\ensuremath{\textnormal{KK}}$-product is described by: \begin{proposition} \label{prop:HilbertSpaceIso1} There is an isomorphism \[ C_0(\ensuremath{\mathcal{S}})\wh{\otimes}_{\ensuremath{\textnormal{Cl}}(M)} L^2(M,\ensuremath{\textnormal{Cliff}}(TM)\otimes E) \simeq L^2(M,\ensuremath{\mathcal{S}}\otimes E)\] of $\ensuremath{\mathbb{Z}}_2$-graded representations of $C_0(\ensuremath{\mathcal{A}})$. \end{proposition} \noindent The proof is essentially the same as for the standard example \ref{ex:DeRhamDirac}. For the reader's benefit we include a proof in the appendix. Recall that $\ensuremath{\mathcal{S}}$ is a right $\ensuremath{\textnormal{Cliff}}(TM)$-module, and let \begin{equation} \label{eqn:CliffActionS} \ensuremath{\mathsf{c}} \colon \ensuremath{\textnormal{Cliff}}(TM) \rightarrow \ensuremath{\textnormal{End}}(\ensuremath{\mathcal{S}}) \end{equation} denote the action. Let \begin{equation} \label{eqn:TwistedCliffActionS} \ensuremath{\widehat{\mathsf{c}}} \colon \ensuremath{\textnormal{Cliff}}(TM) \rightarrow \ensuremath{\textnormal{End}}(\ensuremath{\mathcal{S}}), \qquad \ensuremath{\widehat{\mathsf{c}}}(v)s=(-1)^{\deg(s)} \ensuremath{\mathsf{c}}(v)s, \quad v \in TM \end{equation} denote the action with a `twist' coming from the grading. Choose $G$-invariant Hermitian connections $\nabla^E$ and $\nabla^{\ensuremath{\mathcal{S}}}$ on $E$ and $\ensuremath{\mathcal{S}}$ respectively, and let $\nabla^{\ensuremath{\mathcal{S}} \otimes E}$ denote the induced connection on $\ensuremath{\mathcal{S}} \otimes E$. Assume moreover that $\nabla^{\ensuremath{\mathcal{S}}}$ is chosen satisfying \begin{equation} \label{eqn:CliffConnection} \nabla^{\ensuremath{\mathcal{S}}}_v (\ensuremath{\mathsf{c}}(\varphi)s)=\ensuremath{\mathsf{c}}(\nabla_v\varphi)s+\ensuremath{\mathsf{c}}(\varphi)\nabla^{\ensuremath{\mathcal{S}}}_v s, \end{equation} where $\nabla$ denotes the connection on $\ensuremath{\textnormal{Cliff}}(TM)$ induced by the Levi-Civita connection, i.e. $\nabla^{\ensuremath{\mathcal{S}}}$ is a \emph{Clifford connection} (cf. \cite[Definition 3.39]{BerlineGetzlerVergne}). Such a connection can be constructed as in the case of a finite dimensional Clifford module. In short, one constructs the connection locally and then patches the local definitions together with a partition of unity. Locally on $U \subset M$ one can find a spin structure with spinor module $S^{\tn{spin}}$, and $\ensuremath{\mathcal{S}}|_U\simeq S^{\tn{spin}} \otimes \ensuremath{\mathcal{S}}^\prime$ as $\ensuremath{\textnormal{Cliff}}(TM)$-modules, with $\ensuremath{\mathcal{S}}^\prime=\ensuremath{\textnormal{Hom}}_{\ensuremath{\textnormal{Cliff}}(TM)}(S^{\tn{spin}},\ensuremath{\mathcal{S}}|_U)$. Using the spin connection on $S^{\tn{spin}}$ and any Hermitian connection on $\ensuremath{\mathcal{S}}^\prime$ produces a Clifford connection on $\ensuremath{\mathcal{S}}|_U$. The candidate Dirac-type operator $\st{D}^E$ acting on smooth sections of $\ensuremath{\mathcal{S}} \otimes E$ is the composition \begin{equation} \label{eqn:DefDE} \Gamma^\infty(\ensuremath{\mathcal{S}}\otimes E) \xrightarrow{\nabla^{\ensuremath{\mathcal{S}}\otimes E}} \Gamma^\infty(T^\ast M \otimes \ensuremath{\mathcal{S}} \otimes E) \xrightarrow{g^\sharp} \Gamma^\infty(TM\otimes \ensuremath{\mathcal{S}} \otimes E) \xrightarrow{\ensuremath{\widehat{\mathsf{c}}}} \Gamma^\infty(\ensuremath{\mathcal{S}}\otimes E). \end{equation} \begin{proposition} \label{prop:TwistedKHomCyc} The operator $\st{D}^E$ defined in \eqref{eqn:DefDE} is essentially self-adjoint.
The triple $(L^2(M,\ensuremath{\mathcal{S}}\otimes E),\rho,\st{D}^E)$ is an unbounded cycle for an element of $\ensuremath{\textnormal{K}}^G_0(X,\ensuremath{\mathcal{A}})$. \end{proposition} \begin{proof} The presence of a vector bundle $E$ does not alter the proof, so we set $E=\ensuremath{\mathbb{C}}$ to simplify notation. The condition that $\nabla^{\ensuremath{\mathcal{S}}}$ is a Clifford connection ensures $\st{D}$ is symmetric, as for a finite dimensional Clifford module (cf. \cite[Proposition 5.3]{LawsonMichelsohn}). It is possible to extend certain proofs of the essential self-adjointness of a Dirac operator on a finite dimensional vector bundle over a compact manifold quite directly to the case of a smooth Hilbert bundle, cf. \cite[Proposition 1.16]{Ebert2016Index} for details. It suffices to check that for a dense set of $a \in \Gamma^\infty(\ensuremath{\mathcal{A}})$, (1) the commutator $[\st{D},\rho(a)]$ is bounded, and (2) the operator $\rho(a)(1+\st{D}^2)^{-1}$ is compact. Since the underlying space $X$ is compact, we can find a finite open cover such that for each $U$ in the cover, $\ensuremath{\mathcal{A}}|_{U} \simeq U \times \ensuremath{\mathbb{K}}(H)$ for some Hilbert space $H$, $\ensuremath{\mathcal{S}}|_{U} \simeq U \times (H \otimes F)$ with $F$ a finite dimensional vector space, and the action $\rho$ of $\ensuremath{\mathcal{A}}|_{U}$ on $\ensuremath{\mathcal{S}}|_{U}$ is given by the defining representation of $\ensuremath{\mathbb{K}}(H)$ on the first factor in $H \otimes F$. Using a partition of unity subordinate to the cover, we can assume $a$ has support contained in a single $U$, and moreover that $a$ is of the form $a=fb$ where $f \in C^\infty_c(U)$ and $b \in \ensuremath{\mathbb{K}}(H)$ is a constant operator. For the first assertion, note that \[ [\st{D},\rho(fb)]=\ensuremath{\widehat{\mathsf{c}}}(g^\sharp(df))\rho(b)+f[\st{D},\rho(b)].\] The first term is bounded since $f$ is smooth. The second term is bounded because on $U$, $\st{D}=\st{D}_0+A$, where $\st{D}_0$ is defined in the same way as $\st{D}$ but using the trivial connection on $U$ (hence $[\st{D}_0,\rho(b)]=0$), and $A$ is a bounded bundle endomorphism. For the second assertion, it is convenient to assume in addition that the constant operator $b$ has finite rank. The range of the operator $(1+\st{D}^2)^{-1}$ is contained in the Sobolev space $\ensuremath{\mathcal{H}}^2(M,\ensuremath{\mathcal{S}})$ of sections with two derivatives in $L^2$, hence the range of $\rho(a)(1+\st{D}^2)^{-1}$ is contained in the space $f \cdot \ensuremath{\mathcal{H}}^2(U,\ensuremath{\textnormal{ran}}(b)\otimes F)$. It follows that the operator $\rho(a)(1+\st{D}^2)^{-1}$ factors through the inclusion \[ f\cdot \ensuremath{\mathcal{H}}^2(U,\ensuremath{\textnormal{ran}}(b)\otimes F) \hookrightarrow L^2(U,\ensuremath{\textnormal{ran}}(b)\otimes F).\] Since $\ensuremath{\textnormal{ran}}(b)\otimes F$ is finite dimensional, the Rellich Lemma implies this inclusion is compact.
\end{proof} \begin{theorem} The cycle $(L^2(M,\ensuremath{\mathcal{S}}\otimes E),\rho,\st{D}^E)$ represents the class $[\ensuremath{\mathcal{S}}]\otimes [\scr{D}^E] \in \ensuremath{\textnormal{K}}^G_0(X,\ensuremath{\mathcal{A}})$. \end{theorem} The proof is essentially the same as the standard example \ref{ex:DeRhamDirac}, see the appendix. \subsection{The Morita morphism $\ensuremath{\mathcal{A}}|_U \dashrightarrow \ensuremath{\mathcal{A}}_U$.} Recall from Section \ref{sec:defI} that $U$ denotes an $N(T)$-invariant tubular neighborhood of $T$ in $G$, and $\pi_T \colon U \rightarrow T$ the projection map. Let \[ Y=\Phi^{-1}(U) \subset M, \qquad \Phi_T=\pi_T \circ \Phi.\] The restriction of the Morita morphism $\ensuremath{\textnormal{Cliff}}(TM) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}$ to $Y$ is a morphism $\ensuremath{\textnormal{Cliff}}(TY) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}|_U$. Composing with the morphism $\ensuremath{\mathcal{A}}|_U \dashrightarrow \ensuremath{\mathcal{A}}_U$ of Section \ref{sec:defI} gives a Morita morphism \begin{equation} \label{eqn:YAT} \ensuremath{\mathcal{V}} \colon \ensuremath{\textnormal{Cliff}}(TY) \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}_U. \end{equation} The pullback $\ensuremath{\mathcal{A}}_{\ensuremath{\mathfrak{t}}}=\exp^\ast \ensuremath{\mathcal{A}}_T=\ensuremath{\mathfrak{t}} \times \ensuremath{\mathbb{K}}(L^2_\tau(\Pi))$ has a canonical $T\ltimes \Pi^\tau$-equivariant Morita trivialization $\ensuremath{\mathcal{A}}_{\ensuremath{\mathfrak{t}}} \dashrightarrow \underline{\ensuremath{\mathbb{C}}}$ given by the $\ensuremath{\mathcal{A}}_{\ensuremath{\mathfrak{t}}}^{\ensuremath{\textnormal{op}}}$-module $\ensuremath{\mathfrak{t}} \times L^2_\tau(\Pi)^\ast$. Hence, we have a pullback diagram \[ \begin{CD} \ensuremath{\mathcal{Y}} @>\Phi_{\ensuremath{\mathfrak{t}}}>> \ensuremath{\mathfrak{t}}\\ @Vq_Y VV @VV\exp V\\ Y @>\Phi_T >> T \end{CD}\] and the pullback of $\ensuremath{\mathcal{V}}$ to $\ensuremath{\mathcal{Y}}$ is a Morita morphism \begin{equation} \label{PullbackV} q_Y^\ast \ensuremath{\mathcal{V}} \colon \ensuremath{\textnormal{Cliff}}(q_Y^\ast TY)\simeq \ensuremath{\textnormal{Cliff}}(T\ensuremath{\mathcal{Y}}) \dashrightarrow \Phi_{\ensuremath{\mathfrak{t}}}^\ast \ensuremath{\mathcal{A}}_{\ensuremath{\mathfrak{t}}}. \end{equation} Composing \eqref{PullbackV} with the Morita trivialization of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathfrak{t}}}$, we obtain a $T\ltimes \Pi^\tau$-equivariant Morita trivialization \[ S \colon \ensuremath{\textnormal{Cliff}}(T\ensuremath{\mathcal{Y}}) \dashrightarrow \underline{\ensuremath{\mathbb{C}}}, \] or in other words, a $T\ltimes \Pi^\tau$-equivariant spinor module for $\ensuremath{\textnormal{Cliff}}(T\ensuremath{\mathcal{Y}})$. Thus $S$ is a finite dimensional $T\ltimes \Pi^\tau$-equivariant $\ensuremath{\mathbb{Z}}_2$-graded Hermitian vector bundle over $\ensuremath{\mathcal{Y}}$, together with an isomorphism $\ensuremath{\mathsf{c}} \colon \ensuremath{\textnormal{Cliff}}(T\ensuremath{\mathcal{Y}})\xrightarrow{\sim} \ensuremath{\textnormal{End}}(S)$. The central circle in $\Pi^\tau$ acts on $L^2_\tau(\Pi)$, $S$ with opposite weight (for the action on $L^2_\tau(\Pi)$ we use the right regular representation, for which the weight of the central circle action is $-1$), and hence the diagonal $\Pi^\tau$ action on $L^2_\tau(\Pi) \otimes S$ descends to an action of $\Pi$.
By construction, the $\Phi^\ast \ensuremath{\mathcal{A}}_U$-$\ensuremath{\textnormal{Cliff}}(TY)$ bimodule $\ensuremath{\mathcal{V}}$ is the quotient \begin{equation} \label{eqn:RepresentV} \ensuremath{\mathcal{V}}=(L^2_\tau(\Pi) \otimes S)/\Pi. \end{equation} Let $[\ensuremath{\mathcal{V}}] \in \ensuremath{\textnormal{KK}}_T(C_0(\ensuremath{\mathcal{A}}_U),\ensuremath{\textnormal{Cl}}(Y))$ denote the corresponding $\ensuremath{\textnormal{KK}}$-element defined by the pair $(\Phi|_Y,\ensuremath{\mathcal{V}})$. The action of $C_0(\ensuremath{\mathcal{A}}_U)$ on the right hand side in \eqref{eqn:RepresentV} is as follows. Given $a \in C_0(\ensuremath{\mathcal{A}}_U)$, the pullback $q_Y^\ast \Phi^\ast a$ is a $\Pi$-invariant map $\ensuremath{\mathcal{Y}} \rightarrow \ensuremath{\mathbb{K}}(L^2_\tau(\Pi))$, hence acts on the first factor of $L^2_\tau(\Pi)\otimes S$ by the defining representation for $\ensuremath{\mathbb{K}}(L^2_\tau(\Pi))$. This action preserves the space of $\Pi$-invariant sections of $L^2_\tau(\Pi) \otimes S$, hence descends to an action $\rho$ of $C_0(\ensuremath{\mathcal{A}}_U)$ on $C_0(\ensuremath{\mathcal{V}})$. The action of $\ensuremath{\textnormal{Cl}}(Y)$ on the right hand side in \eqref{eqn:RepresentV} can be described in similar terms. The restriction of the fundamental class $[\scr{D}]$ of $M$ to $Y$ is the fundamental class of $Y$, and we will abuse notation slightly and denote it by $[\scr{D}]$ as well. By functoriality of the Kasparov product, the image of $(\Phi,\ensuremath{\mathcal{S}})_\ast[\scr{D}^E]|_U$ under the Morita morphism $\ensuremath{\mathcal{A}}|_U \dashrightarrow \ensuremath{\mathcal{A}}_U$ equals the $\ensuremath{\textnormal{KK}}$-product \[ [\ensuremath{\mathcal{V}}] \otimes [\scr{D}^E] \in \ensuremath{\textnormal{KK}}_T(C_0(\ensuremath{\mathcal{A}}_U),\ensuremath{\mathbb{C}}).\] \subsection{The Dirac operator on $\ensuremath{\mathcal{Y}}$.} Choose a complete $N(T)$-invariant Riemannian metric on $Y$. The Kasparov product $[\ensuremath{\mathcal{V}}]\otimes [\scr{D}^E] \in \ensuremath{\textnormal{KK}}_T(C_0(\ensuremath{\mathcal{A}}_U),\ensuremath{\mathbb{C}})$ is represented by a cycle $(H,\rho,\st{D}^E)$ similar to Section \ref{sec:Pushforward}, now with $H=L^2(Y,\ensuremath{\mathcal{V}}\otimes E)$. This cycle has an alternate interpretation as the class represented by a Dirac operator on the covering space $\ensuremath{\mathcal{Y}}$. The correspondence between differential operators on $Y$ and $\ensuremath{\mathcal{Y}}$ that we make use of is well-known, cf. \cite[Section 7.5]{SchickL2}, \cite{AtiyahL2,SingerL2} for further details. \begin{proposition} \label{prop:IsoWithCoveringSpace} There is an $N(T)$-equivariant isomorphism of Hilbert spaces \[ L^2(Y,\ensuremath{\mathcal{V}}\otimes E) \simeq L^2(\ensuremath{\mathcal{Y}},S\otimes E),\] intertwining the Clifford actions and preserving the subspaces of smooth compactly supported sections. Under this isomorphism the operator $\st{D}^E$ in $L^2(Y,\ensuremath{\mathcal{V}}\otimes E)$ corresponds to the Dirac operator in $L^2(\ensuremath{\mathcal{Y}},S\otimes E)$. \end{proposition} \begin{proof} Let $s \in C^\infty_c(\ensuremath{\mathcal{Y}},S)$ be a smooth compactly supported section of $S$, and let $\delta \in L^2_\tau(\Pi)$ denote the function \[ \delta(\ensuremath{\widehat{\gamma}})=\begin{cases} z &\text{ if } \ensuremath{\widehat{\gamma}}=z^{-1}1_{\Pi^\tau}\\ 0 &\text{ else.}\end{cases} \] (This element plays the role of the delta function of $L^2(\Pi)$ supported at $1_{\Pi}$.)
Define a smooth section $\ti{s}$ of the bundle of Hilbert spaces $L^2_\tau(\Pi) \otimes S$ over $\ensuremath{\mathcal{Y}}$ by `averaging over $\Pi$': \[ \ti{s}(y)=\sum_{\eta \in \Pi} \eta.\big(\delta \otimes s(\eta^{-1}.y)\big) \] where here we use the fact that $\Pi$ acts on $L^2_\tau(\Pi)\otimes S$ (the summand on the right could also be written $\ensuremath{\widehat{\eta}}.\delta \otimes \ensuremath{\widehat{\eta}}.s(\eta^{-1}.y)$, for any lift $\ensuremath{\widehat{\eta}} \in \Pi^\tau$ of $\eta$). The section $\ti{s}$ is $\Pi$-invariant, hence descends to a section of $\ensuremath{\mathcal{V}}$, which is again smooth and compactly supported. The map preserves the $L^2$ norms, hence extends to a unitary mapping. It is clear that the map intertwines the Clifford actions, and hence also the corresponding Dirac operators. \end{proof} Abusing notation slightly, we continue to write $\st{D}^E$ (resp. $\rho$) for the Dirac operator on the covering space $\ensuremath{\mathcal{Y}}$ acting on sections of $S\otimes E$ (resp. the representation of $C_0(\ensuremath{\mathcal{A}}_U)$ on $L^2(\ensuremath{\mathcal{Y}},S\otimes E)$ induced by the isomorphism in Proposition \ref{prop:IsoWithCoveringSpace}). \begin{corollary} The product $[\ensuremath{\mathcal{V}}]\otimes [\scr{D}^E]$ is the class $[\st{D}^E]$ represented by the triple \[(L^2(\ensuremath{\mathcal{Y}},S\otimes E),\rho,\st{D}^E).\] \end{corollary} \subsection{The Bott-Thom map.}\label{sec:BottThom} Recall that we chose a complex structure on $\ensuremath{\mathfrak{t}}^\perp$ such that $(\ensuremath{\mathfrak{t}}^\perp)^{1,0}=\ensuremath{\mathfrak{n}}_-$; thus the complex weights of the $T$-action on $\ensuremath{\mathfrak{t}}^\perp$ in the adjoint representation are the negative roots. The Bott-Thom class $\beta \in \ensuremath{\textnormal{K}}^0_T(\ensuremath{\mathfrak{t}}^\perp)$ is represented by the triple $(C_0(\ensuremath{\mathfrak{t}}^\perp)\otimes \wedge \ensuremath{\mathfrak{n}}_-,\rho,\beta)$, where $\beta \colon \ensuremath{\mathfrak{t}}^\perp \rightarrow \ensuremath{\textnormal{End}}(\wedge \ensuremath{\mathfrak{n}}_-)$ is the bundle endomorphism given at $\xi \in \ensuremath{\mathfrak{t}}^\perp$ by the Clifford action of $\xi$ on the spinor module $\wedge \ensuremath{\mathfrak{n}}_-$ for $\ensuremath{\textnormal{Cl}}(\ensuremath{\mathfrak{t}}^\perp)$. Choose a diffeomorphism $\st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp) \xrightarrow{\sim} \ensuremath{\mathfrak{t}}^\perp$, which we use to pull the Bott element back to an element of $\ensuremath{\textnormal{K}}^0_T(\st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp))$. Taking the external product with the identity element in $\ensuremath{\textnormal{KK}}_T(C(\ensuremath{\mathcal{A}}_T),C(\ensuremath{\mathcal{A}}_T))$ and using the isomorphism \[ C_0(\ensuremath{\mathcal{A}}_U)\simeq C_0(\st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp))\otimes C(\ensuremath{\mathcal{A}}_T)\] we obtain an invertible element, still denoted $[\beta]$, in the group \[ \ensuremath{\textnormal{KK}}_T(C(\ensuremath{\mathcal{A}}_T),C_0(\ensuremath{\mathcal{A}}_U)). \] The Bott-Thom isomorphism $\ensuremath{\textnormal{KK}}_T(C_0(\ensuremath{\mathcal{A}}_U),\ensuremath{\mathbb{C}}) \xrightarrow{\sim} \ensuremath{\textnormal{KK}}_T(C(\ensuremath{\mathcal{A}}_T),\ensuremath{\mathbb{C}})$ is given by Kasparov product with this element.
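To make the Bott element concrete in the lowest-rank case, we record the following sketch; it is not used in the sequel, and the description of the Clifford action is given only up to normalization and sign conventions. \begin{remark} For $G=SU(2)$ one has $\ensuremath{\mathfrak{t}}^\perp \simeq \ensuremath{\mathfrak{n}}_-=\ensuremath{\mathbb{C}}_{-\alpha}$ with the chosen complex structure, and $\wedge \ensuremath{\mathfrak{n}}_-=\ensuremath{\mathbb{C}} \oplus \ensuremath{\mathbb{C}}_{-\alpha}$. Up to normalization, the Clifford action $\beta(\xi)$ is exterior multiplication by $\xi$ plus its adjoint, an odd self-adjoint endomorphism invertible for $\xi \ne 0$. At the level of formal $T$-characters, the graded module $\wedge \ensuremath{\mathfrak{n}}_-$ contributes the factor $1-e_{-\alpha}=\chi(\wedge \ensuremath{\mathfrak{n}}_-)$, matching the relation between $\chi(R_\lambda|_T)$ and $\chi(R_\lambda^T)$ in Section \ref{sec:defI}. \end{remark}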
The next step is to describe a cycle representing the product \[ [\beta] \otimes [\st{D}^E] \in \ensuremath{\textnormal{KK}}_T(C(\ensuremath{\mathcal{A}}_T),\ensuremath{\mathbb{C}}).\] We studied a similar product in \cite[Section 4.7]{LSQuantLG}, and we simply state the result. The operator $\st{D}^E$ is extended to sections of $\wedge \ensuremath{\mathfrak{n}}_- \wh{\otimes} S \otimes E$ (we use the same symbol for the extension) such that \[ \st{D}^E( \alpha \wh{\otimes} \sigma)=(-1)^{\tn{deg}(\alpha)}\alpha \wh{\otimes}\st{D}^E \sigma \] whenever $\alpha \in \wedge^{\tn{deg}(\alpha)} \ensuremath{\mathfrak{n}}_-$ is constant and $\sigma$ is a section of $S\wh{\otimes}E$. The product is represented by the triple \[ (L^2(\ensuremath{\mathcal{Y}},\wedge \ensuremath{\mathfrak{n}}_- \wh{\otimes} S \otimes E),\rho \circ \pi_T^\ast, \st{D}^E_\beta), \qquad \st{D}^E_\beta = \st{D}^E+\beta_\ensuremath{\mathcal{Y}}\] where $\beta_\ensuremath{\mathcal{Y}}$ is the pullback, via the map \[\ensuremath{\mathcal{Y}} \xrightarrow{\pi} Y \xrightarrow{\Phi} U \simeq T \times \st{B}_\epsilon(\ensuremath{\mathfrak{t}}^\perp) \simeq T \times \ensuremath{\mathfrak{t}}^\perp \xrightarrow{\ensuremath{\textnormal{pr}}_2} \ensuremath{\mathfrak{t}}^\perp, \] of the odd bundle endomorphism $\beta \colon \ensuremath{\mathfrak{t}}^\perp \rightarrow \ensuremath{\textnormal{End}}(\wedge \ensuremath{\mathfrak{n}}_-)$ described above. \subsection{The analytic assembly map and the index.} In \cite[Section 4.7]{LSQuantLG} we verified that the operator $\st{D}^E_\beta=\st{D}^E+\beta_\ensuremath{\mathcal{Y}}$ is $T$-Fredholm, i.e. the multiplicity of each irreducible representation of $T$ in the $L^2$-kernel $\ensuremath{\textnormal{ker}}(\st{D}^E_\beta)$ is finite. Thus $\st{D}^E_\beta$ has a well-defined `$T$-index' denoted $\ensuremath{\textnormal{index}}(\st{D}^E_\beta)\in R^{-\infty}(T)$, see \cite[Section 2.5]{LSQuantLG}. Via the isomorphism \[ \ensuremath{\textnormal{KK}}_T(C(\ensuremath{\mathcal{A}}_T),\ensuremath{\mathbb{C}})\simeq \ensuremath{\textnormal{KK}}_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}}),\ensuremath{\mathbb{C}})_{(1)} \] the class $[\st{D}^E_\beta]$ is identified with an element of $\ensuremath{\textnormal{KK}}_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}}),\ensuremath{\mathbb{C}})_{(1)}$, which we denote by the same symbol. \begin{proposition} The image of the class $[\st{D}^E_\beta]$ under the composition \[ \ensuremath{\textnormal{KK}}_{T\ltimes \Pi^\tau}(C_0(\ensuremath{\mathfrak{t}}),\ensuremath{\mathbb{C}})_{(1)} \xrightarrow{\mu_{T\ltimes \Pi^\tau}} \ensuremath{\textnormal{KK}}(\ensuremath{\mathbb{C}},C^\ast_\tau(T\times \Pi)) \simeq R^{-\infty}(T)^{\ell\Pi} \] is the formal character $\ensuremath{\textnormal{index}}(\st{D}^E_\beta)$. \end{proposition} \begin{proof} Let $N=T\ltimes \Pi^\tau$. Let $H=L^2(\ensuremath{\mathcal{Y}},\wedge \ensuremath{\mathfrak{n}}_- \wh{\otimes} S \otimes E)$ and let $\ensuremath{\mathcal{H}}$ be the Hilbert $C^\ast(N)$-module obtained as the completion of $C_c(\ensuremath{\mathfrak{t}})H$ with respect to the norm defined by the $C^\ast(N)$-valued inner product \[ (s_1,s_2)_{C^\ast(N)}(n)=(s_1,n\cdot s_2)_{L^2} \] as in Section \ref{sec:Assembly}. This inner product takes values in the ideal $C^\ast(N)_{(1)} \subset C^\ast(N)$. Let $\chi \colon \ensuremath{\mathbb{R}} \rightarrow [-1,1]$ be a smooth \emph{normalizing function}, that is, $\chi$ is an odd function, $\chi(t)>0$ for $t >0$ and $\lim_{t \rightarrow \pm \infty}\chi(t)=\pm 1$.
We can moreover choose $\chi$ to have compactly supported Fourier transform. The operator $F=\chi(\st{D}^E_\beta)$ is then a bounded, properly supported operator on $H$, with the same $T$-index as $\st{D}^E_\beta$, see \cite[Chapter 10]{HigsonRoe}. $F$ preserves the subspace $C_c(\ensuremath{\mathfrak{t}})H$, and its restriction extends to a bounded operator $\ensuremath{\mathcal{F}}$ on $\ensuremath{\mathcal{H}}$. The image of $[\st{D}^E_\beta]$ under the analytic assembly map $\mu_N$ is the class in $\ensuremath{\textnormal{K}}_0(C^\ast(N)_{(1)})$ represented by the pair $(\ensuremath{\mathcal{H}},\ensuremath{\mathcal{F}})$. Recall that the ideal $C^\ast(N)_{(1)}$ is isomorphic to a finite direct sum of copies of the compact operators on $L^2(\Pi)$: \begin{equation} \label{eqn:BlockDiag} C^\ast(N)_{(1)} \simeq \bigoplus_{[\xi] \in \Pi^\ast/\ell \Pi} \ensuremath{\mathbb{K}}(L^2([\xi])), \end{equation} where $[\xi] \subset \Pi^\ast$ is viewed as a coset of the action of $\ell \Pi$ on $\Pi^\ast$. There is in particular a faithful representation \[\rho \colon C^\ast(N)_{(1)} \rightarrow \ensuremath{\mathbb{K}}(L^2(\Pi^\ast)) \] with image the block diagonal subalgebra \eqref{eqn:BlockDiag} of $\ensuremath{\mathbb{K}}(L^2(\Pi^\ast))$. For $s_1,s_2 \in C_c(\ensuremath{\mathfrak{t}})H$, a short calculation shows that \begin{equation} \label{eqn:TraceNorm} \ensuremath{\textnormal{Tr}}(\rho(f))=(s_1,s_2)_{L^2}, \qquad f=(s_1,s_2)_{C^\ast(N)}. \end{equation} The norm of an element $f \in C^\ast(N)_{(1)}$ is equal to the operator norm of $\rho(f)$. Thus for $s \in C_c(\ensuremath{\mathfrak{t}})H$, its norm in $\ensuremath{\mathcal{H}}$ is $\|\rho(f)\|^{1/2}$, where $f=(s,s)_{C^\ast(N)}$. Using \eqref{eqn:TraceNorm} and since $f$ is a positive element, one has $\|\rho(f)\| \le \ensuremath{\textnormal{Tr}}(\rho(f))=\|s\|^2_{L^2}$. It follows that $H \hookrightarrow \ensuremath{\mathcal{H}}$, and corresponds to the subspace of $s \in \ensuremath{\mathcal{H}}$ such that $\rho(f)$ is trace class, where $f=(s,s)_{C^\ast(N)}$. The Hilbert $C^\ast(N)_{(1)}$-module $\ensuremath{\mathcal{H}}$ splits into a finite direct sum: \[ \ensuremath{\mathcal{H}}=\bigoplus_{[\xi]\in \Pi^\ast/\ell \Pi} \ensuremath{\mathcal{H}}_{[\xi]}, \qquad \ensuremath{\mathcal{H}}_{[\xi]}=\ol{\ensuremath{\mathcal{H}}\cdot \ensuremath{\mathbb{K}}(L^2([\xi]))} \] with $\ensuremath{\mathcal{H}}_{[\xi]}$ a Hilbert $\ensuremath{\mathbb{K}}(L^2([\xi]))$-module. The operator $\ensuremath{\mathcal{F}}$ commutes with the $C^\ast(N)_{(1)}$ action, hence preserves this decomposition, and induces a generalized Fredholm operator $\ensuremath{\mathcal{F}}_{[\xi]}$ on each $\ensuremath{\mathcal{H}}_{[\xi]}$. By the strong Morita equivalence $\ensuremath{\mathbb{K}}(L^2([\xi])) \sim \ensuremath{\mathbb{C}}$, any countably generated Hilbert $\ensuremath{\mathbb{K}}(L^2([\xi]))$-module can be realized as a direct summand of $\ensuremath{\mathbb{K}}(V)$, for some infinite dimensional Hilbert space $V$. The generalized Fredholm operator $\ensuremath{\mathcal{F}}_{[\xi]}$ can be extended by the identity to $\ensuremath{\mathbb{K}}(V)$, giving a generalized Fredholm operator $\ensuremath{\mathcal{F}}_V$ on $\ensuremath{\mathbb{K}}(V)$. We pause to recall some generalities about Hilbert $\ensuremath{\mathbb{K}}(V)$-modules.
When $\ensuremath{\mathbb{K}}(V)$ is viewed as a right Hilbert $\ensuremath{\mathbb{K}}(V)$-module, the space of (bounded) adjointable operators is naturally identified with $\ensuremath{\mathbb{B}}(V)$ acting by left multiplication, while the space of generalized compact operators is $\ensuremath{\mathbb{K}}(V) \subset \ensuremath{\mathbb{B}}(V)$ \cite{WeggeOlsen}. Thus the generalized Fredholm operators, in the sense of Hilbert modules, on $\ensuremath{\mathbb{K}}(V)$ are precisely the operators given by left multiplication by a Fredholm operator on $V$ in the ordinary sense. It follows from Atkinson's theorem that a generalized Fredholm operator $\ensuremath{\mathcal{F}}_V$ on $\ensuremath{\mathbb{K}}(V)$ has closed range. If $\ensuremath{\mathcal{F}}_V$ is left multiplication by $F_V \in \ensuremath{\mathbb{B}}(V)$ then $\ensuremath{\textnormal{ran}}(\ensuremath{\mathcal{F}}_V)=\ensuremath{\mathbb{K}}(V,\ensuremath{\textnormal{ran}}(F_V))$ while $\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}_V)=\ensuremath{\mathbb{K}}(V,\ensuremath{\textnormal{ker}}(F_V))$. As $\ensuremath{\textnormal{ker}}(F_V)$ is finite-dimensional, $\ensuremath{\mathbb{K}}(V,\ensuremath{\textnormal{ker}}(F_V))\simeq V \otimes \ensuremath{\textnormal{ker}}(F_V)$ is a finitely generated, projective $\ensuremath{\mathbb{K}}(V)$-module, and also a Hilbert space; moreover, the Hilbert space inner product is given by the composition of the $\ensuremath{\mathbb{K}}(V)$-valued inner product with the trace. By the above generalities, the generalized Fredholm operator $\ensuremath{\mathcal{F}}_{[\xi]}$ on $\ensuremath{\mathcal{H}}_{[\xi]}$ must have closed range, and hence the same is true for $\ensuremath{\mathcal{F}}$. Moreover \[ \mu_N([\st{D}^E_{\beta}])=[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^+)]-[\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^-)] \in \ensuremath{\textnormal{K}}_0(C^\ast(N)_{(1)}), \] where $\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^{\pm})$ are Hilbert spaces, with inner product given by the composition of the $\ensuremath{\mathbb{K}}(L^2(\Pi^\ast))$-valued inner product with the trace. But the latter agrees with the $L^2$-inner product in $H$ by \eqref{eqn:TraceNorm}, hence $\ensuremath{\textnormal{ker}}(\ensuremath{\mathcal{F}}^{\pm}) \subset H$. On $H$ the operator $\ensuremath{\mathcal{F}}$ coincides with $F$, so this completes the proof. \end{proof} \begin{corollary} \label{cor:AssemblyAsIndex} Let $\ell>0$ and let $\ensuremath{\mathcal{A}}$ be a Dixmier-Douady bundle on $G$ with $\tn{DD}(\ensuremath{\mathcal{A}})=\ell \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$. Let $x=(\Phi,\ensuremath{\mathcal{S}})_\ast[\scr{D}^E] \in \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ be the class represented by a D-cycle $(M,E,\Phi,\ensuremath{\mathcal{S}})$. The formal character $\scr{I}(x) \in R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}}, \, \ell}$ is given by the $T$-index of a first-order elliptic operator $\st{D}^E_\beta$ acting on sections of a vector bundle $\wedge \ensuremath{\mathfrak{n}}_- \wh{\otimes} S \otimes E$ over the space $\ensuremath{\mathcal{Y}}=\ensuremath{\mathfrak{t}} \times_T \Phi^{-1}(U)$, where $U \supset T$ is a tubular neighborhood of the maximal torus.
\end{corollary} \subsection{Application to Hamiltonian loop group spaces.}\label{sec:HamLGSpace} A proper Hamiltonian $LG$-space $(\ensuremath{\mathcal{M}},\omega_\ensuremath{\mathcal{M}},\Phi_{\ensuremath{\mathcal{M}}})$ is a Banach manifold $\ensuremath{\mathcal{M}}$ with a smooth action of $LG$, equipped with a weakly non-degenerate $LG$-invariant closed 2-form $\omega_{\ensuremath{\mathcal{M}}}$, and a proper $LG$-equivariant map \[ \Phi_\ensuremath{\mathcal{M}} \colon \ensuremath{\mathcal{M}} \rightarrow L\ensuremath{\mathfrak{g}}^\ast \] satisfying the moment map condition \[ \iota(\xi_\ensuremath{\mathcal{M}})\omega_\ensuremath{\mathcal{M}}=-d\pair{\Phi_\ensuremath{\mathcal{M}}}{\xi}, \qquad \xi \in L\ensuremath{\mathfrak{g}}.\] A \emph{level $k$ prequantization} of $\ensuremath{\mathcal{M}}$ is a $LG^{\ensuremath{\tn{bas}}}$-equivariant prequantum line bundle $L \rightarrow \ensuremath{\mathcal{M}}$, such that the central circle in $LG^{\ensuremath{\tn{bas}}}$ acts with weight $k$. See for example \cite{MWVerlindeFactorization,AlekseevMalkinMeinrenken} for further background on Hamiltonian loop group spaces. The subgroup $\Omega G \subset LG$ acts freely on $\ensuremath{\mathcal{M}}$, hence the quotient $M=\ensuremath{\mathcal{M}}/\Omega G$ is a smooth finite-dimensional $G$-manifold fitting into a pullback diagram \begin{equation} \begin{CD} \ensuremath{\mathcal{M}} @>\Phi_{\ensuremath{\mathcal{M}}}>> L\ensuremath{\mathfrak{g}}^\ast\\ @VVV @VVV\\ M@>\Phi >>G \end{CD} \end{equation} where the vertical maps are the quotient maps by $\Omega G$. The quotient $M$ is a \emph{quasi-Hamiltonian} (or \emph{q-Hamiltonian}) $G$-\emph{space}, and the pullback diagram above gives a 1-1 correspondence between proper Hamiltonian $LG$-spaces and compact q-Hamiltonian $G$-spaces \cite{AlekseevMalkinMeinrenken}. Let $G$ be compact and connected. It was shown in \cite{DDDFunctor} (see \cite{LMSspinor} for a simpler construction) that every q-Hamiltonian $G$-space gives rise, in a canonical way, to a D-cycle $(M,\ensuremath{\mathbb{C}},\Phi,\ensuremath{\mathcal{S}}_{\tn{spin}})$ for $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}})$ for a suitable Dixmier-Douady bundle $\ensuremath{\mathcal{A}}$ over $G$; the Morita morphism $\ensuremath{\mathcal{S}}_{\tn{spin}}$ is referred to as a \emph{twisted spin-c structure} in \cite{DDDFunctor,MeinrenkenKHomology,LMSspinor}. For $G$ simple and simply connected, the Dixmier-Douady class of $\ensuremath{\mathcal{A}}$ is $\ensuremath{\textnormal{h}^\vee} \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$, and we denote it by $\ensuremath{\mathcal{A}}^{(\ensuremath{\textnormal{h}^\vee})}$. We will assume $G$ is simple and simply connected below. A \emph{level $k$ prequantization} \cite{MeinrenkenKHomology} of a q-Hamiltonian space is a Morita morphism \[ \ensuremath{\mathcal{E}} \colon \underline{\ensuremath{\mathbb{C}}} \dashrightarrow \Phi^\ast \ensuremath{\mathcal{A}}^{(k)} \] where $\tn{DD}(\ensuremath{\mathcal{A}}^{(k)})=k \in \ensuremath{\mathbb{Z}} \simeq H^3_G(G,\ensuremath{\mathbb{Z}})$. Isomorphism classes of level $k$ prequantizations $\ensuremath{\mathcal{E}}$ of $M$ are in 1-1 correspondence with isomorphism classes of level $k$ prequantum line bundles $L$ over $\ensuremath{\mathcal{M}}$, see \cite{MeinrenkenKHomology,ZohrehPrequant} and references therein. 
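For orientation, we recall the form of the Freed-Hopkins-Teleman isomorphism as it is used below: for $G$ simple and simply connected and each level $k\ge 0$, it identifies
\[ \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})}) \simeq R_k(G), \]
where $R_k(G)$ denotes the level $k$ Verlinde ring, i.e.\ the fusion ring of level $k$ positive energy representations of $LG$.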
Let $\ensuremath{\mathcal{S}}=\ensuremath{\mathcal{S}}_{\tn{spin}}\otimes \ensuremath{\mathcal{E}}$; then $(M,\ensuremath{\mathbb{C}},\Phi,\ensuremath{\mathcal{S}})$ is a D-cycle for $\ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})})$. The \emph{level $k$ quantization} of $(M,\ensuremath{\mathcal{E}})$ was defined by Meinrenken in \cite{MeinrenkenKHomology} as the image of the D-cycle $(M,\ensuremath{\mathbb{C}},\Phi,\ensuremath{\mathcal{S}})$ in the analytic twisted K-homology group: \begin{equation} \label{eqn:MeinrenkenDefinition} (\Phi,\ensuremath{\mathcal{S}})_\ast[\scr{D}] \in \ensuremath{\textnormal{K}}_0^G(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})}). \end{equation} In light of the Freed-Hopkins-Teleman theorem, as well as the 1-1 correspondence between q-Hamiltonian $G$-spaces and Hamiltonian $LG$-spaces, it would seem reasonable to \emph{define} the level $k$ `quantization' of the prequantized loop group space $(\ensuremath{\mathcal{M}},\omega_\ensuremath{\mathcal{M}},\Phi_\ensuremath{\mathcal{M}},L)$ as the element of $R_k(G)$ corresponding to $(\Phi,\ensuremath{\mathcal{S}})_\ast [\scr{D}]$ under the Freed-Hopkins-Teleman isomorphism. This definition satisfies many desirable properties. For example, the quantization of a prequantized integral coadjoint orbit is the corresponding irreducible positive energy representation. Also, the definition satisfies a `quantization commutes with reduction' principle, see \cite{MeinrenkenKHomology}. In \cite{LSQuantLG}, building on constructions in \cite{LMSspinor}, we suggested an alternative definition of the quantization of a Hamiltonian loop group space in terms of the $T$-equivariant $L^2$-index of a Dirac-type operator on a non-compact spin-c submanifold of $\ensuremath{\mathcal{M}}$. The latter submanifold and operator can be identified, respectively, with the manifold $\ensuremath{\mathcal{Y}}$ and the operator $\st{D}_\beta$ that we discussed in Section \ref{sec:IndexMap}; see \cite{LSQuantLG} for details. As mentioned earlier, we proved in \cite[Section 4.7]{LSQuantLG} that $\st{D}_\beta$ has a well-defined $T$-equivariant $L^2$-index, with formal character lying in $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\, (k+\ensuremath{\textnormal{h}^\vee})}$, and proposed that the quantization of $\ensuremath{\mathcal{M}}$ be \emph{defined} as the corresponding element of the Verlinde ring $R_k(G)$. The following is now an immediate consequence of Corollary \ref{cor:AssemblyAsIndex} and Proposition \ref{prop:InverseFHT}. \begin{corollary} \label{cor:DefsAgree} The two definitions of the quantization of $\ensuremath{\mathcal{M}}$ agree, that is, under the identification $R^{-\infty}(T)^{W_\ensuremath{\textnormal{aff}}-\ensuremath{\textnormal{anti}},\,(k+\ensuremath{\textnormal{h}^\vee})}\simeq R_k(G)$, the $T$-equivariant $L^2$-index $\ensuremath{\textnormal{index}}(\st{D}_\beta)$ coincides with the image of $(\Phi,\ensuremath{\mathcal{S}})_\ast[\scr{D}] \in \ensuremath{\textnormal{K}}^G_0(G,\ensuremath{\mathcal{A}}^{(k+\ensuremath{\textnormal{h}^\vee})})$ under the Freed-Hopkins-Teleman isomorphism. \end{corollary} Our principal motivation in \cite{LSQuantLG} was to give a definition amenable to study via Witten deformation/non-abelian localization, and to use this to obtain a new proof of the quantization-commutes-with-reduction theorem for Hamiltonian loop group spaces.
This was mostly carried out in \cite{LSWittenDef} (combined with certain results of \cite{YiannisThesis} or \cite{LMVerlindeQR}). Thus a consequence of Corollary \ref{cor:DefsAgree} is that this new proof applies also to Meinrenken's \cite{MeinrenkenKHomology} definition \eqref{eqn:MeinrenkenDefinition}.
{ "timestamp": "2019-07-03T02:19:41", "yymm": "1804", "arxiv_id": "1804.05213", "language": "en", "url": "https://arxiv.org/abs/1804.05213" }
\makeatletter
\renewcommand\section{\@startsection{section}{1}%
  \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
  {\normalfont\bfseries\centering}}
\makeatother
\title{Renewal theory for asymmetric $U$-statistics}
\date{14 April, 2018}
\author{Svante Janson}
\thanks{Partly supported by the Knut and Alice Wallenberg Foundation}
\address{Department of Mathematics, Uppsala University, PO Box 480, SE-751~06 Uppsala, Sweden}
\email{svante.janson@math.uu.se}
\urladdr{http://www.math.uu.se/svante-janson}
\subjclass[2010]{60F05; 60F17, 60K05}
\overfullrule 0pt
\numberwithin{equation}{section}
\renewcommand\le{\leqslant}
\renewcommand\ge{\geqslant}
\allowdisplaybreaks
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{tom}{}
\newtheorem*{acks}{Acknowledgements}
\newtheorem*{ack}{Acknowledgement}
\theoremstyle{remark}
\newenvironment{romenumerate}[1][-10pt]
 {\addtolength{\leftmargini}{#1}\begin{enumerate}%
  \renewcommand{\labelenumi}{\textup{(\roman{enumi})}}%
  \renewcommand{\theenumi}{\textup{(\roman{enumi})}}%
 }{\end{enumerate}}
\newenvironment{PXenumerate}[1]
 {\begin{enumerate}%
  \renewcommand{\labelenumi}{\textup{(#1\arabic{enumi})}}%
  \renewcommand{\theenumi}{\labelenumi}%
 }{\end{enumerate}}
\newenvironment{PQenumerate}[1]
 {\begin{enumerate}%
  \renewcommand{\labelenumi}{\textup{(#1)}}%
  \renewcommand{\theenumi}{\labelenumi}%
 }{\end{enumerate}}
\newcounter{oldenumi}
\newenvironment{romenumerateq}%
 {\setcounter{oldenumi}{\value{enumi}}%
  \begin{romenumerate}\setcounter{enumi}{\value{oldenumi}}}%
 {\end{romenumerate}}
\newcounter{thmenumerate}
\newenvironment{thmenumerate}
 {\setcounter{thmenumerate}{0}%
  \renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}%
  \def\item{\par\refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}}%
 }{}
\newcounter{xenumerate}
\newenvironment{xenumerate}
 {\begin{list}
  {\upshape(\roman{xenumerate})}
  {\setlength{\leftmargin}{0pt}
   \setlength{\rightmargin}{0pt}
   \setlength{\labelwidth}{0pt}
   \setlength{\itemindent}{\labelsep}
   \setlength{\topsep}{0pt}
   \usecounter{xenumerate}}
 }{\end{list}}
\newcommand\xfootnote[1]{\unskip\footnote{#1}$ $}
\newcommand\pfitem[1]{\par(#1):}
\newcommand\pfitemx[1]{\par#1:}
\newcommand\pfitemref[1]{\pfitemx{\ref{#1}}}
\newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent}
\newcommand\step[2]{\smallskip\noindent\emph{Step #1: #2} \noindent}
\newcounter{steps}
\newcommand\stepx{\smallskip\noindent\refstepcounter{steps}%
  \emph{Step \arabic{steps}. }}
\newcommand{\refT}[1]{Theorem~\ref{#1}}
\newcommand{\refTs}[1]{Theorems~\ref{#1}}
\newcommand{\refC}[1]{Corollary~\ref{#1}}
\newcommand{\refL}[1]{Lemma~\ref{#1}}
\newcommand{\refR}[1]{Remark~\ref{#1}}
\newcommand{\refS}[1]{Section~\ref{#1}}
\newcommand{\refSS}[1]{Section~\ref{#1}}
\newcommand{\refStep}[1]{Step~\ref{#1}}
\newcommand{\refP}[1]{Proposition~\ref{#1}}
\newcommand{\refD}[1]{Definition~\ref{#1}}
\newcommand{\refE}[1]{Example~\ref{#1}}
\newcommand{\refF}[1]{Figure~\ref{#1}}
\newcommand{\refApp}[1]{Appendix~\ref{#1}}
\newcommand{\refTab}[1]{Table~\ref{#1}}
\newcommand{\refand}[2]{\ref{#1} and~\ref{#2}}
\newcommand\REM[1]{{\raggedright\texttt{[#1]}\par\marginal{XXX}}}
\newcommand\XREM[1]{\relax}
\newcommand\rem[1]{{\texttt{[#1]}\marginal{XXX}}}
\begingroup
  \count255=\time
  \divide\count255 by 60
  \count1=\count255
  \multiply\count255 by -60
  \advance\count255 by \time
  \ifnum \count255 < 10
    \xdef\klockan{\the\count1.0\the\count255}
  \else\xdef\klockan{\the\count1.\the\count255}\fi
\endgroup
\newcommand\nopf{\qed}
\newcommand\noqed{\renewcommand{\qed}{}}
\newcommand\qedtag{\eqno{\qed}}
\DeclareMathOperator*{\sumx}{\sum\nolimits^{*}}
\DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}}
\newcommand\set[1]{\ensuremath{\{#1\}}}
\newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}}
\newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}}
\newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}}
\newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}}
\newcommand\xpar[1]{(#1)}
\newcommand\bigpar[1]{\bigl(#1\bigr)}
\newcommand\Bigpar[1]{\Bigl(#1\Bigr)}
\newcommand\biggpar[1]{\biggl(#1\biggr)}
\newcommand\lrpar[1]{\left(#1\right)}
\newcommand\bigsqpar[1]{\bigl[#1\bigr]}
\newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]}
\newcommand\biggsqpar[1]{\biggl[#1\biggr]}
\newcommand\lrsqpar[1]{\left[#1\right]}
\newcommand\xcpar[1]{\{#1\}}
\newcommand\bigcpar[1]{\bigl\{#1\bigr\}}
\newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}}
\newcommand\biggcpar[1]{\biggl\{#1\biggr\}}
\newcommand\lrcpar[1]{\left\{#1\right\}}
\newcommand\abs[1]{|#1|}
\newcommand\bigabs[1]{\bigl|#1\bigr|}
\newcommand\Bigabs[1]{\Bigl|#1\Bigr|}
\newcommand\biggabs[1]{\biggl|#1\biggr|}
\newcommand\lrabs[1]{\left|#1\right|}
\def\rompar(#1){\textup(#1\textup)}
\newcommand\xfrac[2]{#1/#2}
\newcommand\xpfrac[2]{(#1)/#2}
\newcommand\xqfrac[2]{#1/(#2)}
\newcommand\xpqfrac[2]{(#1)/(#2)}
\newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}}
\newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}}
\newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}}
\newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}}
\newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}}
\newcommand\innprod[1]{\langle#1\rangle}
\newcommand\expbig[1]{\exp\bigl(#1\bigr)}
\newcommand\expBig[1]{\exp\Bigl(#1\Bigr)}
\newcommand\explr[1]{\exp\left(#1\right)}
\newcommand\expQ[1]{e^{#1}}
\def\xexp(#1){e^{#1}}
\newcommand\ceil[1]{\lceil#1\rceil}
\newcommand\floor[1]{\lfloor#1\rfloor}
\newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor}
\newcommand\frax[1]{\{#1\}}
\newcommand\setn{\set{1,\dots,n}}
\newcommand\nn{[n]}
\newcommand\ntoo{\ensuremath{{n\to\infty}}}
\newcommand\Ntoo{\ensuremath{{N\to\infty}}}
\newcommand\asntoo{\text{as }\ntoo}
\newcommand\ktoo{\ensuremath{{k\to\infty}}}
\newcommand\mtoo{\ensuremath{{m\to\infty}}}
\newcommand\stoo{\ensuremath{{s\to\infty}}}
\newcommand\ttoo{\ensuremath{{t\to\infty}}}
\newcommand\xtoo{\ensuremath{{x\to\infty}}}
\newcommand\bmin{\wedge}
\newcommand\normx[2]{\|#2\|_{#1}}
\newcommand\norm[1]{\|#1\|_2}
\newcommand\normp[1]{\normx{p}{#1}}
\newcommand\bignorm[1]{\bigl\|#1\bigr\|}
\newcommand\Bignorm[1]{\Bigl\|#1\Bigr\|}
\newcommand\downto{\searrow}
\newcommand\upto{\nearrow}
\newcommand\half{\tfrac12}
\newcommand\thalf{\tfrac12}
\newcommand\punkt{.\spacefactor=1000}
\newcommand\iid{i.i.d\punkt}
\newcommand\ie{i.e\punkt}
\newcommand\eg{e.g\punkt}
\newcommand\viz{viz\punkt}
\newcommand\cf{cf\punkt}
\newcommand\whp{w.h.p\punkt}
\newcommand\ii{\mathrm{i}}
\newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}}
\newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}}
\newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}}
\newcommand\eqd{\overset{\mathrm{d}}{=}}
\newcommand\neqd{\overset{\mathrm{d}}{\neq}}
\newcommand\op{o_{\mathrm p}}
\newcommand\Op{O_{\mathrm p}}
\newcommand\bbR{\mathbb R}
\newcommand\bbC{\mathbb C}
\newcommand\bbN{\mathbb N}
\newcommand\bbT{\mathbb T}
\newcommand\bbQ{\mathbb Q}
\newcommand\bbZ{\mathbb Z}
\newcommand\bbZleo{\mathbb Z_{\le0}}
\newcommand\bbZgeo{\mathbb Z_{\ge0}}
\newcounter{CC}
\newcommand{\CC}{\stepcounter{CC}\CCx}
\newcommand{\CCx}{C_{\arabic{CC}}}
\newcommand{\CCdef}[1]{\xdef#1{\CCx}}
\newcommand{\CCname}[1]{\CC\CCdef{#1}}
\newcommand{\CCreset}{\setcounter{CC}0}
\newcounter{cc}
\newcommand{\cc}{\stepcounter{cc}\ccx}
\newcommand{\ccx}{c_{\arabic{cc}}}
\newcommand{\ccdef}[1]{\xdef#1{\ccx}}
\newcommand{\ccname}[1]{\cc\ccdef{#1}}
\newcommand{\ccreset}{\setcounter{cc}0}
\renewcommand\Re{\operatorname{Re}}
\renewcommand\Im{\operatorname{Im}}
\newcommand\E{\operatorname{\mathbb E{}}}
\renewcommand\P{\operatorname{\mathbb P{}}}
\newcommand\PP{\operatorname{\mathbb P{}}}
\newcommand\Var{\operatorname{Var}}
\newcommand\Cov{\operatorname{Cov}}
\newcommand\Corr{\operatorname{Corr}}
\newcommand\Exp{\operatorname{Exp}}
\newcommand\Po{\operatorname{Po}}
\newcommand\Bi{\operatorname{Bi}}
\newcommand\Bin{\operatorname{Bin}}
\newcommand\Be{\operatorname{Be}}
\newcommand\Ge{\operatorname{Ge}}
\newcommand\NBi{\operatorname{NegBin}}
\newcommand\Res{\operatorname{Res}}
\newcommand\fall[1]{^{\underline{#1}}}
\newcommand\rise[1]{^{\overline{#1}}}
\newcommand\supp{\operatorname{supp}}
\newcommand\sgn{\operatorname{sgn}}
\newcommand\Tr{\operatorname{Tr}}
\newcommand\ga{\alpha}
\newcommand\gb{\beta}
\newcommand\gd{\delta}
\newcommand\gD{\Delta}
\newcommand\gf{\varphi}
\newcommand\gam{\gamma}
\newcommand\gG{\Gamma}
\newcommand\gk{\varkappa}
\newcommand\kk{\kappa}
\newcommand\gl{\lambda}
\newcommand\gL{\Lambda}
\newcommand\go{\omega}
\newcommand\gO{\Omega}
\newcommand\gs{\sigma}
\newcommand\gS{\Sigma}
\newcommand\gss{\sigma^2}
\newcommand\gth{\theta}
\newcommand\eps{\varepsilon}
\newcommand\ep{\varepsilon}
\newcommand\cA{\mathcal A}
\newcommand\cB{\mathcal B}
\newcommand\cC{\mathcal C}
\newcommand\cD{\mathcal D}
\newcommand\cE{\mathcal E}
\newcommand\cF{\mathcal F}
\newcommand\cG{\mathcal G}
\newcommand\cH{\mathcal H}
\newcommand\cI{\mathcal I}
\newcommand\cJ{\mathcal J}
\newcommand\cK{\mathcal K}
\newcommand\cL{{\mathcal L}}
\newcommand\cM{\mathcal M}
\newcommand\cN{\mathcal N}
\newcommand\cO{\mathcal O}
\newcommand\cP{\mathcal P}
\newcommand\cQ{\mathcal Q}
\newcommand\cR{{\mathcal R}}
\newcommand\cS{{\mathcal S}}
\newcommand\cT{{\mathcal T}}
\newcommand\cU{{\mathcal U}}
\newcommand\cV{\mathcal V}
\newcommand\cW{\mathcal W}
\newcommand\cX{{\mathcal X}}
\newcommand\cY{{\mathcal Y}}
\newcommand\cZ{{\mathcal Z}}
\newcommand\tA{\tilde A}
\newcommand\tB{\tilde B}
\newcommand\tC{\tilde C}
\newcommand\tD{\tilde D}
\newcommand\tE{\tilde E}
\newcommand\tF{\tilde F}
\newcommand\tG{\tilde G}
\newcommand\tH{\tilde H}
\newcommand\tI{\tilde I}
\newcommand\tJ{\tilde J}
\newcommand\tK{\tilde K}
\newcommand\tL{{\tilde L}}
\newcommand\tM{\tilde M}
\newcommand\tN{\tilde N}
\newcommand\tO{\tilde O}
\newcommand\tP{\tilde P}
\newcommand\tQ{\tilde Q}
\newcommand\tR{{\tilde R}}
\newcommand\tS{{\tilde S}}
\newcommand\tT{{\tilde T}}
\newcommand\tU{{\widetilde U}}
\newcommand\tV{\tilde V}
\newcommand\tW{\widetilde W}
\newcommand\tX{{\tilde X}}
\newcommand\tY{{\tilde Y}}
\newcommand\tZ{{\tilde Z}}
\newcommand\tf{\tilde f}
\newcommand\td{\tilde d}
\newcommand\tmu{\tilde \mu}
\newcommand\bJ{\bar J}
\newcommand\bW{\overline W}
\newcommand\ett[1]{\boldsymbol1\xcpar{#1}}
\newcommand\bigett[1]{\boldsymbol1\bigcpar{#1}}
\newcommand\Bigett[1]{\boldsymbol1\Bigcpar{#1}}
\newcommand\etta{\boldsymbol1}
\newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}
\newcommand\limn{\lim_{n\to\infty}}
\newcommand\limN{\lim_{N\to\infty}}
\newcommand\qw{^{-1}}
\newcommand\qww{^{-2}}
\newcommand\qq{^{1/2}}
\newcommand\qqw{^{-1/2}}
\newcommand\qqq{^{1/3}}
\newcommand\qqqb{^{2/3}}
\newcommand\qqqw{^{-1/3}}
\newcommand\qqqbw{^{-2/3}}
\newcommand\qqqq{^{1/4}}
\newcommand\qqqqc{^{3/4}}
\newcommand\qqqqw{^{-1/4}}
\newcommand\qqqqcw{^{-3/4}}
\newcommand\intoi{\int_0^1}
\newcommand\intoo{\int_0^\infty}
\newcommand\intoooo{\int_{-\infty}^\infty}
\newcommand\oi{\ensuremath{[0,1]}}
\newcommand\ooi{(0,1]}
\newcommand\ooo{[0,\infty)}
\newcommand\ooox{[0,\infty]}
\newcommand\oooo{(-\infty,\infty)}
\newcommand\setoi{\set{0,1}}
\newcommand\dtv{d_{\mathrm{TV}}}
\newcommand\dd{\,\mathrm{d}}
\newcommand\ddx{\mathrm{d}}
\newcommand\ddd[1]{\frac{\ddx}{\ddx#1}}
\newcommand\rv{random variable}
\newcommand\lhs{left-hand side}
\newcommand\rhs{right-hand side}
\newcommand\gnp{\ensuremath{G(n,p)}}
\newcommand\gnm{\ensuremath{G(n,m)}}
\newcommand\gnd{\ensuremath{G(n,d)}}
\newcommand\gnx[1]{\ensuremath{G(n,#1)}}
\newcommand\etto{\bigpar{1+o(1)}}
\newcommand\nj{_{n,j}}
\newcommand\mj{_{m,j}}
\newcommand\Doo{D\ooo}
\newcommand\gDa{\gD a}
\newcommand\ux{\widehat U}
\newcommand\WW{\mathbf W}
\newcommand\oT{[0,T]}
\newcommand\oas{o_{\mathrm{a.s.}}}
\newcommand\oasx{o}
\newcommand\fXXd{f(X_1,\dots,X_d)}
\newcommand\FF{\widehat F}
\newcommand\xx[1]{^{(#1)}}
\newcommand\fx{f_*}
\newcommand\fS{\mathfrak S}
\newcommand\NN[1]{N_{#1}}
\newcommand\flnx{\floor{n(x)}}
\newcommand\ZZ{\widehat Z}
\newcommand\BB{\mathbf B}
\newcommand\UU{U^*}
\newcommand\tUU{\tU^*}
\newcommand\nona{nonarithmetic}
\newcommand\Roo{R_\infty}
\newcommand\xxx{\gD x}
\newcommand\SSS{S^*}
\newcommand\Uoi{U(0,1)}
\newcommand\perm{\relax}
\newcommand\permA{\perm{231}, \perm{321}}
\newcommand\permB{\perm{231}, \perm{312}}
\newcommand\permD{\perm{132}, \perm{312}}
\newcommand\permE{\perm{132}, \perm{321}}
\newcommand\permAAA{\perm{231},\perm{312}, \perm{321}}
\newcommand\permBBB{\perm{132},\perm{231}, \perm{321}}
\newcommand\permCCC{\perm{132},\perm{231}, \perm{312}}
\newcommand\permEEE{\perm{132},\perm{213}, \perm{321}}
\newcommand\xoo{_1^\infty}
\newcommand\ELL{L}
\newcommand\Nxn[1]{N_{#1,n}}
\newcommand\Ngsn{\Nxn\gs}
\newcommand\CS{Cauchy--Schwarz}
\newcommand\CSineq{\CS{} inequality}
\newcommand\ER{Erd\H os--R\'enyi}
\newcommand\citex{\REM}
\newcommand\refx[1]{\texttt{[#1]}}
\newcommand\xref[1]{\texttt{(#1)}}
\hyphenation{Upp-sala}

\begin{document}

\begin{abstract}
We extend a functional limit theorem for symmetric $U$-statistics [Miller and Sen, 1972] to asymmetric $U$-statistics, and use this to show some renewal theory results for asymmetric $U$-statistics. Some applications are given.
\end{abstract}

\maketitle

\section{Introduction}\label{S:intro}
Let $X,X_1,X_2,\dots$ be an \iid{} sequence of random variables taking values in an arbitrary measurable space $S=(S,\cS)$. (In most cases, $S=\bbR$ or perhaps $\bbR^k$, or a Borel subset of one of these, but we can just as well consider the general case.) Furthermore, let $d\ge1$ and let $f:S^d\to \bbR$ be a given measurable function. We then define the (real-valued) random variables \begin{equation} \label{U} U_n=U_n(f):=\sum_{1\le i_1<\dots<i_d\le n} f\bigpar{X_{i_1},\dots,X_{i_d}}, \qquad n\ge0. \end{equation} We call $U_n$ a \emph{$U$-statistic}, following \citet{Hoeffding}. \begin{remark} Many authors, including \citet{Hoeffding}, normalize $U_n$ by dividing the sum in \eqref{U} by $\binom nd$, the number of terms in it; the traditional definition (which assumes $n\ge d$) is thus in our notation $U_n/\binom nd$. We find it more convenient for our purposes to use the unnormalized version above. \end{remark} It is common, following \citet{Hoeffding}, to assume that $f$ is a symmetric function of its $d$ variables. In this case, the order of the variables does not matter, and we can in \eqref{U} sum over all sequences $i_1,\dots,i_d$ of $d$ distinct elements of $\setn$, up to an obvious factor of $d!$. (\cite{Hoeffding} gives both versions.) Conversely, if we sum over all such sequences, we may without loss of generality assume that $f$ is symmetric. However, in the present paper we consider the general case of \eqref{U} without assuming symmetry, which, for emphasis, we may call \emph{asymmetric $U$-statistics}. One of the purposes of this paper is to generalize a result by \cite{MillerSen} on functional convergence from the symmetric case to the general, asymmetric case. We then use this result to derive some renewal theory results for the sequence $U_n$. One motivation for this comes from applications to random restricted permutations, see \refS{Sex}. Univariate limit results, \ie, limits in distribution of $U_n$ after suitable normalization, are well-known also in the asymmetric case, see \eg{} \cite[Chapter 11.2]{SJIII}.
The possibility of functional limits is briefly mentioned in \cite[Remark 11.25]{SJIII}, and a special case ($d=2$ and $f$ antisymmetric) was studied in \cite{SJ22}, see \refE{E22}; however, we are not aware of functional limit theorems in the generality of the present paper. The main results are stated in \refS{Smain}. The proofs are given in \refS{Spf}; they use standard methods, in particular the decomposition and projection method of \citet{Hoeffding}, but some complications arise in the asymmetric case. Some examples and applications are discussed in \refS{Sex}; this includes the applications to random restricted permutations that gave the initial motivation to write the present paper. We end with some further comments and open problems in \refS{Sadd}; this includes more comments on the relation between the symmetric and asymmetric cases. The results in the present paper focus on the non-degenerate case, where the covariance matrix $\gS=(\gs_{ij})$ defined by \eqref{gsij} below is non-zero. In the degenerate case when $\gS=0$, the results still hold but are less interesting, since the obtained limits in \eg{} \refT{T1} are degenerate. See \refR{Rdeg} for further comments on the degenerate case. \section{Some notation}\label{Snot} We consider as in the introduction, unless otherwise said, some given \iid{} random variables $X_i\in S$ and a given function $f:S^d\to\bbR$. In particular, $d\ge1$ is fixed, and we therefore often omit it from the notation. We assume throughout $f(X_1,\dots,X_d)\in L^1$ (and usually $L^2$), and define \begin{equation}\label{mu} \mu:=\E \fXXd. \end{equation} We study $U_n=U_n(f)$ defined by \eqref{U}. Let \begin{equation} \label{U*} U^*_n=U^*_n(f):=\max_{1\le m\le n}|U_m(f)|. \end{equation} We use $\normp\,$ for the $L^p$-norm: $\normp{Y}:=\xpar{\E |Y|^p}^{1/p}$ for any random variable $Y$ and $p>0$, and $\normp{f}:=\normp{ f(X_1,\dots,X_d)}$ (and similarly for other functions). $\cF_n$ is the $\gs$-field generated by $X_1,\dots,X_n$. If we consider a limit as \ntoo{}, and $a_n$ is a given sequence, then $\oas(a_n)$ denotes a sequence of random variables $R_n$ such that $R_n/a_n\asto0$. This extends to other limits such as \xtoo, \emph{mutatis mutandis}. $C$ denotes positive constants that may change from one occurrence to the next; they may depend on $d$ (or $\td$) but not on $f$ or $n$ or other variables. Similarly, $C_f$ denotes constants that may depend on $f$, $C_p$ denotes constants that may depend on the parameter $p$ (and $d$), and so on. \section{Main results} \label{Smain} \subsection{Limit theorems} For completeness, we begin with the law of large numbers, extending the result by \citet{HoeffdingLLN} to the asymmetric case. \begin{theorem}\label{TLLN} Suppose that $\fXXd\in L^1$. Then, as \ntoo, \begin{equation}\label{tlln} U_n/\binom nd \asto \mu. \end{equation} \end{theorem} Next we state a functional limit theorem, extending the theorem by \citet{MillerSen} for the symmetric case. We use the space $\Doo$ with the usual Skorohod topology, see \eg{} \cite[Appendix A2]{Kallenberg}; recall that convergence in $\Doo$ to a continuous limit is equivalent to uniform convergence on any compact interval $\oT$. We define the $d\times d$ matrix $\gS=(\gs_{ij})$ by \begin{equation}\label{gsij} \gs_{ij}:=\Cov\bigpar{f_i(X),f_j(X)} =\E\bigpar{f_i(X)f_j(X)}, \qquad i,j=1,\dots,d, \end{equation} with $f_i,f_j$ defined by \eqref{fi} below.
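As a simple illustration of these quantities (a standard example, included here only for orientation), take $d=2$, let $X\sim\Uoi$, and let $f(x_1,x_2):=\ett{x_1>x_2}$, which is not symmetric; then $U_n=\sum_{1\le i<j\le n}\ett{X_i>X_j}$ is the number of inversions in the sequence $X_1,\dots,X_n$. Here $\mu=1/2$, and \eqref{fi} below yields $f_1(x)=x-\half$ and $f_2(x)=\half-x$, so that
\[ \gs_{11}=\gs_{22}=\tfrac1{12}, \qquad \gs_{12}=\gs_{21}=-\tfrac1{12}; \]
thus $\gS\neq0$ (although $\gS$ is singular), and \eqref{c1var2} below gives $\gss=\frac13\bigpar{\tfrac1{12}-\tfrac1{12}+\tfrac1{12}}=\frac1{36}$, in accordance with the classical formula $\Var U_n=n(n-1)(2n+5)/72\sim n^3/36$ for the number of inversions of a uniformly random permutation. A minimal simulation sketch (Python with NumPy; illustration only, with our own function name) is consistent with these values:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def u_stat_inversions(x):
    # U_n = sum_{i<j} 1{x_i > x_j}, computed naively in O(n^2)
    return sum(int(np.sum(x[i] > x[i + 1:])) for i in range(len(x)))

n, reps = 200, 2000
vals = np.array([u_stat_inversions(rng.random(n)) for _ in range(reps)])
print(vals.mean() / (n * (n - 1) / 2))  # close to mu = 1/2
print(vals.var() / n**3)                # close to sigma^2 = 1/36
\end{verbatim}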
Let $\WW(t):=\bigpar{W_1(t),\dots,W_d(t)}$, $t\ge0$, be a continuous $d$-dimensional Gaussian process with $\WW(0)=0$ and stationary independent increments \begin{equation}\label{WW} \WW(s+t)-\WW(s)\sim N\bigpar{0,t\Sigma}. \end{equation} Note that each component $W_j$ is a standard Brownian motion up to a factor $\gs_{jj}\qq$, and that we can represent $\WW$ as $\WW(t)=\Sigma\qq\BB(t)$, where $\BB(t)$ is a $d$-dimensional standard Brownian motion. Define also the functions \begin{equation}\label{psi} \psi_j(s,t)=\psi_{j;d}(s,t):=\frac{1}{(j-1)!\,(d-j)!}s^{j-1}(t-s)^{d-j}. \end{equation} We extend $U_n$ defined by \eqref{U} to a function of a real variable by $U_x:=U_{\floor x}$, $x>0$. (We tacitly do the same for other sequences later.) \begin{theorem} \label{T1} Suppose that $f(X_1,\dots,X_d)\in L^2$. Then, as \ntoo, \begin{equation}\label{t1} \frac{ U_{nt}-n^dt^d\mu/d!}{n^{d-1/2}} \dto Z_t, \qquad t\ge0, \end{equation} in $\Doo$, where $Z_t$ is a continuous centered Gaussian process that can be defined as \begin{equation}\label{Z} Z_t:=\sum_{j=1}^d \int_0^t\psi_j(s,t)\dd W_j(s). \end{equation} Equivalently, $Z_t$ has the covariance function, for $0\le s\le t$, {\multlinegap=0pt\begin{multline} \label{t1cov} \Cov(Z_s,Z_t) =\sum_{i,j=1}^d \gs_{ij} \int_0^s \psi_i(u,s)\psi_j(u,t)\dd u \\%& = \sum_{i,j=1}^d \frac{\gs_{ij} }{(i-1)!\,(j-1)!\,(d-i)!\,(d-j)!} \int_0^s u^{i+j-2}(s-u)^{d-i}(t-u)^{d-j}\dd u. \end{multline}} Moreover, \eqref{t1} holds jointly for several functions $f\xx{k}$, possibly with different $d\xx k$, with limits given by \eqref{Z}, where the corresponding $W_j\xx k$ together form a Gaussian process with stationary independent increments given by the covariances \begin{equation}\label{xul} \Cov\bigpar{W\xx k_i(s),W_j\xx \ell(t)} = \Cov\bigpar{f_i\xx k(X),f_j\xx\ell(X)}\cdot(s\land t). \end{equation} \end{theorem} The It\^o integrals in \eqref{Z} can by \eqref{psi} be written as linear combinations of $t^k\int_0^t s^{d-1-k}\dd W_j(s)$ with $0\le k\le d-j$; thus $Z_t$ is well-defined and continuous for $t\ge0$, with $Z_0=0$. These stochastic integrals can also by integration by parts be expressed as Riemann integrals of continuous stochastic processes, see \eqref{crux}. Note that the final integral in \eqref{t1cov} is elementary, for any given $i,j,d$, and that the covariance function in \eqref{t1cov} is a homogeneous polynomial in $s$ and $t$ of degree $2d-1$. \begin{example}\label{E2} In the case $d=2$, we obtain from \eqref{t1cov}, still for $0\le s\le t$, \begin{equation}\label{e2} \Cov(Z_s,Z_t)= \tfrac12 \bigpar{\gs_{11}+\gs_{12}}s^2t +\tfrac16\bigpar{2\gs_{22}-\gs_{11}-\gs_{12}}s^3. \end{equation} \end{example} \begin{remark} By \eqref{psi} and the binomial theorem, \begin{equation}\label{psisum} \sum_{j=1}^d \psi_j(s,t)=\frac{t^{d-1}}{(d-1)!}. \end{equation} In the symmetric case, all $f_i$ are equal and thus all $\gs_{ij}$ are equal, see \eqref{gsij}. Hence, \eqref{t1cov} simplifies by \eqref{psisum} to \begin{equation}\label{t1cov=} \Cov(Z_s,Z_t) =\gs_{11} \int_0^s \frac{s^{d-1}}{(d-1)!}\,\frac{t^{d-1}}{(d-1)!}\dd u =\frac{\gs_{11}}{(d-1)!^2}s^{d}t^{d-1} . \end{equation} Equivalently, $t^{-(d-1)}Z_t$ is $\gs_{11}\qq(d-1)!\qw B_t$ for a standard Brownian motion $B_t$. This recovers the result by \citet{MillerSen} for the symmetric case. 
Note that our general result \refT{T1} is similar to the symmetric case, with a continuous Gaussian limit process, but that the covariance function in general is more complicated, as seen for $d=2$ in \eqref{e2}, and that the limit thus is not a Brownian motion. \end{remark} By restricting attention to $t=1$, we obtain the following univariate limit, shown in \cite[Corollary 11.20]{SJIII}. \begin{corollary}\label{C1} Suppose that $f(X_1,\dots,X_d)\in L^2$. Then, as \ntoo, \begin{equation}\label{c1} \frac{U_n-\binom nd \mu}{n^{d-1/2}} \dto N\bigpar{0,\gss}, \end{equation} where \begin{equation}\label{c1var} \begin{split} \gss&:= \lim_\ntoo \frac{\Var(U_n)}{n^{2d-1}} = \Var(Z_1) \\& \phantom:= \sum_{i,j=1}^d \frac{(i+j-2)!\,(2d-i-j)!}{(i-1)!\,(j-1)!\,(d-i)!\,(d-j)!\,(2d-1)!}\gs_{ij} . \end{split} \end{equation} Moreover, \begin{equation}\label{c1=0} \gss=0 \iff f_i(X)=0 \text{ a.s\punkt{} for every $i=1,\dots,d$}. \end{equation} \end{corollary} \begin{example} For $d=1$, \refC{C1} reduces to the Central Limit Theorem; indeed, \eqref{c1var} then yields $\gss=\gs_{11}$. For $d=2$, \eqref{c1var} yields \begin{equation}\label{c1var2} \gss=\frac{\gs_{11}+\gs_{12}+\gs_{22}}{3}. \end{equation} \end{example} \subsection{Renewal theory} For $x>0$, let \begin{align} \NN-(x)&:=\sup\set{n\ge0:U_n\le x},\label{NN-} \\ \NN+(x)&:=\inf\set{n\ge0:U_n>x}. \label{NN+} \end{align} Note that if $f\ge0$, then $\NN+(x)=\NN-(x)+1$, but if $f$ attains negative values, then $\NN-(x)>\NN+(x)$ is possible. Most of our results apply to both $\NN+$ and $\NN-$; we then use $\NN\pm$ to denote any of them. The results above easily imply some renewal theorems for $U$-statistics generalizing well-known results for partial sums $S_n$ (i.e., the case $d=1$). We begin with a law of large numbers. \begin{theorem} \label{TNN} Suppose that $\fXXd\in L^1$ and $\mu>0$. Then a.s\punkt{} $\NN\pm(x)<\infty$ for every $x<\infty$, and \begin{align}\label{tnn} \frac{\NN\pm(x)}{x^{1/d}} \asto \parfrac{d!}{\mu}^{1/d} \qquad \text{as \xtoo}. \end{align} \end{theorem} Assuming $f\in L^2$, we obtain also a central limit theorem for $\NN\pm$. \begin{theorem}\label{TR} Suppose that $\fXXd\in L^2$ and $\mu>0$. Then, as \xtoo, \begin{equation}\label{tr} \frac{\NN\pm(x)-\xpar{d!/\mu}^{1/d}x^{1/d}}{x^{1/2d}} \dto N\Bigpar{0,\bigpar{\xfrac{d!}{\mu}}^{2+1/d}d\qww{ \gss}}, \end{equation} where $\gss$ is given by \eqref{c1var}. \end{theorem} A situation that is common in applications is to stop when one process (such as our $U_n$) reaches a threshold, and then look at the value of another process, say $\tU_n$. For standard renewal theory, \ie{} the case $d=1$ in our setting, this was studied in \cite{SJ50}; we extend the main result there to (asymmetric) $U$-statistics. We consider as above an \iid{} sequence $X_1,X_2,\dots$ with values in $S$, but we now have two functions $f:S^d\to\bbR$ and $\tf:S^{\td}\to\bbR$, where the numbers of variables $d$ and $\td$ may be different. We use notations as above for both $f$ and $\tf$, with $\,\tilde{}\,$ to denote variables defined by $\tf$, for example $\tU_n:=U_n(\tf)$ and $\tmu:=\E \tf$; we furthermore assume that the Gaussian processes $W_i(t)$ and $\tW_j(t)$ have the joint distribution specified by \eqref{xul} (with obvious notational changes), and thus \eqref{t1} holds jointly for $f$ and $\tf$ with limits $Z_t$ and $\tZ_t$. \begin{theorem} \label{TVtau} \begin{thmenumerate} \item \label{TVtauas} Suppose that $\fXXd\in L^1$, $\tf(X_1,\dots,X_{\td})\in L^1$ and $\mu>0$.
Then, as \xtoo, \begin{equation}\label{tvtau0} \begin{split} \frac{\tU_{\NN\pm(x)}}{x^{\td/d}} \asto \frac{\tmu}{\td!}\Bigparfrac{d!}{\mu}^{\td/d}. \end{split} \end{equation} \item \label{TVtaud} Suppose that $\fXXd\in L^2$, $\tf(X_1,\dots,X_{\td})\in L^2$ and $\mu>0$. Then, as \xtoo, \begin{equation}\label{tvtau} \frac{\tU_{\NN\pm(x)}-\bigparfrac{d!}{\mu}^{\td/d}\frac{\tmu}{\td!} x^{\td/d}} {x^{\td/d-1/2d}} \dto N\bigpar{0,\gam^2}, \end{equation} where, with $(Z_1,\tZ_1)$ as in \refT{T1}, \begin{equation}\label{tvtau2} \gam^2:=\Bigparfrac{d!}{\mu}^{(2\td-1)/d} \Var\Bigpar{\tZ_1-\frac{(d-1)!\,\tmu}{(\td-1)!\,\mu}Z_1}. \end{equation} \item \label{TVtau=0} Assume the conditions in \ref{TVtaud}. If $\td\ge d$, then $\gam^2=0$ if and only if \begin{equation}\label{ll} \tf_i(X) = \frac{\tmu}{\mu} \sum_j \frac{\binom{d-1}{j-1}\binom{\td-d}{i-j}} {\binom{\td-1}{i-1}} f_j(X) \text{ a.s.}, \quad i=1,\dots,\td. \end{equation} If $\td<d$, then $\gam^2=0$ if and only if \eqref{ll} holds with $f,d,\mu$ and $\tf,\td,\tmu$ interchanged (and $\tmu\neq0$ unless all $\tf_j(X)=0$ a.s.). In particular, if $\td=d$, then \begin{equation}\label{ll1} \gam^2=0 \iff \mu\tf_i(X)=\tmu f_i(X) \quad a.s.,\qquad i=1,\dots,d, \end{equation} and if $d=1$, then \begin{equation}\label{ll2} \gam^2=0 \iff \mu\tf_i(X)=\tmu f_1(X) \quad a.s.,\qquad i=1,\dots,\td. \end{equation} \end{thmenumerate} \end{theorem} \begin{remark}\label{RVtau} \refT{TR} can be regarded as a special case of \refT{TVtau} with $\td=1$ and $\tf(X)\equiv 1$. \end{remark} The asymptotic variance $\gam^2$ in \refT{TVtau} can easily be calculated exactly using \eqref{Z}, \eqref{xul} and \eqref{psi}, but a general formula seems more messy than illuminating, and we state only the special case $d=1$. (In this case, $U_n$ is the standard partial sum $\sum_{i=1}^n f(X_i)$.) \begin{theorem}\label{CVtau} Suppose that $f(X)\in L^2$, $\tf(X_1,\dots,X_{\td})\in L^2$ and $\mu>0$. Then, as \xtoo, \begin{equation}\label{cvtau} \frac{\tU_{\NN\pm(x)}-{\mu}^{-\td}{\tmu}{\td!}\qw x^{\td}} {x^{\td-1/2}} \dto N\bigpar{0,\gam^2}, \end{equation} where \begin{align}\label{cvtau2} \gam^2 &:= {\mu}^{1-2\td} \sum_{i,j=1}^{\td} \frac{(i+j-2)!\,(2\td-i-j)!}{(i-1)!\,(j-1)!\,(\td-i)!\,(\td-j)!\,(2\td-1)!} \Cov\bigpar{\tf_i(X),\tf_j(X)} \notag\\&\qquad -2\frac{{\mu}^{-2\td}\tmu}{(\td-1)!\,\td!}\sum_{i=1}^{\td} \Cov\bigpar{f(X),\tf_i(X)} \notag\\&\qquad + \frac{{\mu}^{-2\td-1}\tmu^2}{(\td-1)!^2}\Var\bigpar{f(X)}. \end{align} Moreover, \begin{equation}\label{ll22} \gam^2=0 \iff \mu\tf_i(X)=\tmu (f(X)-\mu) \quad a.s.,\qquad i=1,\dots,\td. \end{equation} \end{theorem} Continue to assume that $d=1$, and assume for simplicity that $Y:=f(X)\ge0$ a.s. Thus $U_n(f)=S_n(f):=\sum_1^n Y_i$ is a renewal process, and its \emph{overshoot} (\emph{residual life time}) is \begin{equation}\label{R} R(x):=U_{\NN+(x)}-x>0. \end{equation} A classical result, see \eg{} \cite[Theorem 2.6.2]{Gut-SRW}, says that if $0<\mu<\infty$, then $R(x)$ converges in distribution. Recall that (the distribution of) $Y$ has \emph{span} $d>0$ if $Y\in d\bbZ$ a.s., and $d$ is maximal with this property, and that (the distribution of) $Y$ is \nona{} if no such $d$ exists. \begin{proposition}[e.g.\ \cite{Gut-SRW}]\label{PR} Let $R(x)$ be given by \eqref{R}, and assume that $f(X)\ge0$ a.s. \begin{romenumerate} \item If $f(X)$ is \nona, then $R(x)\dto\Roo$ as \xtoo, with \begin{equation} \label{overa} \P\bigpar{\Roo\le y} = \frac{1}{\mu}\int_0^y \P\bigpar{f(X)>s}\dd s, \qquad y\ge0.
\end{equation} \item If $f(X)$ has span $d>0$, then $R(x)\dto\Roo$ as \xtoo{} with $x\in d\bbZ$, with \begin{equation} \label{overd} \P\bigpar{\Roo=kd} =\frac{d}{\mu} \P\bigpar{f(X)\ge kd}, \qquad k\ge1. \end{equation} \end{romenumerate} \nopf \end{proposition} This classical result may be combined with \refT{CVtau} as follows. \begin{theorem}\label{TO} Suppose in addition to the assumptions of \refT{CVtau} that $f(X)\ge0$ a.s. Let $\Roo$ be as in \refP{PR}. \begin{romenumerate} \item \label{TOa} If $f(X)$ is \nona, then \eqref{cvtau} and $R(x)\dto \Roo$ hold jointly as \xtoo. \item \label{TOb} If $f(X)$ has span $d>0$, then \eqref{cvtau} and $R(x)\dto\Roo$ hold jointly as \xtoo{} with $x\in d\bbZ$. \item \label{TOc} If $f(X)$ is integer-valued, then for every fixed integer $k\ge1$, \eqref{cvtau} holds also conditioned on $R(x)=k$, as $x=\ntoo$ through the integers. Moreover, \eqref{cvtau} holds also conditioned on $U_{\NN-(x)}=x$, as $x=\ntoo$. (We consider only $x$ such that we condition on an event of positive probability.) \end{romenumerate} \end{theorem} Note that in \ref{TOc}, the event $U_{\NN-(x)}=x$ holds if and only if some partial sum $U_n:=\sum_1^n f(X_i)$ equals $x$. \begin{remark} If $d=\td=1$, \eqref{cvtau2} reduces to $\gam^2=\mu^{-3}\Var\bigpar{\mu\tf(X)-\tmu f(X)}$, as shown in \cite[Theorem 3]{SJ50}. \end{remark} \subsection{Moment convergence}\label{SSmoments} In \refC{C1}, we have convergence of the second moment in \eqref{c1}, and trivially also of the first moment. We have also convergence of higher moments, provided we assume the corresponding integrability of $f$. \begin{theorem} \label{TUp} Suppose that $\fXXd\in L^p$ with $p\ge2$. Then, \eqref{c1} holds with convergence of all moments and absolute moments of order $\le p$. \end{theorem} For moment convergence in the renewal theory theorems, we assume for simplicity that $f$ and $\tf$ have finite moments of all orders; see also \refR{Rmom}. (For the case $d=\td=1$, see \eg{} \cite{SJ52}, \cite{SJ50}, and \cite[Section 3.8 and Theorem 4.2.3]{Gut-SRW}.) \begin{theorem}\label{TRp} Suppose that $\fXXd\in L^p$ for every $p<\infty$, and that $\mu>0$. Then, \eqref{tnn} and \eqref{tr} hold with convergence of all moments and absolute moments. In particular, as \xtoo, \begin{align}\label{trpe} \E\NN\pm(x)&\sim \Bigparfrac{d!}{\mu}^{1/d}x^{1/d}, \\ \Var\NN\pm(x)&\sim \bigpar{\xfrac{d!}{\mu}}^{2+1/d}d\qww{ \gss} x^{1/d}. \label{trpv} \end{align} \end{theorem} \begin{theorem}\label{TVp} Suppose that $\fXXd\in L^p$ and $\tf(X_1,\dots,X_{\td})\in L^p$ for every $p<\infty$, and that $\mu>0$. Then, \eqref{tvtau0} and \eqref{tvtau} hold with convergence of all moments and absolute moments. In particular, as \xtoo, \begin{align}\label{tvpe} \E\tU_{\NN\pm(x)}&\sim \frac{\tmu}{\td!}\Bigparfrac{d!}{\mu}^{\td/d} x^{\td/d}, \\ \Var\tU_{\NN\pm(x)}&\sim \gam^2 x^{(2\td-1)/d}. \label{tvpv} \end{align} \end{theorem} \begin{theorem}\label{TVp1} Let $d=1$. Suppose that $f(X)\in L^p$ and $\tf(X_1,\dots,X_{\td})\in L^p$ for every $p<\infty$, and that $\mu>0$. \begin{romenumerate} \item \label{TVp1a} Then, \eqref{cvtau} holds with convergence of all moments and absolute moments. \item \label{TVp1b} If furthermore $f(X)$ is integer-valued and $f(X)\ge0$, then \ref{TVp1a} holds also conditioned on $R(x)=k$ or on $U_{\NN-(x)}=x$ as in \refT{TO}\ref{TOc}.
\end{romenumerate} \end{theorem} \section{Proofs}\label{Spf} \subsection{Limit theorems}\label{SSpflimit} The method used by \citet{Hoeffding} and many later papers is a decomposition, which in the asymmetric case is as follows. Assume that $f(X_1,\dots,X_d)\in L^2$ and define, recalling \eqref{mu}, \begin{align} f_i(x)&:=\E \bigpar{f(X_1,\dots,X_d)\mid X_i=x}-\mu \notag \\&\phantom: = \E f\bigpar{X_1,\dots,X_{i-1},x,X_{i+1},\dots,X_d}-\mu, \label{fi} \\ \fx(x_1,\dots,x_d) &:= f(x_1,\dots,x_d) - \mu - \sum_{j=1}^d f_j(x_j). \label{h} \end{align} (In general, these are defined only a.e\punkt, but that is no problem.) Then, by the definition \eqref{U}, \begin{equation}\label{unf} \begin{split} U_n(f)&=\binom nd \mu +\sum_{j=1}^d \sum_{1\le i_1<\dots<i_d\le n} f_j\bigpar{X_{i_j}} + U_n(\fx) \\ &=\binom nd \mu +\sum_{j=1}^d \sum_{i=1}^n \binom{i-1}{j-1}\binom{n-i}{d-j}f_j\bigpar{X_{i}} + U_n(\fx). \end{split} \end{equation} We consider the three terms in \eqref{unf} separately. The first is a constant, and we shall see that the third term is negligible, so the main term is the second term. \begin{remark} The decomposition \eqref{unf} may be continued to higher terms by expanding $\fx$ further, see \eg{} \cite{Hoeffding} for the symmetric case and \cite[Chapter 11.2]{SJIII} in general; this is important when treating degenerate cases, see \refR{Rdeg}, but for our purposes we have no need of this. \end{remark} For the second term, we define for convenience, for $1\le j\le d$ and $n\ge1$, \begin{align} a_{n,j}(i)&:=\binom{i-1}{j-1}\binom{n-i}{d-j}, \qquad 1\le i\le n, \label{anj} \\ \gDa\nj(i)&:=a\nj(i+1)-a\nj(i), \qquad 1\le i< n. \label{bnj} \end{align} Recall $\psi_j(s,t)$ defined in \eqref{psi}, and let $\psi_j'(s,t)$ denote $\frac{\partial}{\partial s}\psi_j(s,t)$. \begin{lemma}\label{LA} Uniformly for all $n$, $j$, $i$ such that the variables are defined, \begin{align} a\nj(i)=\psi_j(i,n)+O\bigpar{n^{d-2}},\label{la} \\ \gDa\nj(i)=\psi_j'(i,n)+O\bigpar{n^{d-3}}.\label{la'} \end{align} In particular, $a\nj(i)=O\bigpar{n^{d-1}}$ and $\gDa\nj(i)=O\bigpar{n^{d-2}}$. Furthermore (for $d\le2$), any error term $O(n^{-1})$ or $O(n^{-2})$ here vanishes identically. \end{lemma} \begin{proof} By \eqref{anj}, for $1\le i\le n$, \begin{equation} \begin{split} a\nj(i) &=\frac{i^{j-1}+O(n^{j-2})}{(j-1)!}\cdot \frac{(n-i)^{d-j}+O(n^{d-j-1})}{(d-j)!} \\& =\frac{i^{j-1}(n-i)^{d-j}}{(j-1)!\,(d-j)!}+O\bigpar{n^{d-2}}, \end{split} \end{equation} which is \eqref{la}. Similarly, for $1\le i< n$, with $\binom k{-1}=0$, \begin{equation} \begin{split} \gDa\nj(i) &=\lrpar{\binom{i}{j-1}-\binom{i-1}{j-1}}\binom{n-i}{d-j} \\&\hskip8em +\binom{i}{j-1}\lrpar{\binom{n-i-1}{d-j}-\binom{n-i}{d-j}} \\ &=\binom{i-1}{j-2}\binom{n-i}{d-j} -\binom{i}{j-1}\binom{n-i-1}{d-j-1} \\ &=\frac{(j-1)i^{j-2}(n-i)^{d-j}-(d-j)i^{j-1}(n-i)^{d-j-1}+O\bigpar{n^{d-3}}} {(j-1)!\,(d-j)!} \\ &=\psi_j'(i,n) +O\bigpar{n^{d-3}}. \end{split} \end{equation} \end{proof} We now take care of the second term in \eqref{unf}. \begin{lemma}\label{Lux} Let \begin{equation}\label{lux1} \ux\nj:= \sum_{i=1}^n \binom{i-1}{j-1}\binom{n-i}{d-j}f_j\bigpar{X_{i}} = \sum_{i=1}^n a\nj(i) f_j\bigpar{X_{i}}. \end{equation} Then, as \ntoo, with $W_j$ as in \eqref{WW}, \begin{equation}\label{lux2} n^{-(d-1/2)} \ux_{nt,j}\dto \int_0^t\psi_j(u,t)\dd W_j(u), \qquad t\ge0, \end{equation} in $\Doo$, jointly for $j=1,\dots,d$. \end{lemma} \begin{proof} For any function $g:S\to\bbR$, let \begin{equation}\label{sn} S_n(g):=U_n(g):=\sum_{i=1}^n g(X_i).
\end{equation} Then, by \eqref{sn}, \eqref{bnj}, and a summation by parts, \begin{equation}\label{pi} \begin{split} \ux\nj&= \sum_{i=1}^n a\nj(i)f_j(X_{i}) =\sum_{i=1}^n a\nj(i)\bigpar{S_{i}(f_j)-S_{i-1}(f_j)} \\& =a\nj(n)S_n(f_j)-\sum_{i=1}^{n-1} \gDa\nj(i)S_i(f_j). \end{split} \end{equation} By \eqref{fi}, $\E f_j(X)=0$, and furthermore $f_j(X)\in L^2$. Hence, by Donsker's theorem, \begin{equation}\label{donsker} n\qqw S_{nt}(f_j)\dto W_j(t), \end{equation} in $\Doo$, jointly for $j=1,\dots,d$, where $W_j$ are continuous centered Gaussian processes as in \eqref{WW}. By the Skorohod coupling theorem \cite[Theorem 4.30]{Kallenberg}, we may assume that the convergence in \eqref{donsker} holds a.s\punkt, and thus as \ntoo, \begin{equation}\label{ddonsker} n\qqw S_{nt}(f_j)= W_j(t) + \oas\xpar{1}, \end{equation} uniformly for $t\in\oT$ and all $j$, for every fixed $T<\infty$. (Note that the error term here, $R_{n,j,t}$ say, is random; the uniformity means that $\sup_{j\le d,\,t\le T}|R_{n,j,t}|\allowbreak\asto0$ for every $T$.) Fix $T$, and let $m=ns$ with $s\le T$. Then, by \eqref{pi}, \eqref{ddonsker} and \refL{LA}, uniformly for $s\in\oT$, \begin{equation}\label{hux} \begin{split} n\qqw\ux_{m,j}& =a\mj(m) W_j(s) -\sum_{i=1}^{m-1} \gDa\mj(i) W_j(i/n) +\oas\bigpar{n^{d-1}} \\& ={\psi_j(m,m) W_j(s) -\sum_{i=1}^{m-1} \psi'_j(i,m) W_j(i/n)} +\oas\bigpar{n^{d-1}}. \end{split} \end{equation} Furthermore, since $W_j$ is bounded and uniformly continuous on $\oT$, with $W_j(0)=0$, and $\psi'_j(s,t)=O(t^{d-2})$, $\psi''_j(s,t)=O(t^{d-3})$ for $0\le s\le t$, \begin{equation}\label{flux} \begin{split} \sum_{i=1}^{m-1} \psi'_j(i,m) W_j(i/n)& = \int_{0}^{m} \psi'_j(x,m) W_j(x/n)\dd x+\oasx\bigpar{m^{d-1}} \\& = n\int_{0}^{s} \psi'_j(nu,ns) W_j(u)\dd u+\oasx\bigpar{n^{d-1}} \\& = n^{d-1}\int_{0}^{s} \psi'_j(u,s) W_j(u)\dd u+\oasx\bigpar{n^{d-1}} . \end{split} \end{equation} An integration by parts yields (with stochastic integrals) \begin{equation}\label{crux} \begin{split} \int_{0}^{s} \psi'_j(u,s) W_j(u)\dd u = \psi_j(s,s)W_j(s)-\int_{0}^{s} \psi_j(u,s)\dd W_j(u) \end{split} \end{equation} and combining \eqref{hux}, \eqref{flux} and \eqref{crux} yields, using $\psi_j(m,m)=n^{d-1}\psi_j(s,s)$, \begin{equation} \begin{split} n\qqw\ux_{ns,j}&=n\qqw\ux_{m,j} = n^{d-1}\int_{0}^{s} \psi_j(u,s)\dd W_j(u)+\oas\bigpar{n^{d-1}} , \end{split} \end{equation} uniformly for $0\le s\le T$. Since $T$ is arbitrary, this yields \eqref{lux2}, jointly for all $j$. \end{proof} To show that the final term in \eqref{unf} is negligible, we give another lemma. Cf.\ \cite{Sproule1974} for similar results in the symmetric case. \begin{lemma} \label{LU*} Suppose that $\fXXd\in L^2$. \begin{romenumerate} \item \label{LU*1} Then \begin{equation}\label{lu*1} \E |U_n^*(f-\mu)|^2 \le C n^{2d-1}\norm{f}^2. \end{equation} \item \label{LU*2} If furthermore $f_i=0$ for $i=1,\dots,d$, then \begin{equation}\label{lu*2} \E |U_n^*(f-\mu)|^2 \le C n^{2d-2}\norm{f}^2. \end{equation} \end{romenumerate} \end{lemma} \begin{proof} \pfitemref{LU*1} We introduce another decomposition of $f$ and $U_n$, which unlike the one in \eqref{fi}--\eqref{unf} focusses on the order of the arguments. 
Let $\FF_0:=\mu$ and, for $1\le k\le d$, \begin{align} \FF_k(x_1,\dots,x_k)&:=\E f(x_1,\dots,x_k,X_{k+1},\dots,X_d),\label{FF} \\ F_k(x_1,\dots,x_k)&:=\FF_k(x_1,\dots,x_k)-\FF_{k-1}(x_1,\dots,x_{k-1}).\label{F} \end{align} In other words, $\FF_k(X_1,\dots,X_k):=\E\bigpar{f(X_1,\dots,X_d)\mid X_1,\dots,X_k}$, and thus $\FF_k(X_1,\dots,X_k)$, $k=0,\dots,d$, is a martingale, with the martingale differences $F_k(X_1,\dots,X_k)$, $k=1,\dots,d$. Hence, \begin{equation} \label{EFk} \E F_k(x_1,\dots,x_{k-1},X_k)=0. \end{equation} By \eqref{FF}--\eqref{F}, $f(x_1,\dots,x_d)-\mu=\sum_{k=1}^d F_k(x_1,\dots,x_k)$, and thus \begin{equation} \begin{split} U_n(f-\mu) &=\sum_{k=1}^d \sum_{i_1<\dots<i_k\le n} \binom{n-i_k}{d-k} F_k\xpar{X_{i_1},\dots,X_{i_k}} \\ &=\sum_{k=1}^d \sum_{i=1}^n \binom{n-i}{d-k}\bigpar{U_i(F_k)-U_{i-1}(F_k)} \\ &=U_n(F_d)+\sum_{k=1}^{d-1} \sum_{i=1}^{n-1} \binom{n-i-1}{d-k-1} U_i(F_k), \end{split} \end{equation} using a summation by parts and the identity $\binom{n-i}{d-k}-\binom{n-i-1}{d-k}=\binom{n-i-1}{d-k-1}$. In particular, \begin{equation}\label{kum} \begin{split} |U_n(f-\mu)| &\le |U_n(F_d)|+\sum_{k=1}^{d-1} \sum_{i=1}^{n-1} \binom{n-i-1}{d-k-1} U^*_n(F_k) \\ &= |U_n(F_d)|+\sum_{k=1}^{d-1} \binom{n-1}{d-k} U^*_n(F_k) \le \sum_{k=1}^{d} n^{d-k} U^*_n(F_k). \end{split} \end{equation} Since the \rhs{} is weakly increasing in $n$, it follows that \begin{equation}\label{kul} U^*_n(f-\mu) \le \sum_{k=1}^{d} n^{d-k} U^*_n(F_k). \end{equation} By the definition \eqref{U}, $\gD U_n(F_k):=U_n(F_k)-U_{n-1}(F_k)$ is a sum of $\binom{n-1}{k-1}$ terms $F_k(X_{i_1},\dots,X_{i_{k-1}},X_n)$ that all have the same distribution, and thus by Minkowski's inequality, \begin{equation}\label{swab} \E|\gD U_n(F_k)|^2 = \norm{\gD U_n(F_k)}^2 \le \binom{n-1}{k-1}^2\norm{F_k}^2 \le n^{2k-2}\norm{f}^2. \end{equation} Furthermore, it follows from \eqref{EFk} that $\E \bigpar{U_n(F_k)-U_{n-1}(F_k)\mid \cF_{n-1}}=0$, and thus $U_n(F_k)$, $n\ge0$, is a martingale. Consequently, by orthogonality of the martingale differences and \eqref{swab}, \begin{equation}\label{mb} \E|U_n(F_k)|^2 = \sum_{i=1}^n \E\bigabs{\gD U_i(F_k)}^2 \le n^{2k-1}\norm{f}^2 \end{equation} and Doob's inequality yields \begin{equation}\label{mba} \norm{U^*_n(F_k)} \le C \norm{U_n(F_k)} \le C n^{k-1/2}\norm{f}. \end{equation} Finally, \eqref{kul}, \eqref{mba} and Minkowski's inequality yield \begin{equation}\label{mbc} \norm{U^*_n(f-\mu)} \le \sum_{k=1}^{d} n^{d-k} \norm{U^*_n(F_k)} \le C n^{d-1/2}\norm{f}, \end{equation} which yields \eqref{lu*1} by squaring. \pfitemref{LU*2} By \eqref{FF}--\eqref{F} and \eqref{fi}, \begin{equation} \E \bigpar{F_k(X_1,\dots,X_k)\mid X_k} = \E \bigpar{f(X_1,\dots,X_d)\mid X_k}-\E f =f_k(X_k). \end{equation} Hence, assuming $f_k=0$, \begin{equation} \label{lie} \E \bigpar{F_k(X_1,\dots,X_k)\mid X_k}=0. \end{equation} It was seen in the proof of \ref{LU*1} that $\gD U_n(F_k)$ is a sum of $\binom{n-1}{k-1}$ terms $F_k(X_{i_1},\dots,X_{i_{k-1}},X_n)$. It now follows from \eqref{lie} that if $\set{i_1,\dots,i_{k-1}}$ and $\set{j_1,\dots,j_{k-1}}$ are two disjoint sets of indices, then, by first conditioning on $X_n$, \begin{equation} \E\bigpar{F_k(X_{i_1},\dots,X_{i_{k-1}},X_n)F_k(X_{j_1},\dots,X_{j_{k-1}},X_n)} =0.
\end{equation} Hence, only the $O\bigpar{n^{2k-3}}$ pairs of index sets $\set{i_1,\dots,i_{k-1}}$ and $\set{j_1,\dots,j_{k-1}}$ with at least one common element contribute to $\E\bigpar{\gD U_n(F_k)}^2$, and we obtain, for $1\le k\le d$, that \eqref{swab} is improved to \begin{equation}\label{gul} \E |\gD U_n(F_k)|^2 \le C n^{2k-3}\norm{f}^2. \end{equation} (For $k=1$, $F_1=f_1=0$, and \eqref{gul} still holds.) The result now follows as in \ref{LU*1}, see \eqref{mb}--\eqref{mbc}, by \eqref{gul}, Doob's inequality, \eqref{kul} and Minkowski's inequality. \end{proof} \begin{proof}[Proof of \refT{T1}] We use the decomposition \eqref{unf}, with $n$ replaced by $\floor{nt}$. For the constant term, note that $ \binom {\floor{nt}}d \mu = n^dt^d\mu/d!+O\bigpar{n^{d-1}} $ when $t=O(1)$. The second term in \eqref{unf} is $\sum_{j=1}^d \ux_{nt,j}$, using the notation in \eqref{lux1}, and we use \refL{Lux}; \eqref{lux2} shows that this term divided by $n^{d-1/2}$ converges in $\Doo$ to $Z_t$ defined in \eqref{Z}. For the third term, we apply \refL{LU*} to $\fx$. It follows from the definition \eqref{h} that $\mu_*:=\E \fx(X_1,\dots,X_d)=0$ and that, applying \eqref{fi} to $\fx$, $(\fx)_i=0$ for every $i\le d$. Hence, \refL{LU*}\ref{LU*2} applies to $\fx$ and yields \begin{equation}\label{qul} \E \abs{U_n^*(\fx)}^2\le C n^{2d-2}\norm{\fx}^2\le C n^{2d-2}\norm{f}^2. \end{equation} Let $T>0$ be fixed. Applying \eqref{qul} to $nT$, we see in particular that $n^{-(d-1/2)}U_{nt}(\fx)\pto0$ uniformly on $\oT$. Consequently, \eqref{t1} follows from \eqref{unf}. Joint convergence for several functions $f\xx{k}$, with limits given by \eqref{xul}, follows by the same proof, using joint convergence for all $f\xx k_i$ in \eqref{donsker}. \end{proof} \begin{proof}[Proof of \refT{TLLN}] We do this in several steps. \stepx\label{stepL2} First, suppose that $\fXXd\in L^2$. We may assume $\mu=0$, and then \refL{LU*}\ref{LU*1} implies, for any $N\ge1$, \begin{equation} \E \sup_{N\le n\le 2N}\bigpar{|U_n|/n^d}^2 \le N^{-2d} \E (U^*_{2N})^2 \le C N^{-1}\norm{f}^2. \end{equation} Summing over all $N=2^m$, $m=0,1,\dots$, we find \begin{equation} \E\sum_{m=0}^\infty \sup_{2^m\le n\le 2^{m+1}}\bigpar{|U_n|/n^d}^2 <\infty. \end{equation} Hence, a.s\punkt{} the terms in the sum tend to 0, which implies $U_n/n^d\to 0$ and thus $U_n/\binom nd\to 0=\mu$. This proves \eqref{tlln} for $f\in L^2$. \stepx\label{stepL1>} Assume now $f\in L^1$ and $f\ge0$. Define the truncation $f_M:=f\land M$. Then $f_M\in L^2$ and \refStep{stepL2} shows that for every $M<\infty$, a.s., \begin{equation} \liminf_\ntoo \frac{U_n(f)}{\binom nd} \ge \liminf_\ntoo \frac{U_n(f_M)}{\binom nd } = \E f_M(X_1,\dots,X_d). \end{equation} Letting $M\to\infty$ yields $\liminf_\ntoo U_n(f)/\binom nd \ge \mu$ a.s. \stepx Continue to assume $f\in L^1$ and $f\ge0$. For every permutation $\pi\in\fS_d$, let $f_\pi(X_1,\dots,X_d):=f(X_{\pi(1)},\dots,X_{\pi(d)})$, and let $F:=\sum_{\pi\in\fS_d} f_\pi$ and $g:=F-f=\sum_{\pi\neq \mathrm{id}} f_\pi$. Note that $f,g\in L^1$ with $f,g\ge0$; thus \refStep{stepL1>} applies to both $f$ and $g$. Furthermore, $F=f+g$ is symmetric, so we have $U_n(F)/\binom nd\asto \E F:=\E F(X_1,\dots,X_d)$ by the theorem by \citet{HoeffdingLLN} for the symmetric case. (This case has a simple reverse martingale proof, see \refR{Rreverse}.) Consequently, a.s., \begin{equation} \limsup_\ntoo \frac{U_n(f)}{\binom nd} = \lim_\ntoo \frac{U_n(F)}{\binom nd} - \liminf_\ntoo \frac{U_n(g)}{\binom nd} \le \E F - \E g = \mu.
\end{equation} Combined with \refStep{stepL1>}, this shows \eqref{tlln} for every $f\in L^1$ with $f\ge0$. \stepx The general case follows by linearity. \end{proof} For convenience, we used the known symmetric case in this proof. An alternative would be to use suitable truncations, similarly to the original proof of the symmetric case by \citet{HoeffdingLLN}. \begin{lemma} \label{Lvar} Suppose that $\fXXd\in L^2$. Then, as \ntoo, with $Z_1$ defined by \eqref{Z}, \begin{equation}\label{lvar} \begin{split} \frac{\Var U_n}{n^{2d-1}} \to \gss &:= \Var Z_1 \\& \phantom:= \sum_{i,j=1}^d \frac{(i+j-2)!\,(2d-i-j)!}{(i-1)!\,(j-1)!\,(d-i)!\,(d-j)!\,(2d-1)!}\gs_{ij} . \end{split} \end{equation} \end{lemma} \begin{proof} We may assume $\mu=0$. Then \begin{equation} \Var U_n = \E U_n^2= \sum_{i_1<\dots<i_d}\sum_{j_1<\dots<j_d} \E \bigpar{f(X_{i_1},\dots,X_{i_d})f(X_{j_1},\dots,X_{j_d})}, \end{equation} where all terms with $\set{i_1,\dots,i_d} \cap \set{j_1,\dots,j_d} =\emptyset$ vanish. There are only $O\bigpar{n^{2d-2}}$ terms with $|\set{i_1,\dots,i_d} \cap \set{j_1,\dots,j_d}|\ge2$, so we concentrate on the case when, say, $i_k=j_\ell=i$, and all other indices are distinct. Thus, using \eqref{fi} and the notation \eqref{anj} together with \eqref{gsij} and \refL{LA}, \begin{equation} \begin{split} \E U_n^2 &=\sum_{k=1}^d\sum_{\ell=1}^d\sum_{i=1}^n a_{n,k}(i)a_{n,\ell}(i) \E \bigpar{f_k(X_i)f_\ell(X_i)}+O\bigpar{n^{2d-2}} \\ &=\sum_{k=1}^d\sum_{\ell=1}^d\sum_{i=1}^n \psi_{k}(i,n)\psi_{\ell}(i,n) \gs_{k\ell} +O\bigpar{n^{2d-2}} \\ &=\sum_{k=1}^d\sum_{\ell=1}^d \gs_{k\ell} \int_0^n \psi_{k}(x,n)\psi_{\ell}(x,n)\dd x +O\bigpar{n^{2d-2}} \\ &=n^{2d-1}\sum_{k=1}^d\sum_{\ell=1}^d \gs_{k\ell} \int_0^1 \psi_{k}(u,1)\psi_{\ell}(u,1)\dd u +O\bigpar{n^{2d-2}} . \end{split} \end{equation} Consequently, by \eqref{t1cov}, \begin{equation} \frac{\Var U_n}{n^{2d-1}} \to \sum_{k=1}^d\sum_{\ell=1}^d \gs_{k\ell} \int_0^1 \psi_{k}(u,1)\psi_{\ell}(u,1)\dd u = \Var(Z_1). \end{equation} Furthermore, this equals the sum in \eqref{lvar}, as is seen by taking $s=t=1$ in \eqref{t1cov} and evaluating the resulting Beta integral. \end{proof} \begin{remark} Similarly, it follows more generally that $\Cov\bigpar{U_{ns},U_{nt}}/n^{2d-1}\to\Cov(Z_s,Z_t)$ given by \eqref{t1cov}, for any fixed $s,t\ge0$. In other words, \eqref{t1} holds with convergence of second moments. \end{remark} \begin{proof}[Proof of \refC{C1}] The functional limit \eqref{t1} implies, since $Z_t$ is continuous, convergence (in distribution) for each fixed $t\ge0$. Taking $t=1$ we obtain \eqref{c1} with $\gss=\Var Z_1$, which is evaluated by \refL{Lvar}. By \eqref{t1cov} and \eqref{gsij}, \begin{equation} \label{lasse} \begin{split} \Var (Z_1) &=\sum_{i,j=1}^d\Cov\bigpar{f_i(X),f_j(X)} \intoi \psi_i(s,1) \psi_j(s,1)\dd s \\& =\intoi \Var\Bigpar{\sum_{i=1}^d\psi_i(s,1)f_i(X)}\dd s. \end{split} \end{equation} Hence, $\gss=0\iff \sum_{i=1}^d\psi_i(s,1)f_i(X)=0$ a.s\punkt{} for (almost) every $s\in\oi$, which is equivalent to $f_i(X)=0$ a.s\punkt{} for every $i$ since the polynomials $\psi_i(s,1)$ are linearly independent. \end{proof} \subsection{Renewal theory}\label{SSpfRenewal} \begin{proof}[Proof of \refT{TNN}] Consider first $\NN-$. Note that \refT{TLLN} and $\mu>0$ imply $U_n\to\infty$ a.s., and then $\NN-(x)<\infty$ for every $x$. Furthermore, it is trivial that $\NN-(x)\to\infty$ as \xtoo.
Thus we may substitute $n=\NN-(x)$ in \eqref{tlln} and obtain \begin{equation}\label{ollon} \frac{U_{\NN-(x)}}{\NN-(x)^d} = \frac{U_{\NN-(x)}}{\binom{\NN-(x)}{d}}\cdot \frac{\binom{\NN-(x)}{d}}{\NN-(x)^d} \asto\frac{\mu}{d!} \qquad \text{as \xtoo}. \end{equation} Furthermore, we also have, again by \eqref{tlln}, \begin{equation}\label{kollon} \frac{U_{\NN-(x)+1}}{\NN-(x)^d} = \frac{U_{\NN-(x)+1}}{\binom{\NN-(x)+1}{d}}\cdot \frac{\binom{\NN-(x)+1}{d}}{\NN-(x)^d} \asto\frac{\mu}{d!}. \end{equation} By the definition of $\NN-(x)$, $U_{\NN-(x)}\le x < U_{\NN-(x)+1}$, and thus \eqref{ollon}--\eqref{kollon} imply \begin{equation}\label{bollon} \frac{x}{\NN-(x)^d} \asto\frac{\mu}{d!} \qquad \text{as \xtoo}, \end{equation} which is equivalent to \eqref{tnn} for $\NN-$. The proof for $\NN+$ is the same, using $U_{\NN+(x)-1}\le x< U_{\NN+(x)}$. \end{proof} \begin{proof}[Proof of \refT{TR}] Again, we consider $\NN-$; the argument for $\NN+$ is the same. Let \begin{align}\label{nx} n(x)&:=\xpar{d!/\mu}^{1/d}x^{1/d}, \\ T(x)&:=\NN-(x)/\floor{n(x)}.\label{Tx} \end{align} As \xtoo, $n(x)\to\infty$ and thus \eqref{t1} implies \begin{equation}\label{t1x} \frac{ U_{\flnx t}-(\flnx t)^d\mu/d!}{n(x)^{d-1/2}} \dto Z_t \qquad \text{in }\Doo. \end{equation} Furthermore, \eqref{tnn} implies \begin{equation}\label{tx1} T(x)\to 1 \end{equation} a.s., and thus in probability. Hence, \eqref{t1x} and \eqref{tx1} hold jointly in distribution \cite[Theorem 4.4]{Billingsley}. Now, $(F,t)\mapsto F(t)$ is a measurable mapping $D\ooo\times \ooo\to\bbR$ that is continuous at every $(F,t)$ with $F$ continuous. Hence, by \cite[Theorem 5.1]{Billingsley}, it follows from the joint convergence in \eqref{t1x} and \eqref{tx1}, together with continuity of $Z_t$, that we may substitute $t=T(x)$ in \eqref{t1x} and obtain, as $\xtoo$, \begin{equation}\label{dum} \frac{ U_{\NN-(x)}-\NN-(x)^d \mu/d!}{n(x)^{d-1/2}}\dto Z_1. \end{equation} Taking instead $t=T_1(x):=(\NN-(x)+1)/\flnx$, we similarly obtain \begin{align}\label{dee} \frac{ U_{\NN-(x)+1}-\NN-(x)^d \mu/d!}{n(x)^{d-1/2}} &= \frac{ U_{\NN-(x)+1}-(\NN-(x)+1)^d \mu/d!+O\bigpar{\NN-(x)^{d-1}+1}}{n(x)^{d-1/2}} \notag\\& \dto Z_1, \end{align} using $(\NN-(x)^{d-1}+1)/n(x)^{d-1/2}\pto0$ by \eqref{tnn} and \eqref{nx}. Since $U_{\NN-(x)}\le x< U_{\NN-(x)+1}$, \eqref{dum} and \eqref{dee} together imply, as \xtoo, \begin{equation}\label{dumdee} \frac{ x-\NN-(x)^d \mu/d!}{n(x)^{d-1/2}}\dto Z_1. \end{equation} Hence, recalling \eqref{nx}, \begin{align} \frac{x}{n(x)^{d-1/2}} \lrpar{ \parfrac{\NN-(x)}{n(x)}^d-1} &= \frac{\NN-(x)^d\mu/d!-x}{n(x)^{d-1/2}} \dto -Z_1. \label{dumdum} \end{align} Furthermore, letting $T_2(x):=\NN-(x)/n(x)$, we have $T_2(x)\asto1$ by \eqref{tnn}, and thus, interpreting the quotients as $d$ when $T_2(x)=1$, \begin{equation}\label{hack} \frac{ \bigpar{\xfrac{\NN-(x)}{n(x)}}^d-1 } { \bigpar{\xfrac{\NN-(x)}{n(x)}}-1 } = \frac{T_2(x)^d-1}{T_2(x)-1} \asto d. \end{equation} Dividing \eqref{dumdum} by \eqref{hack} yields \begin{equation}\label{don} \frac{x}{n(x)^{d-1/2}} \lrpar{ \frac{\NN-(x)}{n(x)}-1 } \dto -\frac{1}{d}Z_1. \end{equation} Since \begin{equation}\label{don32} \frac{\NN-(x)-n(x)}{x^{1/2d}} = \parfrac{n(x)}{x^{1/d}}^{d+1/2} \frac{x}{n(x)^{d-1/2}} \lrpar{ \frac{\NN-(x)}{n(x)}-1 }, \end{equation} \eqref{don} and \eqref{nx} imply \begin{equation}\label{don2} \frac{\NN-(x)-n(x)}{x^{1/2d}} \dto-\parfrac{d!}{\mu}^{1+1/2d}d\qw Z_1, \end{equation} which yields \eqref{tr}, since $Z_1\sim N(0,\gss)$ by \refL{Lvar}.
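For instance, for $d=1$ this recovers the classical central limit theorem of renewal theory: \eqref{don2} then reads $\bigpar{\NN-(x)-x/\mu}/x^{1/2}\dto N\bigpar{0,\Var f(X)/\mu^3}$, since $\gss=\gs_{11}=\Var f(X)$ when $d=1$; see \eg{} \cite{Gut-SRW}.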
\end{proof} \begin{proof}[Proof of \refT{TVtau}] \pfitemref{TVtauas} By \refT{TLLN} for $\tf$ and \eqref{tnn}, \begin{equation} \begin{split} \frac{\tU_{\NN\pm(x)}}{x^{\td/d}} = \frac{\tU_{\NN\pm(x)}}{\NN\pm(x)^{\td}} \frac{\NN\pm(x)^{\td}}{x^{\td/d}} \asto \frac{\tmu}{\td!}\Bigparfrac{d!}{\mu}^{\td/d}. \end{split} \end{equation} \pfitemref{TVtaud} Define again $n(x)$ and $T(x)$ by \eqref{nx}--\eqref{Tx}. We have joint convergence in \eqref{t1} for $f$ and $\tf$, and thus, as \xtoo, \eqref{t1x} holds jointly with \begin{equation}\label{t2x} \frac{ \tU_{\flnx t}-(\flnx t)^{\td}\tmu/\td!}{n(x)^{\td-1/2}} \dto \tZ_t \qquad \text{in }\Doo. \end{equation} By \eqref{tx1} and the argument in the proof of \refT{TR}, now using the mapping $(F,\tF,t)\mapsto(F(t),\tF(t))$ that maps $\Doo\times\Doo\times\ooo\to\bbR^2$, it follows that \eqref{dum} holds jointly with \begin{equation}\label{tdum} \frac{\tU_{\NN-(x)}-\NN-(x)^{\td} \tmu/\td!}{n(x)^{\td-1/2}}\dto \tZ_1. \end{equation} Furthermore, \eqref{dum} and \eqref{dumdee} together with $U_{\NN-(x)}\le x$ imply \begin{equation} \frac{ x-U_{\NN-(x)}}{n(x)^{d-1/2}}\pto 0. \end{equation} Consequently, \eqref{dumdee} and \eqref{tdum} hold jointly. The argument in the proof of \refT{TR} now holds with every convergence in distribution holding jointly with \eqref{tdum}. Hence, \eqref{tdum} holds jointly with \eqref{don2}, which implies, see \eqref{hack} and \eqref{nx}, \begin{equation}\label{uvb} \frac{\NN-(x)^{\td}-n(x)^{\td}}{n(x)^{\td-1/2}} = \frac{\bigpar{\NN-(x)/n(x)}^{\td}-1}{\bigpar{\NN-(x)/n(x)}-1} \frac{\NN-(x)-n(x)}{n(x)^{1/2}} \dto -\td\, \frac{d!}{\mu}\,d\qw Z_1. \end{equation} Consequently, \eqref{tdum} and \eqref{uvb} hold jointly, and thus \begin{equation}\label{tvtaux} \frac{\tU_{\NN-(x)}-\xfrac{n(x)^{\td}\tmu}{\td!}}{n(x)^{\td-1/2}} \dto \ZZ:= \tZ_1 - \frac{(d-1)!\,\tmu}{(\td-1)!\,\mu}Z_1 . \end{equation} We obtain \eqref{tvtau}--\eqref{tvtau2} by substituting the definition \eqref{nx} of $n(x)$. \pfitemref{TVtau=0} By \eqref{tvtau2}, $\gam^2=0\iff \Var\bigpar{\mu(\td-1)!\,\tZ_1-\tmu(d-1)!\, Z_1}=0$, and arguing as in \eqref{lasse}, and recalling \eqref{xul}, this is equivalent to \begin{equation} \Var\Bigpar{\mu(\td-1)!\sum_{i=1}^{\td}\psi_{i;\td}(s,1)\tf_i(X) -\tmu(d-1)!\sum_{j=1}^{d}\psi_{j;d}(s,1)f_j(X)}=0 \end{equation} for (almost) every $s\in\oi$, and by the definition \eqref{psi}, this is the same as \begin{equation}\label{tuna} \mu\sum_{i=1}^{\td}\binom{\td-1}{i-1}s^{i-1}(1-s)^{\td-i}\tf_i(X) = \tmu\sum_{j=1}^{d}\binom{d-1}{j-1}s^{j-1}(1-s)^{d-j}f_j(X) \end{equation} a.s., for every $s$. If $\td\ge d$, multiply the \rhs{} of \eqref{tuna} by $(s+1-s)^{\td-d}= \sum_{k=0}^{\td-d}\binom{\td-d}{k} s^k(1-s)^{\td-d-k}$, which equals 1, and identify the coefficients of $s^{i-1}(1-s)^{\td-i}$ on both sides; this yields \eqref{ll}. Conversely, \eqref{ll} implies \eqref{tuna} by the same argument. The case $\td<d$ follows by the symmetry in \eqref{tuna}. The special cases \eqref{ll1} and \eqref{ll2} are immediate consequences of \eqref{ll}. \end{proof} \begin{proof}[Proof of \refT{CVtau}] Take $d=1$ in \refT{TVtau}\ref{TVtaud}. To obtain the formula \eqref{cvtau2} for $\gam^2$, we use \eqref{tvtau2} and note first that $\Var(\tZ_1)$ is given by \eqref{c1var}, mutatis mutandis, which yields the first term on the \rhs{} of \eqref{cvtau2}. Furthermore, \eqref{c1var} yields also, with $d=1$, $\Var (Z_1)=\gs_{11}=\Var(f(X))$, yielding the third term. 
Finally, note that when $d=1$, \eqref{psi} yields $\psi_{1;1}(s,t)=1$, and thus \eqref{Z} yields $Z_t=W(t)$; consequently, using \eqref{Z} and \eqref{xul} and a standard Beta integral, \begin{equation} \begin{split} \Cov\bigpar{\tZ_1,Z_1} & =\sum_{j=1}^{\td}\Cov\Bigpar{\intoi\psi_{j;\td}(s,1)\dd \tW_j(s),\intoi\dd W(s)} \\& =\sum_{j=1}^{\td} \intoi\psi_{j;\td}(s,1)\Cov\bigpar{\tf_j(X),f(X)} \dd s \\& =\sum_{j=1}^{\td}\frac{1}{\td!} \Cov\bigpar{\tf_j(X),f(X)} . \end{split} \end{equation} This yields the second term on the \rhs, and completes the proof. \end{proof} \begin{proof}[Proof of \refT{TO}] Let (for $x\ge2$, say) $x_-:=x-\ln x$ in the \nona{} case, and $x_-:=d\floor{(x-\ln x)/d}$ if $f(X)$ has span $d>0$; also, in the latter case, consider only $x\in d\bbZ$. First, run the process until the stopping time $\NN+(x_-)$. Let \begin{equation} \xxx:=x-U_{\NN+(x_-)} = x-x_--R(x_-). \end{equation} As \xtoo, $R(x_-)\dto\Roo$ by \refP{PR}, and $x-x_-\ge\ln x\to\infty$; hence $\xxx\pto\infty$. In particular, with probability tending to 1 as \xtoo, $\xxx\ge0$. Restart the process after $\NN+(x_-)$ and continue until $\NN+(x)$. Since $\NN+(x_-)$ is a stopping time, this continuation is independent of what happened up to $\NN+(x_-)$, and thus it can be regarded as a renewal process $\SSS_n$ starting at 0 and running to $\NN+(\xxx)$; in particular, the overshoot $R^*(\xxx)$ of this renewal process equals the overshoot $R(x)$ of the original one. Here $\xxx$ is random, but independent of the renewal process $\SSS_n$, and since $\xxx\pto\infty$, \refP{PR} implies that the overshoot $R(x)=R^*(\xxx)\dto\Roo$. Furthermore, this holds conditioned on any events $\cE(x_-)$ that depend on the original process up to $\NN+(x_-)$, provided $\liminf_{\xtoo}\P(\cE(x_-))>0$. Denote the \lhs{} of \eqref{cvtau} by $\tV(x)$. By \eqref{cvtau}, $\tV(x_-)\dto N(0,\gam^2)$ as \xtoo. Fix $a,b\in\bbR$ and let $\cE(x_-):=\set{\tV(x_-)\le a}$. It then follows from the argument above that, as \xtoo, \begin{equation} \begin{split} \P\bigpar{\tV(x_-)\le a,\,R(x)\le b} &= \P\bigpar{R(x)\le b\mid \tV(x_-)\le a}\P\bigpar{\tV(x_-)\le a} \\& \to \P\bigpar{\Roo\le b}\P\bigpar{N(0,\gam^2)\le a}. \end{split} \end{equation} Consequently, $\tV(x_-)$ and $R(x)$ converge jointly, with independent limits given by \eqref{cvtau} and \eqref{overa}--\eqref{overd}. It remains only to replace $\tV(x_-)$ by $\tV(x)$. First, since $x_-=x-O(\ln x)$ it follows that $\tV(x_-)\dto N(0,\gam^2)$ is equivalent to \begin{equation}\label{cvtau-} \frac{\tU_{\NN\pm(x_-)}-{\mu}^{-\td}{\tmu}{\td!}\qw x^{\td}} {x^{\td-1/2}} \dto N\bigpar{0,\gam^2}. \end{equation} Hence, \eqref{cvtau-} and $R(x)\dto\Roo$ hold jointly, with independent limits. Next, suppose first that $\tf(X_1,\dots,X_{\td})\ge0$. Then, $\tU_{\NN\pm(x)}\ge\tU_{\NN\pm(x_-)}$ a.s., and thus \eqref{cvtau} and \eqref{cvtau-} imply \begin{equation}\label{cvtau00} \frac{\tU_{\NN\pm(x)}-\tU_{\NN\pm(x_-)}} {x^{\td-1/2}} \pto 0. \end{equation} By linearity, \eqref{cvtau00} holds for arbitrary $\tf\in L^2$. Finally, \eqref{cvtau00} and \eqref{cvtau-} imply \eqref{cvtau}, and hence \eqref{cvtau00} and the joint convergence of \eqref{cvtau-} and $R(x)\dto\Roo$ imply the joint convergence of \eqref{cvtau} and $R(x)\dto\Roo$, proving \ref{TOa} and \ref{TOb}. For \ref{TOc}, let $d$ be the span of $f(X)$, and assume first $d=1$.
Note that $\P(\Roo=k)=0\iff\P\bigpar{f(X)\ge k}=0$ by \eqref{overd}, and then \eqref{R} implies $R(x)\le f(X_{\NN+(x)})<k$ a.s\punkt{} for every $x$; hence we only consider $k$ such that $\P(\Roo=k)>0$, and the first part of \ref{TOc} follows from \ref{TOb}. If the span $d>1$, then $R(x)=k$ implies $x+k=U_{\NN+(x)}\equiv 0\pmod d$ and thus $x\equiv -k\pmod d$, so we consider only $x\in -k+d\bbZ$. Let $k_0:=d\ceil{k/d}$ and $\gD:=k_0-k\in[0,d-1]$. Then $x-\gD\equiv x+k\equiv0\pmod d$, and thus, since $S_n(f)\in d\bbZ$, $\NN+(x)=\NN+(x-\gD)$ and $R(x-\gD)=U_{\NN+(x)}-x+\gD=R(x)+\gD$; hence \begin{equation} R(x)=k \iff R(x-\gD)=k+\gD=k_0. \end{equation} Hence, we may replace $x$ and $k$ by $x-\gD$ and $k_0$, and thus it suffices to consider $x,k\in d\bbZ$, but then we can reduce to the case $d=1$ by replacing $f(X)$ by $f(X)/d$. Finally, for an integer $n$, $U_{\NN-(n)}=n \iff R(n-1)=1$. Hence, \eqref{cvtau} with $x=n-1$ holds as \ntoo, also conditioned on $U_{\NN-(n)}=n$. The argument above showing \eqref{cvtau00} shows also that $\xpfrac{\tU_{\NN\pm(n)}-\tU_{\NN\pm(n-1)}}{n^{\td-1/2}} \pto 0$ as \ntoo, and it follows that \eqref{cvtau} with $x=n$ holds as \ntoo, conditioned on $U_{\NN-(n)}=n$. \end{proof} \subsection{Moment convergence}\label{SSpfmoments} We turn to proving the theorems on moment convergence in \refSS{SSmoments}, and begin by extending \refL{LU*} to higher absolute moments. \begin{lemma} \label{LUp} Suppose that $\fXXd\in L^p$ with $p\ge2$. Then \begin{equation}\label{lup} \E |U_n^*(f-\mu)|^p \le C_p n^{p(d-1/2)}\normp{f}^p. \end{equation} \end{lemma} \begin{proof} We use the same decomposition as in the proof of \refL{LU*}. Note that, by Jensen's inequality, $\normp{\FF_k}\le\normp{f}$, and thus, \begin{equation} \normp{F_k}\le2\normp{f}, \qquad 1\le k\le d. \end{equation} Hence, Minkowski's inequality yields, as in \eqref{swab}, \begin{equation}\label{swabp} \E|\gD U_n(F_k)|^p = \normp{\gD U_n(F_k)}^p \le \binom{n-1}{k-1}^p\normp{F_k}^p \le C_p n^{pk-p}\normp{f}^p. \end{equation} Consequently, the Burkholder inequalities \cite[Theorem 10.9.5(i)]{Gut} applied to the martingale $U_n(F_k)$ yield, using also H\"older's inequality, \begin{align} \label{mbap} \E|U^*_n(F_k)|^p &\le C_p \E \Bigpar{\sum_{i=1}^n \abs{\gD U_i(F_k)}^2}^{p/2} \le C_p \E \Bigpar{n^{p/2-1}\sum_{i=1}^n \abs{\gD U_i(F_k)}^p} \notag\\& =C_p n^{p/2-1}\sum_{i=1}^n \E\abs{\gD U_i(F_k)}^p \le C_p n^{pk-p/2}\normp{f}^p. \end{align} Equivalently, \begin{align} \label{mbapp} \normp{U^*_n(F_k)} \le C_p n^{k-1/2}\normp{f}. \end{align} Finally, \eqref{kul}, \eqref{mbapp} and Minkowski's inequality yield \begin{equation}\label{mbcp} \normp{U^*_n(f-\mu)} \le \sum_{k=1}^{d} n^{d-k} \normp{U^*_n(F_k)} \le C_p n^{d-1/2}\normp{f}, \end{equation} which is \eqref{lup}. \end{proof} We shall also use the following standard result, stated in detail and proved for convenience and completeness. \begin{lemma}\label{Lui} Let $\set{V_\ga:\ga\in \cA}$ be a set of random variables, and let $0<p<q$. Suppose that for every $\eps>0$ there exist decompositions $V_\ga=V_\ga'+V_\ga''$ and a $B_\eps<\infty$ such that, for every $\ga\in \cA$, $\normx{q}{V_\ga'}\le B_\eps$ and $\normp{V_\ga''}\le\eps$. Then the set \set{|V_\ga|^p} is uniformly integrable. 
\end{lemma} \begin{proof} If $\gd>0$ and $\cE$ is any event with $\P(\cE)\le\gd$, then, using H\"older's inequality, \begin{align} \E \bigpar{|V_\ga|^p\etta_{\cE}} &\le C_p \E \bigpar{|V_\ga'|^p\etta_{\cE}} + C_p \E\bigpar{|V_\ga''|^p\etta_{\cE}} \notag\\& \le C_{p} \normx{q}{V_\ga'}^p\P(\cE)^{1-p/q} + C_p\normx{p}{V_\ga''}^p \notag\\& \le C_{p} B_\eps^{p} \gd^{1-p/q} + C_p \eps^p. \end{align} Since $\eps$ is arbitrary, this can be made arbitrarily small, uniformly in $\ga$, by first choosing $\eps$ and then $\gd$ small. \end{proof} \begin{proof}[Proof of \refT{TUp}] Denote the \lhs{} of \eqref{c1} by $V_n$. Then $\E|V_n|^p$ is bounded by \refL{LUp}. This implies convergence of all moments and absolute moments of order $<p$ in \eqref{c1} by standard arguments, but is not by itself enough to include moments of order $p$. Thus we use a truncation: let $M>0$ and let $f=f'+f''$ with $f':=f\ett{|f|\le M}$. This yields a corresponding decomposition $V_n=V_n'+V_n''$. Let $\eps_M:=\normp{f''}$. Then \begin{equation}\label{emm} \eps_M=\normp{f \ett{|f|>M}} \to0 \quad \text{as } M\to\infty. \end{equation} \refL{LUp} yields \begin{equation}\label{viip} \normp{V_n''}\le C_p\normp{f''}= C_p\eps_M \end{equation} and also, using $2p$ instead of $p$, \begin{equation}\label{vip} \normx{2p}{V_n'}^{2p}\le C_p\normx{2p}{f'}^{2p} =C_p\E|f'|^{2p}\le C_p M^p \E|f|^p. \end{equation} \eqref{emm}--\eqref{vip} show that the conditions of \refL{Lui} are satisfied; hence, \set{|V_n|^p} is uniformly integrable, and the result follows from \eqref{c1}. \end{proof} We use another simple lemma. \begin{lemma}\label{Lpq} Suppose that, for each $x\ge1$, $V(x)$ is a non-negative random variable and $v(x)>0$ is deterministic. \begin{romenumerate} \item \label{Lpqa} If $p\ge1$, $q\ge1$ and, for some function $h(x)>0$, \begin{equation}\label{lpqpq} \E |V(x)^q-v(x)^q|^p = O\bigpar{v(x)^{pq} h(x)^p}, \qquad x\ge1, \end{equation} then \begin{equation}\label{lpqp} \E |V(x)-v(x)|^p = O\bigpar{v(x)^{p} h(x)^p}, \qquad x\ge1. \end{equation} \item \label{Lpqb} Conversely, if \eqref{lpqp} holds for every $p\ge1$ and $h(x)\le1$, then \eqref{lpqpq} holds for every $p,q\ge1$. \end{romenumerate} \end{lemma} \begin{proof} \pfitemref{Lpqa} If $a>b\ge0$, then \begin{equation} a^q-b^q=a^q\bigpar{1-(b/a)^q} \ge a^q\bigpar{1-(b/a)}=a^{q-1}(a-b)=\max\set{a,b}^{q-1}(a-b). \end{equation} Hence, by symmetry, for all $a,b\ge0$, \begin{equation} \abs{a^q-b^q}\ge \max\set{a,b}^{q-1}|a-b|. \end{equation} In particular, \begin{equation} \abs{V(x)^q-v(x)^q}\ge v(x)^{q-1}|V(x)-v(x)|, \end{equation} and thus \eqref{lpqpq} implies \eqref{lpqp}. \pfitemref{Lpqb} If $V(x)\le 2v(x)$, then, by the mean value theorem, $|V(x)^q-v(x)^q|\le C_q v(x)^{q-1}|V(x)-v(x)|$. Thus, using \eqref{lpqp}, \begin{multline}\label{august} \E\bigpar{\abs{V(x)^q-v(x)^q}^p\ett{V(x)\le 2v(x)}} \\ \le C_{p,q} v(x)^{pq-p}\E|V(x)-v(x)|^p = O\bigpar{v(x)^{pq}h(x)^p}. \end{multline} On the other hand, if $V(x)>2v(x)$, then $|V(x)^q-v(x)^q|\le V(x)^q\le 2^q|V(x)-v(x)|^q$. Thus, using \eqref{lpqp} with $p$ replaced by $pq$, \begin{multline}\label{lotta} \E\bigpar{\abs{V(x)^q-v(x)^q}^p\ett{V(x)> 2v(x)}} \\ \le C_{p,q} \E|V(x)-v(x)|^{pq} = O\bigpar{v(x)^{pq}h(x)^{pq}} . \end{multline} The result follows by \eqref{august} and \eqref{lotta}. \end{proof} \begin{proof}[Proof of \refT{TRp}] As usual, we consider for definiteness $\NN-(x)$. By the definition \eqref{NN-}, $U_{\NN-(x)}\le x< U_{\NN-(x)+1}$.
Hence, \begin{multline} -\UU_{\NN-(x)}(f-\mu) \le U_{\NN-(x)}(f-\mu) \le x-\binom{\NN-(x)}{d}\mu \\ \le \UU_{\NN-(x)+1}(f-\mu)+ C_f \NN-(x)^{d-1} \end{multline} and thus \begin{align}\label{bab} \Bigabs{x-\NN-(x)^{d}\frac{\mu}{d!}} \le \UU_{\NN-(x)+1}(f-\mu)+ C_f \NN-(x)^{d-1}. \end{align} Suppose throughout $x\ge1$, and recall $n(x)$ defined by \eqref{nx}. By \eqref{bab} and \refL{LUp}, for any $p>0$ and any $A\ge1$, \begin{align} & \E\Bigpar{\Bigabs{x-\NN-(x)^{d}\frac{\mu}{d!}}^p\ett{\NN-(x)\le An(x)}} \notag\\&\qquad \le C_p \E |\UU_{An(x)+1}(f-\mu)|^p+ C_{p,f} \bigpar{A n(x)}^{p(d-1)} \notag\\&\qquad \le C_{p,f} \bigpar{A n(x)}^{p(d-1/2)} = C_{p,f} A^{p(d-1/2)}x^{p(1-1/2d)}. \label{bai} \end{align} Furthermore, for any constant $A\ge2$, $\NN-(x)\ge An(x)$ implies $\NN-(x)^d\frac{\mu}{d!}-x\ge (A^d-1)x\ge\frac12A^dx$. Hence, for any $p\ge0$ and $q>0$, using \eqref{bai}, \begin{align} &\E\Bigpar{\Bigabs{x-\NN-(x)^{d}\frac{\mu}{d!}}^p\ett{An(x)<\NN-(x)\le 2An(x)}} \notag\\&\qquad \le C_q A^{-dq}x^{-q} \E\Bigpar{\Bigabs{x-\NN-(x)^{d}\frac{\mu}{d!}}^{p+q}\ett{\NN-(x)\le 2An(x)}} \notag\\&\qquad \le C_{p,q,f} A^{(p+q)(d-1/2)-dq}x^{(p+q)(1-1/2d)-q} \notag\\&\qquad = C_{p,q,f} A^{p(d-1/2)-q/2}x^{p(1-1/2d)-q/2d}. \label{baj} \end{align} Choosing $q:=2dp$, we obtain by summing \eqref{bai} with $A=2$ and \eqref{baj} with $A=2^k$, $k=1,2,\dots$, for every $p>0$, \begin{align}\label{bak} \E\bigabs{n(x)^d-\NN-(x)^{d}}^p &= C_{p,f}\E\Bigabs{x-\NN-(x)^{d}\frac{\mu}{d!}}^p \notag\\& \le C_{p,f} x^{p(1-1/2d)} +C_{p,f} \sum_{k=1}^\infty 2^{-kp/2}x^{-p/2d} \notag\\& \le C_{p,f} x^{p(1-1/2d)}. \end{align} By \refL{Lpq}\ref{Lpqa}, with $q=d$ and $h(x):=x^{-1/2d}$, \eqref{bak} implies, for $p\ge1$, \begin{align}\label{bal} \E\bigabs{n(x)-\NN-(x)}^p \le C_{p,f} x^{p/2d}. \end{align} This shows that if $Y(x)$ denotes the \lhs{} of \eqref{tr}, then $\E|Y(x)|^p\le C_{p,f}$ for $x\ge1$. By standard arguments \cite[Chapter 5.4--5]{Gut}, this implies uniform integrability of $|Y(x)|^r$ for any $r<p$, and thus by \eqref{tr} convergence of moments of order $<p$. Since $p$ is arbitrary, convergence of arbitrary moments in \eqref{tr} follows. Moment convergence in \eqref{tnn} is an immediate corollary. Alternatively, \eqref{bak} implies \begin{equation} \label{Nmom} \E \bigpar{\NN-(x)^{dp}} = O\bigpar{x^p}, \qquad x\ge1, \end{equation} for every fixed $p>0$, which implies moment convergence in \eqref{tnn} by the same uniform integrability argument. \end{proof} \begin{proof}[Proof of \refT{TVp}] Recall again the definition \eqref{nx} of $n(x)$, and suppose again $x\ge1$. We decompose the numerator in \eqref{tvtau}: \begin{equation}\label{bb} \tU_{\NN\pm(x)}-\Bigparfrac{d!}{\mu}^{\td/d}\frac{\tmu}{\td!} x^{\td/d} = U_{\NN\pm(x)}(\tf-\tmu) +\frac{\tmu}{\td!}\bigpar{\NN\pm(x)^{\td}-n(x)^{\td}} +O\bigpar{\NN\pm(x)^{\td-1}}. \end{equation} For the first term on the \rhs{} of \eqref{bb}, we argue similarly to the proof of \refT{TRp}. First, for any $A\ge2$, by \refL{LUp}, \begin{multline}\label{bbe} \E\Bigpar{\bigabs{ U_{\NN\pm(x)}(\tf-\tmu)}^p\ett{\NN\pm(x)\le A n(x)}} \le \E\bigabs{U^*_{A n(x)}(\tf-\tmu)}^p \\ \le C_{p,\tf} \bigpar{An(x)}^{p(\td-1/2)} = C_{p,f,\tf} \bigpar{Ax^{1/d}}^{p(\td-1/2)}. \end{multline} Furthermore, for any $q>0$, taking $p=0$ in \eqref{baj}, \begin{equation}\label{bbn} \P\bigpar{An(x) < \NN\pm(x)\le 2An(x)} \le C_{q,f} \bigpar{Ax^{1/d}}^{-q/2}.
\end{equation} Consequently, using the \CSineq, \eqref{bbe}--\eqref{bbn}, and choosing $q:=4(p\td+1)$, \begin{equation}\label{bbg} \begin{split} &\E\Bigpar{\bigabs{U_{\NN\pm(x)}(\tf-\tmu)}^p\ett{A n(x)<\NN\pm(x)\le 2A n(x)}} \\&\hskip4em \le \Bigpar{ \E\Bigpar{\bigabs{U_{\NN\pm(x)}(\tf-\tmu)}^{2p}\ett{\NN\pm(x)\le 2A n(x)}}}\qq \\&\hskip10em \times \P\bigpar{A n(x)<\NN\pm(x)\le 2A n(x)}\qq \\&\hskip4em \le C_{p,f,\tf} \bigpar{Ax^{1/d}}^{p(\td-1/2)-q/4} \le C_{p,f,\tf} A\qw x^{p(\td-1/2)/d}. \end{split} \end{equation} Summing \eqref{bbe} for $A=2$ and \eqref{bbg} for $A=2^k$, $k=1,2,\dots$, we obtain \begin{equation}\label{bbh} \begin{split} \E\bigabs{U_{\NN\pm(x)}(\tf-\tmu)}^p \le C_{p,f,\tf} x^{p(\td-1/2)/d}\Bigpar{1+\sum_{k=1}^\infty 2^{-k}} = C_{p,f,\tf} x^{p(\td-1/2)/d}. \end{split} \end{equation} For the second term on the \rhs{} of \eqref{bb}, we use \eqref{bal} and \refL{Lpq}\ref{Lpqb}, with $q=\td$ and $h(x):=x^{-1/2d}$, and conclude, for every $p\ge1$, \begin{equation}\label{bbb} \E \bigabs{\NN\pm(x)^{\td}-n(x)^{\td}}^p \le C_{p,f,\tf} x^{p(\td-1/2)/d}. \end{equation} Finally, by \refT{TRp} we have moment convergence in \eqref{tnn} and thus \begin{equation}\label{bbd} \E \bigpar{\NN\pm(x)^{p (\td-1)}} =O\bigpar{ x^{p(\td-1)/d}}, \end{equation} which also follows from \eqref{Nmom} (changing $p$). It follows from \eqref{bb} and \eqref{bbh}--\eqref{bbd} that \begin{equation} \E\biggabs{ \frac{\tU_{\NN\pm(x)}-\bigparfrac{d!}{\mu}^{\td/d}\frac{\tmu}{\td!} x^{\td/d}} {x^{(\td-1/2)/d}} }^p \le C_{p,f,\tf} . \end{equation} Since $p$ is arbitrary, this implies convergence of arbitrary moments in \eqref{tvtau} by the same standard argument as in the proof of \refT{TRp}. Moment convergence in \eqref{tvtau0} is a corollary. \end{proof} \begin{proof}[Proof of \refT{TVp1}] \pfitemref{TVp1a} This is a special case of \refT{TVp}. \pfitemref{TVp1b} Denote the \lhs{} of \eqref{cvtau} by $V(x)$, for integers $x\ge1$, and let $p>0$. It follows from \ref{TVp1a} that the family $|V(x)|^p$, $x\ge1$, is uniformly integrable. This property is preserved by the conditioning, since we condition on a sequence of events $\cE_x$ with $\liminf_{\xtoo}\P(\cE_x)>0$ by the proof of \refT{TO}; hence the result follows from \refT{TO}. \end{proof} \section{Examples and applications}\label{Sex} \begin{example}\label{E22} Let $d=2$, and let $f$ be anti-symmetric: $f(y,x)=-f(x,y)$; this case was studied in \cite{SJ22}. We have $\mu=0$ and $f_2(x)=\E f(X,x)=-\E f(x,X)=-f_1(x)$; hence $\gs_{11}=-\gs_{12}=\gs_{22}$ and \eqref{WW} implies $W_2(t)=-W_1(t)=\gs B(t)$, where $\gs:=\norm{f_1}\ge0$ and $B(t)$ is a standard Brownian motion. For $d=2$, \eqref{psi} yields $\psi_1(s,t)=t-s$ and $\psi_2(s,t)=s$. Hence, \eqref{t1}, \eqref{Z} and integration by parts, see \eqref{crux}, yield \begin{equation}\label{e22} \begin{split} \frac{U_{nt}}{n^{3/2}}\dto Z_t &=\int_0^t(t-2s)\dd W_1(s) =-tW_1(t)+2\int_0^t W_1(s)\dd s \\ &=\gs tB(t)-2\gs\int_0^t B(s)\dd s \end{split} \end{equation} in $\Doo$, as shown in \cite{SJ22} (where also the degenerate case $\gs=0$ is studied further). \end{example} \begin{example}[Substrings]\label{Estring} Consider a random string $X_1\dotsm X_n$ of length $n$ from a finite alphabet $\cA$, with the letters $X_i$ \iid{} with some distribution $\P(X_i=a)=p_a$, $a\in\cA$. Fix a \emph{pattern} $\cW=w_1\dotsm w_m$; this is an arbitrary string in $\cA^m$, for some $m\ge1$.
A \emph{substring} of $X_1\dotsm X_n$ is any string $X_{i_1}\dotsm X_{i_k}$ with $1\le i_1<\dots<i_k\le n$, and we let $N_{n}=N_{\cW}(X_1\dotsm X_n)$ be the number of substrings that have the pattern $\cW$. Obviously, this is an asymmetric $U$-statistic as in \eqref{U} with $\cS=\cA$, $d=m$ and \begin{equation} f(x_1,\dots,x_m):=\ett{x_1\dotsm x_m=w_1\dotsm w_m} =\prod_{i=1}^m\ett{x_i=w_i}. \end{equation} \refC{C1} yields asymptotic normality of $N_{n}$ as \ntoo, as shown by \citet{FlajoletSzV}. For example, let $\cA:=\setoi$, let $X_i\sim\Be(\frac12)$, and let $\cW:=10$. A simple calculation yields $f_1(x)=\frac12(x-\frac12)=-f_2(x)$, and $\gs_{11}=\gs_{22}=-\gs_{12}=1/16$; thus \refC{C1} yields, see \eqref{c1var2}, \begin{equation} \frac{ N_{n}-n^2/8}{n^{3/2}} \dto N\Bigpar{0,\frac{1}{48}}. \end{equation} Furthermore, calculations as in \refE{E22} show that the functional limit \eqref{e22} holds in this case too, with $\gs=1/4$. \end{example} \begin{example}[Patterns in permutations]\label{Eperm} Let $\pi=\pi_1\dotsm\pi_n$ be a uniformly random permutation of length $n$, and let the \emph{pattern} $\gs=\gs_1\dotsm\gs_m$ be a fixed permutation of length $m$. The \emph{number of occurrences} of $\gs$ in $\pi$, denoted by $N_{n}=N_{\gs}(\pi)$, is the number of substrings (see \refE{Estring}) of $\pi$ that have the same relative order as $\gs$. We can generate the random permutation $\pi$ by taking \iid{} random variables $X_1,\dots,X_n\sim \Uoi$, and then replacing these numbers by their ranks. Then $N_{n}$ is the $U$-statistic with $d=m$ given by the function \begin{equation} f(x_1,\dots,x_m)=\ett{x_1\dotsm x_m \text{ have the same relative order as } \gs_1\dotsm\gs_m}. \end{equation} \refC{C1} shows that $N_{n}$ is asymptotically normal as \ntoo. For details, including explicit variance calculations, see \cite{SJ287}; see also the earlier proof of asymptotic normality by \citet{Bona-Normal,Bona3}. For example, taking $\gs=21$, $N_{n}$ is the number of inversions in $\pi$, and we obtain by simple calculations the well-known result, see \eg{} \cite[Section X.6]{FellerI}, \begin{equation} \frac{N_{n}-n^2/4}{n^{3/2}}\dto N\Bigpar{0,\frac1{36}}. \end{equation} \end{example} \begin{example}[Restricted permutations I]\label{EpermI} Fix a set $T$ of permutations, and consider only permutations $\pi$ of length $n$ that \emph{avoid} $T$, in the sense that there is no occurrence of any $\tau\in T$ in $\pi$. Let $\pi$ be uniformly random from this set, for a given $n$. Several cases are studied in \cite{SJ333}, and some of them yield asymmetric $U$-statistics, sometimes stopped or conditioned as in \refT{CVtau} or \ref{TO}. We sketch two such cases, one here and one in the next example, and refer to \cite{SJ333} for details and further similar examples. A permutation $\pi$ avoids $\set{\permB}$ if and only if $\pi$ is an increasing sequence of \emph{blocks} that all are decreasing; in other words, \begin{equation}\label{permB} \pi= (\ELL_1,\dots,1,\ELL_1+\ELL_2,\dots,\ELL_1+1, \ELL_1+\ELL_2+\ELL_3,\dots,\ELL_1+\ELL_2+1,\dots), \end{equation} see \cite[Proposition 12]{SS}. Let the number of blocks be $B\ge1$ and the block lengths $\ELL_1,\dots,\ELL_B$; thus $\ELL_i\ge1$ and $\ELL_1+\dots+\ELL_B=n$. Then, any such sequence $\ELL_1,\dots,\ELL_B$ is possible, and it determines $\pi$ uniquely.
Hence, taking $f(L):=L$ and thus $U_n=S_n=\sum_1^n L_i$, it is easily seen that $(\ELL_1,\dots,\ELL_B)$ has the same distribution as the first $\NN-(n)$ elements of an \iid{} sequence $(L_k)_k$ with $L_i\sim\Ge(1/2)$, conditioned on $U_{\NN-(n)}=n$. Let $\gs$ be a fixed permutation that avoids $\set{\permB}$, with block lengths $\ell_1,\dots,\ell_b$. Then the number $\Ngsn=N_\gs(\pi)$ of occurrences of $\gs$ in $\pi$ is given by a $U$-statistic, with $d=b$, based on the sequence of variables $L_1,\dots,L_B$ and the function \begin{equation}\label{fB} \tf(x_1,\dots,x_b):=\prod_{i=1}^b\binom{x_i}{\ell_i}. \end{equation} \refT{TO}\ref{TOc} applies and shows asymptotic normality in the form \begin{equation} \frac{\Ngsn-n^b/b!}{n^{b-1/2}} \dto N\bigpar{0,\gam^2}, \end{equation} for some $\gam^2>0$ depending on $\gs$. For example, taking $\gs=21$, so $\Nxn{21}$ is the number of inversions in $\pi$, $b=1$ and, by a calculation, $\gam^2=6$; hence \begin{equation} \frac{\Nxn{21}-n}{n^{1/2}} \dto N\bigpar{0,6}. \end{equation} Here we applied the conditional result in \refT{TO}. Alternatively (since a geometric distribution has no memory), we may avoid the conditioning above and instead truncate the last element $\ELL_B$ such that the sum becomes exactly $n$; using a simple approximation argument, we can then apply the unconditional \refT{CVtau}. \end{example} \begin{example}[Restricted permutations II]\label{EpermII} Continuing \refE{EpermI}, now let $\pi$ be a uniformly random permutation of a given length $n$ such that $\pi$ avoids \set{\permAAA}. A permutation $\pi$ avoids \set{\permAAA} if and only if $\pi$ is of the form \eqref{permB} and furthermore every block length $L_i\le2$, see \cite[Proposition $15^*$]{SS}. Taking again $f(L):=L$, it is easily seen that $(\ELL_1,\dots,\ELL_B)$ has the same distribution as the first $\NN-(n)$ elements of an \iid{} sequence $(L'_k)\xoo$, conditioned on $U_{\NN-(n)}=n$, where we now let \begin{equation} \P(L'_i=1)=p,\qquad \P(L'_i=2)=p^2, \end{equation} with $p+p^2=1$; thus $p$ is the golden ratio \begin{equation} p:= \frac{\sqrt5-1}2. \end{equation} Let $\gs$ be a fixed permutation that avoids $\set{\permAAA}$, with block lengths $\ell_1,\dots,\ell_b\in\set{1,2}$. Then the number $\Ngsn=N_\gs(\pi)$ of occurrences of $\gs$ in $\pi$ is given by a $U$-statistic based on $L_1,\dots,L_B$, with $d=b$ and the function $\tf$ in \eqref{fB}. \refT{TO}\ref{TOc} applies and shows asymptotic normality in the form \begin{equation} \frac{\Ngsn-\mu n^b/b!}{n^{b-1/2}} \dto N\bigpar{0,\gam^2}, \end{equation} for some $\mu>0$ and $\gam^2>0$ depending on $\gs$. For example, taking $\gs=21$, so $\Nxn{21}$ is the number of inversions in $\pi$, $b=1$ and, by calculations, see \cite{SJ333}, $\mu=(3-\sqrt5)/2$ and $\gam^2= 5^{-3/2}$; hence \begin{equation}\label{tazi} \frac{\Nxn{21}-\frac{3-\sqrt5}{2} n}{n^{1/2}} \dto N\bigpar{0,5^{-3/2}}. \end{equation} \end{example} \section{Further comments and open problems}\label{Sadd} \begin{remark}\label{Rmom} In \refTs{TRp} and \ref{TVp}, we assume (for simplicity) existence of all moments for $f$ and $\tf$, and conclude convergence of all moments in \eqref{tnn}--\eqref{tvtau}. If we only want to conclude convergence of a specific moment, \eg{} convergence of second moments in \eqref{tr} or \eqref{tvtau}, the proofs above show that it suffices to assume existence of some specific moment for $f$ and $\tf$.
However, we do not know the best possible moment conditions for this, and we leave it as an open problem to find optimal conditions. (The proofs above are not optimized; furthermore, the methods used there are not necessarily optimal.) In particular, we do not know whether convergence of first and second moments always holds in \eqref{tr} and \eqref{tvtau} without further moment assumptions. (For some results when $d=\td=1$, see \cite{SJ52} and \cite[Chapter 3]{Gut-SRW}.) \end{remark} \begin{remark} \label{Rlarge} In the case when $f$ is bounded, subgaussian estimates for large deviations of the \lhs{} of \eqref{c1} are shown in \cite{Hoeffding1963} and \cite{SJ150}. This and the definitions \eqref{NN-}--\eqref{NN+} lead to large deviation estimates for $\NN\pm$, and, provided also $\tf$ is bounded, then further to large deviation estimates for the \lhs{} in \eqref{tvtau}. We leave the details to the reader. \end{remark} \begin{remark}\label{Rdeg} As said in the introduction, the results above are of most interest in the non-degenerate case, where $\gS=(\gs_{ij})$ defined by \eqref{gsij} is non-zero. In the degenerate case, when all $\gs_{ij}=0$, or equivalently, $f_i(X)=0$ a.s\punkt{} for every $i$, the results still hold but then the limits in \eg{} \refT{T1} are degenerate, see also \eqref{c1=0}. A typical degenerate example is the anti-symmetric $f(X_1,X_2)=\sin(X_1-X_2)$, with $X$ uniformly distributed on $[0,2\pi)$ (best regarded as the unit circle), where $f_1=f_2=0$. In the degenerate case, one can instead normalize using a smaller power of $n$ than in \refT{T1} and obtain non-degenerate limits; this is well-known in the symmetric case, see \eg{} \cite{Gregory1977}, \cite{RubinVitale1980}, \cite[Chapter 11]{SJIII} for univariate results and \cite{Neuhaus1977}, \cite{Hall1979}, \cite{DehlingDP1984}, \cite{DenkerGK1985}, \cite{Ronzhin1985}, \cite[Remark 11.11]{SJIII} for functional limits. This extends to the asymmetric case; univariate results are given in \cite[Chapter 11.2]{SJIII} with the possibility of functional limits briefly mentioned in \cite[Remark 11.25]{SJIII}, and the case $d=2$ and $f$ antisymmetric was studied in \cite{SJ22} (functional limits for both the degenerate and non-degenerate cases), see \refE{E22}. We do not consider such refined results for the degenerate case in the present paper. \end{remark} \begin{remark} For multi-sample $U$-statistics, \ie, variables of the form \begin{equation} U_{n_1,\dots,n_\ell}:= \sum f\bigpar{X\xx1_{i_{1,1}},\dots,X\xx1_{i_{1,d(1)}}, \dots, X\xx\ell_{i_{\ell,1}},\dots,X\xx\ell_{i_{\ell,d(\ell)}}}, \end{equation} summing over $1\le i_{j,1}<\dots<i_{j,d(j)}\le n_j$ for every $j=1,\dots,\ell$, a multi-dimensional functional limit theorem has been given by \citet{Sen1974} in the symmetric case (\ie, with $f$ symmetric in each of the $\ell$ sets of variables); see also \eg{} \cite{Neuhaus1977}, \cite{Hall1979}, \cite{DenkerGK1985}. We expect that this too can be extended to the asymmetric case, but we leave this to the interested reader. \end{remark} \begin{remark}\label{Rstand} There is a standard trick to convert an asymmetric $U$-statistic to a symmetric one, see \eg{} \cite{SJIII}. 
Let $Y_i\sim U(0,1)$ be \iid{} random variables, independent of $(X_j)_1^\infty$, let $Z_i:=(X_i,Y_i)\in \tS:=S\times\bbR$, and define $F:\tS^d\to\bbR$ by \begin{equation} F\bigpar{(x_1,y_1),\dots,(x_d,y_d)}:=f(x_1,\dots,x_d)\ett{y_1<\dots<y_d} \end{equation} and its symmetrized version \begin{equation} F^*(z_1,\dots,z_d):=\sum_{\gs\in S_d}F\bigpar{z_{\gs(1)},\dots,z_{\gs(d)}}, \end{equation} summing over the $d!$ permutations of $\set{1,\dots,d}$. Then, letting $\sumx$ denote the sum over distinct indices, \begin{align} U_n(f)&\eqd \sumx_{\substack{i_1,\dots,i_d\le n\\Y_{i_1}<\dots<Y_{i_d}}} f\bigpar{X_{i_1},\dots,X_{i_d}} =\sumx_{i_1,\dots,i_d\le n}F\bigpar{(X_{i_1},Y_{i_1}),\dots,(X_{i_d},Y_{i_d})} \notag\\& =\sum_{1\le i_1<\dots<i_d\le n}F^*\bigpar{Z_{i_1},\dots,Z_{i_d}} =U_n(F^*). \end{align} This trick often makes it possible to transfer results for symmetric $U$-statistics to the general, asymmetric case. However, this trick works only for a single $n$, and we do not know of any similar trick that can handle the process $(U_n)_{n=0}^\infty$. Hence this method does not seem useful for the results above. \end{remark} \begin{remark}\label{Rreverse} In the symmetric case, it is easily seen that $U_n/\binom nd$, $n\ge d$, is a reverse martingale, which for example yields a simple proof of the law of large numbers; see \cite{Berk} and \eg{} \cite[Chapter 10.16.2]{Gut}. This does not hold in general; thus we instead used forward martingales above (in the proof of \refL{LU*}), similarly to \cite{HoeffdingLLN}. \end{remark}
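As a concrete illustration of \refT{TNN}, the following minimal Monte Carlo sketch (in Python with NumPy; the code and its names are ours, for illustration only, and not part of the analysis above) grows the inversion-counting $U$-statistic of \refE{Eperm} ($d=2$, $f(x_1,x_2)=\ett{x_1>x_2}$, so $\mu=1/2$) one variable at a time and records the first passage time; \eqref{tnn} predicts $\NN+(x)/x^{1/2}\asto(d!/\mu)^{1/2}=2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def first_passage(x):
    # Grow U_n = number of inversions among X_1,...,X_n
    # (d = 2, f(x1, x2) = 1{x1 > x2}, mu = 1/2) until U_n
    # first exceeds x; the increment at step n is
    # #{i < n : X_i > X_n}.
    xs, u = [], 0
    while u <= x:
        xi = rng.random()
        u += sum(1 for xj in xs if xj > xi)
        xs.append(xi)
    return len(xs)          # = N_+(x)

x = 1e4
ratios = [first_passage(x) / x ** 0.5 for _ in range(200)]
print(np.mean(ratios))      # ~ 2 = (d!/mu)^{1/d}
\end{verbatim}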
{ "timestamp": "2018-04-17T02:13:49", "yymm": "1804", "arxiv_id": "1804.05509", "language": "en", "url": "https://arxiv.org/abs/1804.05509" }
\section{Introduction} The phenomenon of field reversals occurs commonly in nature, e.g., the polarity of the geomagnetic field reverses over a period of several thousand years~\cite{Glatzmaier:NATURE1995}, whereas the large-scale magnetic field in the Sun reverses approximately every 11 years~\cite{Hanasoge:ARFM2016}. Flow reversals are also observed in Rayleigh-B\'{e}nard convection (RBC), an idealized model of thermal convection, where the large-scale circulation (LSC) switches its direction at irregular intervals~\cite{Sreenivasan:PRE2002, Xi:PRE2006, Brown:JFM2009, Mishra:JFM2011, Ni:JFM2015, Breuer:EPL2009, Sugiyama:PRL2010, Petschel:PRE2011, Chandra:PRE2011, Chandra:PRL2013, Verma:POF2015, Podvin:JFM2015, Yanagisawa:PRE2011}. In RBC a fluid placed in a cylindrical or rectangular container is heated from below and cooled from above~\cite{Ahlers:RMP2009, Chilla:EPJE2012, Verma:NJP2017}. Properties of a convective flow depend primarily on the Rayleigh number $\mathrm{Ra}$ and the Prandtl number $\mathrm{Pr}$. The Rayleigh number signifies the relative strength of the thermal driving force compared to the dissipative forces, and the Prandtl number is the ratio of the kinematic viscosity and the thermal diffusivity of the fluid. Another important governing parameter is the aspect ratio $\Gamma$, which is the ratio of the width to the height of the RBC cell. In this paper, we study flow reversals in a two-dimensional (2D) square box for $\mathrm{Pr} = \infty$. Infinite Prandtl number RBC can be utilized to model mantle convection in the Earth, where the Prandtl number is extremely large ($\mathrm{Pr} \approx 10^{24}$)~\cite{Schubert:book2001}. For RBC in a cylinder with $\Gamma \approx 1$, the azimuthal orientation of the LSC jitters continuously~\cite{Brown:JFM2006, Xi:PRE2006, Mishra:JFM2011}, and the probability distribution of this angular change decays as a powerlaw~\citep{Brown:PRL2005}. An angular change of approximately $180^\circ$ can be perceived as a reversal of the LSC. A reversal can also occur following a cessation event, where the strength of the LSC momentarily vanishes and the LSC reappears with a different azimuthal orientation~\cite{Brown:PRL2005, Xi:PRE2006, Mishra:JFM2011}. \citet{Mishra:JFM2011} and \citet{Xi:JFM2016} investigated the reversals of the LSC in a cylindrical geometry, and observed that during a cessation-led reversal, the strength of the secondary flow modes increases, whereas that of the primary mode decreases. In 2D RBC, reversals are constrained to occur through a cessation event, and the dominance of secondary modes during a cessation has also been reported~\cite{Breuer:EPL2009, Sugiyama:PRL2010, Chandra:PRE2011, Petschel:PRE2011, Chandra:PRL2013, Verma:POF2015, Podvin:JFM2015}. Moreover, it has been observed that the first few most energetic flow modes capture the flow pattern and dynamics of LSC reversals very well~\cite{Chandra:PRE2011, Petschel:PRE2011, Chandra:PRL2013, Verma:POF2015, Podvin:JFM2015}. In this paper, we gather very long time statistics of the temporal evolution of vertical velocities at various locations in our simulation domain, and observe that their evolution and statistical properties can be described very well by the most energetic Fourier modes of the flow. \citet{Niemela:JFM2001} and \citet{Sreenivasan:PRE2002} studied convective reversals for a wide range of $\mathrm{Ra}$ in a $\Gamma = 1$ cylindrical cell filled with cryogenic helium gas ($\mathrm{Pr} \approx 0.7$).
They observed that the LSC prefers one direction over the other for $\mathrm{Ra} \lessapprox 10^{11}$, but the probability of being in either direction becomes approximately equal for larger $\mathrm{Ra}$. They found that the waiting time between two consecutive reversals is exponentially distributed for longer waiting times. However, shorter waiting times were observed to be distributed as a powerlaw~\citep{Sreenivasan:PRE2002}. \citet{Brown:JFM2006} however concluded that the powerlaw distribution occurs due to ``crossings'' of the LSC, which are the reorientation events with an angular change of approximately $90^\circ$. They observed that the waiting times between consecutive reversals follow a Poisson distribution for the entire range of waiting times~\citep{Brown:JFM2006}, which was also endorsed by the findings of~\citet{Xi:PRE2006} and \citet{Xi:PRE2007, Xi:PRE2008}. In our numerical simulations for infinite Prandtl number, we also observe that the waiting times between successive reversals exhibit an exponential distribution, whereas the crossings (to be defined later for the present case) are distributed as a powerlaw. Moreover, the mean waiting time between two consecutive reversals has also been observed to depend on the Rayleigh number~\cite{Niemela:JFM2001, Sreenivasan:PRE2002, Sugiyama:PRL2010, Huang:JFM2016}, the aspect ratio $\Gamma$~\cite{Ni:JFM2015}, and the thermal boundary condition at the bottom plate~\cite{Huang:PRL2015}. Another interesting facet of the present investigation is the statistical properties of fluctuations in the time evolution of the velocity field recorded at various probes, which we utilize to investigate reversals. For homogeneous and isotropic turbulence, Kolmogorov~\cite{Kolmogorov:DANS1941a} deduced that the third-order structure function is proportional to the distance between two points in the inertial range. The higher-order structure functions are, however, more complex. This is known as anomalous scaling~\cite{Sreenivasan:ARFM1997, Lohse:ARFM2010}, which arises due to the intermittency of the viscous dissipation rate. Temporal structure functions have also been utilized to study the anomalous scaling~\cite{Skrbek:PRE2002, Ching:PRE2000a, He:JFM2014}. \citet{Skrbek:PRE2002} computed the temporal structure functions of the temperature field recorded near the sidewall of their cylindrical RBC cell filled with cryogenic helium gas at $\mathrm{Ra} = 1.5 \times 10^{11}$, and found a signature of intermittency. \citet{Ching:PRE2003} studied the temporal structure functions of the velocity field at the center of a cylindrical RBC cell filled with water at $\mathrm{Ra} = 3.7 \times 10^9$, and corroborated the anomalous scaling. Moreover, they observed that the velocity structure functions satisfy the She-Leveque scaling~\citep{She:PRL1994}. In this paper, we study the properties of infinite-$\mathrm{Pr}$ reversals using the time series of the vertical velocity at probes located near a sidewall and at the center of our 2D square box. We also study the evolution of dominant Fourier modes, and find that all the odd-odd modes (the modes with both indices odd) are statistically similar to the vertical velocity at the sidewall probe, as they switch their signs after a flow reversal, their probability distributions are bimodal, and their power spectra exhibit $1/f^{\alpha}$ scaling for a wide range of frequencies.
Additionally, by computing the temporal structure functions, we find a signature of intermittency in the fluctuations of the vertical velocity near the sidewall and of the most energetic Fourier mode. The remainder of the paper is organized as follows. In Sec.~\ref{sec:eqns}, we describe the governing equations and numerical method. In Sec.~\ref{subsec:rev_stat}, the statistics of waiting times between consecutive reversals will be discussed. The statistical properties of the time series of the vertical velocities and of the dominant Fourier modes will be presented in Sec.~\ref{subsec:modes}, and the intermittency in their fluctuations will be examined using structure functions in Sec.~\ref{subsec:str_fns}. We summarize our main results in Sec.~\ref{sec:conclusion}. \section{Governing equations and numerical method} \label{sec:eqns} Conservation of momentum, energy, and mass leads to the equations that govern the dynamics of RBC. For very large Prandtl number~\cite{Pandey:PRE2014, Verma:POF2015}, these equations under the Oberbeck-Boussinesq approximation~\cite{Chandrasekhar:Book, Verma:NJP2017} are \begin{eqnarray} \frac{1}{\mathrm{Pr}} \left[ \frac{\partial {\bf u}}{\partial t} + {\bf u} \cdot \nabla {\bf u} \right] & = & -\nabla \sigma + \theta \hat{z} + \frac{1}{\sqrt{\mathrm{Ra}}} \nabla^2 {\bf u}, \label{eq:u} \\ \frac{\partial \theta}{\partial t} + {\bf u} \cdot \nabla \theta & = & u_z + \frac{1}{\sqrt{\mathrm{Ra}}} \nabla^2 \theta, \label{eq:T} \\ \nabla \cdot {\bf u} & = & 0 \label{eq:m}, \end{eqnarray} where ${\bf u} \, ( = u_x \hat{x} + u_z \hat{z})$ is the velocity field, and $\theta$ and $\sigma$ are the fluctuations in temperature and pressure from the conduction state. Here $\mathrm{Ra} = \alpha g \Delta d^3/(\nu \kappa)$ and $\mathrm{Pr} = \nu/\kappa$, where $\Delta$ is the temperature difference between the top and bottom plates separated by the distance $d$, $g$ is the acceleration due to gravity, and $\alpha$, $\nu$, and $\kappa$ are the thermal expansion coefficient, the kinematic viscosity, and the thermal diffusivity of the fluid, respectively. The above equations are nondimensionalized using $d$, $\Delta$, and $\kappa \sqrt{\mathrm{Ra}}/d$ as the length, temperature, and velocity scales, respectively. For $\mathrm{Pr} = \infty$, the left hand side of Eq.~(\ref{eq:u}) vanishes, resulting in a linear equation~\cite{Pandey:PRE2014, Pandey:Pramana2016, Verma:POF2015} \begin{equation} -\nabla \sigma + \theta \hat{z} + \frac{1}{\sqrt{\mathrm{Ra}}} \nabla^2 {\bf u} = 0, \label{eq:u_inf} \end{equation} which can be utilized to compute the Fourier modes of the velocity field from the corresponding modes of the temperature field~\citep{Pandey:PRE2014, Verma:POF2015}. Thus for $\mathrm{Pr} = \infty$, we solve Eqs.~(\ref{eq:u_inf}), (\ref{eq:T}), and (\ref{eq:m}) using a pseudospectral solver {\sc Tarang}~\cite{Verma:Pramana2013} in a two-dimensional square box. Stress-free boundary conditions for the velocity field are employed on all the walls. Top and bottom plates are isothermal and sidewalls are adiabatic. Fields are dealiased using the 2/3 rule.
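To make the diagnostic use of Eq.~(\ref{eq:u_inf}) concrete: taking the curl of Eq.~(\ref{eq:u_inf}) eliminates the pressure, and together with incompressibility one obtains, for the free-slip basis written out in Eqs.~(\ref{eq:basis_ux})--(\ref{eq:basis_th}) below, $\hat{u}_z(k_x,k_z) = \sqrt{\mathrm{Ra}}\,(k_x^2/k^4)\,\hat{\theta}(k_x,k_z)$ and $\hat{u}_x = -(k_z/k_x)\,\hat{u}_z$ for $k_x \neq 0$, where $k^2 = k_x^2 + k_z^2$. A minimal sketch of this step (in Python with NumPy; the array layout and names are ours, and this is not the {\sc Tarang} implementation) is:
\begin{verbatim}
import numpy as np

def velocity_from_temperature(theta_hat, Ra):
    # theta_hat[m, n] holds the coefficient of the mode with
    # kx = m*pi, kz = n*pi.  The Pr = infinity balance gives
    #   uz_hat = sqrt(Ra) * kx^2 / k^4 * theta_hat,
    #   ux_hat = -(kz/kx) * uz_hat      (kx != 0),
    # the second relation enforcing incompressibility.
    M, N = theta_hat.shape
    kx = np.pi * np.arange(M)[:, None]
    kz = np.pi * np.arange(N)[None, :]
    k2 = kx**2 + kz**2
    k2[0, 0] = np.inf               # exclude the (0, 0) mode
    uz_hat = np.sqrt(Ra) * kx**2 / k2**2 * theta_hat
    ux_hat = np.zeros_like(uz_hat)
    ux_hat[1:, :] = -(kz / kx[1:, :]) * uz_hat[1:, :]
    return ux_hat, uz_hat
\end{verbatim}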
To satisfy the boundary conditions, velocity and temperature fields are expanded using free-slip basis functions~\cite{Verma:POF2015} as \begin{eqnarray} u_x(x,z) & = & \sum_{k_x,k_z} 4\hat{u}_x(k_x, k_z) \sin(k_x x) \cos(k_z z), \label{eq:basis_ux} \\ u_z(x,z) & = & \sum_{k_x,k_z} 4\hat{u}_z(k_x, k_z) \cos(k_x x) \sin(k_z z), \label{eq:basis_uZ} \\ \theta(x,z) & = & \sum_{k_x,k_z} 4\hat{\theta}(k_x, k_z) \cos(k_x x) \sin(k_z z), \label{eq:basis_th} \end{eqnarray} where $\hat{f}(k_x,k_z)$ represents the Fourier transform of a function $f(x,z)$. Researchers have tried to construct low dimensional models to mimic the dynamics of LSC reversals~\cite{Araujo:PRL2005, Brown:POF2008, Podvin:JFM2015, Ni:JFM2015}. Recently, \citet{Mannattil:EPJB2017} utilized the techniques of nonlinear time series analysis to conclude that reversals in infinite-$\mathrm{Pr}$ RBC are, however, high dimensional. Therefore, direct numerical simulations are important to prudently study the dynamics of reversals in infinite-$\mathrm{Pr}$ RBC. Moreover, for very large Prandtl number RBC, the large- and small-scale quantities exhibit very similar scalings in two and three dimensions~\citep{Schmalzl:EPL2004, Pandey:Pramana2016}. Consequently, very large Prandtl number RBC can be studied in two dimensions, where very long time statistics are accessible at lower computational costs. Therefore, we integrated the governing equations for $\mathrm{Pr} = \infty$ and $\mathrm{Ra} = 10^8$ for a total time $t_{\mathrm{total}} = 3.52 \times 10^5 \, t_f$, where $t_f = d^2/(\kappa \sqrt{\mathrm{Ra}})$ is the unit time in the present case. A few important parameters of the simulation are summarized in Table~\ref{table:details}. We checked the resolution criterion by computing the time-averaged Batchelor length scale $\eta$, and find that its product with the largest wavenumber is $k_{\mathrm{max}} \eta \approx 2.6$, which indicates that the smallest length scales are properly resolved in our simulation. Note that the Batchelor length scale $\eta$ and the Kolmogorov length scale $\eta_K$ are related as $\eta = \eta_K/\sqrt{\mathrm{Pr}}$. Therefore, one needs to properly resolve the Batchelor scale in RBC with $\mathrm{Pr} > 1$~\cite{Chilla:EPJE2012, Shishkina:NJP2010}. Moreover, we compared the Nusselt number computed using the correlation of $u_z$ and $\theta$~\cite{Verma:NJP2017} with that computed using the exact relations derived from the Boussinesq equations~\cite{Shraiman:PRA1990}, and observed that they match within $1\%$, thus again indicating that our simulation is adequately resolved. \begin{table} \begin{ruledtabular} \caption{Important parameters of the direct numerical simulation. $N^2$ is the total number of equidistant grid points in the simulation domain, $u_{\mathrm{rms}}$ is the root mean square velocity, computed as $\sqrt{ \langle u_x^2 + u_z^2 \rangle_{A,t}}$, where $\langle \cdot \rangle_{A,t}$ represents averaging over the entire simulation domain and time.} \begin{tabular}{cccccc} $\mathrm{Pr}$ & $\mathrm{Ra}$ & $N^2$ & Simulation time & $u_{\mathrm{rms}}$ & $k_{\mathrm{max}} \eta$ \\ \hline $\infty$ & $10^8$ & $256^2$ & $3.52 \times 10^5 \, t_f$ & $1.42$ & 2.6 \end{tabular} \label{table:details} \end{ruledtabular} \end{table} \section{Results} To explore the dynamics and statistical properties of reversals, we recorded the time history of the velocity and temperature fields at various locations in our simulation domain.
Additionally, to understand the mechanism of reversals we tracked the temporal evolution of some of the most energetic Fourier modes of the flow. \subsection{Statistical properties of reversals} \label{subsec:rev_stat} In this subsection, we discuss the statistics of reversals using the velocity field monitored near the left sidewall. After reaching the statistically steady state, we continued our simulation for a very long time, and observe that the stable convective structure is a single circulating roll occupying the whole box. Figure~\ref{fig:vel_temp}(a) shows a stable structure circulating in the clockwise direction. As the flow evolves, this large-scale circulation (LSC) persists in its clockwise direction for some time before switching to counterclockwise motion -- the other stable configuration, as exhibited in Fig.~\ref{fig:vel_temp}(b). The flow structure keeps oscillating between these two stable configurations during our entire observation time. During the reversal events, however, the flow structure becomes more complex. Multi-cell patterns dominate during reversals~\cite{Breuer:EPL2009, Petschel:PRE2011, Chandra:PRE2011, Chandra:PRL2013, Verma:POF2015}. In other words, higher-wavenumber Fourier modes become active and dominate during the flow reversals. We show a sequence of flow patterns during a reversal event in the Supplementary Video~\citep{reversal_movie}. \begin{figure} \includegraphics[scale=0.2]{figures/figure1} \caption{Two stable configurations of the convective flow in a 2D square box with stress-free walls: LSC in the clockwise direction (a) and in the counterclockwise direction (b). Temperature field is shown as density plots and velocity field is represented by vectors. Time evolution of the temperature and velocity fields is tracked for the entire duration of the simulation at the two probes located near the left wall (green circles) and at the center (green squares).} \label{fig:vel_temp} \end{figure} We track the vertical velocity $u_z(t)$ at two different probes in the simulation domain. One probe is located near the center of the left wall (indicated in Fig.~\ref{fig:vel_temp} by green circles) at $(x = 0.0625, z = 0.5)$, and henceforth will be referred to as the left probe (LP). The other probe, located at the center of the box (indicated by green squares in Fig.~\ref{fig:vel_temp}), will be referred to as the center probe (CP). Figure~\ref{fig:time_uz} exhibits the time series of $u_z(\mathrm{LP})$ for the whole duration of the simulation. It is evident from the figure that the vertical velocity at the left probe switches sign irregularly, indicating that the flow reverses repeatedly during our simulation. The velocity component $u_z(\mathrm{LP})$ fluctuates around a non-zero mean value between any two reversals. Each sign change of $u_z(\mathrm{LP})$ is termed a ``crossing'' event. \begin{figure} \includegraphics[scale=0.35]{figures/figure2} \caption{Evolution of the vertical velocity at the left probe (indicated in Fig.~\ref{fig:vel_temp} by green circles). Velocity changes its sign irregularly, indicating the occurrence of flow reversals.} \label{fig:time_uz} \end{figure} It is important to note, however, that not all crossings lead to flow reversals, as some of them might occur due to the momentary decay of the primary flow mode and the simultaneous growth of the secondary modes~\citep{Mishra:JFM2011, Chandra:PRL2013}. All reversals, however, are crossings.
This is illustrated in Fig.~\ref{fig:rev_cross}(a), where we show the temporal evolution of $u_z(\mathrm{LP})$ on an extended scale, which exhibits a few reversal and crossing events. Figure~\ref{fig:rev_cross}(b) indicates the instants at which the reversal and crossing events in Fig.~\ref{fig:rev_cross}(a) occur. We distinguish reversals from crossings by putting a constraint on the waiting time between two consecutive crossings; a crossing is counted as a reversal only if it is separated from its neighboring crossings by at least $40 \, t_f$. \begin{figure} \includegraphics[scale=0.35]{figures/figure3} \caption{(a) Magnification of Fig.~\ref{fig:time_uz} showing a few reversal and crossing events, which are indicated in panel (b).} \label{fig:rev_cross} \end{figure} By counting the number of sign changes in the time series of $u_z(\mathrm{LP})$, we observe 2591 crossings during the entire observation time. Moreover, crossings occur on a fast time scale compared to the mean waiting time between two consecutive flow reversals. Let $t_n$ denote the time at which the $n$th crossing occurs~\citep{Sreenivasan:PRE2002}, and $\Delta t_r$ the time gap between the $n$th and $(n+r)$th crossings. The probability distribution function (PDF) of the waiting time between any two consecutive crossings, $\mathcal{P}(\Delta t_1)$, is exhibited in Fig.~\ref{fig:pdf_wait}(a) on a double logarithmic scale. It is evident that the PDF decays as a powerlaw for shorter waiting times, with the best fit yielding $\mathcal{P}(\Delta t_1) \sim \Delta t_1^{-2.4}$. The powerlaw decay of $\mathcal{P}(\Delta t_1)$ for shorter waiting times has also been observed in experiments with moderate Prandtl number fluids by \citet{Sreenivasan:PRE2002, Xi:PRE2006}, and \citet{Brown:JFM2006}, albeit with lower scaling exponents around one. The larger exponent observed here might be due to the two-dimensionality or the very large Prandtl number of the flow. \begin{figure} \includegraphics[scale=0.4]{figures/figure4} \caption{(a) Probability distribution function of the waiting time between two consecutive crossings $\mathcal{P}(\Delta t_1)$ on a double logarithmic scale, which exhibits a powerlaw distribution for the shorter waiting times. (b) Probability distribution of the waiting time between two consecutive reversals, however, follows an exponential distribution.} \label{fig:pdf_wait} \end{figure} One can observe in Fig.~\ref{fig:pdf_wait}(a) that the powerlaw region extends only up to $\Delta t_1 \approx 40 \, t_f$, whereas some other scaling holds for larger waiting times. We find that $\mathcal{P}(\Delta t_1)$ decays exponentially for $\Delta t_1 \gtrsim 40 \, t_f$. It has been observed in experiments~\cite{Sreenivasan:PRE2002, Xi:PRE2006, Brown:JFM2006, Xi:PRE2007, Huang:JFM2016} and two-dimensional numerical simulations~\cite{Podvin:JFM2015} of moderate Prandtl number RBC that the waiting times between two consecutive LSC reversals follow a Poissonian distribution. \citet{Podvin:JFM2015} detected flow reversals by tracking sign changes in the global angular momentum of their 2D flow. Thus it can be inferred that the exponential distribution of $\mathcal{P}(\Delta t_1)$ for $\Delta t_1 \gtrsim 40 \, t_f$ occurs due to reversal events, and that these reversals are distinguished from crossings by a separation time scale $t_s \approx 40 \, t_f$~\citep{Brown:JFM2006}.
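In practice, this detection reduces to a few lines. The following minimal sketch (in Python with NumPy; the names are ours, and the edge handling is one plausible reading of the criterion above) marks crossings as sign changes of the $u_z(\mathrm{LP})$ time series and keeps as reversals only those crossings separated from both of their neighboring crossings by at least $t_s = 40 \, t_f$:
\begin{verbatim}
import numpy as np

def crossings_and_reversals(t, uz, ts=40.0):
    # Crossings: zero crossings of the uz time series.
    s = np.sign(uz)
    idx = np.nonzero(s[:-1] * s[1:] < 0)[0]
    tc = t[idx + 1]                    # crossing times
    if tc.size == 0:
        return tc, tc
    # A crossing counts as a reversal only if it is at least
    # ts away from both neighboring crossings (edges pass).
    gap_prev = np.diff(tc, prepend=tc[0] - 2 * ts)
    gap_next = np.diff(tc, append=tc[-1] + 2 * ts)
    return tc, tc[(gap_prev >= ts) & (gap_next >= ts)]
\end{verbatim}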
Therefore, as mentioned earlier, to count the number of true reversal events, we discarded crossings that are separated from their neighboring crossing events by less than $40 \, t_f$. Using this criterion, we find only 359 reversal events compared to 2591 crossings in our time series of $u_z(\mathrm{LP})$. The mean waiting time between two consecutive reversals is $(\Delta t_1)_\mathrm{mean} \approx 975 \, t_f$. In Fig.~\ref{fig:pdf_wait}(b) we plot the PDF of waiting times between two consecutive reversals on a semilogarithmic scale, which shows that $\mathcal{P}(\Delta t_1)$ for reversals can indeed be fitted well by an exponential distribution. The best fit yields $\mathcal{P}(\Delta t_1) \sim \exp[(-0.00104 \pm 0.000066) \Delta t_1]$, suggesting a mean waiting time between two consecutive reversals of $(\Delta t_1)_\mathrm{mean} \approx (962 \pm 60) \, t_f$, consistent with the aforementioned $(\Delta t_1)_\mathrm{mean} \approx 975 \, t_f$. For $\Delta t_1 \gtrsim 3700 \, t_f$, $\mathcal{P}(\Delta t_1)$ shows deviations from the exponential behavior, which we attribute to insufficient statistics for very long waiting times. Note that in our simulation the circulation time of the LSC is $t_c \approx 4/u_{\mathrm{rms}} \approx 4/1.4 \approx 2.8 \, t_f$; thus the reversals occur roughly every $350 \, t_c$. Following \citet{Sreenivasan:PRE2002}, we compute the moments of generalized interswitch intervals, defined as $\langle |\Delta t_r|^q \rangle$, to gain further insight. Here $\Delta t_r = |t_{n+r}-t_n|$ is the time interval between the $n^{th}$ and $(n+r)^{th}$ reversals. In Fig.~\ref{fig:sf_rev}(a), we plot the moments $\log_{10} \langle | \Delta t_r |^q \rangle$ for $q = 1$ to $6$ as a function of $\log_{10} r$, where $\langle \cdot \rangle$ represents the running average along the entire time series. We find that the moments exhibit two scaling regimes, one for $r \leq 6$ and another for $r \geq 50$, with $\langle | \Delta t_r |^q \rangle \sim r^{\zeta_q}$ in these regimes. We compute $\zeta_q$ using a least-squares fit, and plot the exponents as a function of $q$ in Fig.~\ref{fig:exp_sf_rev}(a). We find $\zeta_q \cong q$ for $r \geq 50$, which suggests that the distant reversals are decorrelated. For $r \leq 6$, however, $\zeta_q \cong 0.57 q + 0.38$, suggesting a correlation between neighboring reversals, which has been attributed to a finite-size effect~\citep{Sreenivasan:PRE2002}. \begin{figure} \includegraphics[scale=0.33]{figures/figure5} \caption{Moments of the generalized interswitch interval $\langle |\Delta t_r|^q \rangle$ between reversals as a function of $r$ for (a) $q = 1$ to 6 and (b) $q \leq 1$ ($q$ increases from bottom to top). We observe scaling regions for $r \leq 6$ and for $r \geq 50$.} \label{fig:sf_rev} \end{figure} \begin{figure} \includegraphics[scale=0.35]{figures/figure6} \caption{Scaling exponents $\zeta_q$ as a function of the order $q$ of the moments of the generalized interswitch interval $\Delta t_r$ between reversals for (a) $q \geq 1$ and (b) $q \leq 1$. The exponents obtained for $r \leq 6$ (red squares) scale linearly for $q \geq 1$, whereas they show a very weak nonlinear behavior for $q \leq 1$. For $r \geq 50$, however, the exponents scale as $\zeta_q \cong q$ (green circles), revealing a decorrelation between distant reversals.} \label{fig:exp_sf_rev} \end{figure} We also compute the moments of $\Delta t_r$ for $q < 1$, and plot them in Fig.~\ref{fig:sf_rev}(b).
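These moments reduce to a short computation over the sequence of reversal times. A minimal sketch, where \texttt{t\_events} (a hypothetical name) holds the reversal times:
\begin{verbatim}
import numpy as np

def interswitch_moments(t_events, q_list, r_max):
    # M[i, j] = <|t_{n+r} - t_n|^{q_i}> for r = 1..r_max,
    # the average running over all n along the time series
    M = np.empty((len(q_list), r_max))
    for r in range(1, r_max + 1):
        dtr = np.abs(t_events[r:] - t_events[:-r])
        for i, q in enumerate(q_list):
            M[i, r - 1] = np.mean(dtr ** q)
    return M

def fit_exponent(r_vals, moments):
    # zeta_q: least-squares slope of log10(moments) versus log10(r_vals)
    return np.polyfit(np.log10(r_vals), np.log10(moments), 1)[0]
\end{verbatim}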
Here too, we find that $\zeta_q$ increases linearly with $q$ for $r \geq 50$, again showing a decorrelation of the distant reversal events [see Fig.~\ref{fig:exp_sf_rev}(b)]. For $r \leq 6$, however, $\zeta_q$ exhibits a weakly nonlinear behavior, following $\zeta_q = -0.21 q^2 + 1.20 q$, as shown in Fig.~\ref{fig:exp_sf_rev}(b). \subsection{Statistical properties of dominant flow modes} \label{subsec:modes} As mentioned earlier, we also record the temporal evolution of the vertical velocity at the center of our simulation domain, $u_z(\mathrm{CP})$, as well as the amplitudes of the most energetic Fourier modes. In Fig.~\ref{fig:time_modes}(a) we plot the evolution of $u_z(\mathrm{LP})$ and $u_z(\mathrm{CP})$ covering two consecutive reversals. We observe that $u_z(\mathrm{LP})$ fluctuates around two nonzero mean values most of the time, but $u_z(\mathrm{CP})$ fluctuates around zero. Moreover, by computing the cross-correlation $\langle u_z(\mathrm{CP}, t+\tau) u_z(\mathrm{LP}, t) \rangle$, we find that $u_z(\mathrm{CP})$ is anticorrelated with $u_z(\mathrm{LP})$: the cross-correlation takes negative values for time lags $\tau \lesssim 80 \, t_f$. This anticorrelation between the velocities at the left and center probes is similar to that observed between the primary and secondary flow modes during a reversal or cessation event, as reported in \citet{Mishra:JFM2011, Petschel:PRE2011, Chandra:PRE2011, Chandra:PRL2013}, and \citet{Verma:POF2015}. In Fig.~\ref{fig:time_modes}(b) we plot the amplitudes of the Fourier modes $\hat{u}_z(1,1)$, $\hat{u}_z(2,1)$, and $\hat{u}_z(3,1)$ for the same time interval as in Fig.~\ref{fig:time_modes}(a). These are the most energetic modes (see Table~IV in \citet{Verma:POF2015}). Note that the wavenumber components of a Fourier mode with indices $(m,n)$ are $k_x = m\pi, \, k_z = n\pi$, which represents a pattern with $m$ rolls along the $x$-direction and $n$ rolls along the $z$-direction. We illustrate a few low-wavenumber Fourier modes in Fig.~\ref{fig:Fourier_modes}, where we can see that the mode $(1,1)$ represents a single roll occupying the whole box. Similarly, the modes $(2,1)$ and $(1,2)$ respectively represent two rolls stacked along the $x$- and $z$-directions. It is evident from Fig.~\ref{fig:time_modes}(b) that the modes $\hat{u}_z(1,1)$, $\hat{u}_z(3,1)$, and $\hat{u}_z(2,1)$ capture the evolution of $u_z(\mathrm{LP})$ and $u_z(\mathrm{CP})$ very well. For instance, the modes $\hat{u}_z(1,1)$ and $\hat{u}_z(3,1)$ change their sign after the reversal events, as $u_z(\mathrm{LP})$ also does. \begin{figure} \includegraphics[scale=0.29]{figures/figure7} \caption{(a) Vertical velocities at the left and the center probes as a function of time, and (b) the evolution of the most dominant Fourier modes during the same time interval. A fraction of the full time series covering two reversals is shown here.} \label{fig:time_modes} \end{figure} \citet{Verma:POF2015} classified the Fourier modes according to their indices. In two dimensions, the modes belong to one of four classes: odd-odd (OO), even-even (EE), even-odd (EO), and odd-even (OE). For instance, the modes $(1,1), (2,1), (1,2)$, and $(2,2)$ belong to the OO, EO, OE, and EE classes respectively. Moreover, these four classes form an abelian group called the {\it Klein four-group} $Z_2 \times Z_2$~\cite{Verma:POF2015}. Verma \textit{et al.}~\cite{Verma:POF2015} also deduced that in 2D RBC with free-slip walls, the OO modes switch their sign after a flow reversal, while the modes belonging to the other classes do not.
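This parity bookkeeping is easy to make explicit. A small illustrative sketch (not tied to any particular code base):
\begin{verbatim}
def mode_class(m, n):
    # parity class of the Fourier mode (m, n): "OO", "EO", "OE" or "EE"
    return ("O" if m % 2 else "E") + ("O" if n % 2 else "E")

def compose(c1, c2):
    # Klein four-group operation: parities add modulo 2 componentwise
    add = {("O", "O"): "E", ("E", "E"): "E",
           ("O", "E"): "O", ("E", "O"): "O"}
    return "".join(add[pair] for pair in zip(c1, c2))

# mode_class(1, 1) == "OO"; mode_class(2, 1) == "EO";
# compose("OO", "OO") == "EE", the identity element of the group
\end{verbatim}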
Moreover, the OE, EE, and EO modes fluctuate around a zero mean value. We have analyzed the four most dominant modes from each of the aforementioned classes, but for brevity, we focus only on $\hat{u}_z(1,1)$, $\hat{u}_z(2,1)$, and $\hat{u}_z(3,1)$. \begin{figure} \includegraphics[scale=0.62]{figures/figure8} \caption{Velocity (arrows) and temperature fluctuation fields (grey scale) corresponding to Fourier modes with indices $(m,n)$ computed using Eqs.~(\ref{eq:basis_ux}--\ref{eq:basis_th}). Dark and bright regions represent the hottest and coldest fluids respectively.} \label{fig:Fourier_modes} \end{figure} As mentioned above, the OO modes change their sign after a flow reversal. For instance, we detect 2815, 2657, and 2715 crossings for the $\hat{u}_z(1,1)$, $\hat{u}_z(3,1)$, and $\hat{u}_z(5,1)$ modes respectively, which is close to the 2367 crossings for $u_z(\mathrm{LP})$ during the same time interval. Similarly, we find that the probability distributions of waiting times $\Delta t_1$ for the OO modes (not shown here) are very similar to that for $u_z(\mathrm{LP})$ shown in Fig.~\ref{fig:pdf_wait}. In Fig.~\ref{fig:pdf_modes}(a), we plot the PDFs of $u_z(\mathrm{LP})$ and $u_z(\mathrm{CP})$, and find that $\mathcal{P}[u_z(\mathrm{LP})]$ is bimodal, which agrees with the fact that it fluctuates between two non-zero mean values. However, $\mathcal{P}[u_z(\mathrm{CP})]$ shows a Gaussian-like distribution, with a broad peak at zero. We find that the kurtosis of $\mathcal{P}[u_z(\mathrm{CP})]$ is 4.7, which indicates that it deviates from the Gaussian distribution, but not very strongly. We show the PDFs of $\hat{u}_z(1,1)$, $\hat{u}_z(2,1)$, and $\hat{u}_z(3,1)$ in Fig.~\ref{fig:pdf_modes}(b), and find that the distributions of the OO modes are bimodal, similar to $\mathcal{P}[u_z(\mathrm{LP})]$. Moreover, similar to $\mathcal{P}[u_z(\mathrm{CP})]$, $\mathcal{P}[\hat{u}_z(2,1)]$ also exhibits a Gaussian-like distribution, with its kurtosis value nearly equal to 3.4, which reveals that the deviation from Gaussian behavior is weak. This observation that the statistical properties of $\hat{u}_z(2,1)$ are similar to those of $u_z(\mathrm{CP})$ is not surprising, since the vertical velocity at the center gets its most dominant contribution from the $\hat{u}_z(2,1)$ Fourier mode (see Fig.~\ref{fig:Fourier_modes}). We also examined the PDFs of the EE, OE, and other EO modes (not shown here), which exhibit strongly non-Gaussian behavior with kurtosis values close to 10. Moreover, we find that the tails of all the PDFs are exponential. \begin{figure} \includegraphics[scale=0.35]{figures/figure9} \caption{Probability distribution functions of the vertical velocities at the left and the center probes (a), and of the amplitudes of the dominant Fourier modes (b). The distribution is bimodal for $u_z(\mathrm{LP})$ and the OO modes, consistent with the fact that they change sign after flow reversals. The PDFs of $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$ exhibit broad peaks centered at zero and are non-Gaussian. Exponential tails are observed for all the distributions, and the exponential regions are indicated by straight lines.} \label{fig:pdf_modes} \end{figure} We also observe from Fig.~\ref{fig:time_modes} that $u_z(\mathrm{LP})$ and the OO modes are autocorrelated for a longer time than $u_z(\mathrm{CP})$ and the $(2,1)$ mode.
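For reference, the kurtosis values and distributions quoted above follow from standard moment and histogram estimates; a minimal sketch:
\begin{verbatim}
import numpy as np

def kurtosis(u):
    # <(u - <u>)^4> / <(u - <u>)^2>^2; equals 3 for a Gaussian signal
    v = u - np.mean(u)
    return np.mean(v**4) / np.mean(v**2)**2

def pdf(u, bins=101):
    # normalized histogram estimate of the probability density of u
    p, edges = np.histogram(u, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), p
\end{verbatim}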
To quantify this, we compute the autocorrelation function for these time series as \begin{equation} C(\tau) = \frac{ \langle u(t+\tau) u(t) \rangle } { \langle u^2(t) \rangle}, \end{equation} where $u(t)$ represents any of the aforementioned time series. A useful quantity that can be computed from the correlation function is the integral time scale, defined as \begin{equation} \mathscr{T} = \int_0^{t_\mathrm{total}} C(\tau) d\tau, \end{equation} which is a measure of how long a quantity remains correlated with itself; a short numerical sketch of these diagnostics is given at the end of this subsection. We find $\mathscr{T} \approx 340 \, t_f$ for $u_z(\mathrm{LP})$ and the OO modes, whereas $\mathscr{T} \approx 20 \, t_f$ for $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$. Thus the OO modes are autocorrelated for a much longer time than the modes from the other classes. \begin{figure} \includegraphics[scale=0.35]{figures/figure10} \caption{Power spectral densities (PSD) of the vertical velocities at the left and the center probes, and of the Fourier modes. For the OO modes and $u_z(\mathrm{LP})$, we observe $\mathrm{PSD} \sim f^{-2}$ for a wide range of frequencies, whereas this scaling is observed only for a relatively narrower range of frequencies for $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$.} \label{fig:fft_modes} \end{figure} To understand the physical process responsible for the evolution of these quantities, we compute their power spectral densities (PSD), and exhibit them in Fig.~\ref{fig:fft_modes}. It is apparent that the PSDs of $u_z(\mathrm{LP})$ and the OO modes show $1/f^{\alpha}$ scaling for a wide range of frequencies. The velocity components $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$ show nearly equal power at small frequencies, and their PSDs too decay as $1/f^{\alpha}$, albeit over a relatively narrower range of frequencies. Best fits to these regions yield $\alpha \approx 1.9 \pm 0.1$ for all the quantities, with the scaling ranges indicated in the figure by dashed lines. The $1/f^2$ power spectra of these signals are reminiscent of a Brownian process. The scaling range in the time domain translates to approximately $10$ to $200 \, t_f$ for $u_z(\mathrm{LP})$ and the OO modes, and to $10$ to $25 \, t_f$ for $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$. The upper cutoff for the powerlaw behavior is close to the integral time scales of these quantities, whereas the lower cutoff is approximately of the order of a circulation time $t_c$ of the LSC, with $t_c \approx 3 \, t_f$ in the present case. $1/f^{\alpha}$ power spectra corresponding to long-time fluctuations have been reported in other turbulent flows as well~\cite{Dmitruk:PRE2011}. \begin{figure} \includegraphics[scale=0.35]{figures/figure11} \caption{Power spectral densities of the autocorrelation functions, $\mathrm{PSD}[C(\tau)]$, which also exhibit $f^{-2}$ scaling over a range of frequencies similar to those in Fig.~\ref{fig:fft_modes}.} \label{fig:fft_acor_modes} \end{figure} By the Wiener--Khinchin theorem, the autocorrelation function and the power spectrum of a signal carry equivalent information. Therefore, in Fig.~\ref{fig:fft_acor_modes} we plot the power spectra of the autocorrelation functions, $\mathrm{PSD}[C(\tau)]$, of the aforementioned quantities. Using least-squares fitting, we obtain a very similar $f^{-2}$ scaling for all the PSDs, with the scaling range also remaining very similar to those in the power spectra of the original time series. In the next subsection, we study the intermittency in the evolution of the aforementioned quantities by computing the temporal structure functions.
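For reference, the correlation and spectral diagnostics used in this subsection reduce to a few lines of NumPy. A minimal sketch, assuming a uniformly sampled array \texttt{u} with spacing \texttt{dt} (hypothetical names):
\begin{verbatim}
import numpy as np

def autocorrelation(u, max_lag):
    # C(tau) = <u(t + tau) u(t)> / <u(t)^2>, tau = 0..max_lag in samples
    n = len(u)
    c0 = np.mean(u * u)
    return np.array([np.mean(u[k:] * u[:n - k])
                     for k in range(max_lag + 1)]) / c0

def cross_correlation(a, b, max_lag):
    # <a(t + tau) b(t)>; negative values signal anticorrelation
    n = len(b)
    return np.array([np.mean(a[k:] * b[:n - k])
                     for k in range(max_lag + 1)])

def integral_time_scale(C, dt):
    # T = integral of C(tau) d tau, via the trapezoidal rule
    return np.trapz(C, dx=dt)

def psd(u, dt):
    # one-sided periodogram estimate of the power spectral density
    uhat = np.fft.rfft(u - np.mean(u))
    return np.fft.rfftfreq(len(u), d=dt), np.abs(uhat) ** 2 * dt / len(u)
\end{verbatim}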
\subsection{Structure functions and intermittency} \label{subsec:str_fns} We compute the $q^{th}$ order structure function for a time series $u(t)$ as \begin{equation} S_q(\tau) = \langle |u(t+\tau)-u(t)|^q \rangle. \end{equation} In Figs.~\ref{fig:SF}(a) and (b), we plot $S_q(\tau)$ for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$, respectively, for $q = 1$ to 8, and observe two different scaling regimes, where $S_q(\tau) \sim \tau^{\mu_q}$. The first scaling regime is observed for $10 \, t_f \lesssim \tau \lesssim 50 \, t_f$ (indicated between two vertical solid red lines), whereas the other scaling regime can be recognized for $100 \, t_f \lesssim \tau \lesssim 300 \, t_f$ (between two vertical dashed blue lines). Moreover, Fig.~\ref{fig:SF} exhibits that the structure functions saturate for $\tau \gtrsim 1000 \, t_f$; this is because $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$ become decorrelated for $\tau \gtrsim \mathscr{T}$, with $\mathscr{T} \approx 350 \, t_f$ for $u_z(\mathrm{LP})$ and the OO modes. \begin{figure} \includegraphics[scale=0.35]{figures/figure12} \caption{Structure functions $S_q(\tau)$ for $q = 1$ to 8 for (a) $u_z(\mathrm{LP})$ and (b) $\hat{u}_z(1,1)$. Two scaling regimes, located between the red solid and blue dashed lines, can be identified.} \label{fig:SF} \end{figure} We compute the scaling exponents $\mu_q$ in the aforementioned scaling regimes, and plot them in Fig.~\ref{fig:exp_SF} as a function of $q$. For the first scaling regime, we find that $\mu_q$ increases nonlinearly with increasing $q$, thus indicating that the temporal evolutions of $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$ are intermittent. However, $\mu_q$ deviates only weakly from the linear scaling $\mu_q = q/3$, a prediction of Kolmogorov's 1941 theory~\citep{Frisch:Book} (shown as a dashed blue curve). Therefore we construe that the intermittency is not very strong. \citet{She:PRL1994} predicted a universal scaling for the spatial velocity structure functions in homogeneous and isotropic turbulence, $\mu_q = q/9 + 2[1-(2/3)^{q/3}]$, by considering a hierarchy of structures for the moments of locally averaged viscous dissipation rates. Therefore, we plot $\mu_q = q/9 + 2[1-(2/3)^{q/3}]$ as a blue solid curve in Fig.~\ref{fig:exp_SF}, and observe that our computed exponents $\mu_q$ deviate from this scaling too. \begin{figure} \includegraphics[scale=0.4]{figures/figure13} \caption{Structure function exponents $\mu_q$ for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$. The exponents $\mu_q$ increase nonlinearly with $q$ for the first scaling regime at small $\tau$ (open symbols). The blue dashed curve depicts Kolmogorov's scaling $\mu_q = q/3$~\citep{Kolmogorov:DANS1941a}, whereas the blue solid curve represents the scaling deduced by \citet{She:PRL1994}. For the second scaling regime at larger $\tau$ (filled symbols), $\mu_q$ increases markedly only up to $q = 3$, and saturates for $q \geq 4$.} \label{fig:exp_SF} \end{figure} For the second scaling regime at larger delay times, $\mu_q$ saturates with increasing $q$ for $q \geq 4$ (shown as solid symbols in Fig.~\ref{fig:exp_SF}). The velocity field in three-dimensional RBC is expected to exhibit scaling behavior similar to three-dimensional hydrodynamic turbulence~\cite{Verma:NJP2017}. It is interesting, however, that the temporal structure functions of the Fourier modes exhibit scaling similar to that of the velocity field. This is because the low-wavenumber Fourier modes capture the large-scale dynamics quite well.
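The structure functions and their scaling exponents can be estimated along the same lines. A minimal sketch (lags are integer numbers of samples; names hypothetical):
\begin{verbatim}
import numpy as np

def structure_functions(u, lags, q_list):
    # S_q(tau) = <|u(t + tau) - u(t)|^q> for integer sample lags
    S = np.empty((len(q_list), len(lags)))
    for j, k in enumerate(lags):
        du = np.abs(u[k:] - u[:len(u) - k])
        for i, q in enumerate(q_list):
            S[i, j] = np.mean(du ** q)
    return S

def exponents(taus, S):
    # mu_q: least-squares slopes of log S_q versus log tau,
    # computed within a chosen scaling range of taus
    x = np.log10(taus)
    return np.array([np.polyfit(x, np.log10(row), 1)[0] for row in S])
\end{verbatim}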
For a deeper understanding of the aforementioned anomalous scaling, we compute $S_q(\tau)$ for $q \leq 1$~\cite{Cao:PRL1996, Chen:JFM2005}, again identifying the aforementioned scaling regimes. The low order structure functions get contributions primarily from the core of the PDF of the velocity differences, as opposed to the higher order structure functions, which probe the large-amplitude events present in the tails. The exponents $\mu_q$ computed for the first scaling regime are plotted in Fig.~\ref{fig:exp_SF_small}. We observe that the exponents deviate from both Kolmogorov's scaling $q/3$ and She and Leveque's~\cite{She:PRL1994} scaling even for the low order structure functions, in agreement with the findings of \citet{Cao:PRL1996} and \citet{Chen:JFM2005} that the scaling is anomalous for all orders. \begin{figure} \includegraphics[scale=0.4]{figures/figure14} \caption{Structure function exponents $\mu_q$ for $q \leq 1$ for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$. The exponents deviate from both Kolmogorov's scaling~\citep{Kolmogorov:DANS1941a} (dashed blue curve) and She and Leveque's scaling~\citep{She:PRL1994} (solid blue curve).} \label{fig:exp_SF_small} \end{figure} \citet{Benzi:PRE1993} proposed the extended self-similarity (ESS) theory, according to which the scaling regions are enhanced if the structure functions are plotted against each other. Therefore, we plot $S_q(\tau)$ vs $S_3(\tau)$ in Fig.~\ref{fig:SF_S3} for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$, and find that we indeed obtain extended scaling regimes. The structure functions scale as $S_q(\tau) \sim [S_3(\tau)]^{\nu_q}$, with $\nu_q = \mu_q/\mu_3$. We compute $\nu_q$ for the two scaling regimes, and plot them as a function of $q$ in Fig.~\ref{fig:exp_SF_S3}, which shows that, similar to $\mu_q$, $\nu_q$ also increases nonlinearly with $q$. \citet{Benzi:PD1996} reported that whereas the absolute exponents $\mu_q$ differ for different systems like RBC, magnetohydrodynamics, and homogeneous and isotropic turbulence (HIT), the relative exponents $\nu_q$ are very similar for these systems. Therefore, in Fig.~\ref{fig:exp_SF_S3} we also plot the $\nu_q$ for HIT reported in \citet{Benzi:PD1996}, and find that they are very similar to the $\nu_q$ determined here for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$. \begin{figure} \includegraphics[scale=0.35]{figures/figure15} \caption{Structure functions $S_q(\tau)$ plotted against $S_3(\tau)$ for (a) $u_z(\mathrm{LP})$ and (b) $\hat{u}_z(1,1)$. Enhanced scaling regimes compared to those in Fig.~\ref{fig:SF} can be observed here.} \label{fig:SF_S3} \end{figure} \begin{figure} \includegraphics[scale=0.32]{figures/figure16} \caption{ESS exponents $\nu_q (=\mu_q/\mu_3)$ for $u_z(\mathrm{LP})$ and $\hat{u}_z(1,1)$ for the first scaling regime at small $\tau$ (open symbols) and for the second scaling regime at larger $\tau$ (filled symbols). Blue crosses are the ESS exponents reported in \citet{Benzi:PD1996} for homogeneous and isotropic turbulence.} \label{fig:exp_SF_S3} \end{figure} We would like to mention that we also computed $S_q(\tau)$ for $u_z(\mathrm{CP})$ and $\hat{u}_z(2,1)$. We, however, failed to detect any discernible scaling regime, especially for the higher order structure functions. For low order moments up to $q = 3$, we could recognize only a very narrow regime. We also tried to use the ESS theory by plotting $S_q(\tau)$ as a function of $S_3(\tau)$, but again could not find any discernible scaling regime.
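The ESS exponents follow from the same fitting procedure, with $S_3(\tau)$ playing the role of the independent variable. A minimal sketch, reusing the output of \texttt{structure\_functions} above:
\begin{verbatim}
import numpy as np

def ess_exponents(S, i3):
    # nu_q = mu_q / mu_3: slopes of log S_q against log S_3,
    # where row i3 of S holds the q = 3 structure function
    x = np.log10(S[i3])
    return np.array([np.polyfit(x, np.log10(row), 1)[0] for row in S])
\end{verbatim}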
There are some similarities between the phenomenon of LSC reversals under study here and fluctuation-dominated phase ordering (FDPO)~\citep{Das:PRL2000, Das:PRE2001}. The latter refers to a state which shows phase separation, in which large fluctuations lead to macroscopic rearrangements of the ordered region as a function of time. An example of a system which exhibits FDPO consists of passive particles with mutual exclusion, driven by a fluctuating surface. The behavior of the long-wavelength Fourier components of the density profile in this case resembles that of the Fourier modes of the LSC system under discussion here. In both cases, the decay of the dominant long-wavelength Fourier mode in time is accompanied by a rise of the amplitude of the next few modes~\citep{Mishra:JFM2011, Chandra:PRL2013, Verma:POF2015, Das:PRL2000, Das:PRE2001, Kapri:PRE2016}, with an amplitude that decreases with increasing mode number. This signifies that both for LSC and FDPO, the system evolves within the subset of states with macroscopic structures, never reaching completely disordered states. The analogy with FDPO suggests some further directions. In their study of the passive particle system, \citet{Kapri:PRE2016} characterized the intermittency of the dominant Fourier mode using the temporal second and fourth order structure functions. Therefore, we plot the second and fourth order structure functions in Fig.~\ref{fig:S2_S4_kappa}(a,b). It is evident that $u_z(\mathrm{LP})$ and the OO modes show similar scalings for both $S_2(\tau)$ and $S_4(\tau)$. An important quantity is the flatness factor $\kappa(\tau)$, defined as $\kappa(\tau) = S_4(\tau)/[S_2(\tau)]^2$, which is a good indicator of intermittency~\citep{Kapri:PRE2016}. In the scaling regime, since $S_q(\tau) \sim \tau^{\mu_q}$, the flatness factor scales as $\kappa(\tau) \sim \tau^{\mu_4 - 2 \mu_2}$. To examine this, we plot $\kappa(\tau)$ in Fig.~\ref{fig:S2_S4_kappa}(c). We see that the flatness factor does not vary much in the small $\tau$ regime ($\tau < 50 \, t_f$), where we observe a weak intermittency. For the intermediate regime of $\tau$ $(100 \, t_f < \tau < 600 \, t_f)$, which corresponds to the second scaling regime, we find that $\kappa(\tau)$ varies approximately as $\tau^{-0.50 \pm 0.05}$ for $u_z(\mathrm{LP})$ and all the OO modes. This shows that the vertical velocity at the left probe and the OO modes are intermittent in the intermediate $\tau$ regime~\citep{Kapri:PRE2016, Das:PRL2000}. \begin{figure} \includegraphics[scale=0.35]{figures/figure17} \caption{The second order (a) and the fourth order (b) structure functions, and the flatness factor $\kappa(\tau) = S_4(\tau)/[S_2(\tau)]^2$ (c) for the vertical velocity at the left probe and for the OO modes as a function of $\tau$. Dashed horizontal lines represent the estimated values of $S_2(\tau)$, $S_4(\tau)$, and $\kappa(\tau)$ in the decorrelation regime. The flatness factors are nearly constant in the very large $\tau$ and small $\tau$ regimes, whereas they scale as $\kappa(\tau) \sim \tau^{-0.50}$ in the intermediate $\tau$ regime, indicated between two black vertical lines.} \label{fig:S2_S4_kappa} \end{figure} As we have discussed above, the structure functions saturate when $\tau$ is very large; this is because a signal $u(t)$ becomes uncorrelated with itself after a very long time. However, the saturation values of the structure functions can be predicted using the steady-state statistical properties of $u(t)$~\citep{Kapri:PRE2016}.
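The flatness factor, and these steady-state saturation estimates, reduce to a few moments of the signal. A minimal sketch (the closed forms in \texttt{saturation\_values} are derived in the text immediately below):
\begin{verbatim}
import numpy as np

def flatness(S2, S4):
    # kappa(tau) = S_4(tau) / S_2(tau)^2
    return S4 / S2**2

def saturation_values(u):
    # predicted large-tau plateaus of S_2, S_4 and kappa for a signal
    # that has decorrelated from itself (closed forms derived in the text)
    m1, m2, m3, m4 = (np.mean(u**k) for k in (1, 2, 3, 4))
    s2 = 2.0 * (m2 - m1**2)
    s4 = 2.0 * m4 - 8.0 * m3 * m1 + 6.0 * m2**2
    return s2, s4, s4 / s2**2
\end{verbatim}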
For instance, the second order structure function in the uncorrelated regime can be estimated as $S_2(\tau) = \langle |u(t+\tau)-u(t)|^2 \rangle = 2[\langle u(t)^2 \rangle - \langle u(t) \rangle^2]$. Similarly, $S_4(\tau)$ for very large $\tau$ can be estimated as $S_4(\tau) = 2\langle u(t)^4 \rangle - 8 \langle u(t)^3 \rangle \langle u(t) \rangle + 6 \langle u(t)^2 \rangle^2$. We compute the values of $S_2(\tau)$ and $S_4(\tau)$ in the decorrelation regime using these relations, and indicate them as dashed horizontal lines in Fig.~\ref{fig:S2_S4_kappa}(a,b). One can observe very good agreement between the computed and predicted values. Consequently, the flatness $\kappa(\tau)$ is nearly constant in the decorrelation regime (for very large $\tau$), where, using $\langle u(t) \rangle \approx 0$, we can estimate $\kappa(\tau) = 1.5 + 0.5\langle u(t)^4 \rangle/\langle u(t)^2 \rangle^2$. It is evident that a good agreement is observed between the estimated and the computed values of $\kappa(\tau)$ in the decorrelation regime. Moreover, we find that in this regime $\kappa(\tau) \approx 2$, which is less than the value of 3 for a Gaussian distribution. \section{Conclusions} \label{sec:conclusion} We have studied the reversals of the large-scale circulation for infinite Prandtl number RBC in a 2D square box by monitoring the vertical velocity at a probe near the left sidewall, and observed that the waiting times between two consecutive reversals are distributed exponentially~\cite{Sreenivasan:PRE2002, Brown:JFM2006, Xi:PRE2006}. Moreover, the waiting times between two consecutive ``crossings'' (on shorter time scales) are distributed as a powerlaw. We observed that these exponential and powerlaw regimes are separated at $t_s \approx 40 \, t_f$. In addition, by studying the moments of generalized interswitch intervals, we observed some indication of correlation between nearby reversals, whereas distant reversal events are uncorrelated. We also tracked the evolution of a few dominant Fourier modes of the flow, and found that the signs of all the odd-odd (OO) modes switch after LSC reversals, while the other modes do not switch their signs. Moreover, the statistical properties of the OO modes are observed to be very similar to those of $u_z(\mathrm{LP})$; in particular, their integral time scales are approximately $350 \, t_f$, their probability distributions are bimodal, and their power spectra exhibit $1/f^2$ scaling for a wide range of frequencies. On the other hand, the statistical properties of $u_z(\mathrm{CP})$ are similar to those of the $\hat{u}_z(2,1)$ mode, which is because $u_z(\mathrm{CP})$ gets its most dominant contribution from the $\hat{u}_z(2,1)$ mode. Additionally, we computed the temporal structure functions for $u_z(\mathrm{LP})$ and the OO modes and found that they exhibit anomalous scaling, even for the lower order structure functions. However, the intermittency is not very strong, since the scaling exponents do not deviate much from those of three-dimensional hydrodynamic turbulence~\cite{Kolmogorov:DANS1941a}. We also computed the flatness factor for $u_z(\mathrm{LP})$ and the OO modes, and observed that it is nearly constant in the small $\tau$ and in the decorrelation regimes, whereas it scales as $\tau^{-0.50 \pm 0.05}$ in the intermediate $\tau$ regime. \section*{Acknowledgement} We thank Sagar Chakraborty and Manu Mannattil for fruitful discussions. The simulations were performed on the {\sc Newton} cluster at the Department of Physics, IIT Kanpur.
{ "timestamp": "2018-08-30T02:12:13", "yymm": "1804", "arxiv_id": "1804.05194", "language": "en", "url": "https://arxiv.org/abs/1804.05194" }
\section{Introduction} Let $(V,\gen{\blk,\blk})$ be a complex Hilbert space, $\End V$ the set of bounded linear operators on $V$, and $\End^+V$ the set of all positive invertible elements of $\End V$. Let $\overline{M}$ be a compact Riemann surface with boundary ($\partial M$ is automatically a real analytic manifold by the reflection principle). On the bundle $\overline{M}\times V \to \overline{M}$, a hermitian metric $h$ is a collection of hermitian inner products $h_z$ on $V$, for $z\in \overline{M}$, and it can be written $h_z(v,w)=\gen{P(z)v,w}$ with $P: \overline{M} \to \End^+V$, $v$ and $w\in V$. Assume $P$ is $C^2$; then the Chern connection of the metric is $P^{-1}\partial P$, and the curvature is $R^P=\bar{\partial}(P^{-1}\partial P)=P^{-1}(P_{z\bar{z}}-P_{\bar{z}}P^{-1}P_z)d\bar{z}\wedge dz$ in a chart. In this paper, we will address the Dirichlet problem of extending a given metric on $\partial M \times V$ to a metric on $\overline{M}\times V$ that has zero curvature. Our main result is the following: \begin{thm}\label{thm:4} Let $\overline{M}$ be a compact Riemann surface with boundary and $F\in C^{m}(\partial M, \End^+V)$, where $m=0,\infty, \text{or }\omega$. There exists a unique $P\in C^{m}(\overline{M}, \End^+V)\cap C^2(M,\End^+V)$ such that $R^P=0$ on $M$, and $P|_{\partial M}=F$. The same is true if we replace $C^m$ by $C^{k,\alpha}$ for $k$ a nonnegative integer and $0<\alpha<1$. \end{thm} We briefly mention previous work for the case $\dim V <\infty$. Masani and Wiener prove a factorization result in \cite{wiener1957prediction} which can be used to solve the Dirichlet problem over the unit disc, with regularity weaker than continuous. In \cite{MR660145}, Lempert extends this factorization to H\"older classes. More generally, in \cite{MR1165874}, Donaldson solves a Dirichlet problem for the Hermitian Yang--Mills equations over K\"{a}hler manifolds with boundary, and in \cite{MR1216432} Coifman and Semmes solve it over domains in $\mathbb{C}^n$ which are regular for the Laplacian. When the base is one dimensional, Donaldson's and Coifman--Semmes' results reduce to the existence of flat hermitian metrics. (Coifman and Semmes also solve a Dirichlet problem for norms more general than those coming from hermitian metrics. See also a more recent related paper \cite{2016arXiv160706306B}.) Devinatz \cite{devinatz1961factorization} and Douglas \cite{douglas1966factoring} generalize the Wiener--Masani factorization to infinite dimensional separable $V$, with the base still the unit disc (see also \cite[Lecture XI]{MR0171178}). For a general $V$ and various regularity classes, the Dirichlet problem over the unit disc is solved by Lempert in \cite{MR3738363}. Lempert's proof is by the continuity method and proceeds via a global factorization of flat metrics. However, such a factorization is not available when the base is multiply connected. Our proof is also by the continuity method. Closedness is proved by a maximum principle and a local holomorphic factorization of flat metrics. Openness turns out to be harder than usual, because Fredholm theory is not available to deal with the linear partial differential equation arising from the implicit function theorem. However, the linear equation has various symmetries that we can exploit to obtain the requisite a priori estimates. For details, see section \ref{est}. The structure of this paper is as follows.
In sections \ref{pre} and \ref{est}, we collect a few preliminary lemmas and provide a priori estimates for both the nonlinear equation $R^P=0$ and its linearization. In section \ref{sec:pf}, we prove Theorem \ref{thm:4}. In section \ref{sec:db}, we prove a global factorization for flat hermitian metrics on doubly connected domains, an analog of \cite[Theorem 3.1]{MR3738363}. We consider annuli for convenience. In what follows, $\End^{\times}V$ is the set of invertible elements in $\End V$, and $\End^{\operatorname{self}} V$ is the set of self-adjoint operators. \begin{thm}\label{thm:1.2} Let $M=\{z\in \mathbb{C}:r_1<|z|<r_2\}$, and $F\in C(\partial M,\End^+V)$. There exist $H\in \mathcal{O}(M,\End^{\times}V)$ and $a\in \End^{\operatorname{self}} V$ such that the function \begin{equation*} P(z)= \begin{cases} H^*(z) \exp(a\log|z|^2) H(z) &\text{for $z\in M$} \\ F(z) &\text{for $z \in \partial M$} \end{cases} \end{equation*} is in $C(\overline{M},\End^+ V)$. Moreover, if $F\in C^{k,\alpha}(\partial M,\End^+V)$ for $k$ a nonnegative integer and $0<\alpha<1$, then $H$ extends to a function in $C^{k,\alpha}(\overline{M},\End^{\times}V)$. \end{thm} Straightforward calculations show that $P$ in the above theorem has curvature 0: on a simply connected subset of $M$, choosing a branch of $\log z$ and using that $a$ is self-adjoint, $\exp(a\log|z|^2)=\left(e^{a\log z}\right)^*e^{a\log z}$, so $P=K^*K$ with $K=e^{a\log z}H$ holomorphic, and such metrics are flat. We conjecture similar factorizations to exist in $m$-connected domains. However, when $m$ is at least three, the fundamental group of $M$ is nonabelian, in addition to our already noncommutative operators, and we have not been able to write down a meaningful factorization. Such factorizations might provide another proof of Theorem \ref{thm:4} without invoking a priori estimates, as in \cite{MR3738363}. As for nontrivial Hilbert bundles, it is known that such bundles can be trivialized over open Riemann surfaces. It is likely that this is true also over Riemann surfaces with boundary, but we do not pursue this question in this paper. I would like to thank Chi Li, Hengrong Du, and Seongjun Choi for discussions and their critical comments, and Carlos Salinas for his suggestions on the presentation of this paper. I am grateful to L\'aszl\'o Lempert for introducing me to this problem, and for his constant encouragement, discussions, and inspiration. \section{Preliminary lemmas}\label{pre} We will deal with spaces of maps with values in $\End V$, such as $C^{k,\alpha}(\overline{M},\End V)$, and we briefly indicate what they are. First, $\End V$ with the operator norm $||\cdot||_{\op}$ is a Banach space. Smoothness for maps with values in $\End V$ will always refer to this Banach space topology. $\mathcal{O}(M,\End V)$ denotes the space of holomorphic maps, those that are complex differentiable in charts. Similarly, given a smooth manifold $\overline{N}$, possibly with boundary, if $k=0,1,2,\ldots$ and $0<\alpha<1$, $C^k(\overline{N},\End V)$ and $C^{k,\alpha}(\overline{N}, \End V)$ consist of maps that are $C^k$, respectively $C^{k,\alpha}$, in charts. These two can be given a Banach algebra structure, if $\overline{N}$ is compact and a finite open cover $\{U_i\}$ of $\overline{N}$ is fixed so that each $\overline{U_i}$ is contained in a chart. For $f\in C^{k,\alpha}(\overline{N},\End V)$, say, one just computes the corresponding H\"older norms in each $U_i$ using the local coordinates, and defines $||f||_{k,\alpha,N}$ as the sum of those H\"older norms. With a suitable scaling it can be arranged that $||\cdot||_{k,\alpha,N}$ is submultiplicative, namely, $C^{k,\alpha}(\overline{N}, \End V)$ is a Banach algebra.
Similarly, $C^k(\overline{N},\End V)$ also carries a Banach algebra structure. For more details, see \cite[p.610]{MR3738363}. We set $C^{\infty}=\cap_k C^k$, and also write $C$ for $C^0$. Finally, if $\overline{N}$ is a real analytic manifold, we denote by $C^{\omega}(\overline{N},\End V)$ the space of real analytic maps, those that can be expanded at each point of $\overline{N}$ in a power series in a chart. In traditional potential theory, a real-valued harmonic function on a simply connected open set in $\mathbb{C}$ is the real part of a holomorphic function, unique up to a purely imaginary additive constant. There is a corresponding result in noncommutative potential theory. \begin{lem}\label{l,3} If $M$ is a simply connected Riemann surface and $P\in C^2(M, \End^+V)$ is flat, namely $R^P=0$, then $P=H^*H$ where $H\in \mathcal{O}(M,\End^{\times}V)$. If $P=K^*K$ is also such a factorization, then $H=UK$, where $U\in \End V$ is unitary. \end{lem} If $\dim V=1$, we recover the traditional result by taking logarithms. \begin{proof} This lemma is actually true for $M$ a simply connected complex manifold, see \cite[Chapter \RN{5}.6]{demailly1997complex}. Although the bundle is of finite rank there, the proof carries over to infinite rank easily. \end{proof} \begin{lem}\label{lem:2} Let $\overline{M}$ be a compact Riemann surface with boundary, $P_j\in C(\overline{M},\End^+V)\cap C^2(M, \End^+V)$, and $R^{P_j}=0$, $j\in \mathbb{N}$. If $P_j|_{\partial M}$ converges in $C(\partial M, \End^+V)$, then $P_j$ converges in $C(\overline{M}, \End^+V)$. \end{lem} \begin{proof}It is basically the same as the proof of \cite[Corollary 3.3]{MR3738363}.\end{proof} \begin{lem}\label{lem:3}Let $D\subset \mathbb{C}$ be the unit disc, $H_j\in \mathcal{O}(D,\End^{\times}V)$, and $H_j(0)\in \End^+V$. If $H_j^*H_j$ converges to some $P\in C(D,\End^+V)$, then there exists $H\in \mathcal{O}(D,\End^{\times}V)$ such that $H_j$ converges, locally uniformly, to $H$ on $D$.\end{lem} \begin{proof} See the proof of \cite[Theorem 3.1]{MR3738363}.\end{proof} \section{A priori estimates}\label{est} Fix a smooth positive $(1,1)$-form $\omega$ on $\overline{M}$ and define a map $\Lambda$ sending $(1,1)$-forms to functions: $\Lambda(\phi)=-\phi/\omega$, for a $(1,1)$-form $\phi$. Locally, $\omega =\sqrt{-1}g dz\wedge d\bar{z}$, where $g$ is a positive smooth function, so if $\phi=vdz\wedge d\bar{z}$ locally, then $\Lambda (\phi)=\sqrt{-1}v/g$. Fix $0<\alpha<1$, assume $P\in C^{2,\alpha}(\overline{M},\End^+V)$ is flat, and let $A=P^{-1}\partial P$. We associate the following differential operator with $P$: \begin{align*} L:C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)&\longrightarrow C^{\alpha}(\overline{M}, \End^{\operatorname{self}} V)\\ h&\longmapsto \sqrt{-1}\Lambda(\bar{\partial}\partial h-A^*\wedge \partial h-\bar{\partial}h\wedge A+A^*\wedge h\wedge A). \end{align*} On a chart, $Lh=(1/g) \mathscr{L}h$, where $\mathscr{L}h=h_{z\bar{z}}-P_{\bar{z}}P^{-1}h_z-h_{\bar{z}}P^{-1}P_z+P_{\bar{z}}P^{-1}hP^{-1}P_z$. The reason for studying $L$ is that it is the linearization of the curvature, as we shall see in section \ref{sec:pf}. The main result in this section is \begin{thm}\label{cor:11} If $h\in C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$ and $h|_{\partial M}=0$, then $$\|h\|_{2,\alpha, M}\leq C\|Lh\|_{0,\alpha,M}$$ where $C=C(\|P\|_{2,\alpha},\|P^{-1}\|_{0,\alpha})$. \end{thm} We begin with a somewhat standard estimate.
\begin{lem}\label{lem:9} If $h\in C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$ and $h|_{\partial M}=0$, then $$\|h\|_{2,\alpha, M}\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M})$$ where $C=C(\|P\|_{2,\alpha},\|P^{-1}\|_{0,\alpha})$. \end{lem} The prominent feature of $L$ is the following. On a simply connected open set, we have $H^*PH=1$ with holomorphic $H$ by Lemma \ref{l,3}, and it turns out that $$\frac{1}{2}\Delta(H^*hH)= -H^*(Lh)H.$$ Here $\Delta$ is the Laplace operator with respect to $\omega$, and we use the fact that $\Delta$ when acting on functions is the same as $2\sqrt{-1}\Lambda \bar{\partial}\partial$. Therefore, modulo a gauge transformation $H$, $L$ is the Laplace operator, locally. In a chart, the above equality becomes $(H^*hH)_{z\bar{z}}=H^*(\mathscr{L}h)H$; this follows from a direct computation using $P^{-1}P_z=-H_zH^{-1}$ and $P_{\bar{z}}P^{-1}=-{H^*}^{-1}(H^*)_{\bar{z}}$, both consequences of $H^*PH=1$ and the holomorphy of $H$. We will exploit this to reduce Lemma \ref{lem:9} to the corresponding estimates for scalar-valued elliptic partial differential equations. If $L$ had a nonpositive zero order term, general theory would imply $\|h\|_{0, M}\leq C\|Lh\|_{0, M}$, which together with Lemma \ref{lem:9} would give Theorem \ref{cor:11}. Nonetheless, the zero order term of $L$ has the opposite sign. To get around this problem we first prove a maximum principle, Lemma \ref{lem:12}, and observe that for $u\in C^2(\overline{M},\mathbb{C})$, $$L(u\cdot P)=\sqrt{-1}\Lambda (-\bar{\partial}\partial u\cdot P+uP\cdot R^P)=(-\frac{1}{2}\Delta u)P,$$ as $R^P=0$. A suitable choice of $u$ will put us in a position to use Lemma \ref{lem:12}, and Theorem \ref{cor:11} will follow quickly. \begin{proof}[Proof of Lemma \ref{lem:9}] Consider two finite open covers $\{U_i\},\{V_i\}$ of $\overline{M}$, such that $\overline{U_i},\overline{V_i}$ are contained in a chart $\phi_i$ for each $i$, and \begin{equation*} \begin{aligned} \text{for an interior chart, }&\begin{cases} \phi_i(U_i)=B(0,1)\\ \phi_i(V_i)=B(0,2), \end{cases}\\ \text{for a boundary chart, } &\begin{cases} \phi_i(U_i)=B(0,1)\cap \overline{\mathbb{H}}\\ \phi_i(V_i)=B(0,2)\cap \overline{\mathbb{H}}, \end{cases} \end{aligned} \end{equation*} where $\mathbb{H}\subset \mathbb{C}$ is the upper-half plane. We use $\{U_i\}$ to define the norm on $C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$ and $\{V_i\}$ on $C^{\alpha}(\overline{M},\End^{\operatorname{self}} V)$. Since our arguments will be local, we can assume $U_i,V_i$ are already in $\mathbb{C}$ and $\phi_i$ is the identity. We first consider a boundary chart $\phi_i$. As mentioned above, $(H^*hH)_{z\bar{z}}=H^*(\mathscr{L}h)H$, where $H$ is a holomorphic function in the interior of this chart with $H^*PH=1$. As $P$ is $C^{2,\alpha}$ up to the boundary of $\overline{M}$, so is $H$, according to \cite[Theorem 3.7]{MR3738363}. Consider a bounded linear functional $l\in (\End V)^*$ of norm one, and apply $l$ to the equation, obtaining $[l(H^*hH)]_{z\bar{z}}=l(H^*(\mathscr{L}h)H)$, a scalar-valued equation. Denote $\phi_i(U_i)=B'$ and $\phi_i(V_i)=B''$. By \cite[Lemma 6.5 or Corollary 6.7]{MR1814364}, \begin{equation}\label{eq:1} \begin{aligned}\|l(H^*hH)\|_{2,\alpha, B'}\leq C(\|l(H^*hH)\|_{0,B''}+\|l(H^*(\mathscr{L}h)H)\|_{0,\alpha,B''}), \end{aligned} \end{equation} where $C$ is a uniform constant. We can get rid of $l$ and $H$ to obtain $$\|h\|_{2,\alpha, B'}\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M}).$$ Indeed, at each point in $B''$, $$|l(H^*hH)|\leq \|H^*hH\|_{\op}\leq \|H\|_{\op}^2\|h\|_{\op}\leq C\|h\|_{\op}.$$ The last inequality follows from $P^{-1}=HH^*$.
Similarly, \begin{align*} \|l(H^*(\mathscr{L}h)H)\|_{0,\alpha,B''} &\leq \|H^*(\mathscr{L}h)H\|_{0,\alpha,B''}\\ &\leq \|H\|^2_{0,\alpha,B''}\cdot \|\mathscr{L}h\|_{0,\alpha,B''}\\ &\leq C\|H\|^2_{0,\alpha,B''}\cdot \|Lh\|_{0,\alpha,M}\\ &\leq C(\|H\|^2_{0}+\|H_z\|^2_{0})\cdot \|Lh\|_{0,\alpha,M}\\ &\leq C\|Lh\|_{0,\alpha,M}. \end{align*} The third inequality is by the definition of the $C^{\alpha}$ norm on $M$. The last inequality follows from $H_z=-P^{-1}P_zH$. Therefore, the right hand side of (\ref{eq:1}) is dominated by $C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M})$. Namely, \begin{align*} C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M})\geq &\|l(H^*hH)\|_{2,\alpha, B'} \\ =&\|l(H^*hH)\|_{0}+\|Dl(H^*hH)\|_{0}+\|D^2l(H^*hH)\|_{0,\alpha}, \end{align*} where $D$ stands for first order and $D^2$ for second order derivatives. Hence, for $x\in B'$, $$|Dl(H^*hH)(x)|\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M})$$ and by the Hahn--Banach Theorem $$\|D(H^*hH)(x)\|_{\op}\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M}).$$ As a consequence, \begin{align*} C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M}) &\geq\|DH^*hH+H^*DhH+H^*hDH \|_{0,B'}\\ &\geq \|H^*DhH\|_{0,B'}-\|H^*hDH \|_{0,B'}-\|DH^*hH\|_{0,B'}\\ &\geq \|H^*DhH\|_{0,B'}-C\|h\|_{0,B'}. \end{align*} So $$C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M})\geq \|H^*DhH\|_{0,B'}.$$ Since \begin{align*} \|Dh\|_{0,B'}&\leq \|H^*DhH\|_{0,B'}\|{H}^{-1}\|_{0,B'}^2=\|H^*DhH\|_{0,B'}\|P\|_{0,B'}\leq C\|H^*DhH\|_{0,B'}, \end{align*} we have $$\|Dh\|_{0, B'}\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M}).$$ We can estimate the second derivatives and their H\"older norms similarly, and obtain \begin{equation}\label{eq:2} \|h\|_{2,\alpha, B'}\leq C(\|h\|_{0,M}+\|Lh\|_{0,\alpha,M}). \end{equation} We next consider an interior chart $\phi_i$. As before, $[l(H^*hH)]_{z\bar{z}}=l(H^*(\mathscr{L}h)H)$. We let $\phi_i(U_i)=B'$ and $\phi_i(V_i)=B''$. By \cite[Corollary 6.3]{MR1814364}, \begin{equation*} \begin{aligned} \|Dl(H^*hH)\|_{0,B'}+\|D^2l(H^*hH)\|_{0,B'}+[D^2l(H^*hH)]_{\alpha,B'}\\ \leq C\big[\|l(H^*hH)\|_{0,B''}+\|l(H^*(\mathscr{L}h)H)\|_{0,\alpha, B''}\big]. \end{aligned} \end{equation*} Using the same method as in the boundary charts, we can get rid of $l$ and $H$ to obtain the same estimate (\ref{eq:2}). Hence the lemma follows. \end{proof} We next prove a maximum principle, which in turn gives rise to $C^0$ estimates. Recall that $\gen{\blk,\blk}$ is the inner product of $V$, and denote $\|v\|^2_{P(z)}= \langle P(z)v,v\rangle$. \begin{lem}\label{lem:12} Suppose $h\in C^2({M}, \End^{\operatorname{self}} V)$. Define $$S_{P,h}(z)=\sup_{\|v\|_{P(z)}=1}\langle h(z)v,v\rangle.$$ If $Lh\geq 0$, then $S_{P,h}(z)$ is subharmonic. As a result, if additionally $h$ is continuous on $\overline{M}$, then $$\sup_{\overline{M}}S_{P,h}=\sup_{\partial M}S_{P,h}.$$ \end{lem} \begin{proof} First, \begin{align*} S_{P,h}(z)&=\sup_{\langle P(z)v,v\rangle=1}\langle h(z)v,v\rangle=\sup_{\|{P(z)}^{1/2}v\|=1}\langle{P(z)}^{-1/2}h(z){P(z)}^{-1/2}{P(z)}^{1/2}v,{P(z)}^{1/2}v\rangle\\ &=\sup_{\|u\|=1}\langle{P(z)}^{-1/2}h(z){P(z)}^{-1/2}u,u\rangle \end{align*} is continuous, as the sup of a family of equicontinuous functions. Locally, we have $H^*PH=1$ and $(H^*hH)_{z\bar{z}}=H^*(\mathscr{L}h)H$; furthermore, $0\leq Lh=(1/g)\cdot \mathscr{L}h$ means $\mathscr{L}h\geq 0$. Since $$0\leq \langle(\mathscr{L}h)Hv,Hv\rangle=\langle(H^*hH)_{z\bar{z}}v,v\rangle,$$ $\langle(H^*hH)v,v\rangle$ is subharmonic for any $v\in V$.
Thus, \begin{align*} S_{P,h}(z)=\sup_{\langle P(z)v,v\rangle=1}\langle h(z)v,v\rangle =\sup_{\langle H^{-1}v,H^{-1}v\rangle=1}\langle h(z)v,v\rangle =\sup_{\langle u,u\rangle=1}\langle H^*hH(z)u,u\rangle \end{align*} is the sup of a family of subharmonic functions. Since we already know that $S_{P,h}(z)$ is continuous, it is subharmonic. \end{proof} \begin{thm}\label{thm:10} If $h\in C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$ and $h|_{\partial M}=0$, then $$\|h\|_{0, M}\leq C\|Lh\|_{0, M}$$ where $C=C(\|P\|_0, \|P^{-1}\|_0)$. \end{thm} \begin{proof} Recall that if $u\in C^2(\overline{M},\mathbb{C})$, then $$L(u\cdot P)=(-\frac{1}{2}\Delta u)P.$$ Let $\Phi$ be the function vanishing on $\partial M$ such that $\Delta \Phi =2$, and let $G=(\Phi-\inf \Phi)\|P^{-1}\|_0P$. Then $G\geq 0$ with $L(G)=-\|P^{-1}\|_0P\leq -1$. In addition, $G\leq C$, where $C$ depends on $\|P\|_0$ and $\|P^{-1}\|_0$. With $F=G\cdot \|Lh\|_{0}$, we have $h\leq F$ on $\partial M$. Moreover, \begin{align*} L (h-F)&=Lh-\|Lh\|_{0}\cdot LG\geq Lh+\|Lh\|_{0}\geq 0. \end{align*} By Lemma \ref{lem:12}, $h-F\leq 0$ on $M$. Therefore, $$h\leq G\cdot \|Lh\|_{0}\leq C\|Lh\|_{0}.$$ Replacing $h$ by $-h$, the theorem follows. \end{proof} Theorem \ref{cor:11} is a consequence of Lemma \ref{lem:9} and Theorem \ref{thm:10}. \section{Proof of Theorem \ref{thm:4}}\label{sec:pf} We start with a regularity result. \begin{lem}\label{lem R} Let $P\in C(\overline{M}, \End^+V)\cap C^2(M,\End^+V)$ be flat. If $P|_{\partial M}$ is $C^{k,\alpha}$, $C^{\infty}$, or $C^{\omega}$, then $P$ has the corresponding regularity on $\overline{M}$. \end{lem} \begin{proof} By Lemma \ref{l,3}, locally $P=H^*H$ with a holomorphic map $H$, so $P$ is always $C^{\omega}$ in $M$ regardless of its boundary values. Denote $P|_{\partial M}$ by $F$. Suppose $F\in C^{k,\alpha}$; then on a boundary chart $P=H^*H$, and $H$ is $C^{k,\alpha}$ up to $\partial M$ by \cite[Theorem 3.7]{MR3738363}; therefore, $P$ is $C^{k,\alpha}$ up to $\partial M$. Next, suppose $F$ is $C^{\infty}$; then by the $C^{k,\alpha}$ result, $P$ is $C^k$ up to $\partial M$ for every positive integer $k$, hence $C^{\infty}$. Finally, suppose $F\in C^{\omega}$. On a boundary chart, which we identify with the upper-half disc in $\mathbb{C}$, $P=H^*H$ with $H$ continuous up to the real axis by \cite[Theorem 3.7]{MR3738363}. Since $F\in C^{\omega}$, it has a holomorphic extension in a neighborhood of the real axis in the disc, so the map ${H^*}^{-1}(\bar{z})\cdot F(z)$ provides a holomorphic extension of $H$ across the real axis; it follows that $P$ is real analytic across the real axis. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:4}] The uniqueness follows from the maximum principle (see \cite[Lemma 3.2]{MR3738363} or \cite{MR3314125}). We consider first the case $F\in C^{\omega}$ and prove the existence by the continuity method. Fix $0<\alpha<1$, let $\phi_{t}=tF+(1-t)\textrm{Id}$, and $$T=\left\{ t\in[0,1] \:\middle|\: \begin{gathered} \text{ If } 0\leq s\leq t, \text{ then } \phi_s= P_s|_{\partial M}, \\ \text{ for some } P_s\in C^{2,\alpha}(\overline{M},\End^+V), \text{ and } R^{P_s}=0 \end{gathered} \right\}.$$ We will say those $\phi_s$ ``have an extension.'' The goal is to show $T=[0,1]$. If so, $\phi_{1}=F$ has a $C^{2,\alpha}$ extension, and we can improve the regularity from $C^{2,\alpha}$ to $C^{\omega}$ by Lemma \ref{lem R}. Since $0\in T$, $T$ is nonempty. First we prove that $T$ is closed. Suppose $T \ni t_j\to t_0$. For $s<t_0$, we can find $t_j>s$; therefore $\phi_s$ has an extension.
We have to show that $\phi_{t_0}$ extends. For brevity, we write $P_j$ instead of $P_{t_j}$. Since $P_j|_{\partial M }=\phi_{t_j} \to \phi_{t_0}$, $P_j$ converges by Lemma \ref{lem:2}, say to $P_{\infty}\in C(\overline{M}, \End^+V)$, and $P_{\infty}|_{\partial M}=\phi_{t_0}$. For any interior point of $\overline M$, choose a chart with image the unit disc $D$ in $\mathbb{C}$. Thus, $P_j=H^*_jH_j$, where $H_j\in \mathcal{O}(D, \End^{\times}V)$ by Lemma \ref{l,3}, and after multiplying by the unitary operator $(H^*_j(0)H_j(0))^{1/2}H^{-1}_j(0)$ we can assume $H_j(0)\in \End^+V$. By Lemma \ref{lem:3}, there exists $H$ holomorphic on $D$ such that $H_j\to H$ locally uniformly. Hence, $P_{\infty}=\lim H_j^*H_j = H^*H$ on $D$, which implies $P_{\infty}\in C^{\infty}(M,\End^+V)$ and $R^{P_{\infty}}=0$. By Lemma \ref{lem R}, $P_{\infty}$ is $C^{\omega}$, in particular $C^{2,\alpha}$, on $\overline{M}$. Hence, $t_0$ is in $T$ and $T$ is closed. Now we prove that $T$ is open. If $t_0\in T$, then $\phi_t$ has an extension $P_t$ for $0\leq t\leq t_0$. Consider the smooth map \begin{align*} \Psi:C^{2,\alpha}(\overline{M},\End^+V)&\to C^{\alpha}(\overline{M}, \End^{\operatorname{self}} V)\times C^{2,\alpha}(\partial M,\End^+V)\\ h&\mapsto (\sqrt{-1}\Lambda (h\bar{\partial}(h^{-1}\partial h)),h|_{\partial M}). \end{align*} Then $\Psi(P_{t_0})=(0,\phi_{t_0})$. We denote $P^{-1}_{t}\partial P_{t}=A_{t}$, so the linearization of $\Psi$ at $P_{t_0}$ is \begin{align*} C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)&\to C^{\alpha}(\overline{M}, \End^{\operatorname{self}} V)\times C^{2,\alpha}(\partial M,\End^{\operatorname{self}} V)\\ h&\mapsto (\sqrt{-1}\Lambda(\bar{\partial}\partial h-A^*_{t_0}\wedge \partial h-\bar{\partial}h\wedge A_{t_0}+A^*_{t_0}\wedge h\wedge A_{t_0}),h|_{\partial M}). \end{align*} It is here that the operator from section \ref{est} appears. We will show that the linearization is an isomorphism. Then $\Psi$ is a diffeomorphism in a neighborhood of $P_{t_0}$ by the implicit function theorem, and this implies that $T$ is open. To show that the linearization is an isomorphism, it suffices to prove that it is bijective, because of the Open Mapping Theorem. That is, given $$(f_1,f_2)\in C^{\alpha}(\overline{M}, \End^{\operatorname{self}} V)\times C^{2,\alpha}(\partial M,\End^{\operatorname{self}} V),$$ the equation \begin{equation}\label{eq4.1} \begin{cases} \sqrt{-1}\Lambda(\bar{\partial}\partial h-A^*_{t_0}\partial h-\bar{\partial}h A_{t_0}+A^*_{t_0}hA_{t_0})=f_1\\ h|_{\partial M}=f_2 \end{cases} \end{equation} has a unique solution. That there is at most one solution easily follows from the maximum principle, Lemma \ref{lem:12} or Theorem \ref{cor:11}. If $\dim V <\infty$, existence follows from uniqueness by the Fredholm alternative. However, if $\dim V=\infty$, the Fredholm alternative is not available, because the embedding $C^{2,\alpha}(\overline{M},\End V)\rightarrow C^{\alpha}(\overline{M},\End V)$ is no longer compact. The way we solve (\ref{eq4.1}) is again the continuity method, based on the next lemma: \begin{lem}\label{l,14} Let $B,V$ be two Banach spaces, and $\{L_t\}_{0\leq t\leq 1}$ a family of bounded linear operators from $B$ to $V$. Suppose $t\mapsto L_t$ is continuous in the operator norm; moreover, suppose there exists a constant $C$ such that $\|x\|\leq C\|L_tx\|$ for any $x\in B$ and any $t$. Then $L_1$ is onto if and only if $L_0$ is onto.\end{lem} This is a variant of \cite[Theorem 5.2]{MR1814364}. The proof is almost the same, so we skip it.
Unsurprisingly, we are going to deform our equation to the Laplace equation. The naive way of deforming is by convex combination, but this breaks the symmetry of our equation (after all, we want to use the a priori estimates from Theorem \ref{cor:11}). It is here that the solution set $T$ plays its role: it tells us how to deform. First, in equation (\ref{eq4.1}), $f_2$ can be extended to $C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$. If we subtract $f_2$ from $h$, we only need to consider the case of zero boundary value. In other words, we have to show that \begin{equation}\label{eq4.2} \begin{aligned} L_t :\{h\in C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V):h|_{\partial M}=0\}&\to C^{\alpha}(\overline{M}, \End^{\operatorname{self}} V)\\ h&\mapsto \sqrt{-1}\Lambda(\bar{\partial}\partial h-A^*_{t}\wedge \partial h-\bar{\partial}h\wedge A_{t}+A^*_{t}\wedge h\wedge A_{t}) \end{aligned} \end{equation} is surjective when $t=t_0$. Note that $L_0=\Delta/2$, since $P_0=1$. We start with the following lemma, which is stronger than what we need. \begin{lem}\label{15} Let $k$ be a nonnegative integer. If $t,s\in [0,t_0]$ and $t\to s$, then $\|P_t-P_s\|_{C^k}\to 0$ and $\|{P_t}^{-1}-{P_s}^{-1}\|_{C^k}\to 0$. \end{lem} \begin{proof} By Lemma \ref{lem R}, $P_t\in C^k(\overline{M}, \End^+V)$. Since $P_t|_{\partial M}=\phi_t\to \phi_s$, $P_t$ converges to $P_s$ in $C(\overline{M},\End^+V)$ by Lemma \ref{lem:2}. For the derivatives, we do estimates on charts and consider $\partial_z$ only, as $\partial_{\bar{z}}$ can be treated in the same way. On an interior chart, $P_t=H^*_tH_t$ and $P_s=H^*H$, where $H_t, H$ are holomorphic. As in the proof of closedness, $H_t\to H$ locally uniformly, and so do all their derivatives. Therefore, $$(P_t)_{z}=H^*_t(H_t)_{z} \to H^*H_z=(P_s)_{z}$$ locally uniformly. On a boundary chart, which again we identify with the upper-half disc in $\mathbb{C}$, we similarly have $(P_t)_{z}\to (P_s)_{z}$ locally uniformly, but only away from the boundary. The convergence near the boundary can be handled as follows. We have $P_t=H^*_tH_t$ with $H_t$ continuous up to the boundary of $\overline{M}$ (in the current situation, this means the real axis in the disc) by \cite[Theorem 3.7]{MR3738363}. Similarly, $P_s=H^*H$ with $H$ continuous up to the boundary of $\overline{M}$. As in the proof of Lemma \ref{lem R}, since $\phi_t$ is $C^{\omega}$, it has a holomorphic extension in a neighborhood of the real axis in the disc, and the map $${H^*_t}^{-1}(\bar{z})\cdot \phi_t(z)$$ provides an analytic continuation of $H_t$ across the real axis, which we continue to denote by $H_t$. For a compact set in the disc, consider a contour around it. By Cauchy's integral formula and the fact that $\|H_t\|$ has a uniform upper bound, the Bounded Convergence Theorem implies that $H_t$ converges to $H$ uniformly on this compact set, and the same holds for derivatives of all orders. Hence, $P_t\to P_s$ in $C^k$ for any nonnegative integer $k$, locally uniformly in this boundary chart. Therefore, we conclude the $C^k$ convergence on $\overline{M}$. Since $C^k(\overline{M},\End V)$ is a Banach algebra, ${P_t}^{-1}\to {P_s}^{-1}$ in $C^k$. \end{proof} This lemma implies that $\|L_t-L_s\|\to 0$ as $t\to s$, where the norm on $L_t$ is the operator norm from (\ref{eq4.2}). From Theorem \ref{cor:11} and the continuity provided by Lemma \ref{15}, we get the desired estimates: if $h\in C^{2,\alpha}(\overline{M},\End^{\operatorname{self}} V)$ and $h|_{\partial M}=0$, then $$\|h\|_{2,\alpha, M}\leq C\|L_th\|_{0,\alpha, M}$$ where $C$ is independent of $t$.
Therefore, by Lemma \ref{l,14} and the fact that $L_0=\Delta /2$ is onto, $L_{t_0}$ is also onto, which implies that equation (\ref{eq4.1}) is uniquely solvable; so $T$ is open and therefore $T=[0,1]$. This completes the proof of Theorem \ref{thm:4} in the $C^{\omega}$ case. If the boundary data $F$ is only $C^0$, it can be approximated by a sequence $F_j\in C^{\omega}(\partial M, \End^+ V)$ in the sup norm, for the following reason: $\partial M$ as a real analytic manifold can be real analytically embedded in some $\mathbb{R}^N$ by an embedding theorem of Grauert and Morrey \cite{MR0098847,10.2307/1970048}; $F$ has a continuous extension to $\mathbb{R}^N$, which can be approximated by polynomials $p_j$; after composing $p_j$ with the embedding, we have the desired $F_j$. Each $F_j$ has a real analytic flat extension $P_j$ according to the $C^{\omega}$ case. By Lemma \ref{lem:2}, $P_j$ converges in $C(\overline{M}, \End^+V)$, say to $P$. As in the proof of closedness, $P$ is $C^2$ in the interior and has curvature $0$. If $F$ is $C^{k,\alpha}$ or $C^{\infty}$, the $P$ constructed in the previous paragraph is $C^{k,\alpha}$, respectively $C^{\infty}$, on $\overline{M}$, by Lemma \ref{lem R}. \end{proof} \section{Factorization in doubly connected domains}\label{sec:db} We prove Theorem \ref{thm:1.2} in this section. \begin{proof} There exists a flat $P\in C(\overline{M}, \End^+V)$ with $P|_{\partial M}=F$ by Theorem \ref{thm:4}. The exponential map $e^{2\pi i z}$ is a universal covering map from the strip $$\{z\in \mathbb{C}:-\log r_2/2\pi <\Im(z)< -\log r_1/2\pi \}$$ to $M$. The composition $P(e^{2\pi i z})$ is flat on the strip, so by Lemma \ref{l,3}, $P(e^{2\pi iz})=H^*(z)H(z)$ where $H$ is holomorphic in the strip. Since $P(e^{2\pi iz})$ has period 1, $$H^*(z+1)H(z+1)=H^*(z)H(z).$$ This implies $${H^*}^{-1}(z+1)H^*(z)=H(z+1)H^{-1}(z).$$ But in the last equality, one side is holomorphic and the other is antiholomorphic, so both sides must equal a constant, say $U$. Moreover, $$(U^*)^{-1}={[H(z+1)H^{-1}(z)]^*}^{-1}={H^*}^{-1}(z+1)H^*(z)=U,$$ so $U$ is unitary, and we have $UH(z)=H(z+1)$. By the Borel functional calculus (see, for example, \cite[Chapter 12]{MR1157815}), $U=e^{iA}$ where $A\in \End^{\operatorname{self}}V$. Define $$K(z)=\exp\left(-iAz\right)\cdot H(z).$$ We have \begin{equation*} \begin{aligned} K(z+1)&=\exp\left(-iA(z+1)\right)\cdot H(z+1)=\exp\left(-iAz-iA\right)\cdot UH(z)\\ &=\exp\left(-iAz\right)\cdot H(z)=K(z). \end{aligned} \end{equation*} That is, $K(z)$ is periodic. As a result, $$P(w)=H^*H(\log w/2\pi i)=K^*(\log w/2\pi i)\cdot\exp\left[A(\log |w|^2/2\pi)\right]\cdot K(\log w/2\pi i).$$ Here $K(\log w/2\pi i)$ is single-valued, because $K$ is periodic. Since $A/2\pi$ is self-adjoint, we have the desired factorization. (If $F$ is $C^{k,\alpha}$, then so are $P$ and $H$, and therefore also $K$.) \end{proof} \bibliographystyle{amsalpha}
{ "timestamp": "2018-04-17T02:08:47", "yymm": "1804", "arxiv_id": "1804.05278", "language": "en", "url": "https://arxiv.org/abs/1804.05278" }
\section{Introduction} \rev{ Consider the quadratic optimization problem with indicator variables \[ \text{(QOI)} \ \ \ \min \bigg \{ a'x + b'y + y'Ay \ : \ (x,y) \in C, \ 0 \le y \le x, \ x \in \{0,1\}^N \bigg \}, \] where $N=\{1,\ldots,n\}$, $a$ and $b$ are $n$-vectors, $A$ is an $n \times n$ symmetric matrix and $C \subseteq \ensuremath{\mathbb{R}}^{N} \times \ensuremath{\mathbb{R}}^{N}$. Binary variables $x$ indicate a selected subset of $N$ and are often used to model non-convexities such as cardinality constraints and fixed charges. (QOI) arises in linear regression with best subset selection \citep{Bertsimas2016}, control \citep{Gao2011}, filter design \citep{Wei2013}, and portfolio optimization \citep{Bienstock1996} problems, among others. In this paper, we give strong convex relaxations for the related mixed-integer set \[ S=\big \{(x,y,t)\in \{0,1\}^N\times \ensuremath{\mathbb{R}}^{N}\times \ensuremath{\mathbb{R}}:y'Qy\leq t,\; 0 \le y_i\leq x_i \text{ for all }i\in N\big\}, \] where $Q$ is an M-matrix \citep{plemmons1977m}, i.e., $Q\succeq 0$ and $Q_{ij}\leq 0$ if $i\neq j$. M-matrices arise in the analysis of Markov chains \citep{markov-m}. Convex quadratic programming with an M-matrix is also studied in its own right \citep{qp-m}. Quadratic minimization with an M-matrix arises directly in a variety of applications including portfolio optimization \rev{with transaction costs} \citep{Lobo2007} and image segmentation \citep{Hochbaum2013}. There are numerous approaches in the literature for deriving strong formulations for (QOI) and $S$. \citet{DL:ipco-qp-ind} describe lifted inequalities for (QOI) from its continuous quadratic optimization counterpart over bounded variables. \citet{BM:conv-noncov} give a characterization of the linear inequalities obtained by strengthening gradient inequalities of a convex objective function over a non-convex set. Convex relaxations of $S$ can also be constructed from the mixed-integer epigraph of the bilinear function $\sum_{i\neq j}Q_{ij}y_iy_j$. There is an increasing amount of recent work focusing on bilinear functions \cite[e.g.,][]{boland2017bounding,boland2017extended,Luedtke2012}. However, the convex hull of such functions is not fully understood even in the continuous case. More importantly, considering the bilinear functions independently of the quadratic function $\sum_{i\in N}Q_{ii}y_i^2$ may result in weaker formulations for $S$. Another approach, applicable to general mixed-integer optimization, is to derive strong formulations based on disjunctive programming \citep{balas1985disjunctive,Ceria1999,stubbs1999branch}. Specifically, if a set is defined as the disjunction of convex sets, then its convex hull can be represented in an extended formulation using perspective functions. Such extended formulations, however, require creating a copy of each variable for each disjunction, and lead to prohibitively large formulations even for small-scale instances. There is also an increasing body of work on characterizing the convex hulls in the original space of variables, but such descriptions may be highly complex even for a single disjunction, e.g., see \cite{AN:conicmir:ipco,belotti2015conic,kilincc2015two,modaresi2016intersection}. } \rev{The convex hull of $S$ is well-known for a couple of special cases. When} the matrix $Q$ is diagonal, the quadratic function \rev{$y'Qy$} is separable and the convex hull of $S$ can be described using the \emph{perspective reformulation} \citep{Frangioni2006}.
This perspective formulation has a compact conic quadratic representation \citep{akturk2009strong,Gunluk2010} and is \rev{by} now a standard \rev{model strengthening} technique for mixed-integer nonlinear optimization \citep{BLTW:mp-indicator,HBCO:on-off,Mahajan2017,Wu2017}. In particular, a convex quadratic function $y'Ay$ is decomposed as $y'Dy+y'Ry$, where $A=D+R$, $D, R\succeq 0$, and $D$ is diagonal; then each diagonal epigraph constraint $D_{ii} y_i^2 \le t_i$, $i \in N$, is reformulated as its perspective counterpart $D_{ii} y_i^2 \le t_i x_i$. Such decomposition and strengthening of the diagonal terms are also standard for the binary restriction, where $y_i=x_i$, $i\in N$, in which case $x'Ax = \sum_{i\in N}D_{ii}x_i+x'Rx$ \citep[e.g.][]{anstreicher2012convex,poljak1995convex}. The binary restriction of $S$, where $y_i=x_i$ and $Q_{ij}\leq 0$, \rev{$i \neq j$,} is also well-understood, since in that case the quadratic function $x'Qx$ is submodular \citep{Nemhauser1978} and $\min \{a' x + x'Qx: x \in \{0,1\}^n\}$ is a minimum cut problem \rev{\citep{ivuanescu1965,picard1975minimum}} and, \rev{therefore}, is solvable in polynomial time. Whereas the set $S$ with an M-matrix is interesting on its own, the convexification results on $S$ can also be used to strengthen a general quadratic $y'Ay$ by decomposing $A$ as $A=Q+R$, where $Q$ is an M-matrix, \rev{and then applying the convexification results in this paper only on the $y'Qy$ term with negative off-diagonal coefficients}, generalizing the perspective reformulation approach above. \rev{We demonstrate this approach for portfolio optimization problems with negative as well as positive correlations through computations that indicate significant additional strengthening over the perspective formulation through exploiting the negative correlations.} The key idea for deriving strong formulations for $S$ is to decompose the quadratic function in the definition of $S$ as the sum of quadratic functions involving one or two variables: \begin{equation} \label{eq:quadraticDecomposition} y'Qy=\sum_{i=1}^n\left(\sum_{j=1}^nQ_{ij}\right)y_i^2-\sum_{i=1}^n\sum_{j=i+1}^n Q_{ij}(y_i-y_j)^2. \end{equation} Since a \rev{univariate} quadratic function with \rev{an indicator variable} is well-understood, we turn our attention to studying the mixed-integer set with two \rev{continuous and two indicator} variables: \begin{equation*} X=\left\{(x,y,t)\in \{0,1\}^2\times \ensuremath{\mathbb{R}}^2\times \ensuremath{\mathbb{R}}: (y_1-y_2)^2\leq t,\; 0 \le y_i\leq x_i, \ i=1,2\right\}. \end{equation*} \rev{\citet{FGH:2x2decomp} also construct strong formulations for (QOI) based on $2\times 2$ decompositions. In particular, they characterize quadratic functions that can be decomposed as the sum of convex quadratic functions with at most two variables. They utilize the disjunctive convex extended formulation for the mixed-integer quadratic set $$\hat{X}=\left\{(x,y,t)\in \{0,1\}^2\times \ensuremath{\mathbb{R}}^2\times \ensuremath{\mathbb{R}}: q(y)\leq t,\; 0 \le y_i\leq x_i, \ i=1,2\right\},$$ where $q(y)$ is a general convex quadratic function. The authors report that the formulations are weaker when the matrix $A$ is an M-matrix, and remark on the high computational burden of solving the convex relaxations due to the large number of additional variables. Additionally, \citet{Jeon2017} give conic quadratic valid inequalities for $\hat{X}$, which can be easily projected into the original space of variables, and demonstrate their effectiveness via computations.
However, a convex hull description of $\hat{X}$ in the original space of variables is unknown.} In this paper, we improve upon \rev{previous} results for the sets $S$ and $X$. In particular, our main contributions are ($i$) showing, under mild assumptions, that the minimization of a quadratic function with an M-matrix and \rev{indicator} variables is equivalent to a submodular minimization problem and, hence, solvable in polynomial time; ($ii$) \rev{giving} the convex hull description of $X$ \rev{in the original space of variables --- the resulting formulations for $S$ are at least as strong as the ones used by Frangioni et al. and require substantially fewer variables}; ($iii$) \rev{proposing} conic quadratic inequalities amenable to use with conic quadratic MIP solvers --- the proposed inequalities dominate the ones given by Jeon et al.; ($iv$) \rev{demonstrating} the \rev{strength and performance of the resulting formulations for (QOI)}. \vskip 1mm \noindent \textit{Outline.} The rest of the paper is organized as follows. In Section~\ref{sec:preliminaries} we review the previous results for $S$ and $X$. In Section~\ref{sec:convexHullUnbounded} we study the relaxations of $S$ and $X$, where the \rev{constraints $0 \le y_i\leq x_i$ are relaxed to $y_i(1-x_i)=0$,} and the related optimization problem. In Section~\ref{sec:convexHullBounded} we give the convex hull description of $X$. The convex hulls obtained in Sections~\ref{sec:convexHullUnbounded} and \ref{sec:convexHullBounded} cannot be immediately implemented with off-the-shelf solvers \rev{in the original space of variables}. Thus, in Section~\ref{sec:valid} we propose valid conic quadratic inequalities and discuss their strength. In Section~\ref{sec:extensions} we give extensions to quadratic functions with positive off-diagonal entries and continuous variables unrestricted in sign. In Section~\ref{sec:computations} we provide a summary of computational experiments and in Section~\ref{sec:conclusions} we conclude the paper. \paragraph{Notation} Throughout the paper, we use the following convention for division by $0$: $\nicefrac{0}{0}=0$ and $\nicefrac{a}{0}=\infty$ if $a>0$. In particular, the function $p:[0,1]\times \ensuremath{\mathbb{R}}_+\to \ensuremath{\mathbb{R}}_+$ given by $p(x,y)=\nicefrac{y^2}{x}$ is the closure of the perspective function of the quadratic function $q(y)=y^2$, and is convex \citep[e.g.][p. 160]{Hiriart2013}. For a set $X\subseteq \ensuremath{\mathbb{R}}^N$, $\text{conv}(X)$ denotes the convex hull of $X$. Throughout, $Q$ denotes an $n\times n$ M-matrix, i.e., $Q \succeq 0$ and $Q_{ij}\leq 0$ for $i\neq j$. \section{Preliminaries} \label{sec:preliminaries} In this section we briefly review the relevant results on the binary restriction of $S$ and the previous results on the set $X$. \subsection{The binary restriction of $S$} \label{subsec:binary} Let $S_B$ be the binary restriction of $S$, i.e., $y=x \in \{0,1\}^n$. In this case, the decomposition \begin{align} \label{eq:bin-decomp} x'Qx = \sum_{i=1}^n\left(\sum_{j=1}^nQ_{ij}\right)x_i^2-\sum_{i=1}^n\sum_{j=i+1}^n Q_{ij}(x_i-x_j)^2 \le t \end{align} leads to $\ensuremath{\text{conv}}(S_B)$, by simply taking the convex hull of each term.
Indeed, the quadratic problem $\min \big \{x'Qx: x \in\{0,1\}^n \big \}$ is equivalent to an undirected min-cut problem \cite[e.g.][]{picard1975minimum} and can be formulated as \[ \min \sum_{i=1}^n\left(\sum_{j=1}^nQ_{ij}\right)x_i - \sum_{i=1}^n\sum_{j=i+1}^n Q_{ij} t_{ij}: x_i - x_j \le t_{ij}, \ x_j - x_i \le t_{ij}, \ 0 \le x \le 1. \] Decomposition \eqref{eq:bin-decomp} leading to a simple convex hull description of $S_B$ in the binary case is \rev{our} main motivation for studying decomposition \eqref{eq:quadraticDecomposition} with the \rev{indicator} variables. \subsection{Previous results for set $X$} Here we review the valid inequalities of Jeon et al. \cite{Jeon2017} for $X$. Although their construction is not directly applicable \rev{as they assume a strictly convex function}, one can utilize it to obtain limiting inequalities. For $q(y)=y'Ay$ the inequalities of Jeon et al. are described via the inverse of the Cholesky factor of $A$. For $X$, however, we have $q(y)=(y_1-y_2)^2=y'Ay$, where $A=\left [\begin{smallmatrix} 1 & -1 \\ -1 & 1\end{smallmatrix} \right ]$ is a singular matrix, so the Cholesky factor is not invertible. If, instead, the matrix is given by $A= \left [\begin{smallmatrix} d_1 & -1 \\ -1 & d_2\end{smallmatrix} \right ]$ with $d_1,d_2> 1$, then their approach yields three valid inequalities: \begin{align*} d_2\frac{y_2^2}{x_2}-\frac{1}{d_1}x_1+\left(\frac{d_1d_2-1}{d_1}\right)\frac{y_2^2}{x_2}\leq t\\ (d_2-1)\frac{y_2^2}{x_2}+d_1\frac{y_1^2}{x_1}+\frac{x_2}{d_1}-2x_2\leq t\\ \left(\frac{d_1d_2-1}{d_1}\right)\frac{y_2^2}{x_2}+\frac{\left(\sqrt{d_1}y_1-\sqrt{\frac{1}{d_1}}y_2\right)^2}{x_1+x_2}\leq t. \end{align*} As $d_1, d_2 \rightarrow 1$, we arrive at three limiting valid inequalities for $X$. \begin{proposition}\label{prop:validJeon} The following convex inequalities are valid for $X$: \begin{align} \frac{y_2^2}{x_2}-x_1&\leq t,\label{eq:jeff1}\\ \frac{y_1^2}{x_1}-x_2&\leq t, \label{eq:jeff2}\\ \frac{\left(y_1-y_2\right)^2}{x_1+x_2}&\leq t. \label{eq:jeff3} \end{align} \end{proposition} \rev{For completeness, we verify here the validity of the limiting inequalities directly. The validity of inequality \eqref{eq:jeff1} is easy to see: observe that $\nicefrac{y_2^2}{x_2}\leq 1$ for $(x,y)\in X$; then, for $x_1=0$, \eqref{eq:jeff1} reduces to the perspective formulation for the quadratic constraint $y_2^2\leq t$, and for $x_1=1$ we have $\nicefrac{y_2^2}{x_2}-x_1\leq 0 \leq t$. The validity of inequality \eqref{eq:jeff2} is proven identically. Finally, inequality \eqref{eq:jeff3} is valid since it forces $y_1=y_2$ when $x_1=x_2=0$, and is dominated by the original inequality $(y_1-y_2)^2\leq t$ for other integer values of $x$.} Inequalities \eqref{eq:jeff1}--\eqref{eq:jeff3} are, \rev{however}, not sufficient to describe conv($X$). In the next two sections we describe conv($X$) and give new conic quadratic valid inequalities dominating \eqref{eq:jeff1}--\eqref{eq:jeff3} for $X$.
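As a purely illustrative sanity check of Proposition~\ref{prop:validJeon} (this short script is ours and is not part of the development; the argument above is what establishes validity), one can verify \eqref{eq:jeff1}--\eqref{eq:jeff3} numerically on a grid of points of $X$, using the division-by-zero convention from the Notation paragraph:

\begin{verbatim}
import itertools
import numpy as np

def ratio(num, den):
    # num^2 / den with the convention 0/0 = 0 used throughout the paper
    return 0.0 if den == 0 else num ** 2 / den

# enumerate binary x, sample y with 0 <= y_i <= x_i, and set t = (y1 - y2)^2
for x1, x2 in itertools.product([0, 1], repeat=2):
    for y1 in np.linspace(0.0, x1, 11):
        for y2 in np.linspace(0.0, x2, 11):
            t = (y1 - y2) ** 2
            assert ratio(y2, x2) - x1 <= t + 1e-9       # inequality (eq:jeff1)
            assert ratio(y1, x1) - x2 <= t + 1e-9       # inequality (eq:jeff2)
            assert ratio(y1 - y2, x1 + x2) <= t + 1e-9  # inequality (eq:jeff3)
print("the three limiting inequalities hold at all sampled points of X")
\end{verbatim}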
\section{The unbounded relaxation} \label{sec:convexHullUnbounded} In this section we study the unbounded relaxations of $S$ and $X$ obtained by dropping the upper bound on the continuous variables: \begin{align*} S_U&=\left\{(x,y,t)\in \{0, 1\}^N\times \ensuremath{\mathbb{R}}_+^{N}\times \ensuremath{\mathbb{R}}:y'Qy\leq t,\;y_i(1-x_i)=0 \text{ for all }i\in N\right\},\\ X_U&=\left\{(x,y,t)\in \{0,1\}^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}: (y_1-y_2)^2\leq t,\; y_i(1-x_i)=0,\; i=1,2\right\}.\end{align*} In Section~\ref{sec:optimizationUnbounded} we show that the minimization of a linear function over $S_U$ is equivalent to a submodular minimization problem \rev{and, consequently, solvable in polynomial time}. In Section~\ref{sec:inequalitiesUnbounded}, we describe $\ensuremath{\text{conv}}(X_U)$ and in Section~\ref{sec:validUnbounded} we use the results in Section~\ref{sec:inequalitiesUnbounded} to derive valid inequalities for $S_U$. \subsection{Optimization over $S_U$} \label{sec:optimizationUnbounded} We now show that the optimization of a linear function over $S_U$ can be solved in polynomial time under a mild assumption on the objective function. Consider the problem \begin{align*} \text{(P)} \ \ \ \min \left \{ a'x+b'y+t: (x,y,t) \in S_U \right \}, \end{align*} where $Q$ is a positive definite M-matrix and $b\leq 0$. We show that (P) is a submodular minimization problem. The positive definiteness assumption on $Q$ ensures that an optimal solution exists. Otherwise, if there is a nonzero $y \ge 0$ with $y'Qy = 0$, the problem may be unbounded. The assumption $b\leq 0$ is satisfied in most applications (e.g., see Sections~\ref{subsec:dualNetwork} and \ref{subsec:dense}). If $b > 0$, then $y=0$ in any optimal solution. \begin{proposition}[Characterization 15 in \cite{plemmons1977m}] A positive definite M-matrix $Q$ is \emph{inverse-positive}, i.e., its inverse satisfies $Q_{ij}^{-1}\geq 0$ for all $i,j$. \end{proposition} \begin{proposition} Problem (P) \rev{is equivalent to a submodular minimization problem and it is, therefore, solvable} in polynomial time. \end{proposition} \begin{proof} We assume that $a\geq 0$ (otherwise $x=1$ in any optimal solution) and that an optimal solution exists. Given an optimal solution $(x^*,y^*)$ to (P), let $T=\left\{i\in N: y_i^*>0\right\}$, and denote by $b_T$ the subvector of $b$ induced by $T$ and by $Q_T$ the submatrix of $Q$ induced by $T$. Then, from the KKT conditions, we find $ b_T+2Q_T y_T=0 \Leftrightarrow y_T=-\nicefrac{Q_T^{-1}b_T}{2} \cdot $ Thus, an optimal solution satisfies $b'y^*+{y^*}'Qy^*=-\frac{b_T'Q_T^{-1}b_T}{4} \cdot $ \rev{Consequently,} defining $\theta_{ij}:2^N\to \ensuremath{\mathbb{R}}$ for $i,j\in N$ as $ \theta_{ij}(T)= (Q_T^{-1})_{ij} \text{ if }i, j\in T \text{ and } 0 \text{ o.w.,} $ observe that (P) is equivalent to the binary minimization problem $$\min_{T\subseteq N} \ \ a(T)-\frac{1}{4}\sum_{i\in N}\sum_{j\in N}b_ib_j \theta_{ij}(T) \cdot$$ Note that since $Q_T$ is a positive definite $M$-matrix for any $T\subseteq N$, $Q_T= \mu I_T-P_T$, where $P_T$ is a nonnegative matrix and the largest eigenvalue of $P_T$ is less than $\mu$. By scaling, we may assume that $\mu=1$. Moreover, $Q_T^{-1}=(I-P_T)^{-1}=\sum_{\ell=0}^\infty P_T^{\ell}$ \cite[e.g.][]{Young81}. For $\ell\in \mathbb{Z}_+$ and all $i,j\in N$ let $ \bar \theta_{ij}^\ell(T)=(P_T^\ell)_{ij} \text{ if }i,j\in T, \text{ and } 0 \text{ o.w.} $ Note that $\theta_{ij}(T)=\sum_{\ell=0}^\infty \bar \theta_{ij}^\ell(T)$.
Finally, define for $k\in N$ and $T\subseteq N\setminus\{k\}$ the \rev{increment} function $\rho_{ij}^\ell(k,T)= \bar \theta_{ij}^\ell(T\cup\{k\})-\bar \theta_{ij}^\ell(T)$. \begin{claim} For all $i,j\in N$ and $\ell \in \ensuremath{\mathbb{Z}}_+$, $\bar \theta_{ij}^\ell$ is a monotone supermodular function. \end{claim} \begin{proof} The claim is proved by induction on $\ell$. $\bullet$ Base case, $\ell=0$: Let $k\in N$ and $T\subseteq N\setminus \{k\}$. Note that $P_T^0=I_T$. Thus $\rho_{kk}^0(k,T)=1$, and $\rho_{ij}^0(k,T)=0$ for all cases except $i=j=k$. Thus, the marginal contributions are constant and $\bar \theta_{ij}^0$ is supermodular. Monotonicity can be checked easily. $\bullet$ Induction step: Suppose $\bar \theta_{ij}^\ell$ is supermodular and monotone for all $i,j\in N$. Observe that $\bar \theta_{ij}^{\ell+1}(T)=\sum_{t\in N}\bar \theta_{it}^{\ell}(T)P_{tj}$ if $i,j\in T$ and $\bar \theta_{ij}^{\ell+1}(T)=0$ otherwise. Monotonicity of $\bar \theta_{ij}^{\ell+1}$ follows immediately from the monotonicity of the functions $\bar \theta_{it}^{\ell}$. Now let $k\in N$ and $T_1\subseteq T_2\subseteq N\setminus\{k\}$. To prove supermodularity, we check that $\rho_{ij}^{\ell+1}(k,T_2)-\rho_{ij}^{\ell+1}(k,T_1)\geq 0$ by considering all cases: \begin{description} \item[ $k\not\in \{i,j\}$] If $\{i,j\}\subseteq T_1$ then $\rho_{ij}^{\ell+1}(k,T_2)-\rho_{ij}^{\ell+1}(k,T_1)=\sum_{t\in N}(\rho_{it}^{\ell}(k,T_2)-\rho_{it}^{\ell}(k,T_1))P_{tj}\geq 0$ by supermodularity of functions $\bar \theta_{it}^\ell$; if $\{i,j\}\not\subseteq T_1$ and $\{i,j\}\subseteq T_2$ then $\rho_{ij}^{\ell+1}(k,T_2)-\rho_{ij}^{\ell+1}(k,T_1)=\rho_{ij}^{\ell+1}(k,T_2)\geq 0$ by monotonicity; finally, if $\{i,j\}\not\subseteq T_2$ then $\rho_{ij}^{\ell+1}(k,T_2)-\rho_{ij}^{\ell+1}(k,T_1)=0$. \item[$k=i$] If $j\in T_1$ then $\rho_{kj}^{\ell+1}(k,T_2)-\rho_{kj}^{\ell+1}(k,T_1)=\sum_{t\in N}(\rho_{kt}^{\ell}(k,T_2)-\rho_{kt}^{\ell}(k,T_1))P_{tj}\geq 0$ by supermodularity of functions $\bar \theta_{kt}^\ell$; if $j\not\in T_1$ and $j\in T_2$ then $\rho_{kj}^{\ell+1}(k,T_2)-\rho_{kj}^{\ell+1}(k,T_1)=\bar\theta_{kj}^{\ell+1}(T_2\cup\{k\})\geq 0$; finally, if $j\not\in T_2$ then $\rho_{kj}^{\ell+1}(k,T_2)-\rho_{kj}^{\ell+1}(k,T_1)=0$. The case $k=j$ is identical. \end{description} \end{proof} As $\theta_{ij}(T)=\sum_{\ell=0}^\infty \bar \theta_{ij}^\ell(T)$ is a sum of supermodular functions, it is supermodular. Consequently, $\nicefrac{1}{4}\sum_{i\in N}\sum_{j\in N}b_ib_j \theta_{ij}(T)$ is a supermodular function and (P) is a submodular minimization problem, solvable \rev{with a strongly polynomial number of calls to a value oracle} \cite[e.g.][]{Orlin2009}. \rev{Evaluating the submodular function for a given set $T$, i.e., computing $a(T)-\nicefrac{b_T'Q_T^{-1}b_T}{4}$, requires only matrix multiplication and inversion, and can be done in strongly polynomial time. Therefore (P) is solvable in strongly polynomial time.} \end{proof} \subsection{Convex hull of $X_U$} \label{sec:inequalitiesUnbounded} Consider the function $f:[0,1]^2\times \ensuremath{\mathbb{R}}_+^2\to \ensuremath{\mathbb{R}}_+$ defined as \begin{equation} \label{eq:defF}f(x,y)=\begin{cases}\frac{(y_1-y_2)^2}{x_1}& \text{if }y_1\geq y_2\\\frac{(y_2-y_1)^2}{x_2}& \text{if }y_1\leq y_2\end{cases} \end{equation} and the corresponding nonlinear inequality \begin{equation} \label{eq:unboundedCut} f(x,y)\leq t.
\end{equation} \begin{remark} Observe that inequality \eqref{eq:unboundedCut} dominates inequality \eqref{eq:jeff3} since \begin{equation*} \frac{(y_1-y_2)^2}{x_1+x_2}\leq\frac{(y_1-y_2)^2}{\max\{x_1,x_2\}}\leq f(x,y). \end{equation*} Inequalities \eqref{eq:jeff1}--\eqref{eq:jeff2} are not valid for the unbounded relaxation \rev{as the conditions $\nicefrac{y_i^2}{x_i}\leq 1$ are not satisfied by all feasible points in $X_U$. For example, feasible points with $x_1=x_2=1$, $y_1=y_2>1$ and $t=0$ are cut off by \eqref{eq:jeff1}--\eqref{eq:jeff2}.} \end{remark} \begin{proposition} Inequality \eqref{eq:unboundedCut} is valid for $X_U$. \end{proposition} \begin{proof} There are four cases to consider. If $x_1=x_2=1$, then \eqref{eq:unboundedCut} reduces to the original quadratic inequality $(y_1-y_2)^2\leq t$ and is thus valid. If $x_1=x_2=0$, then the points in $X_U$ satisfy $y_1=y_2=0$ and $t\geq 0$; since $f(0,0)=0$, none of these points are cut off by \eqref{eq:unboundedCut}. If $x_1=1$ and $x_2=0$, then $y_2=0$ in any point in $X_U$ and, in particular, $y_1\geq y_2$; thus $f(x,y)$ reduces to the original inequality. The case where $x_1=0$ and $x_2=1$ is similar. \ \end{proof} \rev{Observe that $f$ is a piecewise nonlinear function, where each piece is conic quadratic representable. However, the pieces are not valid outside of the region where they are defined, e.g., $(y_1-y_2)^2\leq tx_1$ is invalid when $y_2>y_1$ as it cuts off feasible points with $x_1=y_1=0$ and $y_2>0$. Thus, inequality \eqref{eq:unboundedCut} is not equivalent to the system given by $(y_1-y_2)^2\leq tx_i$, $i=1,2$. Nevertheless, as shown in Proposition~\ref{prop:convexityF} below, \eqref{eq:unboundedCut} is a convex inequality. } \begin{proposition} \label{prop:convexityF} The function $f$ is convex on its domain. \end{proposition} \begin{proof} Let $(\bar{x},\bar{y}),(\hat{x},\hat{y})\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2$ and let $(x^*,y^*)=(1-\lambda)(\bar{x},\bar{y})+\lambda (\hat{x},\hat{y})$ for $0\leq \lambda\leq 1$ be a convex combination of $(\bar{x},\bar{y})$ and $(\hat{x},\hat{y})$. We need to prove that \begin{equation} \label{eq:convexF} f(x^*,y^*)\leq (1-\lambda)f(\bar{x},\bar{y}) + \lambda f(\hat{x},\hat{y}). \end{equation} If $\bar{y}_1\geq \bar{y}_2$ and $\hat{y}_1\geq \hat{y}_2$, or $\bar{y}_1\leq \bar{y}_2$ and $\hat{y}_1\leq \hat{y}_2$, inequality \eqref{eq:convexF} holds by convexity of the individual functions in the definition of $f$. Otherwise, assume, without loss of generality, that $\bar{y}_1\geq \bar{y}_2$, $\hat{y}_1\leq \hat{y}_2$, and $y_1^*\leq y_2^*$. Letting $\gamma=\lambda -(1-\lambda)\frac{\bar{y}_1-\bar{y}_2}{\hat{y}_2-\hat{y}_1}$, observe that\begin{itemize} \item $\gamma\leq \lambda \leq 1$. \item $\gamma\geq 0$, which is equivalent to $y_2^*-y_1^*\geq 0$. \item $y_2^*-y_1^*=\gamma(\hat{y}_2-\hat{y}_1)$. \item $\gamma \hat{x}_2\leq \lambda \hat{x}_2\leq x_2^*$. \end{itemize} Then, we find \begin{align*}f(x^*,y^*)=\frac{(y_2^*-y_1^*)^2}{x_2^*}\leq \frac{(y_2^*-y_1^*)^2}{\gamma \hat{x}_2}=&\gamma\frac{(\hat{y}_2-\hat{y}_1)^2}{ \hat{x}_2} \leq \lambda f(\hat{x},\hat{y})+(1-\lambda)f(\bar{x},\bar{y}).
\ \ \ \end{align*} \end{proof} \rev{A consequence of Proposition~\ref{prop:convexityF} is that the convex inequality \eqref{eq:unboundedCut} can be implemented (with off-the-shelf solvers) using subgradient inequalities: for a subgradient $\xi \in \partial f(\bar{x},\bar{y})$ at a given point $(\bar{x},\bar{y})$, we have $f(\bar{x},\bar{y})+\xi'(x-\bar{x},y-\bar{y})\leq f(x,y)$ for all points $(x,y)$ in the domain of the convex function $f$. In particular, the linear cuts \begin{equation} \label{eq:subgradientCut} f(\bar{x},\bar{y})+\xi'(x-\bar{x},y-\bar{y})\leq t \text{ for } \xi \in \partial f(\bar{x},\bar y) \end{equation} provide an outer-approximation of $f(x,y) \le t$ at $(\bar{x},\bar{y})$ and are valid everywhere on the domain. A subgradient $\xi$ can be found simply by taking the gradient of the relevant piece of the function at $(\bar{x},\bar{y})$. In particular, for $\bar y_1\geq \bar y_2$ and $\bar x_1>0$, a subgradient inequality is \begin{equation} \label{eq:subgradientUnbounded}-\left(\frac{\bar y_1-\bar y_2}{\bar x_1}\right)^2x_1+2\left(\frac{\bar y_1-\bar y_2}{\bar x_1}\right)(y_1-y_2)\leq t.\end{equation} The process outlined here to find subgradient cuts \eqref{eq:subgradientCut} for $f$ can be utilized for any convex piecewise nonlinear function, and will be used for other functions in the rest of the paper. Convex piecewise nonlinear functions also arise in strong formulations for mixed-integer conic quadratic optimization \cite{atamturk2017polymatroid}, and subgradient linear cuts for such functions were recently used in the context of the pooling problem \cite{luedtke2018strong}.} As Theorem~\ref{theo:convexHullUnbounded} below states, inequality \eqref{eq:unboundedCut} and bound constraints for the binary variables describe the convex hull of $X_U$. \begin{theorem}[Convex hull of $X_U$] \label{theo:convexHullUnbounded} $$\text{conv}(X_U)=\left\{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}: f(x,y)\leq t\right\}.$$ \end{theorem} \begin{proof} Consider the optimization problems \begin{align*} (P_0)\ \ \ \ \ \ \ \ \ &\min_{(x,y,t)\in X_U} a'x+b'y+ct;\\ (P_1)\ \ \ \ \ \ \ \ \ &\min_{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}} a'x+b'y+ct\text{ s.t. } f(x,y)\leq t. \end{align*} To prove the result we show that for any value of $a,b,c$, either $(P_0)$ and $(P_1)$ are both unbounded, or there exists a solution integral in $x$ that is optimal for both problems. If $c<0$, then $(P_0)$ and $(P_1)$ are both unbounded, and if $c=0$ then $(P_1)$ corresponds to an optimization problem over an integral polyhedron and it is easily checked that $(P_0)$ and $(P_1)$ are equivalent. Thus, the interesting case is $c>0$ or, by scaling, $c=1$. Note that $t=(y_1-y_2)^2$ in any optimal solution of $(P_0)$, and $t=f(x,y)$ in any optimal solution of $(P_1)$. If $b_1, b_2\geq 0$, then $y_1=y_2=0$ is optimal with corresponding integer $x$ optimal for both $(P_0)$ and $(P_1)$. Moreover, if $b_1+b_2<0$, then both problems are unbounded: $x_1=x_2=1$, $y_1=y_2=\lambda$ is feasible for any $\lambda > 0$ for both problems. Thus, one needs to consider only the case where $b_1+b_2 \ge 0$ and $b_1 < 0$ or $b_2 < 0$. Without loss of generality, let $b_1<0$ and $b_2>0$. \vspace{1mm} \noindent \textbf{Optimal solutions of $(P_0)$}.
There exists an optimal solution with $y_2=0$ (if $0<y_2 \leq y_1$, subtracting $\epsilon>0$ from both $y_1$ and $y_2$ does not increase the objective -- and if $y_2>y_1$, then swapping the values of $y_1$ and $y_2$ reduces the objective). Thus, $y_2=0$, $x_2=0$ if $a_2\geq 0$ and $x_2=1$ otherwise, and either $x_1=y_1=0$ or $x_1=1$ and $y_1=-\frac{b_1}{2}$, which is the stationary point of $b_1 y_1 + y_1^2$. \vspace{1mm} \noindent \textbf{Optimal solutions of $(P_1)$}. Note that there exists an optimal solution of $(P_1)$ where at least one of the continuous variables is $0$ (if $0<y_1,y_2$, subtracting $\epsilon>0$ from both variables does not increase the objective value --- this operation does not change the relative order of $y_1$ and $y_2$). Then, we conclude that $y_2=0$ in an optimal solution (if $y_1=0$ and $y_2>0$, then setting $y_2=0$ reduces the objective value). Moreover, when $y_2=0$, then $f(x,y)=y_1^2/x_1$. Thus, in the optimal solution $y_1=-b_1x_1/2$. Substituting in the objective, we see that $(P_1)$ simplifies to $ \min_{0\leq x_1, x_2\leq 1} a_2x_2+ \big (a_1-b_1^2/4 \big )x_1. $ For an optimal solution, $x_2=0$ if $a_2\geq 0$ and $x_2=1$ otherwise, and $x_1=0$ if $a_1-b_1^2/4\geq 0$ and $x_1=1$ otherwise. And, if $x_1=1$, then $y_1=-b_1/2$. Hence, the optimal solutions coincide. \ \end{proof} \subsection{Valid inequalities for $S_U$} \label{sec:validUnbounded} \paragraph{Inequalities in an extended formulation} Let $\bar Q_i = \sum_{j=1}^nQ_{ij}$, $P = \{i \in N: \bar Q_i > 0\}$, and $\bar P = N \setminus P$. Using decomposition \eqref{eq:quadraticDecomposition} and introducing $t_{ij}$, $1 \le i < j \le n$, one can write a convex relaxation of $S_U$ as \begin{align*} \sum_{i \in \bar P} \bar Q_i y_i + \sum_{i \in P} \bar Q_i y_i^2/x_i - \sum_{i=1}^n \sum_{j=i+1}^n Q_{ij} t_{ij} & \le t \\ f(x_i, x_j, y_i, y_j) & \le t_{ij}, \ \ 1 \le i < j \le n. \end{align*} \paragraph{Inequalities in the original space of variables} By projecting out the auxiliary variables $t_{ij}$ one obtains valid inequalities in the original space of variables. \rev{By re-indexing variables if necessary,} assume that $y_1\geq y_2\geq \ldots \geq y_n$ to obtain the \rev{convex} inequality \begin{equation} \label{eq:nonlinearValidUnbounded} \sum_{i \in \bar P} \bar Q_i y_i + \sum_{i \in P} \bar Q_i y_i^2/x_i - \sum_{i=1}^n\sum\limits_{j=i+1}^n Q_{ij}(y_i-y_j)^2/x_i\leq t. \end{equation} Observe that the nonlinear inequality \eqref{eq:nonlinearValidUnbounded} is valid only if $y_1\geq \ldots \geq y_n$ holds. However, we can obtain linear inequalities that are valid for $S_U$ by underestimating \rev{the convex function $ \sum_{i \in \bar P} \bar Q_i y_i + \sum_{i \in P} \bar Q_i y_i^2/x_i - \sum_{i=1}^n \sum_{j=i+1}^n Q_{ij} f(x_i,x_j,y_i,y_j)$} \rev{by its subgradients}. Let $(\bar{x},\bar{y})\in [0,1]^N\times \ensuremath{\mathbb{R}}_+^N$ be such that $\bar{y}_1\geq \ldots \geq \bar{y}_n$ and $\bar{x}>0$.
Then, the \rev{subgradient} inequality \begin{align*} &-\sum_{i\in P}\bar Q_i \left(\frac{\bar y_i}{\bar x_i}\right)^2 x_i+\sum_{i=1}^n \left(\sum\limits_{j=i+1}^n\frac{ Q_{ij}(\bar{y}_i-\bar{y}_j)^2}{\bar{x}_i^2}\right) x_i\\ &+2\sum_{i\in P}\bar Q_i \frac{\bar y_i}{\bar x_i} y_i+\sum_{i\in \bar P}\bar Q_i y_i+2\sum_{i=1}^n\left( \sum_{j=1}^{i-1}\frac{Q_{ij}(\bar{y}_j-\bar{y}_i)}{\bar{x}_j} -\sum\limits_{j=i+1}^n\frac{ Q_{ij}(\bar{y}_i-\bar{y}_j)}{\bar{x}_i} \right)y_i\leq t, \end{align*} corresponding to a first-order approximation of \eqref{eq:nonlinearValidUnbounded} around $(\bar{x},\bar{y})$, is valid for $S_U$ (regardless of the ordering of the variables). \section{The bounded set $X$} \label{sec:convexHullBounded} Let $g:[0,1]^2\times \ensuremath{\mathbb{R}}_+^2\to \ensuremath{\mathbb{R}}_+$ be defined as \begin{equation} \label{eq:defG} g(x,y)=\begin{cases} \frac{(y_1-x_2)^2}{x_1-x_2}+\frac{(x_2-y_2)^2}{x_2} & \text{if }y_2\leq x_2\leq y_1 \text{ and }x_2(x_1-y_1)\leq y_2(x_1-x_2) \\ \frac{(y_2-x_1)^2}{x_2-x_1}+\frac{(x_1-y_1)^2}{x_1} & \text{if }y_1\leq x_1\leq y_2 \text{ and }x_1(x_2-y_2)\leq y_1(x_2-x_1)\\ f(x,y) & \text{otherwise,} \end{cases} \end{equation} where $f$ is the function defined in \eqref{eq:defF}. This section is devoted to proving the main result: \begin{theorem}[Convex hull of $X$] \label{theo:convexHullBounded} $$\text{conv}(X)=\left\{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^3: g(x,y)\leq t,\; y_i \leq x_i,\;i=1,2 \right\}.$$ \end{theorem} \begin{remark} Observe that for the binary restriction $X_B$ with $y_i=x_i$, $i=1,2$, $g(x,y) \le t$ reduces to $|x_1 - x_2| \leq t$, which together with the bound constraints describe $\ensuremath{\text{conv}}(X_B)$. \end{remark} The rest of this section is organized as follows. In Section~\ref{sec:convexHullBinary1} we give the convex hull description of the intermediate set with \rev{two continuous} variables and one \rev{indicator} variable: $$X_1=\left\{(x,y,t)\in \{0,1\}\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}: (y_1-y_2)^2\leq t,\; y_1\leq x,\; y_2\leq 1\right\}.$$ In Section~\ref{sec:convexHullBinary2} we use these results to prove Theorem~\ref{theo:convexHullBounded}. Finally, in Section~\ref{sec:counterexample} we give valid inequalities for $S$. Unlike in Section~\ref{sec:convexHullUnbounded}, the convex hull proofs in this section are constructive, \rev{i.e., we show how $g$ is constructed from the mixed-binary description of $X$, instead of just verifying that $g$ does indeed result in conv$(X)$}. \subsection{Convex hull description of $X_1$} \label{sec:convexHullBinary1} Let $g_1:[0,1]\times \ensuremath{\mathbb{R}}_+^2\to \ensuremath{\mathbb{R}}_+$ be given by $$ g_1(x,y_1, y_2)=\begin{cases}\frac{\left(y_2-x\right)^2}{1-x}+\frac{\left(x-y_1\right)^2}{x} & \text{if }x-y_1\leq x(y_2-y_1)\\ \frac{\left(y_1-y_2\right)^2}{x} & \text{if }y_2\leq y_1\\ (y_2-y_1)^2 & \text{otherwise.} \end{cases}$$ \begin{proposition} \label{prop:ConvexHull1} $\ensuremath{\text{conv}}(X_1)=\left\{(x,y,t)\in [0,1]\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}: g_1(x,y_1,y_2)\leq t,\; y_1\leq x, \; y_2\leq 1 \right\}$.
\end{proposition} \begin{proof} Note that a point $(x,y,t)$ belongs to $\ensuremath{\text{conv}}(X_1)$ if and only if there exists $(\bar{x},\bar{y},\bar{t})$, $(\hat{x},\hat{y},\hat{t})$ and $0\leq \lambda \leq 1$ such that \begin{align} &t=(1-\lambda)\bar{t}+\lambda \hat{t}\label{eq:convT}\\ &x=(1-\lambda)\bar{x}+\lambda \hat{x}\label{eq:convX}\\ &y_1=(1-\lambda)\bar{y}_1+\lambda \hat{y}_1\label{eq:convY}\\ &y_2=(1-\lambda)\bar{y}_2+\lambda \hat{y}_2\label{eq:convZ}\\&\bar{x}=0,\; \hat{x}=1\label{eq:defX}\\ &\bar{y}_1=0,\; 0\leq \hat{y}_1\leq 1\label{eq:defY}\\ &0\leq \bar{y}_2, \ \hat{y}_2\leq 1\label{eq:defZ}\\ &\bar{t}\geq \bar{y}_2^2\\ &\hat{t}\geq (\hat{y}_1-\hat{y}_2)^2.\label{eq:defT2} \end{align} \rev{The non-convex system \eqref{eq:convT}--\eqref{eq:defT2} follows directly from the definition of the convex hull. Note that a convex extended formulation of conv($X_1$) could also be obtained using the approach proposed by \citet{Ceria1999}. See also \citet{V:cayley} for a recent approach to eliminate the auxiliary variables using Cayley embedding. We now show how to project out the additional variables $(\bar{x},\bar{y},\bar{t})$, $(\hat{x},\hat{y},\hat{t})$ to find conv$(X_1)$ in the original space of variables, which can be done directly from the non-convex formulation above. } From constraints \eqref{eq:convX} and \eqref{eq:defX} we see $\lambda =x$, from constraint \eqref{eq:convY} $\hat{y}_1=\frac{y_1}{x}$, from \eqref{eq:defY} $y_1\leq x$, from \eqref{eq:convZ} we find $\bar{y}_2=\frac{y_2-x\hat{y}_2}{1-x}$, and from \eqref{eq:defZ} we get $0\leq \hat{y}_2\leq 1$ and $0\leq \frac{y_2-x\hat{y}_2}{1-x}\leq 1$. Thus, \eqref{eq:convT}--\eqref{eq:defT2} is feasible if and only if $0\leq y_1 \leq x$, $0\leq y_2 \leq 1$ and there exists $\hat{y}_2$ such that \begin{align*} &t\geq\frac{\left(y_2-x\hat{y}_2\right)^2}{1-x}+\frac{\left(x\hat{y}_2-y_1\right)^2}{x}, \ \ 0\leq \hat{y}_2\leq 1, \ \ \frac{y_2}{x}-\frac{1-x}{x}\leq \hat{y}_2\leq \frac{y_2}{x} \cdot \end{align*} The existence of such $\hat{y}_2$ can be checked by solving the convex optimization problem \begin{align*} \text{(M1)} \ \ \ \ \ \ \ \ \ \min\; &\varphi(\hat{y}_2):= \frac{\left(y_2-x\hat{y}_2\right)^2}{1-x}+\frac{\left(x\hat{y}_2-y_1\right)^2}{x}\\ \text{s.t.}\;&\max\left\{0,\frac{y_2}{x}-\frac{1-x}{x} \right\}\leq \hat{y}_2\leq \min\left\{1, \frac{y_2}{x}\right\}. \end{align*} The equation $\varphi'(\hat{y}_2)=0$ yields \begin{align*} &-\frac{\left(y_2-x\hat{y}_2\right)}{1-x}+\frac{\left(x\hat{y}_2-y_1\right)}{x}=0\\ \Leftrightarrow & \hat{y}_2=y_2+y_1\frac{1-x}{x}:=\eta(x,y). \end{align*} Let $\hat{y}_2^*$ be an optimal solution to (M1). Note that $\hat{y}_2^*> 0$ whenever $ \eta(x,y)> 0$. Moreover, $\eta(x,y)\leq \frac{y_2}{x}-\frac{1-x}{x}\implies y_1+1\leq y_2$, which can only happen if $y_1=0$ and $y_2=1$, in which case $\frac{y_2}{x}-\frac{1-x}{x}=1$ . Thus, we may assume that $\hat{y}_2^*$ is not equal to one of its lower bounds. Now observe that $\frac{y_2}{x}\leq \eta(x,y)\Leftrightarrow y_2\leq y_1$, in which case $\eta(x,y)\leq \frac{y_1}{x}\leq 1$. Additionally, if $1\leq \eta(x,y)$, then $x\leq y_2$ and in particular $y_1\leq y_2$. Therefore, the cases $ \eta(x,y) \leq \min\{1,\frac{y_2}{x}\}$, $\eta(x,y)\geq 1$, and $\eta(x,y)\geq \frac{y_2}{x}$ are mutually exclusive if $\frac{y_2}{x}\neq x$, and the optimal solution of (M1) corresponds to setting $\hat{y}_2^*=\eta(x,y)$, $\hat{y}_2^*=1$, or $\hat{y}_2^*=\frac{y_2}{x}$, respectively. 
By calculating the objective function of (M1) with the appropriate value of $\hat{y}_2^*$, we find $\varphi(\hat{y}_2^*) = g_1(x,y_1,y_2)$. Hence, $(x,y,t)\in \ensuremath{\text{conv}}(X_1)$ if and only if $t\geq g_1(x,y_1,y_2)$ and $0\leq y_1\leq x\leq 1$, $0\leq y_2\leq 1$.\ \end{proof} \subsection{Convex hull description of $X$} \label{sec:convexHullBinary2} We use a similar argument as in the proof of Proposition~\ref{prop:ConvexHull1} to prove Theorem~\ref{theo:convexHullBounded}. Let $(x,y,t)$ be a point such that $0\leq y_i\leq x_i\leq 1$ and \emph{we additionally assume that $y_1\geq y_2$}. A point $(x,y,t)$ belongs to $\ensuremath{\text{conv}}(X)$ if and only if there exists $(\bar{x},\bar{y},\bar{t})$, $(\hat{x},\hat{y},\hat{t})$, and $0\leq \lambda \leq 1$ such that \begin{align} &t=(1-\lambda)\bar{t}+\lambda \hat{t}\label{eq:convT1}\\ &x_1=(1-\lambda)\bar{x}_1+\lambda \hat{x}_1\label{eq:convX1}\\ &x_2=(1-\lambda)\bar{x}_2+\lambda \hat{x}_2\label{eq:convW1}\\ &y_1=(1-\lambda)\bar{y}_1+\lambda \hat{y}_1\label{eq:convY1}\\ &y_2=(1-\lambda)\bar{y}_2+\lambda \hat{y}_2\label{eq:convZ1}\\ &\bar{x}_2=0,\; \hat{x}_2=1\label{eq:defW1}\\ &\bar{y}_2=0,\; 0\leq \hat{y}_2\leq 1\label{eq:defZ1}\\ &0\leq \bar{y}_1\leq \bar{x}_1\leq 1, \; 0\leq \hat{y}_1\leq \hat{x}_1\leq 1\label{eq:defYX}\\ &\bar{t}\geq \bar{y}_1^2/\bar{x}_1\\ &\hat{t}\geq g_1(\hat{x}_1,\hat{y}_1,\hat{y}_2).\label{eq:defT22} \end{align} \rev{The system \eqref{eq:convT1}--\eqref{eq:defT22} corresponds to $\ensuremath{\text{conv}}(K_0\cup K_1)$, where $K_0=\{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}:\nicefrac{y_1^2}{x_1}\leq t,\; y_2=x_2=0\}$ and $K_1=\{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}:g_1(x_1,y_1,y_2)\leq t,\; x_2=1\}$. Observe that $K_0$ and $K_1$ are the convex hulls of the restrictions of $X$, where $x_2=0$ and $x_2=1$, respectively.} Using a similar reasoning as in the proof of Proposition \ref{prop:ConvexHull1}, we find $\lambda=x_2$, $\hat{y}_2=\frac{y_2}{x_2}$, $\bar{x}_1=\frac{x_1-x_2\hat{x}_1}{1-x_2}$, $\bar{y}_1=\frac{y_1-x_2\hat{y}_1}{1-x_2}$, and \begin{align} \text{(M2)} \ \ \ \ \ \ \ \ \ t\geq \min_{\hat{x}_1,\hat{y}_1}\;&\psi(\hat{x}_1,\hat{y}_1)\notag\\ \text{s.t.}\;& 0\leq \hat{y}_1\leq \hat{x}_1\leq 1\label{eq:constraints1}\\ &\hat{y}_1\leq \frac{y_1}{x_2},\; \hat{x}_1-\hat{y}_1\leq\frac{x_1-y_1}{x_2},\; \frac{x_1}{x_2}-\frac{1-x_2}{x_2}\leq \hat{x}_1, \label{eq:constraints2} \end{align} where $$\psi(\hat{x}_1,\hat{y}_1):=\frac{\left(y_1-x_2\hat{y}_1\right)^2}{x_1-x_2\hat{x}_1}+ x_2 g_1(\hat x_1, \hat y_1, y_2/x_2) \cdot $$ Thus, to find the convex hull of $X$, we need to compute in closed form the solutions of the optimization problem (M2). \begin{lemma} \label{lem:functionPsi} There exists an optimal solution $(\hat{x}_1^*,\hat{y}_1^*)$ to (M2) such that $\hat{y}_1^*\geq \frac{y_2}{x_2}$. \end{lemma} \begin{proof} Note that if $\hat{y}_1< \frac{y_2}{x_2}$, the function $\psi$ is non-increasing in $\hat{y}_1$ for any value of $\hat{x}_1$. Thus there exists an optimal solution where $\hat{y}_1$ is set to one of its upper bounds, i.e., either $\hat{y}_1^*=\nicefrac{y_1}{x_2}$ or $\hat{y}_1^*=\hat{x}_1^*$. Since we assume $y_1\geq y_2$ and $\hat{y}_1< \nicefrac{y_2}{x_2}$, the case $\hat{y}_1^*=\nicefrac{y_1}{x_2}$ is not possible. Now suppose that $\hat{y}_1=\hat{x}_1$. Then observe that $1\leq \frac{y_2}{x_2} + \hat{y}_1\frac{1-\hat{x}_1}{\hat{x}_1}\Leftrightarrow \hat{x}_1\leq \frac{y_2}{x_2}$. 
Thus $$\psi(\hat{x}_1)=\frac{\left(y_1-x_2\hat{x}_1\right)^2}{x_1-x_2\hat{x}_1}+\frac{\left(y_2-x_2\hat{x}_1\right)^2}{x_2-x_2\hat{x}_1}$$ in this case (substituting $\hat{y}_1=\hat{x}_1$). Taking the derivative, we find \begin{align*} \psi'(\hat{x}_1) &=x_2\frac{y_1-x_2\hat{x}_1}{(x_1-x_2\hat{x}_1)^2}\left(-2x_1+x_2\hat{x}_1+y_1\right)+x_2\frac{(y_2-x_2\hat{x}_1)}{(x_2-x_2\hat{x}_1)^2}\left(-2x_2+x_2\hat{x}_1+y_2\right) \cdot \end{align*} Note that $y_1-x_2\hat{x}_1\geq 0$ since $\hat{x}_1=\hat{y}_1\leq \nicefrac{y_1}{x_2}$ in any feasible solution, and $y_2-x_2\hat{x}_1\geq 0$, by assumption. Additionally \begin{itemize} \item since $y_1\leq x_1$ and $\hat{x}_1=\hat{y}_1\leq \nicefrac{y_1}{x_2}\leq \nicefrac{x_1}{x_2}$, we find that $-2x_1+x_2\hat{x}_1+y_1\leq 0$, \item since $y_2\leq x_2$ and $\hat{x}_1\leq 1$, we find that $-2x_2+x_2\hat{x}_1+y_2\leq 0$. \end{itemize} Therefore, $\psi'(\hat{x}_1)$ is non-positive, i.e., $\psi$ is non-increasing. Then, by increasing $\hat{y}_1=\hat{x}_1$, another optimal solution can be found. In particular, an optimal solution with $\hat{y}_1^*\geq \nicefrac{y_2}{x_2}$ exists.\ \end{proof} From Lemma \ref{lem:functionPsi} we can assume, without loss of generality, that \begin{equation} \label{eq:psiForm} \psi(\hat{x}_1,\hat{y}_1)=\frac{(y_1-x_2\hat{y}_1)^2}{x_1-x_2\hat{x}_1}+\frac{(x_2\hat{y}_1-y_2)^2}{x_2\hat{x}_1} \cdot \end{equation} Taking partial derivatives, we find that \begin{align*} \frac{\partial \psi}{\partial \hat{y}_1}(\hat{x}_1,\hat{y}_1)=& \ 2x_2\left(-\frac{y_1-x_2\hat{y}_1}{x_1-x_2\hat{x}_1}+\frac{x_2\hat{y}_1-y_2}{x_2\hat{x}_1}\right),\\ \frac{\partial \psi}{\partial \hat{x}_1}(\hat{x}_1,\hat{y}_1)=& \ x_2 \left(\frac{y_1-x_2\hat{y}_1}{x_1-x_2\hat{x}_1}\right)^2- x_2 \left(\frac{x_2\hat{y}_1-y_2}{x_2\hat{x}_1}\right)^2. \end{align*} Lemmas~\ref{lem:case1}--\ref{lem:case3} characterize the optimal solutions of (M2), depending on the values of $(x,y)$. Note that if \begin{equation}\label{eq:optSufficient} \hat{y}_1=\frac{y_2}{x_2}+\frac{\hat{x}_1}{x_1}(y_1-y_2), \end{equation} then $\frac{\partial \psi}{\partial \hat{y}_1}(\hat{x}_1,\hat{y}_1)=\frac{\partial \psi}{\partial \hat{x}_1}(\hat{x}_1,\hat{y}_1)=0$, independently of the values of $\hat{x}_1$ and $\hat{y}_1$. Thus, any feasible point that satisfies \eqref{eq:optSufficient} is an optimal solution of (M2), as is the case for Lemmas~\ref{lem:case1} and \ref{lem:case2}. In contrast, under the conditions of Lemma \ref{lem:case3}, no feasible point satisfies \eqref{eq:optSufficient} as it would violate upper bound constraints. \begin{lemma} \label{lem:case1} If $x_1\leq x_2$, then $\hat{x}_1^*=\frac{x_1-\epsilon}{x_2}$, where $\epsilon>0$ is a sufficiently small number, and $\hat{y}_1^*=\frac{y_2}{x_2}+\frac{\hat{x}_1^*}{x_1}(y_1-y_2)$ is an optimal solution to $(M2)$ with objective $\psi(\hat{x}_1^*,\hat{y}_1^*)=\frac{(y_1-y_2)^2}{x_1} \cdot$ \end{lemma} \begin{proof} We have $\frac{\partial \psi}{\partial \hat{y}_1}(\hat{x}_1^*,\hat{y}_1^*)=\frac{\partial \psi}{\partial \hat{x}_1}(\hat{x}_1^*,\hat{y}_1^*)=0$ and $(\hat{x}_1^*,\hat{y}_1^*)$ satisfies all constraints \eqref{eq:constraints1}--\eqref{eq:constraints2}. Thus, $(\hat{x}_1^*,\hat{y}_1^*)$ is a KKT point and, by convexity, is an optimal solution.
Substituting in \eqref{eq:psiForm}, we get the result.\ \end{proof} \begin{lemma} \label{lem:case2} If $x_1> x_2$ and $y_2(x_1-x_2)+y_1x_2\leq x_2x_1$, then $\hat{x}_1^*=1$ and $\hat{y}_1^*=\frac{y_2}{x_2}+\frac{\hat{x}_1^*}{x_1}(y_1-y_2)$ is an optimal solution to $(M2)$ with objective $\psi(\hat{x}_1^*,\hat{y}_1^*)=\frac{(y_1-y_2)^2}{x_1} \cdot$ \end{lemma} \begin{proof} Observe that $(\hat{x}_1^*,\hat{y}_1^*)$ is feasible as $ \hat{y}_1^*=\frac{y_2}{x_2}+\frac{y_1-y_2}{x_1}\leq \frac{y_2}{x_2}+\frac{y_1-y_2}{x_2}=\frac{y_1}{x_2}; \hat{y}_1^*=\frac{y_2}{x_2}+\frac{y_1-y_2}{x_1}=\frac{y_2x_1+y_1x_2-y_2x_2}{x_1x_2}\leq 1=\hat{x}_1^*; \hat{x}_1^*-\hat{y}_1^*= 1-\frac{y_2}{x_2}-\frac{y_1-y_2}{x_1}\leq 1-\frac{y_2}{x_1}-\frac{y_1-y_2}{x_1}=\frac{x_1-y_1}{x_1}\leq \frac{x_1-y_1}{x_2}; \frac{x_1}{x_2}-\frac{1-x_2}{x_2}=\frac{x_1-1}{x_2}+1\leq 1= \hat{x}_1^*. $ Additionally, note that $\frac{\partial \psi}{\partial \hat{y}_1}(\hat{x}_1^*,\hat{y}_1^*)=\frac{\partial \psi}{\partial \hat{x}_1}(\hat{x}_1^*,\hat{y}_1^*)=0$. Thus, $(\hat{x}_1^*,\hat{y}_1^*)$ is a KKT point and, by convexity, is an optimal solution. Substituting in \eqref{eq:psiForm}, we find the result.\ \end{proof} \begin{lemma} \label{lem:case3} If $x_1> x_2$ and $y_2(x_1-x_2)+y_1x_2\geq x_2x_1$, then $\hat{x}_1^*=1$ and $\hat{y}_1^*=1$ is an optimal solution to $(M2)$ with objective $\psi(\hat{x}_1^*,\hat{y}_1^*)=\frac{(y_1-x_2)^2}{x_1-x_2}+\frac{(x_2-y_2)^2}{x_2} \cdot$ \end{lemma} \begin{proof} Note that since $x_2\geq y_2$ and $y_2(x_1-x_2)+y_1x_2\geq x_2x_1$, we have $x_2(x_1-x_2)+y_1x_2\geq x_2x_1\Leftrightarrow y_1\geq x_2$ and, in particular, $\hat{y}_1^*\leq \frac{y_1}{x_2}$. Additionally, it is easily checked that all other constraints \eqref{eq:constraints1}--\eqref{eq:constraints2} are satisfied. From $y_2(x_1-x_2)+y_1x_2\geq x_2x_1$ we find that $\frac{x_2-y_2}{x_2}\leq\frac{y_1-x_2}{x_1-x_2}$. Now let $\mu_1$ and $\mu_2$ be the dual variables associated with constraints $\hat{y}_1\leq \hat{x}_1$ and $\hat{x}_1\leq 1$, respectively. Since both constraints are satisfied at equality at $(\hat{x}_1^*,\hat{y}_1^*)$, the dual variables $\mu_1$ and $\mu_2$ may take positive values without violating complementary slackness. In particular, let $\mu_1^*=2x_2\left(\frac{y_1-x_2}{x_1-x_2}-\frac{x_2-y_2}{x_2}\right)\geq 0$ and $\mu_2^*=x_2\left(\frac{y_1-x_2}{x_1-x_2}-\frac{x_2-y_2}{x_2}\right)\left(\frac{x_1-y_1}{x_1-x_2}+\frac{y_2}{x_2}\right)\geq 0$. Then, $ \frac{\partial \psi}{\partial \hat{y}_1}(\hat{x}_1^*,\hat{y}_1^*)=\mu_1^* \text{ and } \frac{\partial \psi}{\partial \hat{x}_1}(\hat{x}_1^*,\hat{y}_1^*)=-\mu_1^*+\mu_2^*. $ Thus $(\hat{x}_1^*,\hat{y}_1^*)$ corresponds to a KKT point and, by convexity, is optimal. Substituting in \eqref{eq:psiForm} gives the result.\ \end{proof} Note that Lemmas~\ref{lem:case1}, \ref{lem:case2} and \ref{lem:case3} cover all cases with $y_1\geq y_2$. We can now prove the main result. \begin{proof}[Proof of Theorem~\ref{theo:convexHullBounded}] If $y_1\geq y_2$, the description of the convex hull follows directly from Lemmas~\ref{lem:case1}, \ref{lem:case2} and \ref{lem:case3}. If $y_1\leq y_2$, the result follows from symmetry.\ \end{proof} \subsection{Valid inequalities for $S$} \label{sec:counterexample} Similar to the discussion in Section~\ref{sec:validUnbounded}, the description of $\ensuremath{\text{conv}}(X)$ can be used to derive strong extended convex relaxations for $S$.
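To make the preceding description concrete, the following small sketch (ours, purely for illustration; the function and variable names, and the finite-difference gradient used to keep the sketch short, are our own choices, while closed-form gradients are derived below) evaluates the convex hull function $g$ of \eqref{eq:defG} and produces a linear subgradient cut $g(\bar{x},\bar{y})+\xi'(x-\bar{x},y-\bar{y})\leq t$, analogous to \eqref{eq:subgradientCut}, at a differentiable point:

\begin{verbatim}
import numpy as np

def q(num, den):
    # num^2 / den with the paper's conventions 0/0 = 0 and a/0 = +infinity
    if den == 0.0:
        return 0.0 if num == 0.0 else float("inf")
    return num ** 2 / den

def f(x1, x2, y1, y2):
    # the piecewise function f of (eq:defF)
    return q(y1 - y2, x1) if y1 >= y2 else q(y2 - y1, x2)

def g(x1, x2, y1, y2):
    # the convex hull function g of (eq:defG)
    if y2 <= x2 <= y1 and x2 * (x1 - y1) <= y2 * (x1 - x2):
        return q(y1 - x2, x1 - x2) + q(x2 - y2, x2)
    if y1 <= x1 <= y2 and x1 * (x2 - y2) <= y1 * (x2 - x1):
        return q(y2 - x1, x2 - x1) + q(x1 - y1, x1)
    return f(x1, x2, y1, y2)

def subgradient_cut(z, eps=1e-7):
    # forward-difference gradient of g at a differentiable point
    # z = (x1, x2, y1, y2); returns (g(z), xi) so that
    # g(z) + xi'((x1, x2, y1, y2) - z) <= t is a valid linear cut
    g0, xi = g(*z), np.zeros(4)
    for i in range(4):
        zp = np.array(z, dtype=float)
        zp[i] += eps
        xi[i] = (g(*zp) - g0) / eps
    return g0, xi

# a point at which the first piece of g is active
g0, xi = subgradient_cut((1.0, 0.4, 0.9, 0.3))
\end{verbatim}

The closed-form gradients computed next avoid the numerical differentiation used in this sketch and yield cuts that depend only on the ordering of the point.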
In order to obtain \rev{(nonlinear)} inequalities in the original space of variables, we project out the auxiliary variables for a given ordering $y_1\geq\ldots\geq y_n$ of the continuous variables with additional restrictions corresponding to conditions $x_j(x_i-y_i)\leq y_j(x_i-x_j)$ in \eqref{eq:defG}. Finally, to obtain linear inequalities valid independent of the conditions, we derive the first order approximations. Suppose $y_1\geq\ldots\geq y_n$, and $x_j(x_i-y_i)\leq y_j (x_i-x_j)$ for $j>i$, which holds, in particular, if $x=y$. By eliminating the auxiliary variables under these conditions we obtain the inequality \begin{equation} \label{eq:nonlinearValidBounded} \phi(x,y)=\sum_{i \in \bar P} \bar Q_i y_i + \sum_{i \in P} \bar Q_i y_i^2/x_i - \sum_{i=1}^n\sum\limits_{j=i+1}^n Q_{ij}\left(\frac{(y_i-x_j)^2}{x_i-x_j}+\frac{(x_j-y_j)^2}{x_j}\right)\leq t. \end{equation} \rev{Inequality \eqref{eq:nonlinearValidBounded} is only valid for the particular permutation of the continuous variables and when conditions $x_j(x_i-y_i)\leq y_j (x_i-x_j)$ for $j>i$ hold. Since $ \sum_{i \in \bar P} \bar Q_i \bar y_i + \sum_{i \in P} \bar Q_i \bar y_i^2/\bar x_i - \sum_{i=1}^n \sum_{j=i+1}^n Q_{ij} g(\bar x_i,\bar x_j,\bar y_i,\bar y_j)=\phi(\bar x, \bar y)$, we can find valid subgradient inequalities by taking gradients of the left-hand side of \eqref{eq:nonlinearValidBounded}. } Let $\pi_i=Q_{ii}+2\sum_{j=1}^{i-1}Q_{ij}$ and $\alpha_i=2\sum_{j=1}^iQ_{ij}$, and recall $\bar Q_i=\sum_{j=1}^nQ_{ij}$. The partial derivatives of $\phi$ evaluated at a point $(\bar{x},\bar{y})$ where $\bar{x}=\bar{y}$ are as follows: \begin{align*} \frac{\partial \phi}{\partial x_i}(\bar{x},\bar{y})&=\sum_{j=i+1}^n Q_{ij}+\sum_{j=1}^{i-1}Q_{ij}-\bar{Q}_i=-Q_{ii}=\pi_i-\alpha_i,&\quad i\in P\\ \frac{\partial \phi}{\partial x_i}(\bar{x},\bar{y})&=\sum_{j=i+1}^n Q_{ij}+\sum_{j=1}^{i-1}Q_{ij}=\pi_i-\alpha_i+\bar Q_i,&\quad i\in \bar P\\ \frac{\partial \phi}{\partial y_i}(\bar{x},\bar{y})&=-2\sum_{j=i+1}^n Q_{ij}+2\bar{Q}_i=\alpha_i,&\quad i \in P\\ \frac{\partial \phi}{\partial y_i}(\bar{x},\bar{y})&=-2\sum_{j=i+1}^n Q_{ij}+\bar{Q}_i=\alpha_i-\bar Q_i,&\quad i \in \bar P. \end{align*} Thus, since $\phi(\bar{x},\bar{y})+\nabla \phi(\bar{x},\bar{y})(x-\bar{x},y-\bar{y})\leq \sum_{i \in \bar P} \bar Q_i y_i + \sum_{i \in P} \bar Q_i y_i^2/x_i - \sum_{i=1}^n \sum_{j=i+1}^n Q_{ij}\, g(x_i,x_j,y_i,y_j)\leq t$, we obtain the linear inequality \begin{equation} \label{eq:polymatroidX} \sum_{i=1}^n\pi_i x_i\leq t+\sum_{i=1}^n\alpha_i(x_i-y_i)-\sum_{i\in \bar P}\bar Q_i(x_i-y_i). \end{equation} Observe that inequality \eqref{eq:polymatroidX} depends only on the ordering of $\bar{x}$, but not on the actual values. \begin{remark} Consider the submodular function given by $q(x)=x'Qx$. The extreme points of the extended polymatroid $\Pi$ associated with $q$ \citep{Edmonds1970} correspond to the vectors $\pi$ in inequality \eqref{eq:polymatroidX}; thus, the convex lower envelope of $q$ is described by the function $\bar{q}(x)=\max_{\pi\in \Pi}\pi'x$ \cite{L:submodular-convex}. \citet{AB:prob-nd} employ these polymatroid inequalities for the binary case. For the \rev{mixed-integer} case, the inequality \eqref{eq:polymatroidX} is tight for the binary restriction $x=y$, and the right-hand side is relaxed as the distance between $x$ and $y$ increases. \end{remark} \begin{remark} The values $\alpha_i$ in inequality \eqref{eq:polymatroidX} correspond to the value of the derivative of $q(x)$ with respect to $x_i$ when $x_j=1$ for all $j\leq i$ and $x_j=0$ for $j>i$.
Atamt{\"u}rk and Jeon \cite{Atamturk2017} use lifting to derive similar inequalities for another class of nonlinear functions with \rev{indicator} variables and submodular binary restriction. \end{remark} \section{Valid \rev{conic quadratic} inequalities for $X$} \label{sec:valid} The inequalities $f(x,y)\leq t$ and $g(x,y)\leq t$ derived in Sections~\ref{sec:convexHullUnbounded} and \ref{sec:convexHullBounded} for $X_U$ and $X$, respectively, cannot be directly used within off-the-shelf solvers \rev{in the original space of variables} as they are piecewise functions. However, since they are convex, they can be implemented using gradient outer-approximations at differentiable points (as discussed in Sections~\ref{sec:validUnbounded} and \ref{sec:counterexample}): given a fractional point $(\bar{x},\bar{y})$ with $\bar{x}>0$ \rev{and a subgradient $\xi \in \partial g(\bar{x},\bar{y})$}, the inequality \begin{equation} \label{eq:gradient} g(\bar{x},\bar{y})+\xi'(x-\bar{x},y-\bar{y})\leq t \end{equation} can be used as a cutting plane to improve the continuous relaxation. However, such an approach may require adding too many inequalities \eqref{eq:gradient} to the formulation, possibly resulting in poor performance (see also Sections~\ref{subsec:dualNetwork} and \ref{subsec:dense} for additional discussion on computations). \rev{Alternatively, an extended formulation could be used \cite[e.g.,][]{Ceria1999,FGH:2x2decomp}; however, such formulations may require a prohibitively large number of variables, resulting in hard-to-solve convex formulations and poor performance in branch-and-bound algorithms.} Therefore, in this section we give valid conic quadratic inequalities that provide a strong approximation of $\ensuremath{\text{conv}}(X)$ and can be readily used within conic quadratic solvers. \subsection{Derivation of the inequalities} Let $L_2=\left\{(x,y,t)\in X: x_2=0\right\}$ and observe that $$\ensuremath{\text{conv}}(L_2)=\left\{(x,y,t)\in [0,1]^2\times \ensuremath{\mathbb{R}}_+^2\times \ensuremath{\mathbb{R}}:\frac{y_1^2}{x_1}\leq t,\; y_1\leq x_1,\; x_2=y_2=0\right\}.$$ We now consider inequalities obtained by lifting the valid inequality $\frac{y_1^2}{x_1}\leq t$ for $\ensuremath{\text{conv}}(L_2)$, i.e., inequalities of the form \begin{equation} \label{eq:form}\frac{y_1^2}{x_1}+h(x_2,y_2)\leq t\end{equation} for $X$, where $h:[0,1]\times \ensuremath{\mathbb{R}}_+\to \ensuremath{\mathbb{R}}$. We additionally require the left hand side of \eqref{eq:form} to be convex, which is the case if and only if $h$ is convex. \begin{proposition} \label{prop:lifting} Inequality \begin{equation} \label{eq:valid1} \frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2y_2\leq t \end{equation} is valid for $X$ and is the strongest convex inequality of the form \eqref{eq:form}. \end{proposition} \begin{proof} Any valid inequality of the form \eqref{eq:form} needs to satisfy \begin{align*} h(x_2,y_2)\leq \alpha =\min\; & \left \{ (y_1-y_2)^2 -\frac{y_1^2}{x_1} \ : \ 0\leq y_1\leq x_1,\; x_1\in \{0,1\} \right \} \cdot \end{align*} If $x_1=0$, then $\alpha=y_2^2$; else, $\alpha=-2y_1y_2+y_2^2$. Thus, $y_1=x_1=1$ is a minimizer. We also find that $h(x_2,y_2)\leq y_2^2-2y_2$ for $x_2\in \{0,1\}$. 
To find the strongest convex inequality, we compute $\text{conv}(W)$, where $$W=\left\{(x_2,y_2,t_2)\in \{0,1\}\times \ensuremath{\mathbb{R}}_+\times \ensuremath{\mathbb{R}}: y_2^2-2y_2\leq t_2,\; y_2\leq x_2\right\}.$$ Using the perspective reformulation, one sees that $$\text{conv}(W)=\left\{(x_2,y_2,t_2)\in [0,1]\times \ensuremath{\mathbb{R}}_+\times \ensuremath{\mathbb{R}}: \frac{y_2^2}{x_2}-2y_2\leq t_2,\; y_2\leq x_2\right\},$$ and we get inequality \eqref{eq:valid1}. \ \end{proof} By changing the lifting order, we also get the valid inequality $ \frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2y_1\leq t $; writing the two inequalities more compactly, we arrive at the convex valid inequality \begin{equation} \label{eq:valid12} \frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2\min\{y_1,y_2\}\leq t. \end{equation} \begin{remark} Observe that inequality \eqref{eq:valid12} dominates inequality \eqref{eq:jeff2} since \begin{align*} \frac{y_1^2}{x_1}-x_2=\frac{y_1^2}{x_1}-y_2-(x_2-y_2)\leq \frac{y_1^2}{x_1}-y_2-(x_2-y_2)\frac{y_2}{x_2}= \frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2y_2. \end{align*} Similarly, we find that \eqref{eq:valid12} dominates inequality \eqref{eq:jeff1}. \end{remark} \begin{remark} For the binary case, $y_i=x_i$, $i=1,2$, inequality \eqref{eq:valid12} reduces to $|x_1 - x_2| \le t$. \end{remark} \subsection{Strength of the inequalities} In order to assess the strength of inequality \eqref{eq:valid12}, we consider the optimization problem \begin{align*} \min\;&a_1x_1+a_2x_2+b_1y_1+b_2y_2+t\\ \text{s.t.}\;& (y_1-y_2)^2\leq t\\ \text{(SR)} \ \ \ \ \ \ \ \ \ &\frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2\min\{y_1,y_2\}\leq t\\ & 0\leq y_1\leq x_1\leq 1\\ & 0 \leq y_2\leq x_2 \leq 1. \end{align*} \rev{Inequalities \eqref{eq:valid12} are not sufficient to guarantee the integrality of $x$ in the optimal solutions of (SR) for all values of $a$ and $b$, since they do not describe $\ensuremath{\text{conv}}(X)$ (given in Section~\ref{sec:convexHullBounded}). However, we now show that optimal solutions of (SR) are indeed integral} under mild assumptions on the coefficients $a$ and $b$. First, we prove an auxiliary lemma. \begin{lemma} \label{lem:bound} If there exists an optimal solution to (SR) with $y_i \in \{0,1\}$ for some $i\in\{1,2\}$, then there exists an optimal solution that is integral in $x$. \end{lemma} \begin{proof} If $y_1=0$, then clearly there is an optimal solution with $x_1 \in \{0,1\}$, depending on the sign of $a_1$. Moreover, (SR) reduces to $\min_{0\leq y_2\leq x_2\leq 1}\left\{a_2x_2+b_2y_2+y_2^2/x_2\right\},$ which has an optimal integral solution in $x_2$. On the other hand, if $y_1=x_1=1$, then (SR) reduces to $\min_{0\leq y_2\leq x_2\leq 1}\left\{a_2x_2+(b_2-2)y_2+y_2^2/x_2\right\},$ which, again, has an optimal integral solution in $x_2$. The case with $y_2 \in \{0,1\}$ is symmetric. \ \end{proof} \begin{proposition} \label{prop:sameSign} If $a_1,a_2$ have the same sign and $b_1,b_2$ have the same sign, then (SR) has an optimal solution that is integral in $x$. \end{proposition} \begin{proof} Note that if $a_1, a_2\leq 0$, then $x_1=x_2=1$ in an optimal solution of (SR). Also, if $b_1,b_2\geq 0$, then $y_1=y_2=0$ in an optimal solution of (SR), in which case $x$ is integral in extreme point solutions. It remains to show that if $a_1,a_2\geq 0$ and $b_1,b_2\leq 0$, then there exists an optimal solution of (SR) that is integral in $x$. Suppose that $y_1=y_2=y$ in an optimal solution. Then $(y_1-y_2)^2=0$ and $\frac{y^2}{x_1}+\frac{y^2}{x_2}-2y\leq 0$.
Thus, $t=0$ and (SR) reduces to $$ \min\left\{a_1x_1+a_2x_2+(b_1+b_2)y : 0\leq y\leq \min\{x_1,x_2\}\leq 1 \right \}, $$ which has an optimal solution integral in $x$. Now suppose, without loss of generality, there is an optimal solution with $1 > y_1>y_2>0$ (if $y_1=1$ or $y_2=0$ then by Lemma~\ref{lem:bound} the solution is integral in $x$). Then observe that, in this case, the functions $(y_1-y_2)^2$ and $y_2^2/x_2-2y_2$ are non-increasing in $y_2$. Since $b_2\leq 0$, there exists a solution where $y_2$ is at its upper bound, i.e., $y_2=x_2$. Thus problem (SR) reduces to \[ \text{(SR$'$)} \ \ \min \left \{ a_1x_1+b_1y_1+(a_2 \! + \!b_2)y_2+t: (y_1 \!- \!y_2)^2\leq t, \frac{y_1^2}{x_1}\!-\!y_2\leq t, y_1 \! \leq\! x_1 \!\leq 1 \right \} \cdot \] Let $(\lambda, \mu, \alpha, \beta)$ be the dual variables associated with the $\leq$ constraints displayed in the order above and consider the dual feasibility conditions of problem (SR$'$) \begin{align*} -a_1&=-\mu\frac{y_1^2}{x_1^2}-\alpha+\beta\\ -b_1&=2\lambda(y_1-y_2)+2\mu\frac{y_1}{x_1}+\alpha\\ -(a_2+b_2)&=-2\lambda(y_1-y_2)-\mu\\ 1&=\lambda+\mu\\ 0&\leq \lambda,\mu,\alpha,\beta. \end{align*} Let $(\bar{x}_1,\bar{y}_1,\bar{y}_2,\bar{t})$ be a KKT point with multipliers $(\bar{\lambda},\bar{\mu},\bar{\alpha}, \bar{\beta})$ and suppose that $\bar{x}_1<1$. Then observe that for small $\epsilon>0$, $(\frac{\bar{y}_1+\epsilon}{\bar{y}_1}\bar{x}_1,\bar{y}_1+\epsilon,\bar{y}_2+\epsilon,\bar{t})$ is also a KKT point with the same multipliers. In particular, by choosing $\epsilon$ so that $1=\frac{\bar{y}_1+\epsilon}{\bar{y}_1}\bar{x}_1$, we see that there is an optimal solution with $x_1=1$. Then, problem (SR$'$) further simplifies to \[ \text{(SR$''$)} \ \ \ \ \min\{ b_1y_1+(a_2+b_2)y_2+t: (y_1-y_2)^2\leq t, y_1^2-y_2\leq t \} \cdot \] It remains to show that $y_2=x_2$ is integral. Note that $$y_1^2-2y_1y_2+y_2^2=y_1^2-y_2(2y_1-1)\geq y_1^2-y_2,$$ and, therefore, constraint $y_1^2-y_2\leq t$ is not binding when $y_1 < 1$. So, (SR$''$) is equivalent to $\min b_1y_1+(a_2+b_2)y_2+(y_1-y_2)^2$. However, by increasing or decreasing $y_1$ and $y_2$ by the same amount it is easy to check that there exists an optimal solution where either $y_1=1$ or $y_2=0$, and from Lemma~\ref{lem:bound} there exists an optimal integral solution. \ \end{proof} \rev{Proposition~\ref{prop:sameSign} provides insight into the problems for which inequalities \eqref{eq:valid12} may be particularly effective: if the coefficients of the binary variables and the continuous variables have the same sign, then the relaxation induced by \eqref{eq:valid12} may be close to ideal; otherwise, using subgradient inequalities may be required to find strong formulations. In our computations, this simple rule of thumb indeed results in the best performance.} \ignore{ As Example~\ref{ex:notConvexHull} shows, if $a_1$ and $a_2$ have different signs, then the optimal solution to (SR) may not be integral. A similar example can also be constructed if $b_1$ and $b_2$ have different signs. \begin{example} \label{ex:notConvexHull} Consider the optimization problem \begin{align*} \delta=\min\;& -x_1+x_2-0.5y_1 -0.6y_2+t\\ \text{s.t.}\;&(y_1-y_2)^2\leq t\\ &x_i\in \{0,1\}, 0\leq y_i\leq x_i,\; t\geq 0,\quad &i=1,2. \end{align*} An optimal solution is $x_1^*=x_2^*=y_1^*=y_2^*=1$ and $t^*=0$, with an objective value of $\delta^*=-1.1$. The optimal solution of the continuous relaxation is $\bar{x}_1=\bar{y}_1=1$, $\bar{x}_2=\bar{y}_2=0.80$ and $\bar{t}= 0.04$ with an objective value of $\bar{\delta}=-1.14$.
If constraint $\frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}-2\min\{y_1,y_2\}\leq t$ is added, then the optimal solution of the corresponding continuous relaxation is $\hat{x}_1=1$, $\hat{y}_1\approx 0.85$, $\hat{x}_2=\hat{y}_2\approx 0.70$ and $\hat{t}\approx 0.025$, with an objective value of $\hat{\delta}\approx-1.1225$. Thus we see that inequalities \eqref{eq:valid12} help strengthen the continuous relaxation but do not guarantee optimality if the coefficients of the discrete variables in the objective function are of opposite signs. Finally, note that $g(\hat{x},\hat{y})\approx 0.08>\hat{t}$, where $g$ is the function defined in \eqref{eq:defG}, and we find that indeed the valid inequality $g(x,y)\leq t$ would further strengthen the continuous relaxation. \end{example} } \section{Extensions to other quadratic functions with two \rev{indicator} variables} \label{sec:extensions} In this paper we focus on the set $X$, i.e., a \rev{mixed-integer }set with non-negative continuous variables and non-positive off-diagonal entries in the quadratic matrix. Although an in-depth study of more general quadratic functions is outside the scope of this paper, the approach used in Section~\ref{sec:valid} can be naturally extended to other quadratic functions. We briefly discuss two such extensions. \subsection{General quadratic functions} Observe that a general quadratic function $y'Ay$ can be decomposed as \begin{equation*} y'Ay=\sum_{i=1}^n\left(\left(A_{ii}-\sum_{j\neq i}|A_{ij}|\right)y_i^2-\sum_{j>i:A_{ij}<0} A_{ij}(y_i-y_j)^2+\sum_{j>i:A_{ij}>0} A_{ij}(y_i+y_j)^2\right). \end{equation*} Thus, stronger formulations for general quadratic functions may be obtained by studying the set with two continuous \rev{and two indicator} variables and positive off-diagonal term \begin{equation*} X_+=\left\{(x,y,t)\in \{0,1\}^2\times \ensuremath{\mathbb{R}}_+^2 \times \ensuremath{\mathbb{R}}: (y_1+y_2)^2\leq t,\; y_i\leq x_i, \ i=1,2\right\}. \end{equation*} \begin{proposition} Inequality \begin{equation} \label{eq:valid+} \frac{y_1^2}{x_1}+\frac{y_2^2}{x_2}\leq t \end{equation} is valid for $X_+$ and is the strongest among inequalities of the form \eqref{eq:form}. \end{proposition} The proof is analogous to the proof of Proposition~\ref{prop:lifting} and is omitted for brevity. \rev{Although} inequality \eqref{eq:valid+} is similar in spirit to \eqref{eq:valid1} and is likewise the strongest among inequalities of the form \eqref{eq:form}, it is not as strong for $X_+$ as \eqref{eq:valid1} is for $X$. In particular, an integrality result similar to Proposition~\ref{prop:sameSign} does not hold for \eqref{eq:valid+}. \ignore{ \begin{example} Consider the optimization problems \begin{align*} \text{$(P_+^1)$} \ \ \ \ \ \ \ \ \ \min_{(x,y,t)\in X_+}\;&0.5x_1+2x_2-1.9y_1-1.3y_2+t, \\ \text{$(P_+^2)$} \ \ \ \ \ \ \ \ \ \min_{(x,y,t)\in X_+}\;&0.4x_1+0.4x_2-3.7y_1-3.65y_2+t. \end{align*} Inequality \eqref{eq:valid+} is sufficient to get an optimal integer solution in $(P_+^1)$ but does not cut off the fractional solution corresponding to the natural convex relaxation for $(P_+^2)$. \end{example} } \subsection{Quadratic functions with continuous variables unrestricted in sign} Consider the set \begin{equation*} X_{\pm}=\left\{(x,y,t)\in \{0,1\}^2\times \ensuremath{\mathbb{R}}^2\times \ensuremath{\mathbb{R}}: (y_1\pm y_2)^2\leq t,\; -x_i\leq y_i\leq x_i \text{ for }i=1,2\right\}.
\end{equation*} Observe that, since the continuous variables can be positive or negative, the sign inside the quadratic expression does not matter (e.g., it can be flipped via the transformation $\bar{y}_2=-y_2$). Thus we assume, without loss of generality, that it is a minus sign. \begin{proposition} \label{prop:plusminus} Inequality \eqref{eq:jeff2}, originally proposed by Jeon et al. \cite{Jeon2017}, is valid for $X_\pm$ and is the strongest among inequalities of the form \eqref{eq:form}. \end{proposition} \begin{proof} Any valid inequality for $X_{\pm}$ of the form \eqref{eq:form} needs to satisfy \begin{align*} h(x_2,y_2)\leq \alpha =\min\; & \left \{ (y_1-y_2)^2 -\frac{y_1^2}{x_1} \ : \ -x_1\leq y_1\leq x_1,\; x_1\in \{0,1\} \right \} \cdot \end{align*} If $x_1=0$, then $\alpha=y_2^2$. Else, the expression inside the minimum equals $-2y_1y_2+y_2^2$, and the minimum is attained at $y_1^*=1$ if $y_2\geq 0$ and at $y_1^*=-1$ otherwise. Thus, we find that $h(x_2,y_2)\leq y_2^2-2|y_2|$ for $x_2\in \{0,1\}$. To find the strongest convex inequality, we compute $\text{conv}(W_{\pm})$, where $W_{\pm}=\left\{(x_2,y_2,t_2)\in \{0,1\}\times \ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{R}}: y_2^2-2|y_2|\leq t_2,\; -x_2\leq y_2\leq x_2\right\}.$ The convex lower envelope corresponding to the one-dimensional non-convex function $h_1(y_2)=y_2^2-2|y_2|$ for $y_2\in [-1,1]$ is the constant function equal to $-1$. Moreover, it can be shown that $$\text{conv}(W_{\pm})=\left\{(x_2,y_2,t_2)\in [0,1]\times \ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{R}}: -x_2\leq t_2,\; -x_2\leq y_2\leq x_2\right\}$$ and we get the convex valid inequality $\frac{y_1^2}{x_1}-x_2\leq t$ for $X_{\pm}$. \ \end{proof} In light of Proposition~\ref{prop:plusminus}, inequalities \eqref{eq:valid1}-\eqref{eq:valid12} can be interpreted as inequalities that additionally account for the non-negativity of the continuous variables, with respect to the valid inequalities proposed by Jeon et al. \cite{Jeon2017}. Moreover, although not explicitly considered by Jeon et al., their inequalities may be particularly effective for quadratic optimization problems with \rev{indicator variables and} continuous variables unrestricted in sign. \rev{Observe that inequalities \eqref{eq:jeff1}--\eqref{eq:jeff3} are indeed valid even if the variables are not required to be non-negative -- in contrast with the inequalities $f(x,y)\leq t$, $g(x,y)\leq t$ and \eqref{eq:valid12}, which account for the non-negativity of the variables and are only valid in that case.} \section{Computations} \label{sec:computations} In this section we report a summary of computational experiments performed to test the effectiveness of the proposed inequalities in a branch-and-bound algorithm. All experiments are conducted using the Gurobi 7.5 solver on a workstation with a 3.60GHz Intel\textregistered \ Xeon\textregistered \ E5-1650 CPU and 32 GB main memory with a single thread. The time limit is set to one hour and Gurobi's default settings are used (except for the parameter ``PreCrush'', which is set to 1 in order to use cuts). Cuts (if used) are added only at the root node using the callback features of Gurobi, and the reported times include the time used to add cuts.
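For readers unfamiliar with this mechanism, the following minimal Python sketch illustrates the root-node cut-callback pattern just described. It is not the code used in our experiments; in particular, the \texttt{cut\_generator} helper, which maps a fractional relaxation point to a list of linear cuts (such as the subgradient inequalities \eqref{eq:gradient}), is a hypothetical placeholder. \begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def make_root_cut_callback(variables, cut_generator):
    # Returns a callback that separates linear cuts at the root node only.
    # `cut_generator` is a hypothetical helper mapping the fractional
    # relaxation values of `variables` to a list of (LinExpr, rhs) pairs.
    def callback(model, where):
        if (where == GRB.Callback.MIPNODE
                and model.cbGet(GRB.Callback.MIPNODE_STATUS) == GRB.OPTIMAL
                and model.cbGet(GRB.Callback.MIPNODE_NODCNT) == 0):
            relax = model.cbGetNodeRel(variables)   # fractional point
            for lhs, rhs in cut_generator(relax):
                model.cbCut(lhs <= rhs)             # add the cutting plane
    return callback

# Usage sketch: PreCrush = 1 lets user cuts be applied after presolve.
#   model.Params.PreCrush = 1
#   model.optimize(make_root_cut_callback(model.getVars(), my_cuts))
\end{verbatim}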
\subsection{Image segmentation with $\ell$-0 penalty} \label{subsec:dualNetwork} Given a finite set $N$, functions $d_i:\ensuremath{\mathbb{R}}\to \ensuremath{\mathbb{R}}_+$ for $i\in N$ and $s_{ij}:\ensuremath{\mathbb{R}}\to \ensuremath{\mathbb{R}}_+$ for $i\neq j$, consider \begin{align*} (D)\ \ \ \ \ \ \ \ \ \ \min_{y\in Y}\;&\sum_{i\in N}d_i(y_i)+\sum_{i\neq j}s_{ij}(y_i-y_j), \end{align*} where $Y\subseteq \ensuremath{\mathbb{R}}_+^N$. Problem (D) arises as the Markov Random Fields (MRF) problem for image segmentation, see \cite{boykov2001fast,kolmogorov2004energy}. In the MRF context, $d_i$ are the \emph{deviation} penalty functions, used to model the cost of changing the value of a pixel from the observed value $p_i$ to $y_i$, e.g., $d_i(y_i)=c_i (p_i-y_i)^2$ with $c_i\in \ensuremath{\mathbb{R}}_+$; functions $s_{ij}$ are the \emph{separation} penalty functions, used to model the cost of having adjacent pixels with different values, e.g., $s_{ij}(y_i-y_j)=c_{ij}(y_i-y_j)^2$ with $c_{ij} > 0$ if pixels $i$ and $j$ are adjacent, and $s_{ij}(y_i-y_j)=0$ otherwise. Often, $Y=[0,1]^N$ or is given by a suitable discretization, i.e., $y$ is a vector of integer multiples of a parameter $\varepsilon$. We consider in our computations the case $Y=[0,1]^N$, but the proposed approach can be used with any $Y$. Problem (D) can be cast as the nonlinear dual of the undirected minimum cost network flow problem \citep{Ahuja2004} and efficient algorithms exist when all functions are convex \cite{Hochbaum2013}. In contrast, we consider here the case where the deviation functions involve a non-convex $\ell$-0 penalty, which is often used to induce sparsity, e.g., restricting the number of pixels that can have a color different from the background color. In particular, $d_i(y_i)=a_i\|y_i\|_0+\bar{d}_i(y_i)$ with $\bar{d}_i(y_i)=c_i(p_i-y_i)^2$. Thus, the problem can be formulated as \begin{equation} \label{eq:unconstrained}\min\; \sum_{i\in N}a_ix_i+\sum_{i\in N}c_i(p_i-y_i)^2+\sum_{i\neq j}c_{ij}t_{ij} \text{ s.t. }(x_i,x_j,y_i,y_j,t_{ij})\in X,\; \forall i\neq j. \end{equation} \paragraph{\textbf{Instances}} The instances are constructed as follows. The elements of $N$ correspond to points in a $k \times k$ grid, thus $n=k^2$, and separation functions $s_{ij}$ are non-zero whenever the corresponding points are adjacent in the grid. The parameters $p_i$ for $i\in N$, and $c_{ij}$ for each pair of adjacent points $i,j\in N$ are drawn uniformly between 0 and 1. We set $a_i=c_i$, where $c_i$ is generated as follows: first we draw $\tilde{c}_i$ uniformly between $0$ and $1$ for all $i\in N$, let $C_1=\sum_{i\in N}\tilde{c}_i$ and $C_2=\sum_{i:p_i\geq 0.5}(2p_i-1)$; then we set $c_i=\tilde{c}_i\frac{C_1}{C_2}$. Instances generated with these parameters are observed to have large integrality gaps. \paragraph{\textbf{Formulations}} We test the following formulations for solving problem \eqref{eq:unconstrained}: \begin{description} \item[\texttt{\rev{Basic}}] The \rev{natural formulation \begin{equation*} \label{eq:natural} \min\;\sum_{i\in N}a_ix_i+\sum_{i\in N}c_i(p_i-y_i)^2+\sum_{i\neq j}c_{ij}(y_i-y_j)^2 \text{ s.t.
}0\leq y\leq x,\; x\in \{0,1\}^N.\end{equation*}} \item[\texttt{Perspective}] \rev{The perspective reformulation implemented with rotated cone constraints \begin{align*} \sum_{i\in N}c_ip_i^2+\min\;&\sum_{i\in N}a_ix_i+\sum_{i\in N}c_i\left(-2p_iy_i+z_i\right)+\sum_{i\neq j}c_{ij}(y_i-y_j)^2\\ \text{ s.t.}\;&y_i^2\leq z_ix_i,\; \forall i\in N\\ &0\leq y\leq x,\;z\geq 0,\; x\in \{0,1\}^N.\end{align*} } \item[\texttt{Conic}] \rev{The formulation with the conic quadratic inequalities \eqref{eq:valid12} \begin{align*} \sum_{i\in N}c_ip_i^2+\min\;&\sum_{i\in N}a_ix_i+\sum_{i\in N}c_i\left(-2p_iy_i+z_i\right)+\sum_{i\neq j}c_{ij}t_{ij}\\ \text{ s.t.}\;&y_i^2\leq z_ix_i,\; \forall i\in N\\ &(y_i-y_j)^2\leq t_{ij},\; z_i+z_j-2y_i\leq t_{ij},\;z_i+z_j-2y_j\leq t_{ij},\; \forall i\neq j\\ &0\leq y\leq x,\;z\geq 0,\; x\in \{0,1\}^N.\end{align*} } A short code sketch illustrating how this model can be assembled is given below. \end{description} Furthermore, we also test models \texttt{Perspective+cuts} and \texttt{Conic+cuts}, where the \rev{subgradient} inequalities \eqref{eq:gradient} are used as cutting planes to strengthen the \texttt{Pers\-pective} and \texttt{Conic} formulations, respectively. If $\bar{x}_i=0$ for some $i\in N$, then we use the first-order expansion around $\bar{x}_i=10^{-5}$ instead. \paragraph{\textbf{Results}} Table~\ref{tab:QPDual} shows a comparison of the performance of the algorithm for each formulation for varying grid sizes. Each row in the table represents the average for five instances for a grid size. Table~\ref{tab:QPDual} displays the initial gap (\texttt{igap}), the root gap improvement (\texttt{rimp}), the number of branch and bound nodes (\texttt{nodes}), the elapsed time in seconds (\texttt{time}), and the end gap at termination (\texttt{egap}) (in brackets, we report the number of instances solved to optimality within the time limit). The initial gap is computed as $\texttt{igap}=\frac{\texttt{obj}_{\texttt{best}}-\texttt{obj}_{\texttt{cont}}}{\left|\texttt{obj}_{\texttt{best}}\right|}\rev{\times 100}$, where $\texttt{obj}_{\texttt{best}}$ is the objective value of the best feasible solution found and $\texttt{obj}_{\texttt{cont}}$ is the objective \rev{of the continuous relaxation of \texttt{Basic}}. The root improvement is computed as $\texttt{rimp}= \frac{\texttt{obj}_{\texttt{relax}}-\texttt{obj}_{\texttt{cont}}} {\texttt{obj}_{\texttt{best}}-\texttt{obj}_{\texttt{cont}}}\rev{\times 100}$, where $\texttt{obj}_{\texttt{relax}}$ is the objective value of the relaxation obtained after processing the first node of the branch-and-bound tree for a given formulation, \rev{obtained by querying Gurobi's attribute ``ObjBound'' at the root node using a callback}. We observe that the \texttt{Basic} formulation requires a substantial amount of branching before proving optimality, resulting in long solution times. The \texttt{Perspective} formulation results in a root gap improvement close to 50\% and better times and end gaps than the \texttt{Basic} formulation. However, even with the \texttt{Perspective} formulation, instances with $k \times k=400$ and larger cannot be solved to optimality, leaving end gaps of 15.3\% or more.
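For concreteness, the following minimal sketch (in Python, using the \texttt{gurobipy} interface) shows how the \texttt{Conic} formulation above can be assembled. The pixel index set, the adjacency list \texttt{E}, and the coefficient containers \texttt{a}, \texttt{c}, \texttt{p}, \texttt{cc} are hypothetical placeholders; the snippet illustrates the model structure rather than reproducing the exact experimental code. \begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def build_conic_model(pixels, E, a, c, p, cc):
    # pixels: iterable of pixel indices; E: list of adjacent pairs (i, j);
    # a, c, p, cc: hypothetical coefficient dictionaries.
    m = gp.Model("conic")
    x = m.addVars(pixels, vtype=GRB.BINARY, name="x")
    y = m.addVars(pixels, ub=1.0, name="y")
    z = m.addVars(pixels, name="z")   # z_i plays the role of y_i^2 / x_i
    t = m.addVars(E, name="t")
    m.addConstrs(y[i] <= x[i] for i in pixels)
    # Rotated-cone (perspective) constraints y_i^2 <= z_i * x_i.
    m.addConstrs(y[i] * y[i] <= z[i] * x[i] for i in pixels)
    for i, j in E:
        m.addConstr((y[i] - y[j]) * (y[i] - y[j]) <= t[i, j])
        # Extended form of the conic valid inequalities, with z_i and z_j
        # standing in for y_i^2/x_i and y_j^2/x_j.
        m.addConstr(z[i] + z[j] - 2 * y[i] <= t[i, j])
        m.addConstr(z[i] + z[j] - 2 * y[j] <= t[i, j])
    m.setObjective(
        gp.quicksum(c[i] * p[i] * p[i] for i in pixels)
        + gp.quicksum(a[i] * x[i] + c[i] * (-2 * p[i] * y[i] + z[i])
                      for i in pixels)
        + gp.quicksum(cc[i, j] * t[i, j] for i, j in E),
        GRB.MINIMIZE)
    return m
\end{verbatim}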
In contrast, formulation \texttt{Conic} results in root gap improvements close to 100\%, and the performance of the branch-and-bound algorithm is orders-of-magnitude better than with the \texttt{Basic} and \texttt{Perspective} formulations: instances with $k \times k=400$ that are not close to being solved after one hour of computation with \texttt{Basic} and \texttt{Perspective} are solved to optimality in one second; while formulation \texttt{Basic} is able to solve in five minutes instances with $100$ variables, formulation \texttt{Conic} is able to solve in the same amount of time instances with $2,500$ variables, i.e., instances 25 times larger. \begin{table}[h!] \setlength{\tabcolsep}{1pt} \begin{center} \caption{Experiments with image segmentation with $\ell$-0 penalty.} \label{tab:QPDual} \scalebox{0.55}{ \begin{tabular}{ c c c |c r r r | c r r r | c r r r| c r r r|c r r r} \hline \hline & \multirow{2}{*}{$k \times k$} & \multirow{2}{*}{\texttt{igap}} & \multicolumn{4}{c|}{\textbf{\texttt{Basic}}} & \multicolumn{4}{c|}{\textbf{\texttt{Perspective}}}& \multicolumn{4}{c|}{\textbf{\texttt{Perspective+cuts}}}& \multicolumn{4}{c|}{\textbf{\texttt{Conic}}}& \multicolumn{4}{c}{\textbf{\texttt{Conic+cuts}}}\\ &&&&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}\\ \hline & 100 & 51.0 & & 2,065,285 & 301 & 0.0[5] & 47.9 & 70,898 & 17 & 0.0[5]& 99.6 & 27,006 & 601 & 0.0[5] & 99.4 & 7 & 0 & 0.0[5]& 99.7 & 7 & 0 & 0.0[5] \\ & 400 & 47.7 & & 9,520,774 & 3,600 & 34.0[0] & 48.6 & 5,277,876 & 3,600 & 15.3[0]& 93.2 & 305 & 2 & 0.0[5] & 99.5 & 59 & 1 & 0.0[5]& 99.5 & 58 & 1 & 0.0[5] \\ & 2,500 & 47.9 & & 1,091,872 & 3,600 & 46.3[0] & 45.6 & 682,406 & 3,600 & 25.6[0]& 47.2 & 38,989 & 2,235 & 9.9[2] & 99.3 & 17,561 & 393 & 0.0[5]& 99.6 & 9,220 & 210 & 0.0[5] \\ & 10,000 & 47.4 & & 167,529 & 3,600 & 47.2[0] & 45.9 & 131,986 & 3,600 & 25.9[0]& 32.4 & 25,992 & 3,600 & 0.2[0] & 99.5 & 25,842 & 3,600 & 0.1[0]& 99.6 & 26,695 & 3,600 & 0.1[0] \\ \hline\hline \end{tabular} } \end{center} \end{table} Formulation \texttt{Conic+cuts} results in very modest improvement in the strength of the continuous relaxation when compared with \texttt{Conic} (less than 0.3\% additional root gap improvement) and almost no difference in terms of nodes, times or end gaps. Observe that in \eqref{eq:unconstrained} the coefficients of the linear objective terms corresponding to the discrete and continuous variables have the same sign, and the experimental results are consistent with Proposition~\ref{prop:sameSign} --- \texttt{Conic} is indeed a very close approximation of inequalities \eqref{eq:gradient} in this case. Note that if cuts are added without the approximation given by inequalities \eqref{eq:valid12} (formulation \texttt{Perspective+cuts}), the root improvement is substantial for small instances but it degrades as the size increases. We conjecture that the required number of cuts to obtain an adequate relaxation increases with the size of the instances. Thus, for larger instances, Gurobi may stop adding cuts before obtaining a strong relaxation.
Additionally, to solve second-order conic subproblems in branch-and-bound, solvers like Gurobi construct a linear outer approximation of the convex sets; adding a large number of cuts may interfere with the construction of the outer approximation, leading to weak relaxations of the convex set, which is observed for instances with $k\times k = 10,000$. Using the approximation of the convex hull derived in Section~\ref{sec:valid} as a starting point appears to circumvent such numerical difficulties. \rev{Finally, we remark that for the larger instances that are not solved to optimality by \texttt{Conic}, high-quality solutions and tight lower bounds are found within a few seconds, but branching is ineffective at closing the remaining gap. To illustrate, Figure~\ref{fig:timeMRF} presents the time to prove an optimality gap of at most 1\%, as a function of the dimension $n$ of the problem. We see that the proposed approach scales very well (almost linearly) up to $n=20,000$. In particular, the lower bound found corresponds to the one obtained at the root node, and the feasible solutions are found within a small number (50--60) of branch-and-bound nodes. The memory limit is reached for instances with $n>20,000$.} \begin{figure}[!h ] \centering \includegraphics[width=0.9\textwidth,trim={8cm 6cm 8cm 6cm},clip]{./timeMRF.pdf} \caption{Time to prove an optimality gap of 1\% with \texttt{Conic} as a function of the dimension $n=k\times k$.} \label{fig:timeMRF} \end{figure} \subsection{Portfolio optimization with transaction costs} \label{subsec:transaction} Consider a simple portfolio optimization problem with transaction costs similar to the one discussed in \cite[p.146]{cornuejols2006optimization}. However, in our case, transactions have a fixed cost and the number of transactions is restricted. For simplicity, we \rev{first} consider assets with uncorrelated returns. \rev{In this context, an M-matrix arises directly due to the buying and selling decisions. In Section~\ref{subsec:dense} we present computations with a general covariance matrix, from which an M-matrix corresponding to the negatively correlated assets can be extracted to apply the reformulations.} Let $N$ be the set of assets, and let $\mu,\sigma\in \ensuremath{\mathbb{R}}_+^N$ be the vectors of expected returns and standard deviations of returns. Let $w\in \ensuremath{\mathbb{R}}_+^N$ denote the current holdings in each asset, let $a^+,a^-\in \ensuremath{\mathbb{R}}_+^N$ be the fixed transaction costs associated with buying and selling any quantity, $c^+,c^-\in \ensuremath{\mathbb{R}}^N$ be the variable transaction costs and profits of buying and selling each asset, let $u^+,u^-\in \ensuremath{\mathbb{R}}_+^N$ be the upper bounds on the transactions, and let $k$ be the maximum number of transactions.
Then the problem of finding a minimum-risk portfolio that satisfies a given expected return $b\in \ensuremath{\mathbb{R}}$ with at most $k$ transactions can be formulated as the mixed-integer quadratic problem: \begin{align*} \min\;& v(y)=\sum_{i\in N}\sigma_i^2 (w_i+y_i^+-y_i^-)^2\\ \text{s.t.}\;& \sum_{i\in N}\left(\mu_iw_i+y_i^+(\mu_i-c_i^+)-y_i^-(\mu_i-c_i^-)-a_i^+x_i^+-a_i^-x_i^-\right)\geq b\\ &\sum_{i\in N}(x_i^++x_i^-)\leq k\\ & 0\leq y_i^+\leq u_i^+x_i^+,\; 0\leq y_i^-\leq u_i^-x_i^-,\rev{\;x_i^++x_i^-\leq 1,}\quad \forall i\in N\\ &(x^+,x^-,y^+,y^-)\in \{0,1\}^N\times \{0,1\}^N \times \ensuremath{\mathbb{R}}_+^N\times \ensuremath{\mathbb{R}}_+^N, \end{align*} where $v(y)$ is the variance of the new portfolio, the decision variables $y_i^+$ ($y_i^-$) indicate the amount bought (sold) in asset $i$ and the variables $x_i^+$ ($x_i^-$) indicate whether asset $i$ is bought (sold). Note that the quadratic objective function is nonseparable and the corresponding quadratic matrix is positive semi-definite but not positive definite; therefore, the classical perspective reformulation cannot be used. Additionally, observe that the portfolio optimization problem can be reformulated by adding continuous variables $t\in \ensuremath{\mathbb{R}}_+^N$ and constraints $(x_i^+,x_i^-,y_i^+,y_i^-,t_{i})\in X$ for all $i\in N$, and minimizing the linear objective \begin{equation} \label{eq:portfolioReformulation}\sum_{i\in N} \sigma_i^2(2 w_i(y_i^+-y_i^-)+t_{i}) \cdot \end{equation} Note that since each continuous variable is involved in exactly one term in the objective, the extended formulation given by \eqref{eq:portfolioReformulation} and constraints $(x_i^+,x_i^-,y_i^+,y_i^-,t_{i})\in \ensuremath{\text{conv}}(X)$ results in the convex envelope of $v(y)$. \paragraph{\textbf{Instances}} The instances are constructed as follows. We set $w_i=u_i^+=u_i^-=1$ for all $i\in N$. Coefficients $\sigma_i$ are drawn uniformly between $0$ and $1$, $\mu_i$ are drawn uniformly between $0$ and $2\sigma_i$, the transactions costs and profits $c_i^+$ and $c_i^-$ are drawn uniformly between $0$ and $\mu_i$, the fixed costs $a_i^+$ and $a_i^-$ are drawn uniformly between $0$ and $(\mu_i-c_i^+)$ and $(\mu_i-c_i^-)$, respectively. The target return is set to $\beta\sum_{i\in N}\mu_i$ where $\beta > 0$ is a parameter; $k$ is set to $n/10$. \paragraph{\textbf{Formulations}} We test the formulations \texttt{Basic}, \texttt{Basic+cuts}, \texttt{Conic}, and \texttt{Co\-nic+cuts}, as defined in Section~\ref{subsec:dualNetwork} (\texttt{Basic+cuts} refers to \texttt{Basic} with the subgradient inequalities \eqref{eq:gradient} added as cutting planes). As mentioned above, the perspective reformulation cannot be used for these instances. \paragraph{\textbf{Results}}Table~\ref{tab:QPGurobiPortfolio} shows the results for varying number of assets $n$ and values of the expected return $\beta$. Observe that instances with lower values of $\beta$ are more difficult to solve for the \texttt{Basic} formulation: low $\beta$ results in more feasible solutions, and more branch-and-bound nodes need to be explored before proving optimality. We also see that the \texttt{Basic} formulation is not effective for instances with $250$ or more assets, where most instances (27 out of 30) are not solved to optimality within the time limit and leaving large end gaps at termination. On the other hand, the other three formulations achieve root improvements of over $90\%$ in most cases, and lead to much lower solution times and end gaps. Observe that for the portfolio problem, the coefficients of $y_i^+$ and $y_i^-$ in the objective and return constraint have opposite signs.
Thus, we expect the approximation given by $\texttt{Conic}$ not to be as effective as in Section~\ref{subsec:dualNetwork} and, therefore, the cuts to have a larger impact in closing the root gaps. Indeed, we see in these experiments that adding cuts leads to an additional 2\% to 4\% root improvement (compared to the 0.3\% improvement observed in Section~\ref{subsec:dualNetwork})\footnote{The root gap improvements of 95\% achieved by \texttt{Conic} indicate that the approximation given in Section~\ref{sec:valid} is strong and considerably better than the natural continuous relaxation.}. In particular, formulation \texttt{Basic+cuts} is able to solve all instances in seconds, even instances with low values of $\beta$ where all other formulations struggle. \begin{table}[h!] \setlength{\tabcolsep}{1pt} \begin{center} \caption{Experiments with portfolio optimization with fixed transaction costs.} \label{tab:QPGurobiPortfolio} \scalebox{0.6}{ \begin{tabular}{ c c c |c c c c | c c c c | c c c c| c c c c } \hline \hline \multirow{2}{*}{\texttt{$n$}} & \multirow{2}{*}{$\beta$} & \multirow{2}{*}{\texttt{igap}} & \multicolumn{4}{c|}{\textbf{\texttt{Basic}}} & \multicolumn{4}{c|}{\textbf{\texttt{Basic+cuts}}}& \multicolumn{4}{c|}{\textbf{\texttt{Conic}}}& \multicolumn{4}{c}{\textbf{\texttt{Conic+cuts}}}\\ &&&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}\\ \hline \multirow{3}{*}{100}& 0.95 & 30.8 & 0.0 & 39,963 & 11 & 0.0[5]& 98.9 & 57 & 0 & 0.0[5] & 86.9 & 822 & 4 & 0.0[5] & 92.8 & 1,069 & 4 & 0.0[5]\\ & 0.98& 27.9 & 0.0 & 6,926 & 2 & 0.0[5]& 93.2 & 130 & 1 & 0.0[5] & 98.4 & 35 & 0 & 0.0[5] & 94.4 & 167 & 1 & 0.0[5] \\ & 1.00& 32.7 & 0.0 & 3,229 & 1 & 0.0[5]& 97.9 & 32 & 0 & 0.0[5] & 96.9 & 49 & 0 & 0.0[5] & 97.2 & 37 & 0 & 0.0[5]\\ \multicolumn{3}{c|}{\textbf{Average}}&\textbf{0.0} &\textbf{ 16,706} & \textbf{5} & \textbf{0.0[15]}&\textbf{96.7} &\textbf{ 76} & \textbf{0} & \textbf{0.0[15]} &\textbf{94.1} &\textbf{ 302} & \textbf{2} & \textbf{0.0[15]}&\textbf{94.8} &\textbf{ 425} & \textbf{2} & \textbf{0.0[15]} \\ \hline &&&&&&&&&&&&&&&&\\ \multirow{3}{*}{250}& 0.95 & 32.4 & 0.0 & 5,344,016 & 3,600 & 15.0[0]& 98.8 & 176 & 0 & 0.0[5] & 94.0 & 175,859 & 2,880 & 1.7[1] & 96.0 & 233,024 & 2,880 & 1.1[1]\\ & 0.98& 26.0 & 0.0 & 4,831,484 & 3,227 & 6.2[1] & 97.8 & 210 & 1 & 0.0[5]& 99.1 & 27 & 0 & 0.0[5] & 98.4 & 50,689 & 720 & 0.3[4] \\ & 1.00& 29.4 & 0.0 & 4,518,960 & 2,970 & 4.0[1]& 97.3 & 2,061 & 49 & 0.0[5] & 97.4 & 3,597 & 38 & 0.0[5] & 97.0 & 3,858 & 130 & 0.0[5]\\ \multicolumn{3}{c|}{\textbf{Average}}&\textbf{0.0} &\textbf{4,898,153} & \textbf{3,265} & \textbf{8.4[2]}&\textbf{98.0} &\textbf{ 816} & \textbf{17} & \textbf{0.0[15]} &\textbf{96.8} &\textbf{ 59,827} & \textbf{973} & \textbf{0.6[11]}&\textbf{97.2} &\textbf{ 95,857} & \textbf{1,243} & \textbf{0.5[10]} \\ \hline &&&&&&&&&&&&&&&&\\ \multirow{3}{*}{500}& 0.95 & 32.3 & 0.0 & 2,906,338 & 3,600 & 24.5[0]& 97.6 & 387 & 2 & 0.0[5] & 95.2 & 26,640 & 1,441 & 0.6[3] & 97.2 & 139,686 & 3,600 & 0.9[0]\\ & 0.98& 26.1 & 0.0 & 3,096,026 & 3,600 & 16.4[0]& 98.0 & 343 & 3 & 0.0[5] & 96.4 & 295 & 2 & 0.0[5] & 99.1 & 182 & 1 & 0.0[5] \\ & 1.00& 32.8 & 0.0 &3,076,324 & 3,600 & 18.8[0]& 97.5 & 328 & 2 & 0.0[5] & 93.4 & 330 & 2 & 0.0[5] & 97.0 & 254 & 1 & 0.0[5]\\ \multicolumn{3}{c|}{\textbf{Average}}&\textbf{0.0} &\textbf{ 3,026,229} & \textbf{3,600} & \textbf{19.9[0]}&\textbf{97.7} 
&\textbf{ 353} & \textbf{2} & \textbf{0.0[15]} &\textbf{95.0} &\textbf{ 9,088} & \textbf{481} & \textbf{0.2[13]}&\textbf{97.7} &\textbf{ 46,707} & \textbf{1,201} & \textbf{0.3[10]} \\ \hline\hline \end{tabular} } \end{center} \end{table} \subsection{General convex quadratic functions} \label{subsec:dense} The quadratic matrices used in the previous computations had specific structures, given by the applications considered. Although our results are \rev{for} M-matrices, in this section, we test the strength of the formulations for more general problems, with dense matrices having positive and negative off-diagonal entries. To employ the results developed for M-matrices, we simply apply the strengthening to the pairs of variables with a negative off-diagonal entry. Toward this end, we consider the mean-variance portfolio optimization \begin{align*} \min\;& y'Ay\\ \text{s.t.}\;& b'y\geq r\\ (MV)\ \ \ \ \ \ \ \ \ \ & 1'x \le k\\ & 0\leq y\leq x \\ &x\in \{0,1\}^n, \end{align*} where the objective is to minimize the portfolio variance $y'Ay$, with $A$ a covariance matrix, subject to meeting a target return and satisfying a sparsity constraint. \paragraph{\textbf{Instances}} In order to test the effect of positive off-diagonal elements and diagonal dominance, the matrix $A$ is constructed as follows: Let $\rho\geq 0$ be a parameter that controls the magnitude of the positive off-diagonal entries of $A$, and $\delta\geq 0$ be a parameter that controls the diagonal dominance of $A$. First, we construct a factor matrix $F=GG'$, where each entry in $G_{20\times 20}$ is drawn uniformly from $[-1,1]$, and an exposure matrix $X_{n\times 20}$ such that $X_{ij}=0$ with probability $0.8$, and $X_{ij}$ is drawn uniformly from $[0,1]$, otherwise. Then we construct an auxiliary matrix $\bar{A}=XFX'$. Then, for $i\neq j$, we set $A_{ij}=\bar{A}_{ij}$ if $\bar{A}_{ij}\leq 0$, and we set $A_{ij}=\rho \bar{A}_{ij}$ otherwise\footnote{The matrices generated this way have only 20.1\% of the off-diagonal entries negative on average -- the rest are positive if $\rho>0$ and $0$ if $\rho=0$. The ratio of the magnitude of the negative entries vs. the total, i.e., $\frac{\sum_{i\neq j: A_{ij}<0}|A_{ij}|}{\sum_{i\neq j}|A_{ij}|}$, is on average $0.72$ if $\rho=0.1$, $0.57$ if $\rho=0.2$ and $0.34$ if $\rho=0.5$.}. Finally, $\upsilon_i$ is drawn uniformly from $[0, \delta\bar{\sigma}]$, where $\bar{\sigma}=\frac{1}{n}\sum_{i\neq j}|A_{ij}|$, and $A_{ii}=\sum_{j\neq i}|A_{ij}|+\upsilon_i$. \rev{Observe that the auxiliary matrix $\bar A$ represents a low-rank matrix obtained from a 20-factor model, and $\ensuremath{\text{diag}}(\upsilon)$ is a diagonal matrix representing the residual variances not explained by the factor model. The matrix $A$ is obtained by scaling the positive off-diagonals of $\bar A$ by $\rho$, and updating the diagonal entries to ensure positive definiteness by imposing diagonal dominance.} Additionally, $b_i$ is drawn uniformly between $0.5A_{ii}$ and $1.5A_{ii}$. \rev{Finally, we let $r=0.25 \times \sum_{i\in N}b_i$ and $k=n/5$ for ``small" instances, and $r=0.125 \times \sum_{i\in N}b_i$ and $k=n/10$ for ``large" instances.} \paragraph{\textbf{Formulations}} We test the same formulations as in Section \ref{subsec:dualNetwork}. In this case, the diagonal matrix $\ensuremath{\text{diag}}(\upsilon)$ is used for the \texttt{Perspective} formulation.
\rev{In particular, formulations \texttt{Perspective+cuts}, \texttt{Conic} and \texttt{Conic+cuts} are based on the decomposition of the objective function given by \begin{align*} \min\;&\sum_{i\in N}\upsilon_i z_i+\sum_{A_{ij}< 0}|A_{ij}|t_{ij}+y'(A-Q-\ensuremath{\text{diag}}(\upsilon))y \\ \text{ s.t.}\;&y_i^2\leq z_ix_i,\;\forall i\in N, \quad (x_i,x_j,y_i,y_j,t_{ij})\in X,\; \forall i\neq j: A_{ij}< 0,\end{align*} where $Q_{ij}=\min\{0,A_{ij}\}$ for $i\neq j$ and $Q_{ii}=-\sum_{j\neq i}Q_{ij}$. By construction, $A-Q-\ensuremath{\text{diag}}(\upsilon)$ is positive semi-definite.} \paragraph{\textbf{Results}} Table \ref{tab:QPGurobiConstrained} presents the results for matrices with non-positive off-diagonal entries (i.e., $\rho=0$) and varying diagonal dominance $\delta$. Table \ref{tab:QPGurobiConstrainedPositive} presents the results for matrices with fixed diagonal dominance and varying magnitudes for positive off-diagonal entries $\rho$. We see that, in all cases, formulation \texttt{Conic} results in better root gap improvements than \texttt{Perspective} and \texttt{Basic}. The gap improvements depend on the parameters $\delta$ and $\rho$. In Table~\ref{tab:QPGurobiConstrained} we see that the \texttt{Conic} formulation closes an additional 30\% to 40\% gap with respect to \texttt{Perspective} (independent of the diagonal dominance $\delta$). In Table~\ref{tab:QPGurobiConstrainedPositive} we observe that, as expected, the \texttt{Conic} formulation is more effective at closing root gaps when the magnitude $\rho$ for the positive off-diagonal entries is small. Nevertheless, for all instances, formulations \texttt{Conic} and \texttt{Conic+cuts} result in significantly stronger root improvements than \texttt{Perspective} (at least 15\%, and often much more) and the number of nodes required to solve the instances is decreased by at least an order of magnitude. \begin{table}[h!]
\setlength{\tabcolsep}{0.5pt} \begin{center} \caption{Experiments with non-positive off diagonal entries and varying diagonal dominance, \rev{$k=n/5$}.} \label{tab:QPGurobiConstrained} \scalebox{0.55}{ \begin{tabular}{ c c c |c c c c | c c c c | c c c c| c c c c|c c c c} \hline \hline \multirow{2}{*}{\texttt{$n$}} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{\texttt{igap}} & \multicolumn{4}{c|}{\textbf{\texttt{Basic}}} & \multicolumn{4}{c|}{\textbf{\texttt{Perspective}}}& \multicolumn{4}{c|}{\textbf{\texttt{Perspective+cuts}}}& \multicolumn{4}{c|}{\textbf{\texttt{Conic}}}& \multicolumn{4}{c}{\textbf{\texttt{Conic+cuts}}}\\ &&&&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}\\ \hline \multirow{3}{*}{60}& 0.1 & 88.2 & & $4\cdot 10^5$ & 86 & 0.0[5] & 7.2 & $4\cdot 10^5$ & 99 & 0.0[5]& 19.2 & 15,230 & 544 & 0.0[5] & 43.6 & 3,704 & 107 & 0.0[5]& 43.9 & 4,653 & 154 & 0.0[5] \\ & 0.5& 80.2 & & $5\cdot 10^5$ & 103 & 0.0[5] & 28.0 & $2\cdot 10^5$ & 47 & 0.0[5]& 38.9 & 3,243 & 92 & 0.0[5] & 66.1 & 1,783 & 44 & 0.0[5]& 66.6 & 1,567 & 49 & 0.0[5] \\ & 1.0& 74.0 & & $6\cdot 10^5$ & 121 & 0.0[5] & 44.4 & $6\cdot 10^4$ & 18 & 0.0[5]& 52.8 & 1,335 & 35 & 0.0[5] & 81.5 & 863 & 14 & 0.0[5]& 82.3 & 709 & 19 & 0.0[5] \\ \multicolumn{3}{c|}{\textbf{Average}}&&$\mathbf{ 5\cdot 10^5}$ & \textbf{103} & \textbf{0.0[15]} &\textbf{26.5} &$\mathbf{ 2\cdot10^5}$ & \textbf{55} & \textbf{0.0[15]}&\textbf{37.0} &\textbf{ 6,603} & \textbf{224} & \textbf{0.0[15]}&\textbf{63.7} &\textbf{ 2,117} & \textbf{55} & \textbf{0.0[15]}&\textbf{64.3} &\textbf{ 2,310} & \textbf{74} & \textbf{0.0[15]} \\ \hline &&&&&&&&&&&&&&&&&&\\ \multirow{3}{*}{80}& 0.1 & 90.3 & & $1\cdot 10^7$ & 3,600 & 9.7[0] & 7.2 & $9\cdot 10^6$ & 3,600 & 10.1[0]& 4.0 & 31,194 & 3,600 & 16.1[0] & 37.0 & 26,657 & 2,758 & 5.7[2]& 37.3 & 36,998 & 2,776 & 4.6[2] \\ & 0.5& 82.8 & & $1\cdot 10^7$ & 3,600 & 10.5[0] & 28.2 & $6\cdot 10^6$ & 2,902 & 2.8[3]& 16.8 & 29,220 & 3,017 & 4.0[2] & 60.2 & 11,367 & 1,108 & 0.0[5]& 60.4 & 13,898 & 1,208 & 0.0[5] \\ & 1.0& 77.0 & & $1\cdot 10^7$ & 3,600 & 9.5[0] & 44.1 & $2\cdot 10^6$ & 988 & 0.0[5] & 27.2 & 4,889 & 566 & 0.0[5]&78.4 & 2,689 & 183 & 0.0[5]& 79.0 & 3,395 & 233 & 0.0[5] \\ \multicolumn{3}{c|}{\textbf{Average}}& &$\mathbf{ 1\cdot 10^7}$ & \textbf{3,600} & \textbf{9.9[0]} &\textbf{26.5} &$\mathbf{ 5\cdot 10^6}$ & \textbf{2,496} & \textbf{4.3[8]}&\textbf{16.0} &\textbf{ 21,768} & \textbf{2,394} & \textbf{6.7[7]}&\textbf{58.5} &\textbf{ 13,571} & \textbf{1,350} & \textbf{1.9[12]}&\textbf{58.9} &\textbf{ 18,097} & \textbf{1,406} & \textbf{1.5[12]} \\ \hline &&&&&&&&&&&&&&&&&&\\ \multirow{3}{*}{100}& 0.1 & 90.2 & & $1\cdot 10^7$ & 3,600 & 30.0[0] & 6.4 & $6\cdot 10^6$ & 3,600 & 29.3[0]& 2.8 & 14,855 & 3,600 & 35.8[0] & 37.1 & 19,660 & 3,600 & 19.6[0]& 37.0 & 17,047 & 3,600 & 21.6[2] \\ & 0.5& 83.0 & & $1\cdot 10^7$ & 3,600 & 27.5[0] & 25.2 & $5\cdot 10^6$ & 3,600 & 18.7[0]& 12.8 & 11,912 & 3,600 & 16.4[0] & 58.6 & 16,398 & 3,432 & 7.7[1]& 58.7 & 18,645 & 3,600 & 7.9[0] \\ & 1.0& 77.3 & & $1\cdot 10^7$ & 3,600 & 25.0[0] & 39.9 & $6\cdot 10^6$ & 3,600 & 10.0[0] & 19.7 & 16,144 & 3,236 & 4.8[1]&75.0 & 11,376 & 1,824 & 2.1[3]& 75.4 & 10,588 & 1,822 & 2.5[3] \\ \multicolumn{3}{c|}{\textbf{Average}}& &$\mathbf{ 1\cdot 10^7}$ & \textbf{3,600} & \textbf{27.5[0]} &\textbf{23.8} &$\mathbf{ 6\cdot 10^6}$ & 
\textbf{3,600} & \textbf{19.3[0]}&\textbf{11.8} &\textbf{ 14,304} & \textbf{3,479} & \textbf{19.0[1]}&\textbf{56.9} &\textbf{ 15,811} & \textbf{2,952} & \textbf{9.8[4]}&\textbf{57.1} &\textbf{ 15,426} & \textbf{3,007} & \textbf{10.7[3]} \\ \hline\hline \end{tabular} } \end{center} \end{table} \begin{table}[h!] \setlength{\tabcolsep}{0.5pt} \begin{center} \caption{Experiments with constant diagonal dominance \& varying positive off-diagonal entries, \rev{$k=n/5$}.} \label{tab:QPGurobiConstrainedPositive} \scalebox{0.55}{ \begin{tabular}{ c c c |c c c c | c c c c | c c c c| c c c c|c c c c} \hline \hline \multirow{2}{*}{\texttt{$n$}} & \multirow{2}{*}{$\rho$} & \multirow{2}{*}{\texttt{igap}} & \multicolumn{4}{c|}{\textbf{\texttt{Basic}}} & \multicolumn{4}{c|}{\textbf{\texttt{Perspective}}}& \multicolumn{4}{c|}{\textbf{\texttt{Perspective+cuts}}}& \multicolumn{4}{c|}{\textbf{\texttt{Conic}}}& \multicolumn{4}{c}{\textbf{\texttt{Conic+cuts}}}\\ &&&&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}&\texttt{rimp}&\texttt{nodes}&\texttt{time}&\texttt{egap}\\ \hline \multirow{3}{*}{60}& 0.1 & 62.4 & & $7\cdot 10^5$ & 153 & 0.0[5] & 46.0 & $7\cdot 10^4$ & 22 & 0.0[5]& 56.1 & 10,165 & 62 & 0.0[5] & 77.6 & 2,141 & 19 & 0.0[5]& 78.1 & 2,065 & 23 & 0.0[5] \\ & 0.2& 57.3 & & $7\cdot 10^5$ & 144 & 0.0[5] & 46.8 & $7\cdot 10^4$ & 22 & 0.0[5]& 56.4 & 16,642 & 89 & 0.0[5] & 73.5 & 3,314 & 20 & 0.0[5]& 73.9 & 3,261 & 24 & 0.0[5] \\ & 0.5& 51.2 & & $6\cdot 10^5$ & 128 & 0.0[5] & 48.0 & $6\cdot 10^4$ & 19 & 0.0[5]& 53.6 & 22,526 & 137 & 0.0[5] & 65.1 & 8,635 & 36 & 0.0[5]& 65.5 & 8,742 & 60 & 0.0[5] \\ \multicolumn{3}{c|}{\textbf{Average}}& &$\mathbf{ 7\cdot 10^5}$ & \textbf{142} & \textbf{0.0[15]} &\textbf{46.9} &$\mathbf{ 6\cdot 10^5} $& \textbf{21} & \textbf{0.0[15]}&\textbf{55.4} &\textbf{ 16,444} & \textbf{96} & \textbf{0.0[15]}&\textbf{72.1} &\textbf{ 4,696} & \textbf{25} & \textbf{0.0[15]}&\textbf{72.5} &\textbf{ 4,689} & \textbf{36} & \textbf{0.0[15]} \\ \hline &&&&&&&&&&&&&&&&&&\\ \multirow{3}{*}{80}& 0.1 & 64.4 & & $1\cdot 10^7$ & 3,600 & 7.6[0] & 46.9 & $2\cdot 10^6$ & 852 & 0.0[5] & 32.8 & 53,774 & 1,401 & 0.4[4]& 77.4 & 8,979 & 244 & 0.0[5]& 78.2 & 8,551 & 183 & 0.0[5] \\ & 0.2& 58.8 & & $1\cdot 10^7$ & 3,600 & 5.9[0] & 48.1 & $2\cdot 10^6$ & 881 & 0.0[5]& 37.8 & 98,151 & 1,997 & 0.6[4] & 74.3 & 25,152 & 349 & 0.0[5]& 75.4 & 22,630 & 327 & 0.0[5] \\ & 0.5& 51.8 & & $1\cdot 10^7$ & 3,255 & 3.2[1] & 49.7 & $8\cdot 10^5$ & 391 & 0.0[5]& 43.7 & 185,839 & 2,462 & 0.4[4] &67.8 & 66,779 & 482 & 0.0[5]& 68.5 & 64,512 & 535 & 0.0[5] \\ \multicolumn{3}{c|}{\textbf{Average}}& &$\mathbf{ 1\cdot 10^7}$ & \textbf{3,485} & \textbf{5.5[1]} &\textbf{48.2} &$\mathbf{ 1\cdot 10^6}$ & \textbf{708} & \textbf{0.0[15]}&\textbf{38.1} &\textbf{ 112,588} & \textbf{1,953} & \textbf{0.5[12]}&\textbf{73.2} &\textbf{ 33,637} & \textbf{358} & \textbf{0.0[15]}&\textbf{74.0} &\textbf{ 31,898} & \textbf{349} & \textbf{0.0[15]} \\ \hline &&&&&&&&&&&&&&&&&&\\ \multirow{3}{*}{100}& 0.1 & 65.0 & & $9\cdot 10^6$ & 3,600 & 23.1[0] & 42.3 & $5\cdot 10^6$ & 3,600 & 9.1[0]& 28.8 & 65,628 & 3,600 & 6.4[0] & 73.0 & 83,300 & 2,667 & 2.5[2]& 73.8 & 67,074 & 2,904 & 2.6[2] \\ & 0.2& 59.4 & & $9\cdot 10^6$ & 3,600 & 20.9[0] & 43.9 & $5\cdot 10^6$ & 3,600 & 7.8[0]& 32.9 & 72,439 & 3,600 & 9.0[0] & 70.6 & 122,553 & 3,031 & 2.8[2]& 71.2 & 116,173 &3,033 & 3.3[1] \\ & 0.5& 52.5 & & $9\cdot 
10^6$ & 3,600 & 17.2[0] & 46.2 & $5\cdot 10^6$ & 3,600 & 5.4[0]& 39.1 & 136,082 & 3,600 & 7.7[0] &64.4 & 261,440 & 3,327 & 3.8[1]& 64.8 & 270,701 & 3,396 & 3.7[1] \\ \multicolumn{3}{c|}{\textbf{Average}}& &$\mathbf{ 9\cdot 10^6}$ & \textbf{3,600} & \textbf{20.4[0]} &\textbf{44.2} &$\mathbf{ 5\cdot 10^6}$ & \textbf{3,600} & \textbf{7.4[0]}&\textbf{33.6} &\textbf{ 91,383} & \textbf{3,600} & \textbf{7.4[0]}&\textbf{69.3} &\textbf{ 155,764} & \textbf{3,008} & \textbf{3.0[5]}&\textbf{69.9} &\textbf{ 151,316} & \textbf{3,111} & \textbf{3.2[4]} \\ \hline\hline \end{tabular} } \end{center} \end{table} Observe that the stronger formulations of \texttt{Conic} and \texttt{Conic+cuts} do not necessarily lead to better solution times for small instances. Nevertheless, for the larger instances ($n=100$), using the \texttt{Conic} formulation leads to faster solution times, lower end gaps and more instances solved to optimality for all values of $\delta$ and $\rho$. As in Section~\ref{subsec:dualNetwork}, we observe little difference between \texttt{Conic} and \texttt{Conic+cuts} --- consistent with Proposition~\ref{prop:sameSign}--- and that \texttt{Perspective+cuts} is not effective in closing the root gap. Approximating the nonlinear function with gradient inequalities appears to cause numerical issues as adding cuts weakens the relaxation contrary to expectations. Please see our comments at the end of Section~\ref{subsec:dualNetwork}. \rev{Finally, observe that the formulations tested require adding $O(n^2)$ additional variables, one for each negative off-diagonal entry in $A$. Thus, solving the continuous relaxations may be computationally expensive for large values of $n$. Table~\ref{tab:QPGurobiLarge} illustrates this point for matrices with $\rho=0$ and $\delta=1$. It shows, for the \texttt{Basic}, \texttt{Perspective} and \texttt{Conic} formulations, the value of the best feasible solution found (\texttt{sol}), the value of the lower bound after one hour of branch and bound (\texttt{ebound}), the value of the lower bound after processing the root node (\texttt{rbound}), the time used to process the root node in seconds (\texttt{rtime}), and the number of nodes explored in one hour (\texttt{nodes}). Each row represents the average over five instances, and the values of \texttt{sol}, \texttt{ebound} and \texttt{rbound} are scaled so that the best feasible solution found for a given instance has value $100$. Observe that for $n\geq 150$ the lower bound found by \texttt{Conic} at the root node is stronger than the lower bounds found by other formulations after one hour of branch-and-bound. However, the continuous relaxations of \texttt{Conic} are difficult to solve for large values of $n$, leading to few branch-and-bound nodes explored and few or no feasible solutions found within the time limit. \begin{table}[h!] 
\setlength{\tabcolsep}{1pt} \begin{center} \caption{Experiments with $n\geq 100$ and $k=n/10$.} \label{tab:QPGurobiLarge} \scalebox{0.6}{ \begin{tabular}{ c | c c c c c | c c c c c| c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{\texttt{$n$}} & \multicolumn{5}{c|}{\textbf{\texttt{Basic}}} & \multicolumn{5}{c|}{\textbf{\texttt{Perspective}}}& \multicolumn{5}{c}{\textbf{\texttt{Conic}}}\\ &\texttt{sol}&\texttt{ebound}&\texttt{rbound}&\texttt{rtime}&\texttt{nodes}&\texttt{sol}&\texttt{ebound}&\texttt{rbound}&\texttt{rtime}&\texttt{nodes}&\texttt{sol}&\texttt{ebound}&\texttt{rbound}&\texttt{rtime}&\texttt{nodes}\\ \hline 100& 100.0 & 94.9 & 11.0 & 0.09 & 12,375,694 & 100.0 & 100.0 & 42.2 & 0.05& 1,968,600 & 100.0 & 100.0 & 77.9 & 2.13 & 2,176 \\ 150& 100.0 & 61.3 & 11.7 & 0.07 & 9,739,922 & 100.3 & 81.3 & 45.6 & 0.08& 3,788,060 & 100.6 & 96.2 & 83.8 & 141.46 & 3,174 \\ 200& 100.0 & 46.5 & 12.4 & 0.11 & 6,382,960 & 100.3 & 72.5 & 48.4 & 0.13& 2,644,816 & - & 90.8 & 86.4 & 1090.73 & 1,531 \\ 250& 100.0 & 34.7 & 11.6 & 0.22 & 4,092,948 & 100.3 & 72.5 & 48.4 & 0.21& 1,692,204 & - & 82.7 & 82.7 & 1732.13 & 3 \\ 300& 100.0 & 29.5 & 12.0 & 0.41 & 2,763,780 & 100.9 & 61.0 & 47.1 & 0.32& 1,166,534 & - & 86.1 & 86.1 & 2333.81 & 1 \\ \hline\hline \end{tabular} } \end{center} \end{table} } \rev{A possible approach that achieves a compromise between the strength and the size of the formulation is to apply the proposed conic inequalities for a subset of the matrix: given an M-matrix $Q$, choose $I\subset \left\{(i,j)\in N\times N: Q_{ij}<0\right\}$ and use the formulation \begin{align*} \min\;&\sum_{i\in P}\bar Q_{i} z_i+ \sum_{i \in \bar P} \bar Q_{i} y_i -\sum_{(i,j)\in I}Q_{ij}t_{ij}-\sum_{(i,j)\not\in I}Q_{ij}(y_i-y_j)^2 \\ \text{ s.t.}\;&y_i^2\leq z_ix_i,\;\forall i\in P, \quad (x_i,x_j,y_i,y_j,t_{ij})\in X,\; \forall (i,j)\in I.\end{align*} In particular, if $|I|\approx 4n$, then the results in Section~\ref{subsec:dualNetwork} suggest that the formulations would scale well. Additionally, the component corresponding to the remainder, $-\sum_{(i,j)\not\in I}Q_{ij}(y_i-y_j)^2$, could be further strengthened by linear inequalities \eqref{eq:polymatroidX} (and other subgradient inequalities corresponding to points where $\bar y\neq \bar x$) in the original space of variables instead of extended reformulations. An effective implementation of such a partial strengthening is beyond the scope of the current paper. } \vspace{-2mm} \section{Conclusions} \label{sec:conclusions} In this paper we show, under mild assumptions, that minimization of a quadratic function with an M-matrix \rev{and indicator} variables is a submodular minimization problem, hence, solvable in polynomial time. We derive strong formulations using the convex hull description of non-separable quadratic terms with two \rev{indicator} variables arising from a decomposition of the quadratic function. Additionally, we provide strong conic quadratic valid inequalities approximating the convex hulls. The derived formulations generalize previous results in the binary and separable cases, and the inequalities dominate valid inequalities given in the literature. Computational experiments indicate that the proposed conic formulations may be significantly more effective than the natural convex relaxation and the perspective reformulation. \bibliographystyle{spbasic}
\section{Introduction} Cognitive radio (CR) is an important technology to maximize radio spectrum utilization efficiency \cite{819467}\nocite{4840529}-\cite{1391031}. In CR systems, the secondary network is allowed to share the spectrum allocated to the primary network provided that the interference caused by the secondary transmitter (ST) does not deteriorate the performance of the primary network. Consequently, the challenge is to maintain the interference caused by the ST to the primary receiver (PR) below a pre-determined threshold level. This can be achieved by adapting the ST transmit power so as to ensure satisfaction of the interference constraint at the PR \cite{4786456}. Multiuser diversity is considered an important diversity technique to improve the performance of wireless communication systems \cite{tse2005fundamentals}. Considering a multiuser network where the users experience independent fading conditions, the basic idea of multiuser diversity is to select the users with the best fading conditions for transmission or reception to obtain a specific performance gain. Multiuser diversity in CR systems has attracted much attention recently. In particular, the ergodic capacity (throughput) gain of multiuser diversity in uplink multiuser underlay CR systems is investigated in \cite{4786488}. In \cite{ekin2012capacity}, the authors analyze the achievable capacity gain of uplink multiuser spectrum-sharing systems over dynamic fading environments. In \cite{li2013capacity}, the outage probability and effective capacity are analyzed for opportunistic spectrum sharing in Rayleigh fading environment. In \cite{khan2015performance}, the authors analyze the outage probability, average symbol error rate (SER) and ergodic capacity of an opportunistic multiuser cognitive network with multiple primary users assuming the channels in the secondary network are independent but not identical Nakagami-$m$ fading. In \cite{7881835} and \cite{AGHAZADEH2018160}, the authors analyze the outage probability and average capacity of multiuser diversity in single-input multiple-output (SIMO) spectrum sharing systems. Related previous work has focused on conventional multiuser diversity in underlay CR systems where the secondary user (SU) with the best channel quality is selected. However, in practical underlay CR systems the SU with the best channel quality may not be available for transmission under given traffic conditions. Consequently, for such systems, a more general multiuser diversity scheme that features selection of the SU with the $k$-th best channel quality ($k$-th best SU) is of practical interest. In addition, although selecting the best SU achieves the maximum diversity gain, it is not always beneficial, since it prevents other users with good channel quality from transmitting or receiving. One might sacrifice some diversity gain by selecting the second, third or, in general, the $k$-th best SU to improve the performance from a fairness standpoint. In this paper, we analyze the average and effective throughputs of a multiuser diversity scheme for secondary multiuser networks under a transmit power adaptation strategy such that the instantaneous interference constraint at the PR is not violated. The secondary receiver (SR) is equipped with multiple receive antennas, where maximal ratio combining (MRC) is employed. Based on the SR-MRC output, the SR selects the $k$-th best SU for transmission. In general, it is hard to find exact and tractable expressions for the average and effective throughputs of the $k$-th best SU.
This difficulty is due to the complicated nature of the distribution of the $k$-th highest SNR. Therefore, another approach based on extreme value theory (EVT) is used to analyze the throughput of the $k$-th best SU selection scheme in underlay CR systems. Recently, EVT has been used to derive simple closed-form asymptotic expressions for the average and effective throughputs of the $k$-th best link selection for traditional wireless communication systems \cite{8269400}. Our contribution in this paper is to utilize EVT to analyze the average and effective throughputs of the $k$-th best SU selection scheme of underlay CR systems. More specifically, we show that the suitably normalized SNR of the $k$-th best SU converges in distribution to an inverse gamma random variable for a fixed $k$ and a large number of secondary users. Then, we derive novel closed-form asymptotic expressions for the average and effective throughputs of the $k$-th best SU. The rest of this paper is organized as follows. In Section II we discuss the system model. In Section III we derive the asymptotic average and effective throughputs. Section IV includes numerical results and Section V concludes. \section{System model} Consider an underlay secondary network consisting of $N$ secondary users (transmitters), each equipped with a single antenna, and a secondary receiver equipped with $M$ receive antennas. The secondary network is sharing the spectrum of a primary network with one primary transmitter (PT) and one PR. The PT and PR are equipped with a single antenna each. Let $g_{i}$ and $h_{i,j}$ denote the channel gains from the $i$-th SU to the PR and to the $j$-th receive antenna of the SR, respectively. The channel gains $g_{i}$ and $h_{i,j}$ are assumed to be independent Rayleigh distributed random variables. Consequently, the channel power gains $|g_{i}|^{2}$ and $|h_{i,j}|^{2}$ have probability density functions (PDFs) $g(x) = \lambda e^{-\lambda x} u(x) $ and $h(x) = \eta e^{- \eta x}u(x)$, respectively, where $u(x)$ is the unit step function and the parameters $\lambda$ and $\eta$ are the fading parameters. The channel power gains in the secondary system $|h_{i,j}|^{2}$ are assumed to be independent and identically distributed (i.i.d.) for $i=1, 2, ..., N$ and $j=1, 2, ..., M$. With perfect knowledge of $|g_{i}|^{2}$, we consider a continuous transmit power adaptation strategy at each SU to control its interference to the PR such that the instantaneous transmit power of the $i$-th SU is $P_{i}= \frac{Q}{|g_{i}|^{2}}$, where $Q$ is the maximum tolerable interference level at the PR. It should be noted that this paper focuses on a power adaptation strategy with unlimited transmit power as in \cite{6134707}, \cite{7890994}. An efficient power adaptation strategy with a peak transmit power constraint is left for future work. Assuming that MRC is employed at the SR, the instantaneous SNR at the SR-MRC output is given by \begin{gather} \label{eq:1} Z_{i}= \frac{Q}{|g_{i}|^{2}} \frac{\gamma_{i}}{N_{0}}, \end{gather} where $N_{0}$ is the common noise variance at the SR-MRC output and $\gamma_{i}= \sum_{j=1}^{M} |h_{i,j}|^{2}$. The random variable $\gamma_{i}$ represents a sum of i.i.d. exponential random variables; therefore, its PDF is given by \begin{equation} \label{eq:2} f_{\gamma_{i}}(x)={ \frac{\eta^{M} x^{M-1}}{ \Gamma(M)} e^{-\eta x}} u(x). \end{equation} Based on the SR-MRC output, we sort the random variables $Z_{i}$ in increasing order as $Z_{(1)} \leq Z_{(2)} \leq \cdots \leq Z_{(N-k+1)} \leq \cdots \leq Z_{(N)}$, such that the SR selects the $k$-th best SU, i.e., the one with the $k$-th highest SNR, $Z_{(N-k+1)}$. According to \cite{david2003order}, the PDF of $Z_{\left(N-k+1\right)}$ can be expressed in terms of the PDF, $f(z)$, and cumulative distribution function (CDF), $F(z)$, of $Z_{i}$ as \begin{equation}\label{eq:3} f_{Z_{\left(N-k+1\right)}}(z)=k \binom{N}{k} f(z) F(z)^{N-k} \left(1-F(z) \right)^{k-1}. \end{equation} Noting that the random variable $Z_{i}$ represents a ratio of two gamma distributed random variables, its CDF is given by \cite{6924725} \begin{equation}\label{eq:4} F(z)=\left( \frac{\rho z}{\frac{\lambda}{\eta} +\rho z}\right)^{M} u(z), \end{equation} where $\rho=\frac{N_{0}}{Q}$. Considering the $k$-th best SU selection, the average (ergodic) throughput of the selected SU, $ \overline{R}_{k,N}$, can be evaluated as \begin{equation}\label{eq:6} \begin{split} \overline{R}_{k,N}&=B E\left[\log_{2}(1+Z_{\left(N-k+1\right)})\right]\\ &=B \int_{0}^{\infty} \log_{2}(1+z) f_{Z_{\left(N-k+1\right)}}(z) dz, \end{split} \end{equation} where $B$ is the system bandwidth and $E[\cdot]$ denotes expectation. Assuming a block fading channel, the effective throughput that can be supported by a wireless system under a statistical QoS constraint described by the delay QoS exponent $\theta$ is given by \cite{1210731} \begin{eqnarray} \label{eq:7} \alpha (\theta) =-\frac{1}{\theta T} \log \left( E\left[ e^{-\theta T R }\right] \right), \ \theta> 0, \end{eqnarray} where $R$ is a random variable which represents the instantaneous throughput during a single block and $T$ is the block length. The limit $\theta \to 0$ corresponds to the absence of a delay constraint, in which case the effective throughput reduces to the average throughput of the corresponding wireless channel. Considering the $k$-th best SU selection, the effective throughput of the selected SU, $\alpha(\theta, k, N)$, can be expressed as \cite{6006584} \begin{gather} \label{eq:8} \begin{split} \alpha(\theta, k, N)&=-\frac{1}{A} \log_{2} \left( E\left[ \left( 1+Z_{(N-k+1)} \right)^{-A} \right] \right), \end{split} \end{gather} where $A=\theta TB/ \ln(2)$. In general, it is difficult to obtain exact expressions for $ \overline{R}_{k,N}$ and $\alpha(\theta, k, N)$. Therefore, in what follows we use extreme value theory to derive closed-form asymptotic expressions for the average and effective throughputs of the $k$-th best SU. \section{ Asymptotic Throughput Analysis} In this section, we derive the limiting distribution of $Z_{\left(N-k+1\right)}$ in Proposition 1 below. Based on this result, we then analyze the average and effective throughputs of the $k$-th best SU. \subsection{ The Limiting Distribution of $Z_{\left(N-k+1\right)}$} \noindent{\textbf{Proposition 1:}} Let $Z_{(N-k+1)}$ denote the $k$-th largest order statistic of $N$ i.i.d. random variables with a common CDF of $F(z)$, as expressed in (\ref{eq:4}). Then, for a fixed $k$ and $N \to \infty$, $\frac{ Z_{(N-k+1)}- a}{b}$ converges in distribution to a random variable $Z$ with CDF $G^{(k)}(z)$, which can be characterized by an inverse gamma distribution as \begin{eqnarray}\label{eq:11D} \begin{split} G^{(k)}(z)=\frac{\Gamma \left( k,{\frac {1}{z}} \right)}{(k-1)!} u(z) , \end{split} \end{eqnarray} where $a=0$, $b=\frac{\lambda}{\eta \rho \left( \left(1-\frac{1}{N}\right)^{-\frac{1}{M}} -1 \right)} >0$ and $\Gamma(s,x)= \int_{x}^{\infty} u^{s-1} e^{-u} du$ is the upper incomplete gamma function \cite{xxx}.
Furthermore, the PDF of $Z$, $f^{(k)}(z)$, can be obtained as \begin{eqnarray}\label{eq:12D} f^{(k)}(z)= \frac{e^{- z^{-1}}}{z^{k+1} (k-1)!} u(z). \end{eqnarray} \noindent \textit{Proof}: We first obtain the limiting distribution of $Z_{(N)}$, which denotes the largest order statistic of $N$ i.i.d. random variables. From Proposition 2 of \cite{6924725}, $\frac{ Z_{(N)}- a}{b}$ converges in distribution to a unit Fr\'echet distribution, i.e., \begin{eqnarray}\label{eq:13D} G(z)= e^{-z^{-1}} u(z), \end{eqnarray} where $a=0$ and $b = F^{-1}\left(1-\frac{1}{N}\right)=\frac{\lambda}{\eta \rho \left( \left(1-\frac{1}{N}\right)^{-\frac{1}{M}} -1 \right)}$. Making use of Proposition 1 of \cite{8269400} with $G(z)$ as in (\ref{eq:13D}), it follows that for a fixed $k$ and $N \to \infty$, the sequence $\frac{ Z_{(N-k+1)}}{b}$ converges in distribution to a random variable $Z$ with CDF of $G^{(k)}(z)$, which can be expressed in terms of $G(z)$ as \begin{eqnarray}\label{eq:14D} \begin{split} G^{(k)}(z)&=G(z) \sum_{j=0}^{k-1} \frac{\left[ -\log \left( G(z) \right) \right]^{j}}{j !}\\ &= e^{-z^{-1}}\sum_{j=0}^{k-1} \frac{(z^{-1})^j }{j!} u(z).\\ \end{split} \end{eqnarray} Using the fact that $\Gamma(k,x)= (k-1)! \ e^{-x}\sum_{j=0}^{k-1} \frac{x^j }{j!}$ for an integer $k$, $G^{(k)}(z)$ can be finally expressed as in (\ref{eq:11D}). By differentiating (\ref{eq:11D}) we obtain (\ref{eq:12D}). Note that Proposition 1 of \cite{8269400} can be applied to different CDFs. In this paper we focus on the case when $G(z)$ is a Fr\'echet CDF, for which $Z_{(N-k+1)}$ has an inverse gamma limiting distribution, as shown in (\ref{eq:11D}). This is different from \cite{8269400}, where Proposition 1 of \cite{8269400} was applied to the case when $G(z)$ is a Gumbel CDF, so that $Z_{(N-k+1)}$ has a log-gamma limiting distribution. \subsection{ Asymptotic average throughput} Using Proposition 1, we derive the average throughput of the $k$-th best SU, $ \overline{R}_{k,N}$, in the following proposition. \\ \noindent{\textbf{Proposition 2:}} For a fixed $k$ and $N \to \infty$, the average throughput of the $k$-th best SU can be approximated as \begin{gather} \label{eq:14} \begin{split} \frac{\overline{R}_{k,N}}{B}\approx & \frac{ \ln(b) - \psi(k)}{\ln(2)} +\frac{1}{\ln(2)} \sum_{\mu=0}^{k-1} \frac{ 1}{ (k-\mu-1)!} \times \\ & \left[ {\left(-1\right)^{k-\mu-2} b^{k-\mu-1} e^{b} E_{i}(-b) } \right. \\ &\left. +{ \sum_{v=1}^{k-\mu-1} (v-1)! (-b)^{k-\mu-1-v}} \right], \end{split} \end{gather} in \text{bit/s/Hz}, where $E_{i}(x)=-\int_{-x}^{\infty} \frac{e^{-y}}{y} dy$ is the exponential integral function and $\psi(x)$ is the digamma function. \noindent \textit{Proof}: Invoking (\ref{eq:6}) we have \begin{gather}\label{eq:16} \begin{split} \frac{\overline{R}_{k,N}}{B}&=\frac{1}{\ln(2)} E\left[\ln\left(1+Z_{(N-k+1)} \right) \right].\\ \end{split} \end{gather} From Proposition 1, the CDF of $\frac{ Z_{(N-k+1)}}{ b}$ approaches the CDF of $Z$ for a fixed $k$ and $N \to \infty$, where the CDF of $Z$ is as in (\ref{eq:11D}). Equivalently, the PDF of $ Z_{(N-k+1)}$ can be approximated by the PDF of $ b Z$ for a fixed $k$ and $N \to \infty$, where the PDF of $Z$ is as in (\ref{eq:12D}).
Then for a fixed $k$ and $N \to \infty$, the average throughput can be approximated as \begin{gather}\label{eq:17} \begin{split} \frac{\overline{R}_{k,N}}{B} &\approx\frac{1}{\ln(2)} E\left[\ln\left(1+ b Z \right) \right]\\ & = \int_{0}^{\infty} \frac{ \ln(1+b z)}{\ln(2)} \frac{ e^{- z^{-1}} }{ z^{k+1} (k-1)!} dz. \end{split} \end{gather} Using the change of variables $u= z ^{-1}$, we can write \begin{gather}\label{eq:18} \begin{split} \frac{\overline{R}_{k,N}}{B} \approx & \underbrace{\int_{0}^{\infty} \frac{\ln(b+u) e^{-u} u^{k-1}}{\ln(2) (k-1)!} du}_\text{$I_{1}$}\\ &- \underbrace{ \int_{0}^{\infty} \frac{\ln(u) e^{- u} u^{k-1} }{\ln(2) (k-1)!} du}_\text{$I_{2}$} . \end{split} \end{gather} Using Eq. (4.352, 1) of \cite{xxx}, $I_{2}$ can be expressed as \begin{eqnarray}\label{eq:19} \begin{split} I_{2}= \frac{\psi(k)}{\ln(2)}. \end{split} \end{eqnarray} To evaluate $I_{1}$, we use Eq. (4.337, 5) of \cite{xxx}. After some basic algebraic manipulation, $I_{1}$ can be expressed as \begin{gather} \label{eq:20} \begin{split} I_{1}= & \frac{ \ln(b)}{\ln(2)} +\frac{1}{\ln(2)} \sum_{\mu=0}^{k-1} \frac{ 1 }{ (k-\mu-1)!} \times \\ & \left[ {\left(-1\right)^{k-\mu-2} b^{k-\mu-1} e^{b} E_{i}(-b) } \right. \\ &\left. +{ \sum_{v=1}^{k-\mu-1} (v-1)! (-b)^{k-\mu-1-v}} \right]. \end{split} \end{gather} Combining (\ref{eq:19}) and (\ref{eq:20}) with (\ref{eq:18}), the average throughput is as expressed in (\ref{eq:14}). As a special case, if $k=1$ in (\ref{eq:14}), $\psi(1)=-\gamma$ (where $\gamma$ is Euler's constant); thus \begin{gather}\label{eq:23} \begin{split} \frac{\overline{R}_{1,N}}{B} \approx \frac{\ln(b) +\gamma- e^{b} E_{i}(-b) }{\ln(2)}. \end{split} \end{gather} \subsection{Asymptotic Effective Throughput} Using Proposition 1, we analyze the effective throughput of the $k$-th best SU, $\alpha (\theta, k, N)$, in the following proposition. \\ \noindent{\textbf{Proposition 3:}} The effective throughput of the $k$-th best SU can be approximated as \begin{gather} \label{eq:24} \begin{split} &\alpha (\theta, k, N) \approx -\frac{1}{A} \log_{2} \left( \frac{b^{k} U\left(A+k; k+1;b\right) \Gamma\left(A+k\right)}{(k-1)!} \right), \\ \end{split} \end{gather} for a fixed $k$, $ \theta >0$ and $ N \to \infty$, where $ \Gamma( \cdot )$ is the gamma function and $U\left( a;b;z \right)=\frac{1}{\Gamma(a)} \int_{0}^{\infty} e^{-zt} t^{a-1} (1+t)^{b-a-1}dt$, $a > 0$, is the Tricomi confluent hypergeometric function. \noindent{\textit{Proof:}} As mentioned earlier, the PDF of $ Z_{(N-k+1)}$ can be approximated by the PDF of $ b Z$ for a fixed $k$ and $N \to \infty$, where the PDF of $Z$ is as in (\ref{eq:12D}). Then for a fixed $k$ and $N \to \infty$, $ E\left[ \left( 1+ Z_{(N-k+1)} \right)^{-A}\right]$ in (\ref{eq:8}) can be approximated as \begin{gather} \label{eq:25} \begin{split} E\left[ \left( 1+ Z_{(N-k+1)} \right)^{-A} \right] &\approx E\left[ \left( 1+ b Z \right)^{-A} \right] \\ &=\underbrace{ \int_{0}^{\infty} \frac{\left( 1+ b z \right)^{-A} e^{-z^{-1}} }{ z^{k+1} (k-1)!} dz.}_\text{$I_{3}$} \end{split} \end{gather} Using the change of variables $u=(b z)^{-1}$ and with the help of Eq. (39) of \cite{1576535}, $I_{3}$ can finally be expressed as \begin{gather} \label{eq:26} \begin{split} I_{3} &= \int_{0}^{\infty} \frac{b^{k} u^{A+k-1} e^{-b u} }{ (1+u)^{A} (k-1)!} du\\ &=\frac{b^{k} U\left(A+k; k+1;b\right) \Gamma\left(A+k\right)}{(k-1)!} . \end{split} \end{gather} Substituting (\ref{eq:26}) in (\ref{eq:8}), we obtain (\ref{eq:24}).
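To give a feel for the accuracy of these asymptotes, the following Monte Carlo sketch (our own illustration; the Python/NumPy/mpmath implementation, the parameter values and the random seed are assumptions for demonstration, not part of the analysis above) draws $N$ i.i.d. SNRs by inverse-transform sampling from the CDF in (\ref{eq:4}), selects the largest one ($k=1$), and compares the simulated average throughput with the closed form in (\ref{eq:23}):
\begin{verbatim}
import numpy as np
import mpmath

# Assumed demonstration parameters.
M, N, k = 2, 200, 1
lam, eta, rho = 2.0, 1.0 / 3.0, 1.0
trials = 20_000
rng = np.random.default_rng(0)

# Inverse-CDF sampling of Z_i from F(z) = (rho z / (lam/eta + rho z))^M.
u = rng.random((trials, N))
z = (lam / eta) / (rho * (u ** (-1.0 / M) - 1.0))

# k-th best SU: k-th largest SNR among the N users.
z_sel = np.sort(z, axis=1)[:, N - k]
r_sim = np.log2(1.0 + z_sel).mean()

# Closed-form asymptote for k = 1: (ln b + gamma - e^b Ei(-b)) / ln 2.
b = (lam / eta) / (rho * ((1.0 - 1.0 / N) ** (-1.0 / M) - 1.0))
ebei = mpmath.exp(b) * mpmath.ei(-b)   # arbitrary precision: b grows with N
r_asym = (mpmath.log(b) + mpmath.euler - ebei) / mpmath.log(2)

print(f"simulated  R/B = {r_sim:.4f} bit/s/Hz")
print(f"asymptotic R/B = {float(r_asym):.4f} bit/s/Hz")
\end{verbatim}
Since $b$ grows roughly linearly with $N$, the product $e^{b}E_{i}(-b)$ is computed in arbitrary-precision arithmetic to avoid floating-point overflow.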
\section{ NUMERICAL RESULTS } In this section, we numerically illustrate and verify the asymptotic results derived in the previous section. In Fig. 1, we plot the average throughput as a function of the number of secondary users, $N$, for $M=2$ and different values of $k$. We use Monte Carlo simulations to validate the obtained asymptotic expression for the average throughput. Compared to the simulations, we observe that the asymptotic expression is accurate for large $N$ relative to $k$. However, if $N$ is close to $k$, for example when $N=5$ and $k=3$, the asymptotic expression tends to be less accurate. This is due to the fact that the asymptotic behavior of the $k$-th highest SNR holds for large $N$ and fixed $k$, as discussed in the previous section. Therefore, it is expected that the asymptotic average throughput will be less accurate if $N$ is close to $k$. \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{Fig1.eps} \caption{Average throughput of the $k$-th best SU versus the number of secondary users, $N$, for $M=2$, $\rho=1$, $\lambda=2$, $\eta=\frac{1}{3}$.} \label{fig:1} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{Fig2.eps} \caption{Average throughput of the $k$-th best SU versus the number of receive antennas, $M$, for $N=20$, $\rho=1$, $\lambda=2$, $\eta=\frac{1}{3}$.} \label{fig:2} \end{center} \end{figure} In Fig. 2 the average throughput is plotted against the number of receive antennas, $M$, for $N=20$ and different values of $k$. In Fig. 3, we plot the effective throughput as a function of the number of secondary users, $N$, for $M=1$, $k=1,2$ and different values of the delay exponent $A$. We verify the accuracy of the asymptotic effective throughput using simulations. We also observe that the asymptotic effective throughput is accurate for large $N$ and less accurate as $N$ gets closer to $k$, as previously observed for the average throughput. In Fig. 4, we plot the effective throughput at $A=0$ (average throughput) and at $A=1$ as a function of the maximum tolerable interference level, $Q$, for $N=50$, $M=3$ and $k=1,2$. \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{Fig3.eps} \caption{Effective throughput of the $k$-th best SU versus the number of secondary users, $N$, for $A=0.1$, 2, $M=1$, $\rho=1$, $\lambda=2$, $\eta=\frac{1}{3}$.} \label{fig:3} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=1\columnwidth]{Fig4.eps} \caption{Effective throughput of the $k$-th best SU versus the maximum tolerable interference level, $Q$, for $A=0, 1$, $M=3$, $N=50$, $\lambda=2$, $\eta=\frac{1}{3}$ and $N_{0}=0$~dB.} \label{fig:4} \end{center} \end{figure} \section{Conclusion} We considered a multiuser diversity scheme for a cognitive radio system with a transmit power adaptation strategy which ensures that the instantaneous interference constraint at the PR is satisfied. Assuming a large number of secondary users and that the SR selects the $k$-th best SU for transmission, we showed that the $k$-th highest SNR converges in distribution to an inverse gamma random variable. We used this result to derive novel closed-form asymptotic expressions for the average and effective throughputs of the $k$-th best SU. We verified the accuracy of the derived expressions through Monte Carlo simulations.
\section*{Acknowledgement} This publication was made possible by the NPRP award [NPRP 8-648-2-273] from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the authors. \bibliographystyle{IEEEtran}
{ "timestamp": "2018-06-29T02:11:44", "yymm": "1804", "arxiv_id": "1804.05257", "language": "en", "url": "https://arxiv.org/abs/1804.05257" }
\section{Conclusion and future work} We have presented a novel method to model flames from multi-view images. Our system generates physically-based models: it estimates a plausible camera exposure together with 3D volumetric temperature and density fields from the input images. We allow artists to seamlessly insert flames as seen in input videos into their virtual scenes, and those flames realistically illuminate the scene without the need to add artificial light sources. Our work is freely available for use in further research or industrial projects. The source code and data are available at https://github.com/Garoe/bath-fire-shader. There are different opportunities for future work. A first improvement might be to use the results of the computation at one time step to initialize the computation for the following frame, in order to obtain smoother animations and improved convergence rates. One way to achieve this might be via extrapolation of the voxel values between frames. Another possible improvement could be to reduce the number of cameras needed to perform the initial volume reconstruction, in order to increase the applicability of our system. This could be done by integrating techniques such as Okabe et al.~\cite{Okabe:2015} into our framework. An interesting possible application of our system is to use the output model to initialize conditions and parameters for existing fire simulation software (e.g.~\cite{Uintah}). This could allow for mixed techniques that generate new flames from existing video examples. Lastly, while our system is able to provide visually pleasing and realistic results, improvements can be pursued toward more efficient computation times. Areas of possible improvement include optimized GPU implementations and improved clustering using, for example, PCA to compute the axes of greatest variation, or k-nearest neighbors. In the future, this could allow for real-time model computation. \section{Introduction} Fire is one of the fundamental pillars of our civilization. For more than a million years, humans have used fire in a variety of applications ranging from the most basic ones like protection, warmth, and food processing, to advanced technological ones~\cite{berna2012microstratigraphic}. Fire, however, is not just a tool: it shaped our culture~\cite{goudsblom1992fire}, and we are fascinated by its attractive presence and its dangerous nature. It is thus not surprising that modern digital techniques do not only look into the simulation and rendering of fire for engineering or safety purposes; visual quality for entertainment purposes is also paramount. Accurate visual reproduction of fire is prominent in entertainment industries like movie visual effects and video games. Research techniques related to computer-generated fire have been successfully applied in the movie industry. Famous examples include, among others, a planet explosion in Star Trek II, where a particle-based technique by Reeves~\cite{Reeves:1983} was used; Shrek, which featured a dragon exhaling fire, where parametric curves were used to drive the flames~\cite{Lamorlette:2002}; and the more recent work by Horvath and Geiger~\cite{Horvath:2009} based on 2D screen projections for the film Harry Potter and the Deathly Hallows. In these and in many other applications, using real flames would have been an expensive and hazardous endeavor. The computer graphics community has intensively researched the fluid behavior of water and smoke.
While fire can be modeled as a fluid, techniques used for water or smoke cannot be directly applied to flames. This is due to specific fire properties such as multi-phase flow, fast chemical reactions and radiative heat transport. As a result of the complexity and the interdisciplinary nature of the problem, fire simulation is still an open problem in computer graphics. A great deal of work in the area has sacrificed complexity for interactivity, therefore producing simplified models which hope to deceive the observer by exploiting the chaotic behavior present in fire motion. Nevertheless, physically-based simulations incorporate the intrinsic processes that occur in a combustion scenario in order to be able to produce realistic results. Current state-of-the-art methods in fluid appearance transfer~\cite{Jamriska:2015,Okabe:2015} overlook the emissive characteristics of fire: whether image-based or built on a full volume reconstruction, the synthesized data fails to illuminate other objects in the scene. Artists must manually reproduce the aforementioned effects via additional light sources. Physically-based rendering methods that faithfully recreate the global illumination effects for a flame have been proposed~\cite{Nguyen:2002,Pegoraro:2006}. However, the input data needed by those techniques are volumetric temperature fields and species concentrations, which can be either synthesized employing complex simulation software~\cite{Uintah} or captured from real flames using equally complex equipment~\cite{Schwarz:1996}. We propose a new method that can model a fire volume with physically valid properties (e.g., temperature fields and species concentrations) from multi-view stereo images. In our method, the fire shape is represented by a grid volume which is reconstructed using tomographic techniques~\cite{Ihrke:2004}. The fire appearance is modeled using a physics-based fire renderer~\cite{Pegoraro:2006} which maps the physical properties to the actual appearance. The physical properties are estimated by a carefully designed hierarchical optimization against the input images, such that the rendered images match the input images in color space. We test our method on a variety of inputs and the results show that the generated fire models are not only visually pleasing (see Figure~\ref{fig:teaser}), but can also be used for global-illumination-based fire visual effects, which is not possible with previous image-based fire modeling tools. Overall our work makes two major contributions: 1) We propose the first image-based fire modeling method that can estimate physical properties instead of color intensities; 2) We demonstrate a novel application of using physically plausible fire generated from images for global-illumination-based visual effects. \section{Methodology} An overview of the system is shown in Figure~\ref{fig:overview}. In the first step we use a multi-view camera setup to record real flames from different viewpoints. To acquire the flames we use a Norpix StreamPix 6 camera system, frame-synchronized at 100 FPS. For each set of concurrent frames we use 3D tomography techniques~\cite{Ihrke:2004} to obtain a volumetric representation of the flame. In the main processing step, the fire appearance is modeled using physical parameters, i.e.~temperature and density fields, and reconstructed based on a carefully designed optimization, which adjusts the parameters such that the appearance of the fire volume coincides with the captured images under the corresponding camera views.
Finally, the reconstructed fire model can be seamlessly integrated into any virtual scene using physically-based fire shaders~\cite{Pegoraro:2006}. Due to its physical accuracy, our flame models illuminate the scene naturally and interact with the objects therein without any additional effort. Since the data capture and volumetric reconstruction steps are based on off-the-shelf techniques, we will mainly describe the optimization algorithms for appearance modeling in the following sections. \subsection{Problem formulation} For physically-plausible fire appearance modeling, our goal is to find physical parameters of a flame such that, when they are used to render an image, the output resembles an input photograph. Given the reconstructed fire volume, the parameters we are interested in are the camera exposure setting, the flame's volumetric temperature, and the fuel density distribution. \noindent Formally, the above statement can be written as: \begin{equation} \label{eq:problem_formualation} \argmin_{\lbrace t, d, s \rbrace} E = \Vert r(t, d, s ) - I_{cam} \Vert, \lbrace t, d \rbrace \in \mathbb{R}^{n_v}, s \in \mathbb{R}, \end{equation} where $E$ is the energy function to be minimized, $t$ and $d$ are volumes of temperatures and densities with $n_v$ voxels, $I_{cam}$ is the input photograph captured by the camera, $s$ is the exposure of $I_{cam}$, $r: \mathbb{R}^{2 n_v + 1} \mapsto \mathbb{R}^{3 \times n_I}$ is the rendering function that transforms the volumes $t$ and $d$ and the exposure $s$ into an RGB image $I_{cg}$, $n_I$ is the total number of pixels in $I_{cg}$ and $I_{cam}$, and $\Vert \cdot \Vert$ is a similarity metric which compares the source with the target and will be detailed in the next section. The rendering function $r$ is computed using the Radiative Transfer Equation (RTE)~\cite{Siegel:2002}. For the absorption and emission coefficients we use the methods described by Pegoraro and Parker~\cite{Pegoraro:2006}. \subsection{Error function} In order to estimate the parameters defined in the previous section, we present an error function inspired by Markov Random Field optimization methods, which define probability estimates for the data together with pairwise terms for the variables to be estimated. The \emph{data term} contains an appearance matching function which ensures that the synthetic image $I_{cg}$ generated by our method matches the camera input image $I_{cam}$. Histogram distance functions~\cite{Rubner:2000} have been used in previous work~\cite{Dobashi:2012}; however, these metrics fail to capture the complex features present in flames. Direct pixel-to-pixel comparison can be applied since the images are aligned. We measure the appearance error as the perceived color difference for each pixel \begin{equation} \label{eq:appearance_matching} E_{am}\left( I_{cg}, I_{cam} \right) = w_{am} \sum_{i = 1}^{n_I} \Vert I_{cg}(i) - I_{cam}(i) \Vert, \end{equation} where $w_{am}$ is a weighting factor, $I_{cg}(i)$ and $I_{cam}(i)$ are the values of the $i$-th pixels in the CIELab color space~\cite{fairchild2013color} of the synthetic and target images, and the error distance between two pixel colors $C_1$ and $C_2$ is measured as \begin{equation} \label{eq:pixel-lab-dist} \Vert C_1 - C_2 \Vert = \sqrt{ \left( l_1 - l_2 \right)^2 + \left( a_1 - a_2 \right)^2 + \left( b_1 - b_2 \right)^2 }. \end{equation} For the \emph{pairwise term} a smoothness estimate is used to avoid unrealistic variations between neighboring voxels, as well as tiling artifacts.
This is justified by the heat diffusion equation: even with the chaotic nature of fire, nearby points in space will have similar temperatures. Note that the previous statements only hold if the volume resolution is large enough. The term is computed as \begin{equation} \label{eq:gradient_fnc} E_{sm}\left( v_i, \mathcal{N}_i \right) = w_{sm} \sum_{j\in \mathcal{N}_i} \vert v_j - v_i \vert, \end{equation} where $w_{sm}$ is a weighting factor, $v_i$ is the physical parameter (either a temperature or a density) of the $i$-th voxel, and $\mathcal{N}_i$ is a set with the indices of the 18 closest neighboring voxels of the $i$-th voxel. A lower number of neighbors, for instance the six immediately adjacent voxels (one on each side per dimension), proved insufficient to maintain smoothness in highly disconnected flames, while values larger than 18 make the pairwise term a significant computational burden. As several views of the fire volume are available, the method can be easily extended to match the appearance of each of the views. The total score is computed by integrating the values of each view. Naturally, the complexity in the evaluation of the data term increases linearly with the number of input images used during the optimization. For simple and mostly symmetric fires, e.g. candle flames, two views are generally enough to provide good results. However, when dealing with more complex shapes the number of cameras needed might increase up to six. Finally, the total score is computed as the sum of the data and pairwise terms, $E = E_{am} + E_{sm}$, whose relative influence is controlled by the weights $w_{am}$ and $w_{sm}$. \begin{figure}[t] \centering \includegraphics[width=0.3\linewidth]{images/cluster3} \, \includegraphics[width=0.3\linewidth]{images/cluster10} \, \includegraphics[width=0.3\linewidth]{images/cluster20} \caption{Clustering examples; each cube depicts a different cluster; left to right: 3, 10 and 20 divisions.} \label{fig:clustering} \end{figure} \begin{algorithm}[t] \caption{Parameter estimation procedure.} \label{tb:icm-pseudocode} initialise\_clusters\_and\_other\_variables\; \While{not exitConditions()}{ \For{$i=0$ to $n_{cluster}$ }{ \For{$j=0$ to $n_{samples}$ }{ $v_{new} =$ sampleNewValue()\; new\_score $=$ dataTerm($v_{new}$) + pairwiseTerm($v_{new}$)\; \If{\upshape{new\_score < current\_score(i)}}{ $(^*x)(i) = v_{new}$\; current\_score(i) = new\_score; } } } exposure = estimateNewExposure()\; updateClustering()\; current\_score = updateScore()\; total\_score = sum(current\_score)\; \eIf(\tcp*[h]{Switch variable}){$x == \&$\upshape{densities}}{ $x = \&$temperatures;\ }{ $x = \&$densities;\ } } \end{algorithm} \subsection{Estimation method} Finding a global solution for the system is intractable due to the non-linearities in the equations that govern flame behavior. Estimating the Jacobian is computationally expensive; therefore we opted for a local gradient sampling strategy based on Coordinate Descent, which allows us to inexpensively evaluate the gradient locally. For each dimension a simple global evaluation is computed, with the (inaccurate) assumption that there are no dependencies between variables, and we greedily take the value with the lowest error. The optimization procedure is shown in Algorithm~\ref{tb:icm-pseudocode}, where $\&$ and $(^*)$ denote respectively the memory address and pointer dereference operators.
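To make the procedure concrete, the sketch below gives a minimal Python rendition of one greedy pass of this per-cluster sampling. It is an illustration written for this text, not our actual implementation: the \texttt{render} argument stands in for the physically-based renderer, \texttt{neighbors} for the 18-neighbor lists, and all names and default values are assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def data_term(i_cg, i_cam, w_am=1.0):
    # Pixel-wise CIELab distance between rendered and captured images,
    # both passed as (n_pixels, 3) arrays already converted to CIELab.
    return w_am * np.linalg.norm(i_cg - i_cam, axis=1).sum()

def pairwise_term(values, neighbors, w_sm=10.0):
    # Smoothness: sum of |v_j - v_i| over each voxel's neighbor list.
    return w_sm * sum(abs(values[j] - values[i])
                      for i, nbrs in enumerate(neighbors) for j in nbrs)

def total_score(values, neighbors, render, i_cam):
    return data_term(render(values), i_cam) + pairwise_term(values, neighbors)

def sweep(values, clusters, neighbors, render, i_cam, sigma, n_samples=8):
    # One greedy coordinate-descent pass: each cluster proposes a few
    # Gaussian samples around its current value and keeps improvements.
    score = total_score(values, neighbors, render, i_cam)
    for voxels in clusters:              # cluster = array of voxel indices
        for _ in range(n_samples):
            v_new = rng.normal(values[voxels[0]], sigma)
            trial = values.copy()
            trial[voxels] = v_new        # one shared value per cluster
            s = total_score(trial, neighbors, render, i_cam)
            if s < score:
                values, score = trial, s
    return values, score
\end{verbatim}
In the full procedure this sweep alternates between the temperature and density volumes, with the exposure and the cluster resolution updated between sweeps as in Algorithm~\ref{tb:icm-pseudocode}.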
Note that, similarly to the expectation maximization (EM) algorithm~\cite{dempster1977maximum}, each collection of variables $t, d$ and $s$ is optimized sequentially, by fixing the values of the remaining ones. In order to avoid local minima and improve the convergence rate of the algorithm, a clustering approach is used, as shown in Figure~\ref{fig:clustering}. Initially the number of clusters in the volume is reduced to only two, as we have a sparse representation. The voxel indices are ordered along the yzx dimensions, with y up and x right. The first half of the indices will be treated as the first cluster and the rest as the second. The updateClustering() function increases the number of clusters as the optimization progresses; on each update the resolution is doubled. The temperature and density in each voxel, as well as the exposure, can take floating-point values. From a purely theoretical perspective the number of labels is infinite, and in practice it is too large to evaluate every label on each iteration. As an approximation we sample around the current point using a Gaussian distribution centered on the current value, with a standard deviation which is inversely proportional to the iteration number. We model the exposure of the camera by a single floating-point value constrained to the range $s \in [0.01,1000]$. Since modifying the exposure does not modify the voxel neighbor relationships, only the data term is evaluated for each new sample. If there is a decrease in the total error, the exposure is updated with the new value. \begin{algorithm}[t] \caption{Simplified parameter estimation procedure.} \label{tb:icm-common-d-pseudocode} initialise\_clusters\_and\_other\_variables\; \While{not exitConditions()}{ \For{$i=0$ to $n_{cluster}$ }{ \For{$j=0$ to $n_{samples}$ }{ $v_{new} =$ sampleNewValue()\; new\_score $=$ dataTerm($v_{new}$) + pairwiseTerm($v_{new}$)\; \If{\upshape{new\_score < current\_score(i)}}{ temperatures$(i) = v_{new}$\; current\_score(i) = new\_score; } } } density\_factor = estimateNewDensityFactor()\; exposure = estimateNewExposure()\; updateClustering()\; current\_score = updateScore()\; total\_score = sum(current\_score)\; } \end{algorithm} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.23\linewidth} \includegraphics[height=3.25cm]{images/goal1-candle} \caption{Candle fire, front view.} \end{subfigure} \, \begin{subfigure}[t]{0.23\linewidth} \includegraphics[height=3.25cm]{images/goal2-candle} \caption{Candle fire, side view.} \end{subfigure} \, \begin{subfigure}[t]{0.23\linewidth} \includegraphics[height=3.25cm]{images/goal1-large-flame} \caption{Fire licks, front view.} \end{subfigure} \, \begin{subfigure}[t]{0.23\linewidth} \includegraphics[height=3.25cm]{images/goal2-large-flame} \caption{Fire licks, side view.} \end{subfigure} \caption{Goal images for our data, (a) and (b) front and side view of a candle flame, (c) and (d) front and side view of several fire licks.} \label{fig:goal-images} \end{figure*} In the initialization step, each voxel is assigned a temperature and density randomly sampled with uniform probability from a range of minimum and maximum values given by the physical nature of the data~\cite{Howell:2002}: $t \in [300, 2300]$ kelvin and $d \in [0.01 \times 10^{27}, 500 \times 10^{27}]$ particles per cubic meter. The initial exposure is sampled using a logarithmic scale from its own bounded range, and the exposure value with the lowest error is chosen.
Once the initial value for the exposure has been estimated, the mean value for $t$ and $d$ of the voxels that compose a cluster is assigned to the cluster itself. The standard deviation of all the Gaussian distributions that are sampled during the optimization is progressively decreased as the number of iterations increases. In practice a simplified optimization method can be used instead of the one shown in Algorithm~\ref{tb:icm-pseudocode}. The RGB values can be a good approximation to the real fuel density field. We take the red channel of each voxel from the volumetric reconstruction stage and we normalize all the values. During the optimization a single density scale factor is estimated instead of the full range of possible densities per voxel, in a step analogous to the exposure estimation. This variation of the optimization procedure is shown in Algorithm~\ref{tb:icm-common-d-pseudocode}. With this approximation the dimensionality of the parameters to be estimated is reduced from $2 n_v +1$ to $n_v + 2$, i.e.~the number of renderings required is reduced by half. In our experiments this simplification was able to yield compelling results. \section{Related work} In this section we present an overview of different techniques used for fire rendering and inverse rendering problems. A more detailed discussion of fire modeling, simulation and rendering is given in Huang et al.~\cite{Huang:2014}. \subsection{Fire rendering} Many physically-based methods have been proposed to render participating media realistically. Typically, approximate solutions for the Radiative Transport Equation (RTE)~\cite{Howell:2002} are computed. Rushmeier et al.~\cite{Rushmeier:1995} presented a method to perform accurate ray casting on sparse measured data. The fire was modeled as a series of stacked cylindrical rings, where each ring has uniform properties. A technique to animate fire with suspended particles was introduced by Feldman et al.~\cite{Feldman:2003}. Emitted light was computed using Planck's formula of black body radiation; however, their RGB mapping requires a manual adjustment using images of real explosions. Nguyen et al.~\cite{Nguyen:2002} proposed a ray marching technique using black body radiation as well; scattering in the medium and the observer's visual adaptation to the fire are also modeled. The visual adaptation method assumes that the hottest part of the flame maps to white for the observer. An extension was presented by Pegoraro and Parker~\cite{Pegoraro:2006}; their model has physically-based absorption, emission and scattering properties. The spectroscopic characteristics of different fuels are achieved by modeling the transitions of electrons between different energy states in the molecules. The method allows for non-linear light trajectories in the medium due to refraction and it includes visual adaptation effects by means of a post-processing mechanism.
\begin{figure*}[t] \includegraphics[width=\textwidth]{images/overview} \caption{Overview of our fire modeling method, from left to right: a multi-view camera setup is used to \emph{capture} real flames; density 3D volumes are computed in the \emph{reconstruction} stage; our \emph{optimization} procedure finds temperature and fuel density fields that, when rendered, resemble the captured images; finally, the output can be applied for realistic \emph{illumination} of virtual scenes.} \label{fig:overview} \end{figure*} Horvath and Geiger~\cite{Horvath:2009} proposed a rendering method whose main objective was user-friendliness for artists. The authors perform simple volume rendering on several fixed camera slices to generate an image. Black body radiation is used to compute light emission; the result is motion-blurred with a filter based on the velocities in the slices, and heat distortion is added as a post-processing filter defined by the user. Rendering flames at interactive frame rates has also been explored; these techniques inevitably sacrifice quality for performance. Bridault et al.~\cite{Bridault:2006} used a spectrophotometer to capture photometric distributions of candles. The intensities are stored in a texture and changes in illumination over time are approximated with an attenuation factor proportional to the size of the flame. A plane blending technique was used in~\cite{Zhang:2011}, where a one-dimensional color texture serves as a transfer function to convert flow attributes to colors and opacities. \subsection{Parameter optimization} Dobashi et al.~\cite{Dobashi:2012} proposed a method to compute eight rendering parameters for cloud shapes using a real cloud photograph. The authors' technique is limited to simplified scenes with a single light and the cloud, where the camera and the light position are fixed. Under these restrictions a set of images can be pre-computed to accelerate the optimization. Klehm et al.~\cite{Klehm:2014} presented a multi-view optimization framework to modify the appearance of volumetric data using input images from several views. The goal images must perfectly match the shape of the data and the position of the cameras in the scene. A texture synthesis technique to generate fire animations was introduced in~\cite{Jamriska:2015}. The method requires a hand-made motion field and an alpha mask of the desired result, both of which are used to generate a new sequence using data from an existing video exemplar. Okabe et al.~\cite{Okabe:2015} presented a technique to reconstruct a volume shape using a sparse set of images. The texture for the volume is transferred separately using a pyramid-based texture synthesis approach. Recently a method to generate textures for fluids in animations was proposed by Gagnon et al.~\cite{Gagnon:2016}. From an initial set of $uv$ coordinates, the new ones are generated by tracking the topological deformations in a number of sample points in the fluid. Shading parameters for fabrics were estimated using photographs to match appearance and Micro-CT scans to reconstruct fiber-level geometry~\cite{Khungurn:2015}. The previous methods only consider flames in isolation~\cite{Okabe:2015} or only compute a subset of the parameters, such as the final extinction coefficients at each voxel~\cite{Klehm:2014} or other simpler abstractions~\cite{Dobashi:2012}. In contrast, our method operates in the realm of physically valid temperatures and fuel densities.
\section{Results and discussion} \label{sec:results-and-discussion} For the results shown in this section we used a server with 24 Intel Xeon E5-2620v2 2.10GHz processors and 62GB of RAM. The input parameters for all the figures in the paper were $w_{am} = 1$ and $w_{sm} = 10$. The volume data for the flame fuel density and temperature distributions have a resolution of $128 \times 128 \times 128$ voxels. The images were assembled in Maya 2015 and rendered using a custom Mental Ray CPU shader which implements ray marching for fire rendering based on the work presented by Pegoraro and Parker~\cite{Pegoraro:2006}. The evaluation times range from two to twelve hours using an unoptimized Matlab CPU implementation, where the bulk of the computation time goes into the fitness evaluation, i.e.~rendering one image with a set of input parameters. The images used during the optimization have a resolution of $320 \times 240$ pixels, while the results which show the output flames in more complex scenes were rendered at a higher quality, $960 \times 540$. For the figures shown in the paper, the goal images were taken from the frontal and side (90 degrees) cameras, as shown in Figure~\ref{fig:goal-images}. The optimization time is proportional to the image and volume sizes, as well as the sampling rates for the optimization and rendering parameters. Unless otherwise stated all the results were generated with Algorithm~\ref{tb:icm-common-d-pseudocode}. The voxels in the volume are flattened into an array and are ordered along the yzx dimensions, with y up and x right. For a $128 \times 128 \times 128$ voxel space, each of the temperature and density volumes generated by this operation is $2097152$-dimensional, which makes the optimization challenging. To address this issue, we use a sparse representation for the volume data. After the reconstruction step any RGB voxel value that is below a certain threshold is considered to be empty, which reduces the dimensionality of the data during the search. To be able to validate our method, we initially optimized the parameters with respect to goal images rendered from synthetic data. The RGB volume extracted from the multi-view 3D reconstruction algorithm from a video of a real flame is used as initialization. The density field is the R component in each voxel scaled to a plausible density; the same procedure is applied to the temperature using the maximum of each channel. The goal image is the result of rendering the previous data with a manually tuned exposure. Figure~\ref{fig:ground-truth-results} shows the result of the optimization via Algorithm~\ref{tb:icm-pseudocode} with a volume resolution of $32 \times 32 \times 32$ voxels. \begin{figure}[t] \centering \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/goal-synthetic} \caption{Goal image.} \end{subfigure}% ~ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-synthetic} \caption{Result image.} \end{subfigure} \caption{Fire modeling with our method using synthetic data, (a) the goal image rendered with synthetic data and (b) the optimization result.} \label{fig:ground-truth-results} \end{figure} Given the non-linear interaction between the input parameters in the RTE and the ambiguity introduced by 2D projection for image rendering, it can be assumed that the optimization space has many local minima. As a result, different initialization parameters may lead to different outputs.
However, our objective is not to accurately estimate the temperature and density fields, but to produce plausible estimates which faithfully reproduce the input images. As such, those variations are satisfactory for our purposes. The evolution of the temperature values during the optimization with their corresponding fitness value is shown in Figure~\ref{fig:objective-function}. The figure shows how the objective function decreases rapidly in the first few iterations and stabilizes as the optimization progresses. Note how our clustering approach first finds the right color for large regions, which are progressively refined as the resolution increases. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{images/error_function} \caption{Transition of the error function during the optimization.} \label{fig:objective-function} \end{figure} Analyzing the spatial distribution of the output volumes after each iteration gives some insight into how the Coordinate Descent method explores the error function space. This test reveals how overall variations in temperatures and densities map to changes in the error function, which in turn is a measure of color variation. Each flame is $(2n_v+1)$-dimensional; to simplify the interpretation of the data, only the $n_v$-dimensional temperature fields are depicted in Figure~\ref{fig:mds1}. Each circle represents a flame temperature field at a given iteration; multidimensional scaling~\cite{Seber:1984} is applied to project the data into a 2D space, whose axes are in arbitrary units that preserve the Euclidean distances as measured in the parameter space. The radii of the circles are proportional to the score of the error function, and the lines connect adjacent iterations. It can be seen clearly, especially when making a comparison to Figure~\ref{fig:objective-function}, that a large variation in the temperature field does not necessarily imply a large variation in the error function. \begin{figure}[t] \includegraphics[width=0.99\linewidth]{images/mds1} \caption{Temperature fields during the optimization projected on a 2D plane; the x and y axes are Euclidean distances in temperature space and the radii of the circles are proportional to the value of the error function.} \label{fig:mds1} \end{figure} To visually demonstrate the effects of each term in the error function, we optimized only with the data term; the result is shown in Figure~\ref{fig:optimized-Cam1-without-neigh}. Note how the flame has drastic temperature changes between neighboring voxels and the shape has little resemblance to the original fire. The same optimization adding the pairwise smoothness term to the objective function is shown in Figure~\ref{fig:optimized-Cam1-with-neigh}. The shape now better resembles the original, and the overall color remains mostly the same. \begin{figure}[t] \centering \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam1-without-neigh} \caption{Without smoothness.} \label{fig:optimized-Cam1-without-neigh} \end{subfigure}% ~ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam1-with-neigh} \caption{With smoothness.} \label{fig:optimized-Cam1-with-neigh} \end{subfigure} \caption{Effects of the smoothness term, (a) output flame without the smoothness term and (b) result with the smoothness term.} \label{fig:smoothness-effects} \end{figure} The number of views used as goal images also plays an important role in the optimization procedure.
The output results for a simple candle flame using a single frontal view and a pair of frontal and side views are shown in Figure~\ref{fig:single-multi-view}. Note that in the single view setting there are more degrees of freedom, which allows the algorithm to better match the shape and color of the input image. However, severe artifacts including discontinuities can be observed from other viewing angles. Using two goal images for the modeling is enough in this scenario to produce satisfactory renders from novel views. \begin{figure}[t] \centering \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam1-single-view} \caption{Single goal, optimized view.} \end{subfigure}% ~ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam2-single-view} \caption{Single goal, novel view.} \vspace*{2mm} \end{subfigure} \\ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam1-two-goal} \caption{Two goals, optimized view.} \end{subfigure}% ~ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/optimized-Cam3-two-goal} \caption{Two goals, novel view.} \end{subfigure} \caption{Comparison of single-goal optimization, (a) optimized view and (b) novel view, versus multi-view optimization, (c) optimized view and (d) novel view.} \label{fig:single-multi-view} \end{figure} The clustering approach plays a crucial part in the optimization procedure. Without it, the system is more likely to fall into worse local minima, as it loses the power to induce large changes in the rendered images. A comparison of an optimization with and without this feature is shown in Figure~\ref{fig:no_clustering}. The clustered version reaches a lower error and more visually pleasing results with significantly fewer function evaluations. For this particular experiment the temperature and density volumes were downsampled to a resolution of $64 \times 64 \times 64$ voxels in order to better show the difference between the images. \begin{figure}[t] \centering \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/cluster-25} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/cluster-50} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/cluster-75} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/cluster-100} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/no-cluster-25} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/no-cluster-50} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/no-cluster-75} \includegraphics[width=0.24\linewidth, trim={3cm 0.5cm 3.5cm 0.5cm}, clip]{images/no-cluster-100} \caption{The effects of clustering on the optimization: top with clustering, bottom without; left to right: after 30 minutes, 1 hour, 2 hours and 3 hours of processing. It is evident how much clustering improves the efficiency of the computation.} \label{fig:no_clustering} \end{figure} To further demonstrate the advantages of our method, we compare the traditional approach for flame modeling in Maya with ours. Figure~\ref{fig:teaser} was rendered in Maya with flame emissivity manually disabled, i.e.~the flame would not illuminate other objects in the scene, and a spherical light with the output color of the voxel with the highest temperature was placed in the center of the flame volume.
The result is shown in Figure~\ref{fig:image-fake-ilum}; note how the shadows are not as soft and the most illuminated areas do not correspond to the flame position. These defects become more apparent as the flame complexity increases, while in simple fires, e.g.~candle flames, they are hardly noticeable. Other benefits of our system include the ability to model the human eye's visual adaptation to different illumination stimuli via HDR-to-LDR conversion techniques~\cite{Banterle:2011}. An example using the method of Reinhard et al.~\cite{Reinhard:2002} is shown in Figure~\ref{fig:light_adaptation}; the effect is comparable to increasing the exposure of the camera.
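For reference, a compact sketch of the global operator of Reinhard et al.~\cite{Reinhard:2002} applied to a luminance channel is given below. This is a textbook formulation written here for illustration, not code from our system; the \texttt{key} parameter plays a role comparable to the camera exposure discussed above.
\begin{verbatim}
import numpy as np

def reinhard_global(lum, key=0.18, l_white=None, eps=1e-6):
    # Log-average luminance of the HDR input.
    log_avg = np.exp(np.mean(np.log(eps + lum)))
    # Exposure-like scaling: larger `key` brightens the image.
    l_scaled = key * lum / log_avg
    if l_white is None:
        l_white = l_scaled.max()  # smallest luminance mapped to pure white
    # Global operator: compresses high luminances into the display range.
    return l_scaled * (1.0 + l_scaled / l_white ** 2) / (1.0 + l_scaled)
\end{verbatim}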
{ "timestamp": "2018-04-17T02:08:00", "yymm": "1804", "arxiv_id": "1804.05261", "language": "en", "url": "https://arxiv.org/abs/1804.05261" }
\section{INTRODUCTION} Let $R$ be a cyclically reduced word which is not a proper power and has length at least two in the free group $F = F(X)$. In \cite{We1}, Weinbaum showed that some cyclic conjugate of $R$ has a decomposition of the form $UV$, where $U$ and $V$ are non-empty cyclic subwords of $R$, each of which is uniquely positioned in $R$, i.e. occurs exactly once as a cyclic subword of $R$. Weinbaum also conjectured that $U$ and $V$ can be chosen so that neither is a cyclic subword of $R^{-1}$. A stronger version of his conjecture was proved by Duncan and Howie \cite{dh}. In this paper, a cyclic subword is uniquely positioned if it is non-empty, occurs exactly once as a subword of $R$, and does not occur as a subword of $R^{-1}$. \medskip From now on, $R$ is a word in the free product of groups $G_1$ and $G_2$ which is not a proper power and has length at least two. Before we can continue, we need to define the notion of the $n$-\textit{length} of a word. We do this in the special case when $n=2$ and the word is $R$, but of course the definition can be generalised for any $n>1$ and any word in a group. \medskip For each element $a$ of order $2$ involved in $R$, let $D(a)$ denote its number of occurrences in $R$. In other words, suppose $R$ has free product length $2k$ for some $k>0$. Then $R$ has an expression of the form $$R=\prod_{i=1}^ka_{i}b_{i},$$ with $a_i\in G_1$ and $b_i\in G_2$. If $a^2=1$, then we define $D(a)$ to be the cardinality of the set $\{i\in \{1,2,\cdots,k\}~|~a_i=a\}$. Denote by $\textbf{S}_{R}$ the symmetrized closure of $R$ in $G_1*G_2$, i.e. the smallest subset of $G_1*G_2$ containing $R$ which is closed under cyclic permutations and inversion. Since $D(a)$ is unchanged by replacing $R$ with any other element in $\textbf{S}_{R}$, we make the following definition. \begin{defn} The $2$-\textit{length} of $\textbf{S}_{R}$, denoted by $D_2(\textbf{S}_{R})$, is the maximum of $D(a)$ over the letters $a$ of order $2$ involved in $R$. \end{defn} In this paper, we will be mostly concerned with the element $R'$ in $\textbf{S}_{R}$ of the form $$R'= \prod_{i= 1}^{D_2(\textbf{S}_{R})}aM_i,$$ with $D(a) = D_2(\textbf{S}_{R})$ and $M_i\in G_1*G_2$. It follows that each $M_i$ has odd length (as a reduced but not cyclically reduced word in the free product) and does not contain any letter equal to $a$. When we use the notation ``$=$'' for words, it will mean identical equality. We will use $\ell(\cdot)$ to mean the length of a reduced free product word which is not necessarily cyclically reduced. \medskip As mentioned in the abstract, the authors of \cite{dh} observed that in the case when $D_2(\textbf{S}_{R})=0$, the word $R$ can be decomposed as a product of two uniquely positioned subwords. Using that, they showed that every minimal picture over a one-relator product with relator $R^3$ satisfies $C(6)$, from which important results about the group were proved. In this paper we work in a more general setting where $D_2(\textbf{S}_{R})\leq 2$. It is no longer always the case that $R$ has a decomposition into uniquely positioned subwords. However, we can show that $R$ has a certain structure which allows us to obtain similar results. This idea is captured in the following theorem. \begin{thm}\label{the1} Let $R$ be a word of length at least $2$ in a free product, which is not a proper power. Suppose also that $D_2(\textbf{S}_{R})\leq 2$.
Then either a cyclic conjugate of $R$ has a decomposition of the form $UV$ such that $U$ and $V$ are uniquely positioned, or one of the following holds: \begin{itemize} \item[(a)] $D_2(\textbf{S}_{R})=1$ and $R$ has a cyclic conjugate of the form $aXbX^{-1}$ or $aM$, where $a,b$ are letters of order $2$ and $M$ does not involve any letter of order $2$. \item[(b)] $D_2(\textbf{S}_{R})=2$ and $R$ has a cyclic conjugate of the form $aXbX^{-1}$ where $a$ is a letter of order $2$. \end{itemize} \end{thm} Note that in Theorem \ref{the1} the requirement that $D_2(\textbf{S}_{R})\leq 2$ is optimal, in the sense that there is no hope of obtaining such a result when $D_2(\textbf{S}_{R})> 2$. To see why this is true, consider the word $S=\prod_{i=1}^{n}ab_i$, with $a\in G_1$ and $b_i\in G_2$, $i=1,2,\cdots,n$. Suppose that $b_i\neq b_j$ for $i\neq j$ and $a^2=b_i^2=1$ for $i=1,2,\cdots, n$. It is easy to verify that $D_2(\textbf{S}_{S})=n$ and that Theorem \ref{the1} fails for $n>2$. In other words, $S$ does not have a decomposition into two uniquely positioned subwords, nor does it have a decomposition of the form $xXyX^{-1}$ such that $x^2=1$. \medskip Further analysis of the structure of $R$ leads us to the following theorem, which is our main result in this paper. \begin{thm}\label{the2} Let $R$ satisfy the conditions of Theorem \ref{the1}. Then either any minimal picture over $G$ satisfies $C(6)$ or $R$ has the form (up to cyclic conjugacy) $aXbX^{-1}$ with $a^2=1\neq b^2$. \end{thm} The rest of the paper is arranged as follows. We begin in Section $2$ by providing some literature on related results. We also recall the basic ideas about pictures. In Section $3$ we prove a number of lemmas about word combinatorics and pictures. In particular we deduce Theorem \ref{the1}. These lemmas are then applied in Section $4$ to prove the main result and deduce a number of applications. \section{PRELIMINARIES} Let $G_1$ and $G_2$ be nontrivial groups and $w\in G_1*G_2$ a cyclically reduced word. Let $G$ be the quotient of the free product $G_1*G_2$ by the normal closure of $w$, denoted $N(w)$. Then $G$ is called a one-relator product and denoted by $$G= (G_1*G_2)/N(w).$$ We refer to $G_1,G_2$ as the factors of $G$, and to $w$ as the relator. For us, $w = R^m$, where $R$ is a cyclically reduced word which is not a proper power and $m\geq 3$. If $m\geq 4$, a number of results about $G$ were proved in \cite{jh1,jh2,jh3}. These results were also proved in \cite{dh} when $m= 3$, but only under the extra condition that $R$ involves no letter of order $2$. We also mention that the case when $m= 2$ is largely open. For partial results in this case see \cite{fhr,Ih1,Ih2}. The aim of this paper is to extend the result in \cite{dh} by allowing letters of order $2$ in $R$. We also require results about pictures over $G$, in particular the fact that $R^m$ satisfies the small cancellation condition $C(2m)$ when $R$ has a certain form. Pictures can be seen as duals of van Kampen diagrams and have been widely used to prove results about one-relator groups and one-relator products. Below, we recall only the basic concepts on pictures over a one-relator product as given in \cite{Ih1}. For more details, the reader can see \cite{jh1,jh2,jh3,dh,Ih2}.
\subsection{PICTURES}\label{sec4} A \textit{picture} $\Gamma$ over $G$ on an oriented surface $\Sigma$ is made up of the following data: \begin{itemize} \item a finite collection of pairwise disjoint closed discs in the interior of $\Sigma$ called \textit{vertices}; \item a finite collection of disjoint closed arcs called \textit{edges}, each of which is either: a simple closed arc in the interior of $\Sigma$ meeting no vertex of $\Gamma$, a simple arc joining two vertices (possibly the same one) of $\Gamma$, a simple arc joining a vertex to the boundary $\partial \Sigma$ of $\Sigma$, or a simple arc joining $\partial \Sigma$ to $\partial \Sigma$; \item a collection of \textit{labels} (i.e. words in $G_1\cup G_2$), one for each corner of each \textit{region} (i.e. connected component of the complement in $\Sigma$ of the union of vertices and arcs of $\Gamma$) at a vertex, and one along each component of the intersection of the region with $\partial \Sigma$. For each vertex, the label around it spells out the word $R^{\pm m}$ (up to cyclic permutation) in clockwise order as a cyclically reduced word in $G_1 * G_2$. We call a vertex \textit{positive} or \textit{negative} depending on whether the label around it is $R^{ m}$ or $R^{-m}$ respectively. \end{itemize} For us $\Sigma$ will either be the $2$-sphere $S^2$ or the $2$-disc $D^2$. A picture on $\Sigma$ is called \textit{spherical} if either $\Sigma=S^2$, or $\Sigma=D^2$ but with no arcs connected to $\partial {D^2}$. If $\Gamma$ is not spherical, $\partial {D^2}$ is one of the boundary components of a non-simply connected region (provided, of course, that $\Gamma$ contains at least one vertex or arc), which is called the \textit{exterior}. All other regions are called \textit{interior}. \medskip We shall be interested mainly in \textit{connected} pictures. A picture is \textit{connected} if the union of its vertices and arcs is connected. In particular, no arc of a connected picture is a closed arc or joins two points of $\partial \Sigma$, unless the picture consists only of that arc. In a connected picture, all interior regions $\bigtriangleup$ of $\Gamma$ are simply-connected, i.e. topological discs. Just as in the case of vertices, the label around each region -- read {\em anticlockwise} -- gives a word which in a connected picture is required to be trivial in $G_1$ or $G_2$. Hence it makes sense to talk of $G_1-$regions or $G_2-$regions. Each arc is required to separate a $G_1-$region from a $G_2-$region. This is compatible with the alignment of regions around a vertex, where the labels spell a cyclically reduced word, so must come alternately from $G_1$ and $G_2$. \medskip A vertex is called \textit{exterior} if it is possible to join it to the \textit{exterior} region by some arc without intersecting any arc of $\Gamma$, and \textit{interior} otherwise. For simplicity we will indeed assume from this point on that our $\Sigma$ is either $S^2$ or $D^2$. It follows that reading the label round any \textit{interior} region spells a word which is trivial in $G_1$ or $G_2$. The \textit{boundary label} of $\Gamma$ on $D^2$ is a word obtained by reading the \textit{labels} on $\partial D^2$ in an \textit{anticlockwise} direction. This word (which may be assumed to be cyclically reduced in $G_1 * G_2$) represents the identity element of $G$. In the case where $\Gamma$ is spherical, the \textit{boundary label} is an element in $G_1$ or $G_2$ determined by the other labels in the \textit{exterior} region.
\medskip Two distinct vertices of a picture are said to \emph{cancel} along an arc $e$ if they are joined by $e$ and if their labels, read from the endpoints of $e$, are mutually inverse words in $G_1 * G_2$. Such vertices can be removed from a picture via a sequence of \textit{bridge moves} (see Figure \ref{bridge} and \cite{dh} for more details), followed by deletion of a \textit{dipole}, without changing the boundary label. A \textit{dipole} is a connected spherical sub-picture that contains precisely two vertices, does not meet $\partial \Sigma$, and such that none of its interior regions contain other components of $\Gamma$. This gives an alternative picture with the same boundary label and two fewer vertices. \begin{figure}[h!] \includegraphics[scale=0.5]{image.png} \caption{\textit{Diagram showing a bridge move.}} \label{bridge} \end{figure} \medskip We say that a picture $\Gamma$ is \textit{reduced} if it cannot be altered by bridge moves to a picture with a pair of cancelling vertices. A picture $\Gamma$ on $D^2$ is \textit{minimal} if it is non-empty and has the minimum number of vertices amongst all pictures over $G$ with the same boundary label as $\Gamma$. Clearly minimal pictures are reduced. Any cyclically reduced word in $G_1 * G_2$ representing the identity element of $G$ occurs as the boundary label of some reduced picture on $D^2$. \begin{defn}\label{rmk} Let $\Gamma$ be a picture over $G$. Two arcs of $\Gamma$ are said to be \textit{parallel} if they are the only two arcs in the boundary of some simply-connected region $\bigtriangleup$ of $\Gamma$. \end{defn} We will also use the term \textit{parallel} to denote the equivalence relation generated by this relation, and refer to any of the corresponding equivalence classes as a \textit{class of $\omega$ parallel arcs} or an \textit{$\omega$-zone}. Given an \textit{$\omega$-zone} with $\omega>1$ joining vertices $u$ and $v$ of $\Gamma$, consider the $\omega- 1$ two-sided regions separating these arcs. Each such region has a corner label $x_{_u}$ at $u$ and a corner label $x_{_v}$ at $v$, and the picture axioms imply that $x_{_u}x_{_v} = 1$ in $G_1$ or $G_2$. The $\omega -1$ corner labels at $v$ spell a cyclic subword $s$ of length $\omega-1$ of the label of $v$. Similarly the corner labels at $u$ spell out a cyclic subword $t$ of length $\omega -1$ of the label of $u$. Moreover, $s=t^{-1}$. If we assume that $\Gamma$ is reduced, then $u$ and $v$ do not cancel. In the spirit of small cancellation theory, we refer to $t$ and $s$ as \textit{pieces}. \medskip As in graphs, the \textit{degree} of a vertex in $\Gamma$ is the number of \textit{zones} incident on it. For a region, the \textit{degree} is the number of corners it has. We say that a vertex $v$ of $\Gamma$ satisfies the (local) $C(m)$ condition if it is joined to at least $m$ \textit{zones}. We say that $\Gamma$ satisfies $C(m)$ if every interior vertex satisfies $C(m)$. \section{TECHNICAL RESULTS} In this section we give a number of results on the structure of $R$ when $D_2(\textbf{S}_{R})\leq 2$, from which Theorem \ref{the1} follows. It is assumed throughout that no element of $\textbf{S}_{R}$ has the form $UV$, where $U$ and $V$ are both uniquely positioned. In particular, if $D(a)\geq 2$, there exists at most one $i\in \{1,2,\cdots,D(a)\}$ such that $M_i$ is uniquely positioned in the decomposition $R=\prod_{i=1}^{D(a)}aM_i$. We begin with the proof of part (a) of Theorem \ref{the1}.
\begin{lem}\label{lems1} If $D_2(\textbf{S}_{R})=1$, then $R$ has a cyclic conjugate of the form $aM$ or $aXbX^{-1}$, where $a,b$ are letters of order $2$ and $M$ does not involve any letter of order $2$. \end{lem} \begin{proof} Since $D_2(\textbf{S}_{R})=1$, we can assume without loss of generality that $R=aM$, where $M$ is a word in $G_1*G_2$ which does not involve $a$. We now proceed to show that either $M$ does not involve any letter of order $2$ or $M$ can be decomposed in the form $XbX^{-1}$, where $b\in G_1\cup G_2$ is a letter of order $2$ and $X$ is a (possibly empty) word in $G_1*G_2$. \medskip Suppose by contradiction that $M$ has a decomposition of the form $XbY$ with $b^2=1$ and $X\neq Y^{-1}$. Note that we can assume without loss of generality that $0<\ell(X)<\ell(Y)$. Clearly, if $\ell(X)=\ell(Y)>0$, then both $aX$ and $bY$ are uniquely positioned, which is a contradiction. There is nothing to prove if $\ell(X)=\ell(Y)=0$. Also if $\ell(X)= 0\neq \ell(Y)$, we get a contradiction since $ab$ and $Y$ will be uniquely positioned. Hence the inequality $0<\ell(X)<\ell(Y)$ holds. \medskip Suppose that $X^2=1$ and $Y^2=1$ hold simultaneously. Then by setting $X=X_1pX_1^{-1}$ and $Y=Y_1^{-1}qY_1$, where $X_1,Y_1\in G_1*G_2$ and $p,q$ are distinct letters of order $2$ in $G_1\cup G_2$, we can replace $R$ with $$R'=pX'qY',$$ where $X'=(Y_1bX_1)^{-1}$ and $Y'=Y_1aX_1$. Since $a\neq b$, we have that $X'\neq Y'^{-1}$. Given that $\ell(X')=\ell(Y')$, we easily conclude that $pX'$ and $qY'$ are uniquely positioned. This is a contradiction. \medskip Suppose that $X^2=1\neq Y^2$. By the assumption that $D_2(\textbf{S}_{R})=1$, we know that $X$ cannot be equal to a segment of $Y$. Hence $aX$ and $bY$ are both uniquely positioned. This is a contradiction. Similarly, suppose that $X^2\neq 1=Y^2$. Since $\ell(X)<\ell(Y)$ and $D_2(\textbf{S}_{R})=1$, we have that both $bY$ and $Ya$ are uniquely positioned. Hence, neither $aX$ nor $Xb$ is uniquely positioned. This means that $X^{-1}$ is identically equal to an initial and a terminal segment of $Y$. Therefore, $X^2=1$. This is a contradiction. \medskip Finally, if $X^2\neq 1\neq Y^2$, then $aXb$ and $Y$ are both uniquely positioned. This contradiction completes the proof. \end{proof} \begin{lem}\label{lems2} Suppose that $D_2(\textbf{S}_{R})=2$. Then $R$ has a cyclic conjugate of the form $aXbX^{-1}$ where $a$ is a letter of order $2$. \end{lem} \begin{proof} Since $D_2(\textbf{S}_{R})=2$, we can assume without loss of generality that $$R = aM_1aM_2,$$ where $M_1,M_2\in G_1*G_2$, and neither involves the letter $a$. By assumption, $M_1$ and $M_2$ cannot be uniquely positioned simultaneously. If $M_1^2=1$ and $M_2^2=1$ hold simultaneously, then by replacing $R$ with a cyclic conjugate, it can be shown that $R$ has the desired form. Without loss of generality, we can assume that $1\leq \ell(M_1)\leq \ell(M_2)$. \medskip Suppose that $\ell(M_1)=\ell(M_2)$. We cannot have $M_1= M_2$ since $R$ is not a proper power. Also if $M_1= M_2^{-1}$, then there is nothing to prove. So we assume without loss of generality that $M_1^2=1$ and $ M_2$ is uniquely positioned. If $\ell(M_1)=1$, then there is nothing to prove since $M_1$ has order $2$ and so $R$ has the desired form. Hence we assume that $\ell(M_1)= \ell(M_2)\geq 3$. Let $M_1=XpX^{-1}$ and $M_2=YqZ$, with $p,q\in G_1\cup G_2$, $p^2=1$, $\ell(Y)=\ell(Z)$ and $Y\neq Z^{-1}$ (as otherwise there is nothing to prove).
Then $$R=aXpX^{-1}aYqZ.$$ Set \begin{align*} U= & aYq, \ \ \qquad\qquad U'=qZa,\\ V= & ZaXpX^{-1},\qquad V'=XpX^{-1}aY. \end{align*} Clearly, $V^2\neq 1\neq V'^2$ since $D(a)=2$. Also, since $Y\neq Z^{-1}$, it follows that $V$ and $V'$ are both uniquely positioned. Hence neither $U$ nor $U'$ is uniquely positioned. It is easy to see that this means that $U^2=1$ or $U'^2=1$ or $U'=U^{\pm 1}$. However, any such occurrence would imply that $a=q$ or $Y=Z^{-1}$. This is a contradiction. \medskip Now suppose that $\ell(M_i)\neq \ell(M_j)$, where $i,j\in \lbrace 1,2\rbrace$ with $i\neq j$. Note that it is not possible to have $M_i^2\neq 1$ and $M_j^2\neq 1$ holding simultaneously, since that would imply that $aM_ia$ and $M_j$ are both uniquely positioned, assuming $\ell(M_i)<\ell(M_j)$. Suppose that $M_i^2=1$. Let $M_i=XpX^{-1}$ and $M_j=YqZ$, with $p,q\in G_1\cup G_2$, $p^2=1$, $\ell(Y)=\ell(Z)$ and $Y\neq Z^{-1}$. We claim that exactly one of $aY$ or $Za$ is uniquely positioned. This is because if both are uniquely positioned, then there is nothing to prove. Also if neither is uniquely positioned, then $Y=Z^{-1}$. In both cases we get a contradiction. \medskip By symmetry we assume that $aY$ is uniquely positioned, and hence $qZaM_i$ is not. This leads to a contradiction when $\ell(Y)\geq \ell(M_i)$, since that would mean $Y=Z^{-1}$. Suppose then that $\ell(Y)< \ell(M_i)$. This implies that $M_i$ is an initial or terminal segment of $M_j$. Hence, we have that $M_j=M_iW$ or $M_j=WM_i$ for some $W\in G_1*G_2$, depending on whether $M_i$ is an initial or terminal segment of $M_j$. Note that $\ell(W)=2n$ for some integer $n>0$. Now we replace $R$ by $$R'= pMpN,$$ where $M=X^{-1}aX$ and $N=X^{-1}WaX$ or $N=X^{-1}aWX$. We consider first the case when $N=X^{-1}WaX$. In this case, the initial segment $X^{-1}W$ of $N$ has length $\ell(X^{-1}W)\geq \ell(X)+2$. Since $D_2(\textbf{S}_{R})=2$, $X^{-1}W$ involves neither $a$ nor $p$. It follows that $aXpXaX^{-1}p$ is uniquely positioned. Hence, $X^{-1}W$ is not uniquely positioned. The length condition on $X^{-1}W$ implies that $(X^{-1}W)^2=1$. Again since $D_2(\textbf{S}_{R})=2$, $X$ does not involve a letter of order $2$. So $W=SxS^{-1}X$, for some (possibly empty) word $S$ and some letter $x$ of order $2$. Hence $$R'= pX^{-1}aXpX^{-1}SxS^{-1}XaX.$$ Consider the cyclic subwords $W_1=S^{-1}XaXpX^{-1}aX$ and $W_2=pX^{-1}Sx$. Clearly, $W_1^2\neq 1$, as otherwise $S$ is empty and, more importantly, $X^2=1$, which is a contradiction. Also, $W_2^2\neq 1$ since $p\neq x$. In fact, it is easy to see that both $W_1$ and $W_2$ are uniquely positioned. This is a contradiction. A similar argument works when $N=X^{-1}aWX$, by replacing $W_1$ and $W_2$ with their inverses. This completes the proof. \end{proof} By combining Lemmas \ref{lems1} and \ref{lems2}, we obtain Theorem \ref{the1}. \medskip The remaining results in this section are consequences of results about a picture $\Gamma$ over $G$. First, we give a necessary and sufficient condition under which the word $R$ has a decomposition into a pair of uniquely positioned subwords when $D_2(\textbf{S}_{R})=1$. \begin{lem}\label{lems4} Let $r$ be a cyclically reduced word which is not a proper power in the free product $G_1*G_2$ such that $D_2(\textbf{S}_{r})=1$.
Then, $r$ has a decomposition into two uniquely positioned subwords if and only if $\ell(r)>2$ and there exists $r'\in \textbf{S}_{r}$ such that $r'=aXxYyX^{-1}$ with $X,Y,x,y,a\in G_1*G_2$, $\ell(Y)\geq 1$, $\ell(x)=\ell(y)=\ell(a)=1$, $x\neq y^{-1}$ and $a^2=1$. \end{lem} \begin{proof} Suppose that $r$ has a decomposition into two uniquely positioned subwords $U$ and $V$. Since $D_2(\textbf{S}_{r})=1$, we have that $\ell(r)>2$. Without loss of generality, it follows that a cyclic conjugate of $r$ has the form $$r'=aU_2VU_1,$$ where $U=U_1aU_2$ and $a^2=1$. Hence $U_2VU_1=XYX^{-1}$ for some words $X,Y\in G_1*G_2$, where $X$ is possibly empty. Since $U$ and $V$ are uniquely positioned in $r$, we conclude that $\ell(Y)\geq 3$ and the first and last letters of $Y$ are not inverses. The result follows. \medskip For the other direction, suppose $r'=aXxYyX^{-1}$ with $X,Y,x,y,a\in G_1*G_2$, $\ell(x)=\ell(y)=\ell(a)=1$, $x\neq y^{-1}$ and $a^2=1$. Then $aXx$ is clearly uniquely positioned in $r$ since $x\neq y^{-1}$. For the same reason, we deduce from part (a) of Theorem \ref{the1} that $XxYyX^{-1}$ has no element of order $2$. In particular, this means that $YyX^{-1}$ and its inverse do not intersect (in an initial or terminal segment). We claim that this means that $YyX^{-1}$ is also uniquely positioned. We prove this by contradiction, by assuming that $YyX^{-1}$ is not uniquely positioned and showing that $XxYyX^{-1}$ contains an element of order $2$. \medskip Let $XxYyX^{-1}=x_1x_2\cdots x_n,$ with $X=x_1x_2\cdots x_p$. Suppose that $YyX^{-1}$ is not uniquely positioned. Then, $(YyX^{-1})^{\pm 1}$ is identically equal to some segment of $XxYyX^{-1}$. This segment must intersect $YyX^{-1}$. By the above discussion, we have that $YyX^{-1}$ is identically equal to the segment $$x_kx_{k+1}\cdots x_{k+\ell(YyX^{-1})-1},$$ with $k\leq p$. Hence, we have that the terminal segment of $XxYyX^{-1}$ of length $n+1-k$ has period $\lambda=p+2-k$. Consider the initial segment of this periodic segment given by $$W_k=x_kx_{k+1}\cdots x_{n+k-(p+2)}.$$ In particular, $W_k$ is of length $n-(p+1)$. Note that $X^{-1}=x^{-1}_px^{-1}_{p-1}\cdots x^{-1}_1=x_{n+1-p}x_{n+2-p}\cdots x_n$. If $x_i=x^{-1}_i$ for some $k\leq i\leq p$, then we are done. Suppose not. If $x_p$ (alternatively $x_k$) is identified with $x^{-1}_i$ for some $k\leq i\leq p$, then $x_{\frac{p+i}{2}}=x^{-1}_{\frac{p+i}{2}}$ (alternatively $x_{\frac{k+i}{2}}=x^{-1}_{\frac{k+i}{2}}$). This is a contradiction. Otherwise, both $x_k$ and $x_p$ are identified with $x^{-1}_i$ and $x^{-1}_j$ respectively, where $1\leq j\leq i<k-1$ (since we are in a free product). In fact, $j=i+k-p<2k-1-p$. Choose $j$ such that, under this periodicity, $x^{-1}_j$ is the letter that provides the first identification with $x_p$. We claim that $j+\lambda$ lies between $k$ and $p$. To verify this claim, it is enough to show that $p\geq j+\lambda$. We have that $j+\lambda<2k-1-p+\lambda=k+1$. Therefore, $j+\lambda\leq k\leq p$. Hence $x_{p}=x^{-1}_{j+\lambda}$ and $j+\lambda\leq p$. By the choice of $j$, we must have that $k\leq j+\lambda\leq p$. This is a contradiction. Hence $YyX^{-1}$ is uniquely positioned. This completes the proof. \end{proof} \begin{lem}\label{lems3} Let $\Gamma$ be a reduced picture over $G$ on $D^2$, where $R=aXbX^{-1}$ and $a^2=b^2=1$. Then either $\Gamma$ is empty or $\Gamma$ satisfies $C(6)$. \end{lem} \begin{proof} Suppose that $\Gamma$ is a non-empty reduced picture over $G$.
Suppose also that $\Gamma$ contains some interior vertex $v$ of degree less than $6$. Since the label of $v$ is $(aXbX^{-1})^{\pm m}$ with $m\geq 3$, the $m(2\ell(X)+2)$ arcs at $v$ are shared among at most five zones, so some zone at $v$ contains at least $\ell(X)+2$ arcs; the corresponding piece then has length at least $\ell(X)+1$ and must involve $a$ or $b$. Hence $v$ is connected to another vertex $u$ by a zone containing $a$ or $b$. Using this zone, we can do bridge moves so that $u$ and $v$ form a dipole. This contradicts the assumption that $\Gamma$ is reduced. \end{proof} \medskip We obtain from Lemmas \ref{lems1}, \ref{lems4} and \ref{lems3} the following corollary. \begin{cor}\label{cor2} Let $D_2(\textbf{S}_{R})\leq 2$ and $R\neq aXbX^{-1}$ with $a^2=1\neq b^2$. Then any non-empty reduced picture on $D^2$ over $G$ satisfies $C(6)$. \end{cor} \begin{proof} By Lemma \ref{lems4}, either $R$ has a decomposition $UV$ with $U,V$ uniquely positioned in $R$, or $R$ has the form (up to cyclic conjugation) $aXbX^{-1}$ with $a^2=b^2=1$. In the first case, the proof is exactly as in \cite[Lemma 3.1]{dh}. In the latter case, the result follows from Lemma \ref{lems3}. \end{proof} \section{APPLICATIONS} In this section we deduce a number of applications of Theorem \ref{the2}. But first, we recall the setting. \medskip Let $G_1$ and $G_2$ be non-trivial groups and let $R\in G_1*G_2$ be a word which is not a proper power and has length at least $2$. We also require that no letter of order $2$ involved in $R$ appears more than twice, i.e.\ $D_2(\textbf{S}_{R})\leq 2.$ For a natural number $m\geq 3$, $G$ is the quotient of $G_1*G_2$ by the normal closure of $R^m$. \begin{pfof}{Theorem \ref{the2}} This follows from part (b) of Theorem \ref{the1} and Corollary \ref{cor2}. \end{pfof} When $R$ has a conjugate of the form $aXbX^{-1}$ with $a^2=1\neq b^2$, we will call $R$ \textit{exceptional}. As mentioned earlier, there are results in the literature on the two classes, and we list them without proof. We begin with the non-exceptional case. For this case the proofs can be found in \cite{dh}. \begin{thm}\label{the3} Suppose that $G$ is as above and $R$ is not exceptional. Then the following hold. \begin{itemize} \item[(a)] \textbf{Freiheitssatz.} The natural homomorphisms $G_1\rightarrow G$ and $G_2\rightarrow G$ are injective. \item[(b)] \textbf{Weinbaum's Theorem.} No non-empty proper subword of $R^m$ represents the identity element of $G$. \item[(c)] \textbf{Word problem.} If $G_1$ and $G_2$ are given by recursive presentations with soluble word problem, then so is $G$. Moreover, the generalized word problem for $G_1$ and $G_2$ in $G$ is soluble with respect to these presentations. \item[(d)] \textbf{The Identity Theorem.} $N(R^m)/[N(R^m), N(R^m)] = \mathbb{Z}G/(1-R)\mathbb{Z}G$ as a (right) $\mathbb{Z}G$-module. \end{itemize} \end{thm} \begin{cor} There are natural isomorphisms for all $q > 3$: \begin{align*} H^{q}(G ; -) \longrightarrow H^{q}(G_1 ; -) & \times H^{q}(G_2 ; -) \times H^{q}(\mathbb{Z}_m ; -),\\ H_{q}(G ; -) \longleftarrow H_{q}(G_1 ; -) & \oplus H_{q}(G_2 ; -) \oplus H_{q}(\mathbb{Z}_m ; -); \end{align*} a natural epimorphism \begin{align*} H^{2}(G ; -) \longrightarrow H^{2}(G_1 ; -) & \times H^{2}(G_2 ; -) \times H^{2}(\mathbb{Z}_m ; -), \end{align*} and a natural monomorphism \begin{align*} H_{2}(G ; -) \longleftarrow H_{2}(G_1 ; -) & \oplus H_{2}(G_2 ; -) \oplus H_{2}(\mathbb{Z}_m ; -). \end{align*} \end{cor} These are defined on the category of $\mathbb{Z}G$-modules, where $\mathbb{Z}_m$ is the cyclic subgroup of order $m$ generated by $R$, and all these maps are induced by restriction on each factor. \medskip Next we consider the exceptional case. In this case, $G$ is called a one-relator product induced from the generalized triangle group $H$, described as follows.
Let $A:=\<a\>$ and $B:=X\<b\>X^{-1}$, where $\<a\>$ and $\<b\>$ denote the cyclic subgroups of $G_1$ or $G_2$ generated by $a$ and $b$ respectively. Then $H:=(A*B)/N(R^m)$. Note that $G$ can be realized as a pushout of groups as shown in Figure \ref{push-out}. \begin{figure}[ht] \centering \begin{tikzpicture}[>=latex] \node (w) at (0,0) {\(A*B\)}; \node (x) at (0,-2) {\(G_1 * G_2\)}; \node (y) at (2.4,0) {\(H\)}; \node (z) at (2.4,-2) {\(G\)}; \draw[->] (w) -- (y); \draw[->] (w) -- (x); \draw[->] (x) -- (z); \draw[->] (y) -- (z); \end{tikzpicture} \caption{\textit{Push-out diagram.}}\label{push-out} \end{figure} This pushout representation of $G$ is referred to as a generalized triangle group description of $G$, and we require it to be \textit{maximal} in the sense of \cite{Ih1}. Another technical requirement is that $(a, b)$ be \textit{admissible}: whenever both $a$ and $b$ belong to the same factor, say $G_1$, then either the subgroup of $G_1$ generated by $\{a,b\}$ is cyclic or $\<a\>\cap\<b\>=1$. It is easy to verify that these conditions are satisfied in our setting. Hence the results in \cite{hs} hold. \begin{thm}\label{the11}\label{the4} Suppose that $G$ is as above and $R$ is exceptional. Then the following hold. \begin{itemize} \item[(a)]\textbf{Freiheitssatz.} The natural homomorphisms $G_1\rightarrow G$ and $G_2\rightarrow G$ are injective. \item[(b)] \textbf{Weinbaum's Theorem.} No non-empty proper subword of $R^m$ represents the identity element of $G$. \item[(c)] \textbf{Membership problem.} Assume that the membership problems for $\langle a\rangle$ and $\langle b \rangle$ in $G_1*G_2$ are soluble. Then the word problem for $G$ is also soluble. \item[(d)] \textbf{Mayer-Vietoris.} The pushout of groups in Figure \ref{push-out} is {\em geometrically Mayer-Vietoris} in the sense of \cite{hs}. In particular, it gives rise to Mayer-Vietoris sequences $$\cdots \to H_{k+1}(G,M)\to H_k(A*B,M)\to$$ $$ H_k(G_1*G_2,M)\oplus H_k(H,M)\to H_k(G,M)\to\cdots $$ and $$\cdots \to H^k(G,M)\to H^k(G_1*G_2,M)\oplus H^k(H,M)$$ $$\to H^k(A*B,M)\to H^{k+1}(G,M)\to\cdots $$ for any $\mathbb{Z}G$-module $M$. \end{itemize} \end{thm}
{ "timestamp": "2018-04-17T02:09:52", "yymm": "1804", "arxiv_id": "1804.05325", "language": "en", "url": "https://arxiv.org/abs/1804.05325" }
\section{Appendix} \begin{table}[!hbt] \centering \caption{Test errors (MAE)} \begin{tabular}{|c|c|c|c|c|c|} \hline Dataset & RF & XGT & DUAL & RETAIN & MV-LSTM \\ \hline PM2.5& $0.433 \pm 0.011$& $0.302 \pm 0.012$& $0.248 \pm 0.003$& $0.943 \pm 0.018$ &$0.227 \pm 0.002$\\ ENERGY& $0.404 \pm 0.021$& $0.310 \pm 0.023$& $0.249 \pm 0.006$& $0.507 \pm 0.016$& $0.256 \pm 0.006$\\ \hline \end{tabular} \label{tab:horizon_mae} \end{table} \begin{figure*}[!hbt] \centering \includegraphics[width=0.8\textwidth]{figures/energy_mv.pdf} \caption{Top four variables ranked by the empirical mean of the attention values in MV-LSTM on the ENERGY dataset. Variable names with a colored background indicate variables that are also identified by the Granger-causality test. Only the variables parents room temp.\ and living room temp.\ are identified by the Granger-causality test in this dataset. } \label{fig:energy_mv} \end{figure*} \begin{figure*}[!hbt] \centering \includegraphics[width=0.8\textwidth]{figures/pm_dual.pdf} \caption{Top four variables ranked by the empirical mean of the attention values in DUAL on the PM2.5 dataset. Variable names with a colored background indicate variables that are also identified by the Granger-causality test.} \label{fig:pm_dual} \end{figure*} \begin{figure*}[!hbt] \centering \includegraphics[width=0.8\textwidth]{figures/energy_dual.pdf} \caption{Top four variables ranked by the empirical mean of the attention values in DUAL on the ENERGY dataset. Variable names with a colored background indicate variables that are also identified by the Granger-causality test.} \label{fig:energy_dual} \end{figure*} \section{Experiments} In this part, we report some preliminary results to demonstrate the prediction performance of MV-LSTM as well as the variable importance interpretation. Please refer to the appendix for more results on MAE errors and variable attention interpretation. We use two real datasets. \textbf{PM2.5:} It contains hourly PM2.5 data and the associated meteorological data in Beijing, China. The PM2.5 measurement is the target series. The exogenous time series include dew point, temperature, pressure, combined wind direction, cumulated wind speed, hours of snow, and hours of rain. \textbf{ENERGY:} It collects the appliance energy use in a low-energy building. The target series is the energy data logged every 10 minutes. The exogenous time series consist of $14$ variables, e.g., temperature conditions inside the house and outside weather information, including temperature, wind speed, humidity, and dew point from the nearest weather station. Baselines include ensemble methods on time series, i.e., random forests (RF) \citep{liaw2002classification, meek2002autoregressive} and extreme gradient boosting (XGT) \citep{chen2016xgboost, friedman2001greedy}, and state-of-the-art attention-based recurrent neural networks on multi-variable sequence data, referred to as DUAL \citep{qin2017dual} and RETAIN \citep{choi2016retain}. In the first group of experiments, we report the prediction performance of all approaches in Table~\ref{tab:horizon_rmse}.
\begin{table}[!h] \centering \caption{Test errors (RMSE)} \begin{tabular}{|c|c|c|c|c|c|} \hline Dataset & RF & XGT & DUAL & RETAIN & MV-LSTM \\ \hline PM2.5& $0.573 \pm 0.003$& $0.370 \pm 0.002$& $0.355 \pm 0.002$& $1.112 \pm 0.017$ &$0.340 \pm 0.001$\\ ENERGY& $0.494 \pm 0.004$& $0.360 \pm 0.003$& $0.372 \pm 0.006$& $0.669 \pm 0.006$& $0.361 \pm 0.001$\\ \hline \end{tabular} \label{tab:horizon_rmse} \end{table} Next, we analyze the variable-level attention obtained in MV-LSTM on the PM2.5 dataset. Specifically, in the testing phase MV-LSTM outputs variable attention values specific to each testing instance, and thus for each variable we can estimate an empirical distribution of the corresponding attention value. Figure~\ref{fig:pm} shows the histograms of the attention values of the top four variables, which are ranked by the empirical mean of their attention values. For comparison, variables identified by the Granger-causality test \citep{arnold2007temporal} w.r.t.\ the target variable are shown with a colored background. \begin{figure*}[!hbt] \centering \includegraphics[width=0.8\textwidth]{figures/pm_mv.pdf} \caption{Top four variables ranked by the empirical mean of the attention values in MV-LSTM on the PM2.5 dataset. Variable names with a colored background indicate variables that are also identified by the Granger-causality test. } \label{fig:pm} \end{figure*} We observe in Figure~\ref{fig:pm} that the three variables (i.e., dew point, pressure, and temperature) identified by the Granger-causality test are also ranked at the top by the variable attention of MV-LSTM. As pointed out by~\citep{liang2015assessing}, dew point and pressure are usually affected by the arrival of the northerly wind, which brings in drier and fresher air. This is in line with the ranking of these three variables by MV-LSTM. By contrast, DUAL fails to reveal meaningful variable importance, as shown in the appendix. \section{Introduction} Time series, a sequence of observations over time, is being generated in a wide variety of areas \citep{qin2017dual, lin2017hybrid, guo2018predicting}. Long short-term memory units (LSTM)~\citep{hochreiter1997long} and the gated recurrent unit (GRU)~\citep{cho2014properties} have achieved great success in various applications on sequence data because of their gate and memory mechanisms \citep{wang2016morphological, lipton2015learning, guo2016robust}. In this paper, we focus on time series with exogenous variables. Specifically, given a target time series, we have an additional set of time series corresponding to exogenous variables. A predictive model that uses the historical data of both target and exogenous variables to predict the future values of the target variable is an autoregressive exogenous model, referred to as ARX. In addition to forecasting, it is also highly desirable to distill knowledge via the model, e.g., understanding the different importance of exogenous variables w.r.t.\ the evolution of the target series \citep{hu2018listening, siggiridou2016granger, zhou2015probabilistic}. However, the current LSTM RNN falls short of this capability. When fed with the historical observations of the target and exogenous variables, LSTM blindly blends the information of all variables into the hidden states and memory cells for subsequent prediction. Therefore, it is intractable to distinguish the contribution of individual variables by looking into the hidden states \citep{zhang2017stock}.
Recently, attention-based neural networks~\citep{bahdanau2014neural,xu2015show,chorowski2015attention} have been proposed to enhance the ability of LSTM to use long-term memory, as well as its interpretability. The attention mechanism is mostly applied to hidden states across time steps, thereby solely uncovering temporal-level importance rather than variable-level importance. To this end, we propose an interpretable LSTM recurrent neural network, called multi-variable LSTM, for the ARX problem. A distinguishing feature of our multi-variable LSTM is to enable each neuron of the recurrent layer to encode information exclusively from a certain variable. As a result, from the overall hidden states of the recurrent layer, we derive variable-specific hidden representations over time steps, which can be flexibly used for forecasting and temporal-variable level attentions. \section{Multi-variable LSTM} In this section, we present the proposed multi-variable LSTM, referred to as MV-LSTM, in detail. Assume we have $N-1$ exogenous time series and a target series $\mathbf{y}$ of length $T$, where $\mathbf{y} = ( y_1, \cdots, y_T )$ and $\mathbf{y} \in \mathbb{R}^T$.\footnote{ Vectors are assumed to be in column form throughout this paper. } By stacking the exogenous time series and the target series, we define the multi-variable input of MV-LSTM at each time step as $\mathbf{X} = ( \mathbf{x}_1, \ldots, \mathbf{x}_T )$, where $ \mathbf{x}_t = ( x_{t,1}, \ldots, x_{t, N-1}, y_{t} ) \in \mathbb{R}^N$ and $x_{t,n} \in \mathbb{R}$ is the observation of the $n$-th exogenous time series at time $t$. Given $\mathbf{X}$, we aim to learn a non-linear mapping to the one-step-ahead value of the target series, namely $\hat{y}_{T+1} = \mathcal{F}(\mathbf{X})$, where $\mathcal{F}(\cdot)$ represents the MV-LSTM neural network we present below. Inspired by \citep{he2017wider}, our MV-LSTM has tensorized hidden states, and the update scheme ensures that each element of the hidden state tensor encapsulates information exclusively from a certain variable of the input. Specifically, we define the hidden state and memory cell for the $t$-th time step of MV-LSTM as $\mathbf{h}_t \in \mathbb{R}^M$ and $\mathbf{c}_t \in \mathbb{R}^M$, where $M$ is the size of the recurrent layer. $\mathbf{h}_t$ is tensorized as $\mathcal{H}_t = [ \mathbf{h}_t^{1}, \ldots, \mathbf{h}_t^{N} ]^\top $, where $\mathcal{H}_t \in \mathbb{R}^{N \times d}$, $\mathbf{h}_t^{n} \in \mathbb{R}^d$ and $N \cdot d = M$. The element $\mathbf{h}_t^{n}$ of tensor $\mathcal{H}_t$ is a variable-specific representation corresponding to the $n$-th input dimension. We further define the input-to-hidden transition tensor as $\mathcal{W}_x = [ \mathbf{W}_x^{1}, \ldots, \mathbf{W}_x^{N} ]^\top $, where $\mathcal{W}_x \in \mathbb{R}^{N \times d }$ and $\mathbf{W}_x^{n} \in \mathbb{R}^d $. The hidden-to-hidden transition tensor is defined as $\mathcal{W}_h = [ \mathbf{W}_h^{1}, \ldots, \mathbf{W}_h^{N} ]^\top$, where $\mathcal{W}_h \in \mathbb{R}^{N \times d \times d}$ and $\mathbf{W}_h^{n} \in \mathbb{R}^{d \times d}$.
Given the new incoming input $\mathbf{x}_t$ and the hidden state $\mathbf{h}_{t-1}$ up to time $t-1$, we formulate the iterative update process using $\mathcal{W}_x$ and $\mathcal{W}_h$ as: \begin{align*} \mathbf{j}_t = \tanh( \mathcal{H}_{t - 1} * \mathcal{W}_h + \mathbf{x}_t * \mathcal{W}_x + \mathbf{b}_j )&= \tanh \left( \begin{bmatrix} (\mathbf{W}_h^1 \mathbf{h}_{t-1}^1)^\top \\ \vdots \\ (\mathbf{W}_h^N \mathbf{h}_{t-1}^N)^\top \end{bmatrix} + \begin{bmatrix} (\mathbf{W}_x^1 x_{t, 1})^\top \\ \vdots \\ (\mathbf{W}_x^N x_{t, N})^\top \end{bmatrix} + \mathbf{b}_j \right) \end{align*} where $*$ represents the element-wise multiplication operation on tensor elements. $\mathcal{H}_{t-1} * \mathcal{W}_h \in \mathbb{R}^{N \times d}$ is the concatenation of the $N$ products of the hidden tensor elements $\mathbf{h}_{t-1}^{n}$ and the corresponding transition matrices $\mathbf{W}_h^{n}$. Likewise, $ \mathbf{x}_t * \mathcal{W}_x \in \mathbb{R}^{N \times d}$ represents how the current input $\mathbf{x}_t$ updates the hidden state. $\mathbf{j}_t$ is an $N \times d$ dimensional tensor, and each $d$-dimensional element corresponds to one input variable, encoding the information exclusively from the variable-specific hidden state and the input variable. The input, forget and output gates in MV-LSTM are updated using all input dimensions of $\mathbf{x}_t$, so as to utilize the cross-correlation between the multi-variable time series. In particular, $[\mathbf{i}_t, \mathbf{f}_t, \mathbf{o}_t]^{\top} = \sigma \left( \mathbf{W} [\mathbf{x}_t, \mathbf{h}_{t-1}] + \mathbf{b} \right)$. The updated memory cell and hidden state are obtained from $\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{\tilde{j}}_t $, where $\mathbf{\tilde{j}}_t \in \mathbb{R}^M $ is the flattened vector of $\mathbf{j}_t$. Then, $\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t)$. After feeding $\mathbf{x}_T$ into MV-LSTM, we obtain the hidden representation $\mathbf{h}_T^n$ w.r.t.\ each variable, which can be combined with the attention mechanism to predict $y_{T+1}$ as well as to interpret variable importance. Concretely, the attention process is: $e^n = \tanh( \mathbf{W}_e \mathbf{h}_{T}^n + b_e )$ and $\alpha^n = \frac{\exp(e^n)}{\sum_{k=1}^N \exp(e^k)}$. Then, the prediction is derived as $\hat{y}_{T+1} = \sum_{n=1}^N \alpha^n ( \mathbf{W}_n \mathbf{h}_T^n + b_n )$. Note that MV-LSTM is able to apply temporal attention with ease. In the present work, we focus on evaluating variable attention. \section{Related work} The success of the attention mechanism proposed in~\citep{bahdanau2014neural} has motivated a wide use of attention in image processing~\citep{ba2014multiple,mnih2014recurrent,gregor2015draw,xu2015show}, natural language processing~\citep{hermann2015teaching,rush2015neural,lin2017structured} and speech recognition~\citep{chorowski2015attention}. However, the traditional attention mechanism is normally applied to hidden states across time steps, thereby failing to reveal variable-level attention. Only some very recent studies~\citep{choi2016retain,qin2017dual} attempted to develop attention mechanisms that handle multi-variable sequence data. Both of them build on top of an encoder-decoder architecture and make the prediction using pre-weighted input obtained from the encoder or an additional RNN. Our MV-LSTM is a simple one-recurrent-layer architecture enabling variable-specific representations.
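\medskip To make the tensorized update scheme and the variable-level attention described above concrete, the following NumPy sketch runs one forward pass of an MV-LSTM layer. It is an illustration only: the sizes, the random weight initialization, and the toy input are our own placeholder assumptions, not the configuration used in the experiments.

\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, d, T = 5, 4, 20                    # variables, per-variable size, length
M = N * d                             # size of the recurrent layer

Wh = rng.normal(0.0, 0.1, (N, d, d))  # blocks W_h^n of the tensor W_h
Wx = rng.normal(0.0, 0.1, (N, d))     # vectors W_x^n of the tensor W_x
bj = np.zeros((N, d))
Wg = rng.normal(0.0, 0.1, (3 * M, N + M))  # shared gates see all inputs
bg = np.zeros(3 * M)

def step(x_t, H_prev, c_prev):
    # j_t: row n mixes only variable n's hidden block and the input x_{t,n}
    J = np.tanh(np.einsum('nij,nj->ni', Wh, H_prev)
                + Wx * x_t[:, None] + bj)
    g = sigmoid(Wg @ np.concatenate([x_t, H_prev.ravel()]) + bg)
    i, f, o = np.split(g, 3)
    c = f * c_prev + i * J.ravel()    # flattened j_t enters the memory cell
    h = o * np.tanh(c)
    return h.reshape(N, d), c

H, c = np.zeros((N, d)), np.zeros(M)
X = rng.normal(size=(T, N))           # toy multi-variable input sequence
for t in range(T):
    H, c = step(X[t], H, c)

# Variable-level attention over the final variable-specific states h_T^n
We, be = rng.normal(0.0, 0.1, d), 0.0
e = np.tanh(H @ We + be)
alpha = np.exp(e) / np.exp(e).sum()   # one attention weight per variable
Wn, bn = rng.normal(0.0, 0.1, (N, d)), np.zeros(N)
y_hat = float(np.sum(alpha * ((H * Wn).sum(axis=1) + bn)))
\end{verbatim}

The vector \texttt{alpha} is the quantity inspected in the experiments: its empirical distribution over testing instances yields the variable-importance histograms.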
{ "timestamp": "2018-04-17T02:07:46", "yymm": "1804", "arxiv_id": "1804.05251", "language": "en", "url": "https://arxiv.org/abs/1804.05251" }
\section{Introduction} Minimal energy configurations have wide-ranging applications in various scientific fields such as cryptography, crystallography, and viral morphology, as well as in finite element modeling, radial basis functions, and Quasi-Monte-Carlo methods for graphics applications. For a fixed dimension and cardinality, the Delsarte-Yudin linear programming bounds and Levenshtein $1/N$-quadrature rules are known to provide bounds on the minimal energy and to prove universal optimality of some configurations on the sphere $\mathbb{S}^d$ (see for example \cite{CohnKumar}). The goal of this paper is to adapt these techniques to provide lower bounds on minimal energy for configurations in two different but related contexts. The first is for the large $N$ limit of the Riesz energy of $N$-point configurations on a compact $d$-rectifiable set embedded in $\mathbb{R}^p$, while the second is for the Gaussian energy of infinite configurations in $\mathbb{R}^p$ having a prescribed density. The latter provides an alternative method for obtaining a main result of Cohn and de Courcy-Ireland \cite{CohnIre}. For our results on Riesz potentials we need the following definitions and notations. We say a set $A\subset \mathbb{R}^p$ is \textit{$d$-rectifiable} if it is the image of a bounded set in $\mathbb{R}^d$ under a Lipschitz mapping. For a $d$-rectifiable, closed set $A$ and a lower semicontinuous, symmetric kernel $K: A\times A\to \ (-\infty,\infty]$, the $K$-energy of a configuration $\omega_N = \left\{x_1,\ldots,x_N\right\}\subset A$ of $N$ (not necessarily distinct) points is given by \[E_K(\omega_N):= \sum_{i\neq j} K(x_i,x_j).\] A commonly arising problem is to minimize the $K$-energy for a fixed number of points and describe the optimal configurations; i.e., to determine \[\mathcal{E}_K(A,N):= \inf_{\omega_N\subset A} E_K(\omega_N).\] For point configurations on compact sets we will primarily focus on the Riesz $s$-kernels $$K_s(x,y):= |x-y|^{-s}\,\, \textrm{for}\,\, s>d=\dim(A);$$ that is, on the \emph{hypersingular case}, which is intimately related to the best-packing problem. We remark that for such hypersingular kernels, the \textit{continuous $s$-energy} of $A$ \[ \mathcal{I}_s[\mu]:=\int_{A} \int_{A} K_s(x,y) d\mu(x)d\mu(y)\] is infinite for every probability measure $\mu$ supported on $A,$ and so the standard methods of potential theory for obtaining large $N$ limits of minimizing point configurations do not apply. For brevity we hereafter set $$ E_s(\omega_N):=E_{K_s}(\omega_N),\qquad \mathcal{E}_s(A,N):=\mathcal{E}_{K_s}(A,N). $$ Furthermore, if $A$ is the unit sphere $\mathbb{S}^d \subset \mathbb{R}^{d+1}$ and $K(x,y)$ is a kernel on $ \mathbb{S}^d \times \mathbb{S}^d$ of the form $K(x,y)=h(\langle x, y\rangle)$ for some function $h$ on $[-1,1]$, we write $$ E_h(\omega_N)=E_K(\omega_N),\qquad \mathcal{E}_h(\mathbb{S}^d,N)=\mathcal{E}_K(\mathbb{S}^d,N). $$ In particular, since $|x-y|^2=2-2\langle x, y\rangle$ for $x,y\in\mathbb{S}^d$, $$K_s(x,y)=h_s(\langle x, y\rangle):=(2-2\langle x, y\rangle)^{-s/2}.$$ For fixed cardinalities $N$ and kernels of the form $K(x,y)=h(\langle x, y\rangle)$, a general framework for obtaining lower bounds for minimal energy configurations on the unit sphere was developed by Yudin \cite{Yudin} based on a method of Delsarte, Goethals, and Seidel \cite{DGS} for spherical designs.
This linear programming technique involves maximizing a certain functional defined over a constrained class of functions $f$ that satisfy $f(t)\leq h(t)$ for $t\in [-1,1].$ Combining Yudin's approach with Levenshtein's work \cite{LevBig}, \cite{LevPacking} on maximal spherical codes, Boyvalenkov et al.\ \cite{Petersquared} derived lower bounds for discrete energy that are `universal' in the sense that they hold whenever the potential function $h(t)$ is \emph{absolutely monotone} on $[-1,1)$; that is, when $h^{(k)}(t)$ exists and is non-negative for $t \in [-1,1)$ for all $k \geq 0,$ and $h(1):=\lim_{t\to 1^{-}}h(t)$, which may be $+\infty.$ In the present paper, we use this framework to derive asymptotic lower bounds as $N \to \infty$ for $\mathcal{E}_s(\mathbb{S}^d,N)$ in the case $s>d.$ These results for the sphere, in turn, have application to the broader class of energy problems on $d$-rectifiable sets. Indeed, this is a consequence of the localized nature of the potentials $h_s$ as expressed in the following result, which is known as the \emph{Poppy-seed bagel theorem}. \begin{theorem}[\cite{CSD}, \cite{borharsaf}]\label{poppy seed} For any $d$-rectifiable closed set $A\subset \mathbb{R}^p$ and any $s>d$, there exists a positive, finite constant $C_{s,d}$, independent of $A$, such that \begin{equation}\label{pop} \lim_{N\to\infty}\frac{\mathcal{E}_s(A,N)}{N^{1+s/d}} = \frac{C_{s,d}}{\mathcal{H}_d(A)^{s/d}}. \end{equation} \noindent Furthermore, any sequence of $N$-point $s$-energy minimizing configurations is asymptotically uniformly distributed with respect to $d$-dimensional Hausdorff measure restricted to $A$. \end{theorem} In \eqref{pop}, $\mathcal{H}_d(A)$ denotes the $d$-dimensional Hausdorff measure of $A$ with the normalization that the $d$-dimensional unit cube embedded in $\mathbb{R}^p$ has measure 1. In dimension $d=1$, it is known \cite{CS1} that $C_{s,1} = 2\zeta(s)$, but for all other dimensions the exact values of $C_{s,d}$ have not as yet been determined. However, the following relation between $C_{s,d}$ and the optimal packing density in $\mathbb{R}^d$ was established in \cite{sinfty}: \begin{equation} \lim_{s\to\infty} [C_{s,d}]^{1/s} = \frac{1}{C_{\infty,d}},\ \ \ \ \ \ \ \ \ \ \ \ C_{\infty,d}:= 2\bigg[\frac{\Delta_d}{\mathcal{H}_d(\mathbb{B}^d)}\bigg]^{1/d}, \label{eq.Csinfty} \end{equation} where $\Delta_d$ is the largest sphere packing density in $\mathbb{R}^d$. The only dimensions for which $\Delta_d$ is known at present are $d=1,2,3$ and, more recently, $d=8$ and $d=24$ (see \cite{Via8} and \cite{Via24}). In these special dimensions, $\Delta_d$ is attained by lattice packings, which is not expected to be the case for general dimensions. Clearly, any sequence of configurations on a set $A$ provides an upper bound for $C_{s,d}$. Furthermore, it is straightforward (see, for example, \cite{BHS12}, Proposition 1) to establish that \begin{equation} C_{s,d}\leq \min_{\Lambda\subset\mathbb{R}^d}|\Lambda|^{s/d}\zeta_\Lambda(s), \label{eq.lattice bound} \end{equation} where the minimum is taken over all lattices $\Lambda\subset\mathbb{R}^d$ with covolume $|\Lambda|>0$ and \begin{equation} \zeta_\Lambda(s):=\sum_{0\neq x\in\Lambda} |x|^{-s} \label{eq.epszeta} \end{equation} is the Epstein zeta function for the lattice.
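For a concrete sense of the size of the right-hand side of \eqref{eq.lattice bound}, the lattice sum \eqref{eq.epszeta} can be truncated numerically. The following Python sketch (ours, for illustration only; the truncation level \texttt{M} is an arbitrary choice) estimates $|\Lambda|^{s/d}\zeta_\Lambda(s)$ for the equi-triangular lattice in the plane, with basis $(1,0)$ and $(1/2,\sqrt{3}/2)$ and covolume $\sqrt{3}/2$:

\begin{verbatim}
import numpy as np

def hex_zeta(s, M=200):
    # Truncated Epstein zeta function of the equi-triangular lattice.
    m, n = np.mgrid[-M:M + 1, -M:M + 1]
    r2 = (m + 0.5 * n) ** 2 + 0.75 * n ** 2   # |m*v1 + n*v2|^2
    r2[M, M] = np.inf                         # exclude the origin x = 0
    return np.sum(r2 ** (-s / 2.0))

def hex_upper_bound(s, M=200):
    # |Lambda|^{s/d} * zeta_Lambda(s) with d = 2: an upper bound for C_{s,2}
    return (np.sqrt(3.0) / 2.0) ** (s / 2.0) * hex_zeta(s, M)
\end{verbatim}

Since the terms decay like $|x|^{-s}$, the truncation error is negligible for moderate $s>2$, while the sum converges slowly as $s$ approaches $2$.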
Regarding equality, the following conjecture is well known \cite{BHS12}, \cite{CohnKumar}: \begin{conjecture} For $d = 2,4,8,$ and $24$, \[C_{s,d} = \widetilde{C}_{s,d}:=|\Lambda_d|^{\frac{s}{d}}\zeta_{\Lambda_d}(s),\qquad s>d,\] where $\Lambda_2$ is the equi-triangular lattice, $\Lambda_4$ the $D_4$ lattice, $\Lambda_8$ the $E_8$ lattice, and $\Lambda_{24}$ the Leech lattice. \label{Csdconj} \end{conjecture} General lower bounds on $C_{s,d}$ have been less studied. A crude but simple lower bound arises from the following convexity argument (cf. \cite{kuisaf}). Let $\omega_N^*=\left\{x_1,\ldots, x_N\right\}$ be a minimizing $N$-point $s$-energy configuration on $\mathbb{S}^d$ and, for each $i = 1,\ldots, N$, let $\delta_i:=\min_{j\neq i} |x_i-x_j|$. With $C(x_i, \delta_i/2)$ denoting the spherical cap with center $x_i$ and Euclidean radius $\delta_i/2$, we deduce that $\sum_{i=1}^N \mathcal{H}_d(C(x_i, \delta_i/2)) \leq\mathcal{H}_d(\mathbb{S}^d).$ It is easily verified that \begin{equation}\label{HdCap} \mathcal{H}_d(C(x_i, r))=\mathcal{H}_d(\mathbb{S}^d) \frac{r^d}{\lambda_d d}+\mathcal{O}(r^{d+2}),\,\, \,\, r\to 0^{+}, \end{equation} where \begin{equation}\lambda_d := \int\limits_{-1}^1(1-t^2)^{\frac{d-2}{2}} dt = \frac{\sqrt{\pi}\Gamma(\frac{d}{2})}{\Gamma(\frac{d+1}{2})}. \label{eq.lambdad} \end{equation} Thus for $0<\epsilon<1$ and all $N$ sufficiently large we have from the asymptotic denseness of the minimizing configurations (Theorem \ref{poppy seed}) that $$(1-\epsilon)\mathcal{H}_d(\mathbb{S}^d) \frac{1}{2^d\lambda_d d}\sum_{i=1}^N \delta_i^d \leq \sum_{i=1}^N \mathcal{H}_d(C(x_i, \delta_i/2)) \leq\mathcal{H}_d(\mathbb{S}^d) $$ and so \begin{equation}\label{upper} \sum_{i=1}^N \delta_i^d \leq (1-\epsilon)^{-1}2^d\lambda_d d. \end{equation} By convexity, we also have \[\mathcal{E}_s(\mathbb{S}^d,N)=\sum_{i\neq j}\frac{1}{|x_i-x_j|^s}\geq \sum_{i=1}^N \frac{1}{\delta_i^s}= \sum_{i=1}^N (\delta_i^d)^{-s/d}\geq N\bigg(\frac{1}{N}\sum_{i=1}^N \delta_i^d\bigg)^{-s/d}.\] Consequently, from \eqref{upper} we obtain for $N$ large $$\frac{\mathcal{E}_s(\mathbb{S}^d,N)}{N^{1+s/d}}\geq \left((1-\epsilon)^{-1}2^d\lambda_d d\right)^{-s/d}. $$ Letting first $N\to\infty$ and then $\epsilon\to 0,$ Theorem \ref{poppy seed} yields the estimate \begin{equation} C_{s,d}\geq\Theta_{s,d}:=\left(\frac{\mathcal{H}_d(\mathbb{S}^d)}{2^{d}\lambda_d d}\right)^{s/d}=\frac{1}{2^s} \left(\mathcal{H}_{d-1}(\mathbb{S}^{d-1})/d\right)^{s/d}. \label{basicbound} \end{equation} A less trivial lower bound is the following, established in \cite{BHS12}: \begin{proposition} If $d\geq 2$ and $s>d$, then for $(s-d)/2$ not an integer, \[C_{s,d} \geq \xi_{s,d}:=\bigg[\frac{\pi^{d/2}\Gamma(1+\frac{s-d}{2})}{\Gamma(1+\frac{s}{2})}\bigg]^{s/d}\frac{d}{s-d}.\] \label{bhsbound} \end{proposition} Our main result for Riesz potentials is the following improvement over the lower bounds for $C_{s,d}$ in \eqref{basicbound} and Proposition \ref{bhsbound}. \begin{theorem} For a fixed dimension $d$, let $z_{i}$ be the $i$-th smallest positive zero of the Bessel function $J_{d/2}(z)$, $i = 1,2,\ldots .$ Then, for $s>d,$ \begin{equation}C_{s,d}\geq A_{s,d}, \label{eq.Asd} \end{equation} where \begin{equation} A_{s,d}:=\bigg[\frac{\pi^{\frac{d+1}{2}}\Gamma(d+1)}{\Gamma(\frac{d+1}{2})}\bigg]^{s/d}\frac{4}{\lambda_d\Gamma(d+1)}\sum_{i=1}^\infty(z_{i})^{d-s-2}\big(J_{d/2+1}(z_{i})\big)^{-2} \end{equation} and $\lambda_d$ is as defined in \eqref{eq.lambdad}. \label{thm.csd bound} \end{theorem} For $d=1$, $A_{s,1} = 2\zeta(s)$, which is optimal.
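The series defining $A_{s,d}$ converges quickly for $s$ well above $d$ and is straightforward to evaluate numerically. The following Python sketch (ours, for illustration only) computes a truncation of $A_{s,d}$ with SciPy's Bessel routines; since \texttt{jn\_zeros} accepts only integer orders, the sketch as written requires $d$ to be even:

\begin{verbatim}
import numpy as np
from scipy.special import jn_zeros, jv, gamma

def A_sd(s, d, terms=2000):
    # Truncation of the series for A_{s,d}; requires s > d and d even.
    z = jn_zeros(d // 2, terms)                # positive zeros of J_{d/2}
    lam = np.sqrt(np.pi) * gamma(d / 2.0) / gamma((d + 1) / 2.0)
    pref = (np.pi ** ((d + 1) / 2.0) * gamma(d + 1.0)
            / gamma((d + 1) / 2.0)) ** (s / d)
    series = np.sum(z ** (d - s - 2.0) / jv(d / 2.0 + 1.0, z) ** 2)
    return pref * 4.0 / (lam * gamma(d + 1.0)) * series
\end{verbatim}

For $d=2$, comparing \texttt{A\_sd(s, 2)} with the truncated equi-triangular lattice bound sketched above gives a numerical picture of how close $A_{s,2}$ is to the conjectured value $\widetilde{C}_{s,2}$ (cf.\ Figure~\ref{FigBnds} below).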
Furthermore, as we prove in Section~3, both $A_{s,d}$ and $\xi_{s,d}$ have the same dominant behavior as $C_{s,d}$ as $s\to d^{+}$; namely, they all have a simple pole at $s=d$ with the same residue. In Figure~\ref{FigBnds}, we compare the bounds $A_{s,2}$, $\Theta_{s,2}$, $\xi_{s,2}$ with the conjectured value $\widetilde{C}_{s,2}$. \begin{figure}[htbp] \begin{center} \includegraphics[width=4in]{AsdFig.pdf} \caption{{The two lower graphs show $\Theta_{s,2}^{1/s}$ (constant graph) and $\xi_{s,2}^{1/s}$, while the upper two graphs (indistinguishable on this scale) show both $A_{s,2}^{1/s}$ and the conjectured value $\widetilde{C}_{s,2}^{1/s}$ as $s$ ranges from $2$ to $10$. }} \label{FigBnds} \end{center} \end{figure} \begin{proposition}Let $d\in {\mathbb N}$. Then \begin{equation} \lim_{s\to d^+} (s-d)C_{s,d} =\lim_{s\to d^+} (s-d)\xi_{s,d}=\lim_{s\to d^+} (s-d)A_{s,d}= \frac{2\pi^{d/2}}{\Gamma(\frac{d}{2})}. \label{eq.2tight} \end{equation} \label{thm.2tight} \end{proposition} As illustrated at the end of Section 2, the Levenshtein $1/N$-quadrature rules give bounds on the minimal separation distance for optimal packings on $\mathbb{S}^d$, and $A_{s,d}$ recovers these bounds as $s\to\infty$. For $d = 2,4,8,$ and $24$, letting $\widetilde{C}_{s,d}$ be the conjectured value of $C_{s,d}$ from Conjecture \ref{Csdconj}, it is easy to verify that $\lim_{s\to\infty}(\widetilde{C}_{s,d}/A_{s,d})^{1/s}$ exists. Numerical comparisons between $A_{s,d}$ and $\widetilde{C}_{s,d}$ are illustrated in Section 4. We next consider bounds for the Gaussian energy of infinite point configurations in $\mathbb{R}^d$. Our goal is to show that the method used to prove Theorem \ref{thm.csd bound} provides an alternative approach to deriving the lower bounds obtained by Cohn and de Courcy-Ireland \cite{CohnIre}. We begin with some essential definitions. \begin{definition}\label{lowerfenergy} For an infinite configuration $\mathcal{C}\subset\mathbb{R}^d$ and $f:(0,\infty)\to \mathbb{R}$, the \textit{lower $f$-energy of} $\mathcal{C}$ is \[E_f(\mathcal{C}):=\liminf_{R\to\infty}\frac{1}{\#(\mathcal{C}\cap B^d(R))}\sum_{\substack{ x,y\in\mathcal{C}\cap B^d(R)\\ x\neq y}} f(|x-y|),\] where $\#$ denotes cardinality and $B^d(R)$ is the $d$-dimensional ball of radius $R$ centered at $0$. If the limit exists, we call it the \textit{$f$-energy of} $\mathcal{C}$. \end{definition} \begin{definition}\label{lowerdensity} The \textit{lower density} $\rho$ of a configuration $\mathcal{C}$ is defined to be \[\rho:=\liminf_{R\to\infty}\frac{\#(\mathcal{C}\cap B^d(R))}{\textup{vol}(B^d(R))}.\] If the limit exists, we call it the \textit{density of} $\mathcal{C}$. \end{definition} We shall show that the universal lower bounds developed in \cite{Petersquared} and based on Delsarte-Levenshtein methods can be used to prove the following estimate of Cohn and de Courcy-Ireland. (The results in \cite{CohnIre} came to the authors' attention during the preparation of this manuscript and appear in the dissertation of Michaels \cite{Mic2017}.) \begin{theorem}[\cite{CohnIre}] Let $f(|x-y|) = e^{-\alpha |x-y|^2},\,\alpha>0,$ be a Gaussian potential in $\mathbb{R}^d$ and choose $R_\rho$ so that ${\rm vol}(B^d(R_\rho/2)) = \rho$.
\textit{Then the minimal $f$-energy for point configurations of density} $\rho$ \textit{in} $\mathbb{R}^d$ \textit{is bounded below by} \begin{equation} \frac{4}{\lambda_d\Gamma(d+1)}\sum_{i=1}^\infty z_i^{d-2}\big(J_{d/2+1}(z_{i})\big)^{-2} f\bigg(\frac{z_i}{\pi R_\rho}\bigg), \label{eq.cohnire} \end{equation} \textit{where the} $z_i$'s \textit{are as in Theorem} \ref{thm.csd bound}. \label{thm.cohnire} \end{theorem} We remark that there is a strong relation connecting Theorems~\ref{thm.cohnire} and \ref{thm.csd bound}. Indeed, if $f(r)=g(r^2)$ for some completely monotone function $g$ with sufficient decay, then there is some non-negative measure $\mu$ on $[0,\infty)$ such that (e.g., see \cite{Wid1941}) $$ f(r)=\int_0^\infty e^{-\alpha r^2}\, d\mu(\alpha). $$ It then follows that Theorem~\ref{thm.cohnire} also holds for such $f$ and, in particular, for the hypersingular Riesz $s$-potentials $f_s(r)=(r^2)^{-s/2}$ for $s>d$. Furthermore, it is shown in \cite{HLSS2017} that the constant $C_{s,d}$ also appears in the context of minimizing the Riesz $s$-energy over infinite point configurations $\mathcal{C}\subset \mathbb{R}^d$ with a fixed density $\rho$: \begin{equation}\label{CsdRd} C_{s,d}=\inf_{\substack{\text{$\mathcal{C}$ has}\\ \text{density $1$}}}E_{f_s}(\mathcal{C}). \end{equation} Combining \eqref{CsdRd} and Theorem~\ref{thm.cohnire} then provides an alternate proof of Theorem~\ref{thm.csd bound}. An outline of the remainder of this article is as follows. In Section 2, we describe the Delsarte-Yudin linear programming lower bounds and the Levenshtein $1/N$-quadrature rules. More thorough treatments can be found in \cite{MEBook}, \cite{Boumova}, and \cite{LevBig}. In Section \ref{section.csdproofs}, we present the proofs of Theorem \ref{thm.csd bound}, Proposition \ref{thm.2tight}, and Theorem \ref{thm.cohnire} using an asymptotic result on Jacobi polynomials from Szeg\H{o} \cite{Szego}. Finally, in Section \ref{section.ulbnum}, we discuss numerically the quality of the bound $A_{s,d}$ and formulate a natural conjecture. \section{Linear Programming Bounds} For $\alpha,\beta> -1$, let $\left\{P^{(\alpha,\beta)}_k\right\}_{k=0}^\infty$ denote the sequence of Jacobi polynomials of respective degrees $k$ that are orthogonal with respect to the weight $\omega^{\alpha,\beta}(t) : = (1-t)^\alpha(1+t)^\beta$ on $[-1,1]$ and normalized by \begin{equation} P^{(\alpha,\beta)}_k(1) = 1. \label{normalization} \end{equation} While this normalization is crucial for the linear programming methods presented here, we note that many authors choose $P^{(\alpha,\beta)}_k(1) = \binom{k+\alpha}{k}$. For a fixed dimension $d\geq 1$, the Gegenbauer or ultraspherical polynomials are given by $P_k(t) := P^{(\frac{d-2}{2},\frac{d-2}{2})}_k(t)$ with weight $\omega_d(t) := \omega^{(\frac{d-2}{2},\frac{d-2}{2})}(t)$. For our purposes, the so-called \emph{adjacent polynomials} \begin{equation} P^{a,b}_k(t): = P^{(\frac{d-2}{2}+a,\frac{d-2}{2}+b)}_k(t), \qquad a,b \in\{0,1\}, \label{eq.adjacent jacobi} \end{equation} associated with the weights $\omega_d^{a,b}(t):=(1-t)^a(1+t)^b\omega_d(t),$ play an essential role. For functions $f:[-1,1]\rightarrow \mathbb{R}$ that are square integrable with respect to $\omega_d$ on $[-1,1]$, we consider the Gegenbauer expansion $f(t) = \sum_{k=0}^\infty f_kP_k(t),$ where the coefficients $f_k$ are given by \begin{equation} f_k := \frac{\int_{-1}^1f(t)P_k(t)\omega_d(t)dt}{\int_{-1}^1 [P_k(t)]^2\omega_d(t)dt}.
\label{eq.gegcoeffs} \end{equation} The following result forms the basis for the linear programming bounds for packing and energy on the sphere (see, for example, \cite{Delsarte} or \cite[Theorem 5.3.2]{MEBook}): \begin{theorem} If $f:[-1,1]\rightarrow \mathbb{R}$ is of the form \[f(t) = \sum_{k=0}^{\infty}f_kP_k(t)\] with $f_k\geq 0$ for all $k\geq 1$ and $\sum_{k=0}^\infty f_k < \infty$, then for any $N$-point subset $\omega_N = \left\{x_1,\ldots,x_N\right\}\subset\mathbb{S}^d$, \begin{equation} \sum_{1\leq i\neq j\leq N} f(\langle x_i, x_j\rangle) \geq f_0 N^2 - f(1)N. \end{equation} Moreover, if $h: [-1,1] \rightarrow [0,\infty]$ and $h(t)\geq f(t)$ on $[-1,1]$, then for the energy kernel $K(x,y) := h(\langle x, y\rangle),$ \begin{equation} E_K(\omega_N)\geq \mathcal{E}_K(\mathbb{S}^d,N)\geq f_0N^2-f(1)N. \label{eq.yudin energy bound} \end{equation} Equality holds in \eqref{eq.yudin energy bound} and $\omega_N$ is an optimal (minimizing) $h$-energy configuration if and only if {\rm(i)} $h(t)=f(t)$ for all $t\in\left\{\langle x_i,x_j \rangle: i\neq j\right\}$, and {\rm(ii)} for all $k\geq 1$, either $f_k = 0$ or $\displaystyle\sum_{i,j = 1}^N P_k(\langle x_i,x_j\rangle) = 0$. \label{thm.del} \end{theorem} An $N$-point configuration $\omega_N = \left\{x_i\right\}_{i=1}^N\subset \mathbb{S}^d$ is called a \textit{spherical $\tau$-design} if \[\int_{\mathbb{S}^{d}} f(x)\,d\sigma_d(x) = \frac{1}{N}\sum_{i=1}^N f(x_i)\] holds for all spherical polynomials $f$ of degree at most $\tau$, where $\sigma_d$ denotes the normalized surface area measure on $\mathbb{S}^d$. Using Theorem \ref{thm.del}, Delsarte, Goethals, and Seidel \cite{DGS} obtained an estimate for the minimum number of points on $\mathbb{S}^d$ that are necessary for a $\tau$-design. Namely, setting \[B(d,\tau):=\min \left\{N : \exists\ \omega_N \subset \mathbb{S}^d\textup{ a spherical $\tau$-design} \right\},\] they show \[B(d,\tau)\geq D(d,\tau):=\left\{\arraycolsep=1.4pt\def\arraystretch{1.4} \begin{array}{ll} 2\binom{d+k-1}{d} & \textup{ if } \tau = 2k-1,\\ \binom{d+k}{d} + \binom{d+k-1}{d} & \textup{ if } \tau = 2k. \end{array}\right. \] \begin{definition} A sequence of ordered pairs $\left\{(\alpha_i,\rho_i)\right\}_{i=1}^k$ is said to be a $1/N$-\textit{quadrature rule exact on a subspace} $\Lambda\subset C([-1,1])$ if $1>\alpha_1>\cdots>\alpha_k\geq -1$, $\rho_i>0$ for $i = 1,\ldots,k$, and for all $f\in \Lambda$, \begin{equation} f_0=\frac{1}{\lambda_d}\int_{-1}^1f(t)\omega_d(t)dt = \frac{f(1)}{N}+\sum_{i=1}^k\rho_if(\alpha_i). \label{Levquad} \end{equation} \end{definition} Theorem \ref{thm.del} immediately gives rise to the following: \begin{theorem} Let $\left\{(\alpha_i,\rho_i)\right\}_{i=1}^k$ be a $1/N$-quadrature rule exact on a subspace $\Lambda$. For $K(x,y) := h(\langle x,y\rangle)$, let $\mathcal{A}_h$ be the set of functions $f$ with $f(t)\leq h(t)$ on $[-1,1]$ that satisfy the hypotheses of Theorem \ref{thm.del}. Then \[\mathcal{E}_K(\mathbb{S}^d,N) \geq N^2\sum_{i=1}^k\rho_if(\alpha_i)\] and \[\sup_{f\in\Lambda\cap\mathcal{A}_h} N^2\sum_{i=1}^k\rho_if(\alpha_i)\leq N^2\sum_{i=1}^k\rho_ih(\alpha_i).\] \label{thm.Yudinquad} \end{theorem} Levenshtein derives a $1/N$-quadrature rule, given in Theorem \ref{thm.nodeformula} below, to obtain the following bound for the maximal cardinality of a configuration $\omega_N\subset\mathbb{S}^d$ with largest inner product $s$.
Let \[A(d,s) := \max\left\{N: \exists\ \omega_N \subset\mathbb{S}^d \,{\rm{with}}\, \langle x_i,x_j\rangle \leq s,\, i\neq j\right\}.\] Letting $\gamma_k^{a,b}$ denote the greatest zero of $P_k^{a,b}$, we partition $[-1,1]$ into the following disjoint union of countably many intervals. For $\tau = 1,2,\ldots$, \[I_\tau := \left\{\arraycolsep=5pt\def\arraystretch{1.8} \begin{array}{ll} [\gamma_{k-1}^{1,1},\gamma_k^{1,0}] & \textup{ if } \tau = 2k-1,\\ [\gamma_k^{1,0},\gamma_k^{1,1}] & \textup{ if } \tau = 2k, \end{array} \right. \] which are well defined by the interlacing properties $\gamma_{k-1}^{1,1}<\gamma_k^{1,0}<\gamma_k^{1,1}$. Then \[A(d,s)\leq L(d,s),\] where \begin{equation}L(d,s) := \left\{\arraycolsep=5pt\def\arraystretch{1.8} \begin{array}{ll} L_{2k-1}(d,s) := \binom{k+d-2}{k-1}[\frac{2k+d-2}{d} - \frac{P_{k-1}(s)-P_k(s)}{(1-s)P_k(s)}] & \textup{ if } s\in I_{2k-1},\\ L_{2k}(d,s): = \binom{k+d-1}{k}[\frac{2k+d}{d}-\frac{(1+s)(P_k(s)-P_{k+1}(s))}{(1-s)(P_k(s)+P_{k+1}(s))}] & \textup{ if } s\in I_{2k}. \end{array}\right. \label{eq.Lev} \end{equation} The function $L(d,s)$ is called the \emph{Levenshtein function}. For fixed $d$, it is continuous and increasing in $s$ on $[-1,1]$. The formula for the Levenshtein function is such that the quadrature nodes given in Theorem \ref{thm.nodeformula} below will have weight $1/N$ at the node $\alpha_0 = 1$. At the endpoints of the intervals $I_\tau$, \begin{equation} \begin{split} L_{2k-2}(d,\gamma^{1,1}_{k-1}) & = L_{2k-1}(d,\gamma^{1,1}_{k-1}) = D(d,2k-1),\\ L_{2k-1}(d,\gamma^{1,0}_{k}) & = L_{2k}(d,\gamma^{1,0}_{k}) = D(d,2k), \end{split} \label{Levendpts} \end{equation} where $L_{\tau}$ denotes the restriction of $L$ to the interval $I_{\tau}.$ Setting $$ r^{a,b}_i: = \left(\frac{1}{\lambda_d^{a,b}}\int_{-1}^1(P^{a,b}_i(t))^2\omega^{a,b}_{d}(t)\,dt\right)^{-1}, $$ where $\lambda_d^{a,b}:=\int_{-1}^1\omega^{a,b}_{d}(t)\,dt,$ we define \begin{equation} Q^{a,b}_k(x,y):= \sum_{i=0}^k r^{a,b}_i P^{a,b}_i(x)P^{a,b}_i(y). \label{eq.sum} \end{equation} By the Christoffel--Darboux formula (see \cite[Section 3.2]{Szego}), \begin{align} \label{CD1}Q^{a,b}_k(x,y) & =r^{a,b}_km^{a,b}_k\bigg(\frac{P^{a,b}_{k+1}(x)P^{a,b}_k(y)-P^{a,b}_k(x)P^{a,b}_{k+1}(y)}{x-y}\bigg),\ \ \ \ \ \ \ \ x\neq y\\ \label{CD2}Q^{a,b}_k(x,x) & =r^{a,b}_km^{a,b}_k\left[(P^{a,b}_{k+1})^{'}(x)P^{a,b}_k(x)-(P^{a,b}_k)^{'}(x)P^{a,b}_{k+1}(x)\right], \end{align} where $m^{a,b}_i:=l^{a,b}_i/l^{a,b}_{i+1}$ and $l^{a,b}_i$ is the leading coefficient of $P^{a,b}_i$. The following $1/N$-quadrature rule, proven in \cite[Theorems 4.1 and 4.2]{Lev}, plays an essential role in establishing Theorem \ref{thm.csd bound}. \begin{theorem} For $N\in\mathbb{N}$, let $\tau$ be such that $N \in (D(d,\tau),D(d,\tau+1)]$, and let $\alpha_1 = \beta_1 = s$ be the unique solution to \[N = L(d,s).\] {\rm(i)} If $\tau =2k-1$, define nodes $1>\alpha_1>\cdots>\alpha_k> -1$ as the solutions of \begin{equation} (t-s)Q_{k-1}^{1,0}(t,s) = 0 \label{nodes} \end{equation} with associated weights \begin{equation} \rho_i = \frac{\lambda_d^{1,0}}{\lambda_d(1-\alpha_{i})Q^{1,0}_{k-1}(\alpha_i,\alpha_i)}. \label{weights} \end{equation} Then $\left\{(\alpha_i,\rho_i)\right\}_{i=1}^k$ is a $1/N$-quadrature rule exact on $\Pi_{2k-1}$.
{\rm(ii)} If $\tau =2k$, define nodes $1>\beta_1>\cdots>\beta_{k+1} = -1$ as the solutions of \begin{equation} (1+t)(t-s)Q_{k-1}^{1,1}(t,s) = 0 \end{equation} with associated weights \begin{equation} \begin{split} \eta_i & = \frac{\lambda_d^{1,1}}{(1-\beta_i^2)Q^{1,1}_k(\beta_i,\beta_i)}, \ \ \ \ \ \ \ \ i = 1,\ldots,k,\\ \eta_{k+1} & = \frac{Q_k(s,1)}{Q_k(-1,-1)Q_k(s,1)-Q_k(-1,1)Q_k(s,-1)}. \end{split} \end{equation} Then $\left\{(\beta_i,\eta_i)\right\}_{i=1}^{k+1}$ is a $1/N$-quadrature rule exact on $\Pi_{2k}$. \label{thm.nodeformula} \end{theorem} Here and below, $\Pi_{m}$ denotes the collection of all algebraic polynomials of degree at most $m$. \begin{remark} At the endpoints we also have that for $N = D(d,2k)$, $\left\{(\alpha_i,\rho_i)\right\}_{i=1}^k$ is exact on $\Pi_{2k}$ and for $N = D(d,2k+1)$, $\left\{(\beta_i,\eta_i)\right\}_{i=1}^{k+1}$ is exact on $\Pi_{2k+1}$. \end{remark} The above quadrature rules were used by Boyvalenkov et al.\ to derive the following universal lower bounds for the energy of spherical configurations. \begin{theorem}{\rm(\cite{Petersquared})} Let $N$ be fixed and let $h(t)$ be an absolutely monotone potential on $[-1,1)$. Suppose $\tau = \tau(d,N)$ is such that $N\in (D(d,\tau),D(d,\tau+1)]$ and let $k = \lceil{\frac{\tau+1}{2}}\rceil$. If $\{(\alpha_i,\rho_i)\}_{i=1}^k$ is the $1/N$-quadrature rule of Theorem \ref{thm.nodeformula}, then \begin{equation} \mathcal{E}_h(\mathbb{S}^d,N)\geq N^2\sum_{i=1}^k\rho_ih(\alpha_i). \label{PeterULB} \end{equation} \label{thm.hbound} \end{theorem} An analogous statement holds for the pairs $(\beta_i,\eta_i)$ of Theorem \ref{thm.nodeformula}(ii), but we shall not make use of it in our proofs. Taking into account Theorem \ref{thm.Yudinquad}, inequality \eqref{PeterULB} provides an optimal linear programming lower bound for the subspace $\Lambda = \Pi_\tau$. As an application, we now show that Theorem \ref{thm.hbound} recovers the first-order asymptotics for integrable potentials. \begin{example} If $h(t)$ is any absolutely monotone function that is also integrable with respect to $\omega_d(t)$ on $[-1,1]$, then \begin{equation} \lim_{N\to\infty} \frac{\mathcal{E}_h(\mathbb{S}^d,N)}{N^2} \geq \frac{1}{\lambda_d}\int_{-1}^1 h(t)\omega_d(t)\,dt, \label{intasymp} \end{equation} where $\lambda_d$ is defined in \eqref{eq.lambdad}. \end{example} \begin{remark}It is a classical result of potential theory that the limit exists and equality holds in (\ref{intasymp}); see \cite{Land}. \end{remark} \noindent \emph{Proof of \eqref{intasymp}.} First suppose $h(t)$ is continuous on $[-1,1]$. For $\epsilon > 0$, let $f(t)$ be a polynomial such that $|f(t)-h(t)| \leq \epsilon$ uniformly on $[-1,1]$. Setting $(\alpha_0,\rho_0):= (1,1/N)$, we note that the weights $\rho_i$ given in (\ref{weights}) are positive for $i= 0,\ldots, k$ and that $\sum_{i=0}^k\rho_i = 1$. From (\ref{Levquad}), we have with $\alpha_i=\alpha_i(N),\,\rho_i=\rho_i(N), \,k=k(N),$ \begin{align*} \bigg\vert\frac{1}{\lambda_d}\int_{-1}^1h(t)\omega_d(t)\,dt&-\sum_{i=0}^k\rho_ih(\alpha_i)\bigg|\\ & \leq \frac{1}{ \lambda_d}\int_{-1}^1|h(t)-f(t)|\omega_d(t)\,dt + \sum_{i=0}^k\rho_i|f(\alpha_i)-h(\alpha_i)|\\ & \leq 2\epsilon \end{align*} for all $N$ large enough that $2k(N)-1\geq \deg f$, so that the quadrature rule is exact for $f$. Since $\rho_0h(\alpha_0)=h(1)/N \to 0$ as $N\to\infty$ and $\epsilon>0$ was arbitrary, inequality (\ref{intasymp}) follows. Next suppose $h(t)$ is integrable and $g_m\nearrow h$ is a sequence of continuous functions increasing to $h$ (for existence, consider $g_m(t):=h((1-1/m)(t+1)-1)$).
By the Monotone Convergence Theorem and a string of inequalities similar to the one above, it follows that \[\lim_{k\to\infty}\sum_{i=1}^k\rho_ih(\alpha_i) = \frac{1}{ \lambda_d}\int_{-1}^1 h(t)\omega_d(t)\,dt,\] which concludes the proof. \medskip We remark that another feature of Theorem \ref{thm.nodeformula} is that it includes a best-packing result of Levenshtein \cite{LevPacking}, \cite{LevBig}, which asserts the following: if $\omega_N=\left\{x_1,\ldots, x_N\right\}$ is any $N$-point configuration on $\mathbb{S}^d$ and $\delta(\omega_N):=\max_{i\neq j}\langle x_i,x_j\rangle$, then \begin{equation} \delta(\omega_N)\geq \alpha_1, \label{eq.spheresep} \end{equation} where $\alpha_1=\alpha_1(N)$ is as given in Theorem \ref{thm.nodeformula}. This follows by considering absolutely monotone approximations to the potential \[ h(t) = \left\{ \begin{array}{ll} \infty & \textup{ if } t\geq\alpha_1\\ 0 & \textup{ if } t<\alpha_1. \end{array}\right. \] Indeed, if $\delta(\omega_N)<\alpha_1$, then $E_{h}(\omega_N)=0$, but $\sum_{i=1}^k\rho_ih(\alpha_i)=\infty$, contradicting \eqref{PeterULB}. \section{Proofs of Theorems \ref{thm.csd bound}, \ref{thm.cohnire}, and Proposition~\ref{thm.2tight}}\label{section.csdproofs} Our approach will be to find the asymptotic expansion of the right-hand side of (\ref{PeterULB}) as $N\to \infty$. Throughout this section we assume that $\alpha,\beta>-1$. We will make use of the following result of Szeg\H{o} (see \cite[Theorem 8.1.1]{Szego}), adjusted to the normalization (\ref{normalization}): \begin{theorem} Locally uniformly in the complex $z$-plane, \[\lim_{k\to\infty}P^{(\alpha,\beta)}_k\bigg(\cos\frac{z}{k}\bigg) = \lim_{k\to\infty}P^{(\alpha,\beta)}_k\bigg(1-\frac{z^2}{2k^2}\bigg) = \Gamma(\alpha+1)\bigg(\frac{z}{2}\bigg)^{-\alpha}J_{\alpha}(z).\] \label{thm.szego} \end{theorem} This gives the following immediate corollary: \begin{corollary} If $-1< \gamma_{k,k}<\dots<\gamma_{k,1}<1$ are the zeros of $P^{(\alpha,\beta)}_k$ and $z_i$ is the $i$-th smallest positive zero of the Bessel function $J_\alpha(z)$, then \begin{equation} \lim_{k\to \infty} k\cos^{-1}(\gamma_{k,i}) = z_{i}. \label{eq.jacobi zeros} \end{equation} \label{thm.jacobi zeros} \end{corollary} Recalling definition \eqref{eq.adjacent jacobi} and making use of well-known properties of the derivatives, norms, and leading coefficients of the Jacobi polynomials (see, e.g., \cite[Chapter 4]{Szego}), we obtain the following asymptotic formulas as $k\to\infty$: \begin{equation} \begin{split} \frac{\textup{d}}{\textup{d}t}P^{1,0}_k(t) &= \frac{1}{2}(k+d)\frac{\binom{k+\frac{d+2}{2}}{k}}{\binom{k+\frac{d}{2}}{k}}P^{2,1}_{k-1}(t)\\ &=\bigg(\frac{k^2}{d+2}+o(k^2)\bigg)P^{2,1}_{k-1}(t). \label{eq.jacobi derivative} \end{split} \end{equation} Furthermore, \begin{equation} \begin{split} \frac{r^{1,0}_k}{\lambda_d^{1,0}} & = \bigg(\int_{-1}^1(P^{1,0}_k(t))^2\omega^{1,0}(t)\,dt\bigg)^{-1}\\ & = \Bigg(\frac{2^d\Gamma(k+\frac{d+2}{2})\Gamma(k+\frac{d}{2})}{\binom{k+\frac{d}{2}}{k}^2(2k+d)\Gamma(k+d)\Gamma(k+1)}\Bigg)^{-1}\\ & = \frac{k^{d+1}}{2^{d-1}\Gamma(\frac{d+2}{2})^2}+o(k^{d+1}). \end{split} \label{eq.jacnorms} \end{equation} Lastly, recalling that $l^{1,0}_k$ is the leading coefficient of $P^{1,0}_k(t)$, \[l^{1,0}_k = \frac{\Gamma(2k+d)}{\binom{k+\frac{d}{2}}{k}2^k\Gamma(k+1)\Gamma(k+d)},\] which yields for the ratio \begin{equation} \begin{split} m^{1,0}_k = \frac{l^{1,0}_k}{l^{1,0}_{k+1}} & = \bigg(\frac{2(k+1)(k+d)}{(2k+d+1)(2k+d)}\bigg)\bigg(\frac{2k+2+d}{2k+2}\bigg)\\ & = \frac{1}{2}+o(1).
\end{split} \label{eq.coeff ratio} \end{equation} \begin{remark}Generalizing equations (\ref{eq.jacobi derivative})--(\ref{eq.coeff ratio}) to $P^{(\alpha,\beta)}_k(t)$, we obtain \begin{equation} \frac{\textup{d}}{\textup{d}t}P^{(\alpha,\beta)}_k(t) = \bigg(\frac{k^2}{2(\alpha+1)}+o(k^2)\bigg)P^{(\alpha+1,\beta+1)}_{k-1}(t), \label{eq.jac deriv general} \end{equation} \begin{equation} r^{(\alpha, \beta)}_k = O(k^{2\alpha+1}), \qquad \textup{and} \end{equation} \begin{equation} m^{(\alpha,\beta)}_k = \frac{1}{2}+o(1). \end{equation} \end{remark} We also need the following additional lemmas. \begin{lemma} Let $p_k(t) := P^{(\alpha,\beta)}_k(t)$ be a sequence of Jacobi polynomials. If $z\in \mathbb{R}$ is fixed such that $\displaystyle\lim_{k\to\infty}p_k(\cos\frac{z}{k}) = c $ and $\beta_k,\,-1\leq \beta_k\leq 1,$ is a sequence satisfying \begin{equation} \lim_{k\to\infty}k\cos^{-1}(\beta_k) = z, \label{betalim} \end{equation} then \begin{equation} \lim_{k\to\infty}p_{k+ j}(\beta_k) = c, \label{lem.eq.subindex} \end{equation} for any fixed $j\in \mathbb{Z}$. \label{lem.subindex} \end{lemma} \begin{proof} First, since $\displaystyle\lim_{k\to\infty}(k+ j)\cos^{-1}(\beta_k) = \lim_{k\to\infty}k\cos^{-1}(\beta_k)$, by replacing $k$ with $k+j$ it suffices to establish equation (\ref{lem.eq.subindex}) for the case $j=0$. From (\ref{betalim}), we have that \[\epsilon_k:= \Big|\beta_k-\cos\frac{z}{k}\Big| = o\bigg(\frac{1}{k^2}\bigg).\] Applying the mean value theorem, equation (\ref{eq.jac deriv general}), and the fact that $p_k$ is uniformly bounded in $k$ on $[-1,1]$ (see e.g.\ \cite{ErdMagNev}), we get, with $p_{k-1}^{1,1}:=P^{(\alpha+1,\beta+1)}_{k-1}$, \[|p_k(\beta_k)-p_k(\cos\frac{z}{k})| = |p_k'(\xi_k)|\,\epsilon_k \leq \tilde{c}\,k^2\,|p_{k-1}^{1,1}(\xi_k)|\,\epsilon_k = o(1),\] for some $\xi_k$ between $\beta_k$ and $\cos(\frac{z}{k})$ and some constant $\tilde{c}>0$. \end{proof} A stronger version of Lemma \ref{lem.subindex} holds when $c=0$. \begin{lemma} Let $-1< \gamma_{k,k}<\dots<\gamma_{k,1}< 1$ be the zeros of $p_k(t):= P^{(\alpha,\beta)}_k(t)$, and denote by $z_{i}$ the $i$-th smallest positive zero of the Bessel function $J_\alpha(z)$. Then for all $i = 1,2,\ldots$, \[\lim_{k\to\infty}kp_{k-1}(\gamma_{k,i}) = 2\Gamma(\alpha+1)\bigg(\frac{z_{i}}{2}\bigg)^{-\alpha+1}J_{\alpha+1}(z_{i}).\] \label{lem.aha} \end{lemma} \begin{proof} By Corollary \ref{thm.jacobi zeros}, \[\gamma_{k,i} = 1-\frac{z_{i}^2}{2k^2} + o\bigg(\frac{1}{k^2}\bigg),\] which implies \[\delta_k:=|\gamma_{k,i}-\gamma_{k-1,i}| = \frac{z_{i}^2}{k^3}+o\bigg(\frac{1}{k^3}\bigg),\,\,\mathrm{as}\,\, k\to \infty.\] By the interlacing properties of the zeros of Jacobi polynomials, we see that $\gamma_{k,i}>\gamma_{k-1,i}$, and so we may drop the absolute value in $\delta_k$. Expanding the Taylor series for $p_{k-1}(t)$ around the zero $\gamma_{k-1,i}$, we have \[kp_{k-1}(\gamma_{k,i}) = k\delta_kp'_{k-1}(\gamma_{k-1,i}) + \frac{k\delta_k^2p^{''}_{k-1}(\gamma_{k-1,i})}{2}+\cdots\] Each successive derivative term beyond the first has order $o(1)$, since by repeated application of \eqref{eq.jac deriv general} and Lemma \ref{lem.subindex}, $p_k^{(j)}(t) = O(k^{2j})p^{j,j}_{k-j}(t) = O(k^{2j})$, while on the other hand $\delta_k^j = O(1/k^{3j})$.
Thus, \[kp_{k-1}(\gamma_{k,i}) = \frac{z^2_{i}}{2(\alpha+1)}p^{1,1}_{k-2}(\gamma_{k-1,i})+o(1)\,\, \mathrm{as} \,\, k\to\infty.\] Now by Theorem \ref{thm.szego} and Lemma \ref{lem.subindex}, we obtain the result. \end{proof} We are now ready to prove the main theorem. \begin{proof}[\textbf{Proof of Theorem \ref{thm.csd bound}}] In the case of Riesz energy, we have \[K_s(x,y) = h_s(\langle x,y\rangle) = (2-2\langle x,y\rangle)^{-s/2}.\] We consider the subsequence \begin{equation} N_k := D(d,2k) = \binom{d+k}{d} + \binom{d+k-1}{d} = \frac{2}{\Gamma(d+1)}k^{d} + o(k^d). \label{eq.subsequence} \end{equation} By Theorem \ref{poppy seed} it suffices to prove \begin{equation}\lim_{k\to \infty}\frac{\mathcal{E}_s(N_k,\mathbb{S}^d)}{N_k^{1+s/d}} \geq \frac{A_{s,d}}{\mathcal{H}_d(\mathbb{S}^d)^{s/d}}, \label{eq.new bound} \end{equation} where \[\mathcal{H}_d(\mathbb{S}^d) = \frac{2\pi^{\frac{d+1}{2}}}{\Gamma\big(\frac{d+1}{2}\big)}. \] Along the subsequence $N_k$, from (\ref{Levendpts}), $\alpha_1 = \gamma^{1,0}_{k,1},$ where $\gamma^{1,0}_{k,i}$ is the $i$-th largest zero of $P^{1,0}_k(t)$ and \begin{align*}(t-\alpha_1)Q^{1,0}_{k-1}(t,\alpha_1) & = r_{k-1}m_{k-1}(P^{1,0}_{k}(t)P^{1,0}_{k-1}(\alpha_1)-P^{1,0}_{k-1}(t)P^{1,0}_k(\alpha_1))\\ & = r_{k-1}m_{k-1}P^{1,0}_{k}(t)P^{1,0}_{k-1}(\alpha_1), \end{align*} since $P^{1,0}_k(\alpha_1)=0$; thus the quadrature nodes are given by \[\alpha_i = \gamma^{1,0}_{k,i}, \ \ \ \ \ \ \ \ i = 1,2,\ldots, k.\] For a fixed $m$ and all $k\geq m$ we have by Theorem \ref{thm.hbound} \[\frac{\mathcal{E}_s(N_k)}{N_k^{1+s/d}}\geq\frac{\sum_{i=1}^k\rho_ih_s(\alpha_i)}{N_k^{-1+s/d}}\geq\frac{\sum_{i=1}^m\rho_ih_s(\alpha_i)}{N_k^{-1+s/d}}.\] For a fixed $i\leq m$, we next establish asymptotics for $\rho_ih_s(\alpha_i)$. By Corollary \ref{thm.jacobi zeros} we have \begin{equation} \lim_{k\to \infty} \frac{h_s(\alpha_i)}{k^{s}} = \lim_{k\to \infty} \frac{(2-2\alpha_i)^{-s/2} }{k^s}= (z_{i})^{-s}, \label{eq.halpha} \end{equation} and by (\ref{eq.jacobi derivative}) and Lemma \ref{lem.subindex}, \begin{equation} \lim_{k\to\infty} \frac{(P^{1,0}_k)^{'}(\alpha_i)}{k^2} = \frac{\Gamma(d/2+2)}{d+2}\bigg(\frac{z_{i}}{2}\bigg)^{-\frac{d+2}{2}}J_{d/2+1}(z_{i}). \label{eq.derivative limit} \end{equation} Furthermore, from Lemma \ref{lem.aha}, it follows that \begin{equation} \lim_{k\to\infty}kP^{1,0}_{k-1}(\alpha_i) = 2\Gamma(d/2+1)\bigg(\frac{z_{i}}{2}\bigg)^{-\frac{d-2}{2}}J_{d/2+1}(z_{i}). \label{eq.aha} \end{equation} From the weight formula given in equation (\ref{weights}) and the Christoffel-Darboux formula (\ref{CD2}) we deduce that \begin{equation} \lim_{k\to\infty}k^d\rho_i = \lim_{k\to\infty}k^d\left(\frac{\lambda_d}{\lambda_d^{1,0}}(1-\alpha_{i})r^{1,0}_{k-1}m^{1,0}_{k-1}(P^{1,0}_{k})^{'}(\alpha_i)P^{1,0}_{k-1}(\alpha_i)\right)^{-1}, \label{eq.weight limit 1} \end{equation} and combining equations (\ref{eq.jacobi zeros}), (\ref{eq.jacnorms}), (\ref{eq.coeff ratio}), (\ref{eq.derivative limit}), and (\ref{eq.aha}), this yields \begin{equation} \begin{split} \lim_{k\to\infty}k^d\rho_i = \bigg[\lambda_d\bigg(\frac{z_{i}^2}{2}\bigg)\bigg(\frac{1}{2^{d-1}\Gamma(d/2+1)^2}\bigg)\frac{1}{2}\bigg(& \frac{\Gamma(d/2+2)}{d+2}\bigg(\frac{z_{i}}{2}\bigg)^{-d/2-1}J_{d/2+1}(z_{i}) \bigg)\\ & \cdot \ 2\Gamma(d/2+1)\bigg(\frac{z_{i}}{2}\bigg)^{-d/2+1}J_{d/2+1}(z_{i}) \bigg]^{-1}. \end{split} \label{weight limit 2} \end{equation} Simplifying gives \begin{equation} \lim_{k\to\infty}k^d\rho_i = \frac{2}{\lambda_d z_{i}^{2-d}\big(J_{d/2+1}(z_{i})\big)^{2}}.
\label{weight limit 3} \end{equation} Finally, combining the asymptotics for $N_k$, $h_s(\alpha_i)$, and $\rho_i$, equations (\ref{eq.subsequence}), (\ref{eq.halpha}), and (\ref{weight limit 3}), respectively, we obtain \begin{equation} \lim_{k\to\infty}\frac{\rho_ih_s(\alpha_i)}{N_k^{s/d-1}} = \frac{2}{\lambda_d\big(\frac{2}{\Gamma(d+1)}\big)^{s/d-1}z_{i}^{2-d+s}\big(J_{d/2+1}(z_{i})\big)^{2}}, \label{eq.single term limit} \end{equation} and thus \[\frac{C_{s,d}}{\mathcal{H}_d(\mathbb{S}^d)^{s/d}} = \lim_{k\to\infty}\frac{\mathcal{E}_s(N_k,\mathbb{S}^d)}{N_k^{1+s/d}}\geq \sum_{i=1}^m\frac{2}{\lambda_d\big(\frac{2}{\Gamma(d+1)}\big)^{s/d-1}z_{i}^{2-d+s}\big(J_{d/2+1}(z_{i})\big)^{2}}.\] Multiplying by $\mathcal{H}_d(\mathbb{S}^d)^{s/d}$ and letting $m\to \infty$ gives (\ref{eq.new bound}) and hence (\ref{eq.Asd}). \end{proof} \begin{proof}[\textbf{Proof of Proposition \ref{thm.2tight}}] We first establish the limit involving $\xi_{s,d}$: \begin{equation}\label{xilim} \begin{split} \lim_{s\to d^+}(s-d)\xi_{s,d}&=\lim_{s\to d^+}d\bigg[\frac{\pi^{d/2}\Gamma(1+\frac{s-d}{2})}{\Gamma(1+\frac{s}{2})}\bigg]^{s/d} =\frac{d\pi^{d/2}}{\Gamma(1+\frac{d}{2})}=\frac{2\pi^{d/2}}{\Gamma(\frac{d}{2})}. \end{split} \end{equation} If $\Lambda$ is a $d$-dimensional lattice with co-volume $|\Lambda|>0$, then it is known (see \cite{Terras}) that the Epstein zeta function has a simple pole at $s=d$ with residue \begin{equation} \frac{2\pi^{d/2}}{\Gamma(d/2)|\Lambda |}.\end{equation} Proposition~\ref{bhsbound}, the bound \eqref{eq.lattice bound}, and \eqref{xilim} then show \begin{equation} \lim_{s\to d^+} (s-d)C_{s,d} = \frac{2\pi^{d/2}}{\Gamma(\frac{d}{2})}. \end{equation} Finally, we establish the limit involving $A_{s,d}$. The well-known asymptotic behavior of $J_{\frac{d}{2}+1}(z)$ \cite{Szego}, as $z\to\infty$, is given by \begin{equation} J_{\frac{d}{2}+1}(z) = \sqrt{\frac{2}{\pi z}}\Big(\cos\big(z-(d+3)\tfrac{\pi}{4}\big)+O\big(z^{-1}\big)\Big), \label{asympBessel} \end{equation} and $z_n$, the $n$-th positive zero of $J_{\frac{d}{2}}(z)$, is given by (see \cite{Bessel}) \begin{equation} z_n = n\pi+(d-1)\frac{\pi}{4}+O(n^{-1}). \label{asmpyBeszero} \end{equation} Thus, $$ J_{\frac{d}{2}+1}(z_n)^{-2}=\frac{\pi z_n}{2}+O(n^{-1}), $$ and so we have \[\sum_{n=1}^\infty \frac{1}{z_n^{s-d+2}J_{\frac{d}{2}+1}(z_n)^2} = \frac{\pi}{2}\sum_{n=1}^\infty\frac{1}{z_n^{s-d+1}+a_n} = \frac{1}{2\pi^{s-d}}\sum_{n=1}^\infty\frac{1}{(n+(d-1)/4+b_n)^{s-d+1}+a_n},\] where $a_n = o(1)$ and $b_n = o(1)$. As $s\to d^+$, this sum approaches the Hurwitz zeta function $\zeta(s-d+1,(d+3)/4)$, where \begin{equation} \zeta(s,q) := \sum_{n=0}^\infty \frac{1}{(n+q)^{s}}. \label{Hurzeta} \end{equation} That is, \begin{equation} \lim_{s\to d^+} \frac{\sum_{n=1}^\infty ((n+(d-1)/4+b_n)^{s-d+1}+a_n)^{-1}}{\zeta(s-d+1,(d+3)/4)} = 1. \label{eq.Hurlimit} \end{equation} Indeed, suppose $a = \sup_n |a_n|$ and $b = \sup_n |b_n|$. Then, \begin{align*} &\sum_{n=1}^\infty\frac{1}{(n+(d-1)/4+b_n)^{s-d+1}+a_n} \geq\sum_{n=1}^\infty\frac{1}{(n+(d-1)/4+b)^{s-d+1}+a}\\ &\geq\sum_{n=1}^\infty\frac{1}{(n+(d-1)/4+b+a)^{s-d+1}} = \sum_{n=0}^\infty\frac{1}{(n+(d+3)/4+a+b)^{s-d+1}} = \zeta(s-d+1,(d+3)/4+a+b), \end{align*} and similarly \[\sum_{n=1}^\infty\frac{1}{(n+(d-1)/4+b_n)^{s-d+1}+a_n}\leq \zeta(s-d+1,(d+3)/4-a-b).\] Since $\zeta(s,q)\to \infty$ as $s\to 1^+$ (while the terms of the series in \eqref{eq.Hurlimit} stay bounded), the limit \eqref{eq.Hurlimit} holds.
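As a numerical aside (not needed for the proof), the behavior of $\zeta(s,q)$ near $s=1$ can be checked directly; the sketch below assumes Python with the \texttt{mpmath} library and uses $d=3$, i.e., $q=(d+3)/4=3/2$, as an example value.
\begin{verbatim}
# Sanity check: (s - 1) * zeta(s, q) -> 1 as s -> 1+, for every q > 0.
# Assumes mpmath; zeta(s, q) is the Hurwitz zeta function.
from mpmath import zeta

q = 1.5                              # (d + 3) / 4 with d = 3, for example
for e in [1e-1, 1e-2, 1e-3, 1e-4]:
    s = 1 + e
    print(e, (s - 1) * zeta(s, q))   # approaches 1 as e -> 0
\end{verbatim}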
In fact, $\zeta(s,q)$ has a simple pole with residue $1$ at $s=1$ for all $q$, and so we obtain: \begin{equation*} \begin{split} \lim_{s\to d^+} (s-d)A_{s,d} &= \lim_{s\to d^+}\bigg[\frac{\pi^{\frac{d+1}{2}}\Gamma(d+1)}{\Gamma(\frac{d+1}{2})}\bigg]^{s/d}\frac{4(s-d)}{\lambda_d\Gamma(d+1)}\sum_{i=1}^\infty(z_{i})^{d-s-2}\big(J_{d/2+1}(z_{i})\big)^{-2}\\ &=\lim_{s\to d^+}\frac{4\pi^{d/2}}{\Gamma(\frac{d}{2})}\frac{(s-d)}{2}\zeta(s-d+1,(d+3)/4)=\frac{2\pi^{d/2}}{\Gamma(\frac{d}{2})}, \end{split} \end{equation*} which completes the proof of Proposition~\ref{thm.2tight}. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm.cohnire}}] For a fixed $\rho$ and a Gaussian potential $f(|x-y|) = h(\langle x,y\rangle) = e^{-\alpha(2-2\langle x,y\rangle)}$, set $$c:= (a_d \rho)^{1/d},\,\,\, \mathrm{where} \,\,\, a_d:=\frac{(d+1)\pi^{\frac{d+1}{2}}}{\Gamma(1+\frac{d+1}{2})}=\frac{2\pi^{\frac{d+1}{2}}}{\Gamma(\frac{d+1}{2})}$$ is the area of $\mathbb{S}^d$, and let \[f_N(|x-y|) = h_N(\langle x,y\rangle) := e^{-\alpha\frac{2-2\langle x,y\rangle}{(cN^{-1/d})^2}}.\] Our approach is to first obtain estimates for the $h_N$-energy of $N$-point configurations on the sphere $\mathbb{S}^d$. For each $N$, $h_N$ is absolutely monotone on $[-1,1)$, and so Theorem \ref{thm.hbound} holds. We apply the same asymptotic argument as in the proof of Theorem \ref{thm.csd bound} to $h_N(t)$. In particular we sample along the subsequence \[N_k := D(d,2k),\] where the nodes $\alpha_i$ are given by the zeros of $P_k^{1,0}(t)$. Using the asymptotic formulas for $N_k$, the quadrature nodes $\alpha_i$, and the weights $\rho_i$, we obtain from Corollary \ref{thm.jacobi zeros} and \eqref{weight limit 3} that \begin{equation} \liminf_{N\to \infty} \frac{\mathcal{E}_{h_{N}}(\mathbb{S}^d,N)}{N}\geq \frac{4}{\lambda_d\Gamma(d+1)}\sum_{i=1}^\infty\frac{z_i^{d-2}}{(J_{d/2+1}(z_i))^2}e^{-\alpha\big(\frac{z_i}{c(2/\Gamma(d+1))^{-1/d}}\big)^2}. \label{GaussSphasymp} \end{equation} Let $0<\epsilon<1$. Then there is a collection $\{ C(a_\ell,r_\ell)\colon \ell=1, 2, \ldots, L\}$ of disjoint closed spherical caps on $\mathbb{S}^d$ such that $r_\ell<\epsilon$ and $$\sum_{\ell=1}^L\mathcal{H}_d( C(a_\ell,r_\ell))\ge (1-\epsilon)\mathcal{H}_d(\mathbb{S}^d).$$ Using \eqref{HdCap} and the fact that the caps are disjoint, it follows that there is a constant $\kappa_1>0$, independent of $\epsilon$, such that \begin{equation}\label{kappa1} (1+\kappa_1 \epsilon)^{-1}d\lambda_d\le \sum_{\ell=1}^L r_\ell^d \le d\lambda_d(1+\kappa_1\epsilon). \end{equation} Furthermore, there are mappings $\phi_\ell:B^d(r_\ell)\to C(a_\ell,r_\ell)$, $\ell=1,2, \ldots, L$, and a constant $\kappa_2$ (again independent of $\epsilon$) such that \begin{equation}\label{phiineq} |\phi_\ell(x)-\phi_\ell(y)|\ge (1- \kappa_2\epsilon)|x-y|, \qquad (x,y\in B^d(r_\ell)). \end{equation} Let $\mathcal{C}$ be a configuration in $\mathbb{R}^d$ with density $\rho$ and $f_\alpha$-energy $E_{f_\alpha}(\mathcal{C})$; i.e., the limits in Definitions~\ref{lowerfenergy} and \ref{lowerdensity} both exist. Then, as $R\to \infty$, we have for any $\alpha>0$, \begin{equation}\label{cardCr} \#(\mathcal{C}\cap B^d(R))={\rho\, \textup{vol}(B^d(R))}(1+o(1)), \end{equation} and \begin{equation} E_{f_\alpha}\left(\mathcal{C}\cap B^d(R)\right)\le [\rho\, \textup{vol}(B^d(R))]\, E_{f_\alpha}(\mathcal{C})(1+o(1)).
\end{equation} For $\ell=1, 2, \ldots, L$, let $$ \omega_N^\ell:=\phi_{\ell}(cN^{-1/d}\mathcal{C}\cap B^d(r_\ell N^{1/d}/c)), $$ and \begin{equation} \omega_N^{\mathcal{C}}:=\bigcup_{\ell=1}^L\omega_N^\ell. \end{equation} Observing that $\rho\, \textup{vol}(B^d(1))d\lambda_d/c^d=1$, we see from \eqref{kappa1} and \eqref{cardCr} that as $N\to \infty$ the cardinality of $\omega_N^{\mathcal{C}}$ satisfies \begin{equation}\label{cardOmegaNC} \#\omega_N^{\mathcal{C}}=\sum_{\ell=1}^L \#(\mathcal{C}\cap B^d(r_\ell N^{1/d}/c))\ge (1+\kappa_1 \epsilon)^{-1}N(1+o(1)). \end{equation} Let $\delta$ denote the smallest distance between any pair of distinct spherical caps $C(a_\ell,r_\ell)$ and $C(a_{\ell'},r_{\ell'})$. The {\em cross energy} for $\ell\neq \ell'$ satisfies \begin{equation}\label{crossen} \frac{E_{h_N}(\omega_N^\ell,\omega_N^{\ell'})}{N}:=\frac{1}{N} \sum_{\substack{x\in \omega_N^\ell\\ y\in \omega_N^{\ell'}}}h_N(\langle x,y \rangle)\le N\exp\left(-\frac{\alpha\delta^2}{c^2}N^{2/d}\right)=o(1), \end{equation} as $N\to \infty$. Using \eqref{phiineq} and defining $\alpha_\epsilon=\alpha(1- \kappa_2\epsilon)^2$, we obtain \begin{equation*} \begin{split} E_{h_N}(\omega_N^\ell)&=\sum_{\substack{x, y\in \mathcal{C}\cap B^d(r_\ell N^{1/d}/c)\\ x\neq y}}\exp\bigg(-\alpha\frac{|\phi_\ell(cN^{-1/d} x)- \phi_\ell(cN^{-1/d}y)|^2}{(cN^{-1/d})^2}\bigg)\\ &\le \sum_{\substack{x, y\in \mathcal{C}\cap B^d(r_\ell N^{1/d}/c)\\ x\neq y}}\exp(-\alpha(1- \kappa_2\epsilon)^2|x-y|^2)=E_{f_{\alpha_\epsilon}}(\mathcal{C}\cap B^d(r_\ell N^{1/d}/c))\\ &\le [\rho\, \textup{vol}(B^d(1))]\,(r_\ell^d N/c^d) E_{f_{\alpha_\epsilon}}(\mathcal{C})(1+o(1))= \frac{Nr_\ell^d}{d\lambda_d}E_{f_{\alpha_\epsilon}}(\mathcal{C})(1+o(1)). \end{split} \end{equation*} Using the above estimate for $E_{h_N}(\omega_N^\ell)$ together with \eqref{cardOmegaNC} and \eqref{crossen}, we obtain, as $N\to\infty$, \begin{equation}\label{alltogether} \begin{split} \frac{\mathcal{E}_{h_{N}}(\mathbb{S}^d,\#\omega_N^{\mathcal{C}})}{\#\omega_N^{\mathcal{C}}}&\leq\frac{E_{h_N}(\omega_N^{\mathcal{C}})}{\#\omega_N^{\mathcal{C}}}\le (1+\kappa_1\epsilon)\sum_{\ell=1}^L \frac{E_{h_N}(\omega_N^\ell)}{N}(1+o(1)) +o(1)\\ &\le (1+\kappa_1\epsilon) \frac{1}{d\lambda_d}\left(\sum_{\ell=1}^L r_\ell^d\right)E_{f_{\alpha_\epsilon}}(\mathcal{C})(1+o(1)) +o(1)\\&\le (1+\kappa_1 \epsilon)^2E_{f_{\alpha_\epsilon}}(\mathcal{C})(1+o(1)) +o(1). \end{split} \end{equation} Taking the limit inferior as $N\to\infty$ and then $\epsilon\to 0$ in \eqref{alltogether} and using \eqref{GaussSphasymp} completes the proof. \end{proof} \section{Numerics}\label{section.ulbnum} Translated into packing density and using Corollary \ref{thm.jacobi zeros}, inequality \eqref{eq.spheresep} provides an alternate proof of the following best-packing bound of Levenshtein \cite{LevPacking}: \begin{corollary} \[\Delta_d\leq \frac{z_{1}^d}{\Gamma(d/2+1)^2\,4^d} =:L_d,\] where $z_1$ denotes the smallest positive zero of $J_{d/2}(z)$. \label{packingbound} \end{corollary} As $s\to\infty$, the series in $A_{s,d}$ is dominated by the first term $z_{1}^{-s}$, and using the asymptotics of $C_{s,d}$ in (\ref{eq.Csinfty}), we see that \begin{equation} \lim_{s\to\infty}\bigg[\frac{C_{s,d}}{A_{s,d}}\bigg]^{1/s} = \bigg[\frac{L_d}{\Delta_d}\bigg]^{1/d}=: B_d \geq 1. \label{eq.Bd} \end{equation} The following table shows the values of $B_d$ in dimensions $d=1,2,3,8,$ and $24$, where $\Delta_d$ is known precisely. For $d=4,5,6,7$, where $\Delta_d$ is conjectured to be given by lattice packings, the table provides an upper bound for $B_d$. \begin{table}[h!]
\begin{center} \caption{Upper Bounds on $B_d$} \begin{tabular}{|c|c|} \hline $d$ & $B_d$ \\ \hline 1 & 1\\ 2 & 1.00589479\\ 3 & 1.02703993\\ 4 & 1.02440844\\ 5 & 1.03861371\\ 6 & 1.03461793\\ 7 & 1.03156355\\ 8 & 1.01742074\\ 24 & 1.02403055\\ \hline \end{tabular} \label{tab.Bd} \end{center} \end{table} \begin{figure} \includegraphics[scale = 0.48]{Hexasymptotic.pdf} \includegraphics[scale = 0.48]{D4asymptotic}\\ \includegraphics[scale = 0.48]{E8asymptotic.pdf} \includegraphics[scale = 0.48]{Leechasymptotic} \caption{Graphs of $f(s) = (\widetilde{C}_{s,d}/A_{s,d})^{1/s}$ for $d=2,4,8$ and $24$.} \label{Asdfigure} \end{figure} For $d = 2,4,8,$ and $24$, where $\widetilde{C}_{s,d}$ is given in Conjecture \ref{Csdconj}, we plot \[f(s):=\bigg[\frac{\widetilde{C}_{s,d}}{A_{s,d}}\bigg]^{1/s}.\] The Epstein zeta functions for the $D_4$, $E_8$, and Leech lattices are calculated using known formulas for the theta functions (see \cite[Ch. 4]{Conway}) \[ \Theta_\Lambda(z) = \sum_{x\in\Lambda}e^{i\pi z|x|^2},\ \ \ \ \ \ \ \ \ \ \ \mathrm{Im}\,z > 0.\] Since these three lattices have vectors whose squared norms are even integers, we let $q=e^{i\pi z}$ and write \[\Theta_{\Lambda_d}(z) = \sum_{m=0}^\infty N_d(m) q^{2m},\] where $N_d(m)$ counts the number of vectors in $\Lambda_d$, $d=4,8,24$, of squared norm $2m$. Thus the Epstein zeta function is \[\zeta_{\Lambda_d}(s) = \sum_{m=1}^\infty \frac{N_d(m)} {(2m)^{s/2}}.\] For the $D_4$ lattice, a classical result from number theory gives \[N_4(m) = 24\sum_{\substack{d|2m,\\ d \textup{ odd}}}d.\] For the $E_8$ lattice, we have \[N_8(m) = 240 \sigma_3(m),\] where \[\sigma_k(m) = \sum_{d|m}d^k \] is the divisor function. Finally, for the Leech lattice, it is known that \[N_{24}(m) = \frac{65520}{691} \left(\sigma_{11} (m) - \tau (m) \right), \] where $\tau(m)$ is the Ramanujan tau function defined in \cite{Ramtau}. Figure \ref{Asdfigure} plots $f(s)$ for $d=2,4,8$ and $24$. In these dimensions the graphs monotonically increase to the limit $B_d$ as $s\to \infty$ and decrease to $1$ as $s\to d^+$, illustrating Proposition \ref{thm.2tight}. We remark that in high dimensions it is likely that lattice packings are no longer optimal, and less is known or conjectured regarding $C_{s,d}$. The Levenshtein packing bound from Corollary \ref{packingbound} yields, for large $d$, \begin{equation} \Delta_d\leq 2^{-0.5573d}, \label{asymppacking} \end{equation} and thus \[B_d = O\bigg(\frac{2^{-0.5573}}{\Delta_d^{1/d}}\bigg).\] \noindent\emph{Acknowledgment.} The authors are grateful to J. S. Brauchart for his helpful suggestions. \bibliographystyle{plain}
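The entries of Table~\ref{tab.Bd} for $d=1,2,3,8,$ and $24$ can be reproduced from Corollary~\ref{packingbound} together with the known optimal packing densities. The sketch below is only an illustration of this computation; it assumes Python with the \texttt{mpmath} library (for Bessel-function zeros of non-integer order) and hard-codes the classical values $\Delta_1=1$, $\Delta_2=\pi/\sqrt{12}$, $\Delta_3=\pi/\sqrt{18}$, $\Delta_8=\pi^4/384$, and $\Delta_{24}=\pi^{12}/12!$.
\begin{verbatim}
# Sketch: B_d = (L_d / Delta_d)^(1/d) in the dimensions where the
# optimal packing density Delta_d is known exactly.  Assumes mpmath.
from mpmath import besseljzero, gamma, pi, sqrt, mpf

Delta = {1: mpf(1),
         2: pi / sqrt(12),
         3: pi / sqrt(18),
         8: pi**4 / 384,
         24: pi**12 / 479001600}          # 12! = 479001600

for d, Dd in Delta.items():
    z1 = besseljzero(mpf(d) / 2, 1)       # smallest positive zero of J_{d/2}
    Ld = z1**d / (gamma(d / mpf(2) + 1)**2 * 4**d)   # Levenshtein bound L_d
    print(d, (Ld / Dd)**(mpf(1) / d))     # matches the table entries
\end{verbatim}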
{ "timestamp": "2018-04-17T02:07:09", "yymm": "1804", "arxiv_id": "1804.05237", "language": "en", "url": "https://arxiv.org/abs/1804.05237" }
\section{\label{sec:1}Introduction} In a superfluid with internal degrees of freedom such as superfluid helium-$3$ and spinor Bose-Einstein condensates (BECs), supercurrents are accompanied by spatio-temporal variations of spin and nematicity as well as the U($1$) phase~\cite{Salomaa1,Salomaa2}. Here, the rotation of the superfluid velocity, in general, does not vanish but depends on the spin-nematic texture, reflecting the nonholonomic nature of the texture. For the case of the $A$ phase of superfluid $^3$He ($^3$He-$A$), the nonholonomy leads to the celebrated Mermin-Ho relation~\cite{Mermin2}. The Mermin-Ho relation is expressed in terms of three generators of the underlying so(3) symmetry of the order parameter, and implies that the circulation of the superfluid velocity is quantized when the loop encloses certain types of vortices~\cite{Salomaa1,Mermin1} such as the Mermin-Ho (MH) vortex~\cite{Mermin2} and the Chechetkin-Anderson-Toulouse (CAT) vortex~\cite{Chechetkin,Anderson}. In this paper we investigate the corresponding relation for spin-$1$ BECs. A new feature of this system is that not only the direction but also the magnitude of the spin vector can change over space and time, and the spin nematicity arises as a consequence. We generalize the Mermin-Ho relation so as to be applicable to spin-$1$ BECs. The obtained relation involves the eight generators and the corresponding structure constants of the su($3$) algebra, which implies the existence of vortices that belong to an su($2$) subalgebra of the su($3$) algebra other than the ordinary one corresponding to the above-mentioned so($3$) symmetry. \begin{figure}[t] \begin{center} \includegraphics[bb = 0 0 1214 473, clip, scale = 0.175]{fig1.eps} \end{center} \caption{(Color Online) Three basis functions corresponding to $|x\rangle$, $|y\rangle$, and $|z\rangle$ in the Cartesian representation plotted in the spherical coordinates $(r, \theta , \phi)$, where $r$, $\theta$, and $\phi$ represent the radial coordinate, the polar angle, and the azimuth angle, respectively. Here, $\phi \in [0, 2\pi )$, which is indicated by a rainbow spectral gradient, is defined as the angle between the $x$ axis and the vector projected onto the $xy$ plane. The basis functions are represented in terms of the rank-$1$ spherical harmonics as $u_x(\theta ,\phi ) := \langle \theta,\phi | x \rangle = ( - Y_1^1 (\theta ,\phi ) + Y_1^{-1} (\theta ,\phi ) ) / \sqrt{2}$, $u_y(\theta ,\phi ) := \langle \theta,\phi | y \rangle = i ( Y_1^1 (\theta ,\phi ) + Y_1^{-1} (\theta ,\phi ) ) / \sqrt{2}$, and $u_z(\theta ,\phi ) := \langle \theta,\phi | z \rangle = Y_1^0 (\theta ,\phi )$. The radial coordinate is given by $r(\theta ,\phi ) = |u_{\mu} (\theta ,\phi )|$. The color gauge shown on the right indicates the value of $\phi$.} \label{fig:1} \end{figure} In the mean-field description of a spin-$1$ BEC, all bosons condense into a single spin state as well as a single spatio-temporal mode. To describe the spin degrees of freedom, we adopt the Cartesian basis $\{ |\mu \rangle \,|\, \mu = x, y, z \}$ satisfying $F_{\mu} |\mu \rangle = 0$~\cite{Ohmi}, where $F_{\mu}$ is the $\mu$ component of the spin-vector matrix. In the Cartesian representation, the spin matrices are given by $(F_{\mu})_{\nu \lambda} = - i {\epsilon}_{\mu \nu \lambda}$, where ${\epsilon}_{\mu \nu \lambda}$ is the completely antisymmetric tensor.
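These defining properties can be checked directly. The following minimal sketch (an illustrative check assuming Python with \texttt{numpy}; it is not part of the derivation) builds the matrices $(F_{\mu})_{\nu\lambda} = -i\epsilon_{\mu\nu\lambda}$ and verifies the commutation relation $[F_x, F_y] = iF_z$ as well as $F_{\mu}|\mu\rangle = 0$.
\begin{verbatim}
# Illustrative check of the Cartesian spin-1 matrices (assumes numpy).
import numpy as np

eps = np.zeros((3, 3, 3))                    # Levi-Civita tensor
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

F = [-1j * eps[mu] for mu in range(3)]       # F_x, F_y, F_z
assert np.allclose(F[0] @ F[1] - F[1] @ F[0], 1j * F[2])  # [Fx,Fy] = i Fz
for mu in range(3):
    assert np.allclose(F[mu] @ np.eye(3)[mu], 0)          # F_mu |mu> = 0
\end{verbatim}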
The basis state $|\mu \rangle$ can be expressed in terms of the eigenstates $\{ |m\rangle \,|\, m = 1, 0, -1 \}$ of $F_z$, namely the basis states of the irreducible representation, as \begin{align} & |x \rangle = \frac{1}{\sqrt{2}} \left ( - |1 \rangle + |-1 \rangle \right ), \label{eq:|x>} \\ & |y \rangle = \frac{i}{\sqrt{2}} \left ( |1 \rangle + |-1 \rangle \right ), \label{eq:|y>} \\ & |z \rangle = |0 \rangle. \label{eq:|z>} \end{align} Here, $| m\rangle$ can be represented in terms of the spherical harmonics of rank $1$, $Y_{l=1}^m (\theta ,\phi )$, with the polar angle $\theta$ against the $z$ axis and the azimuth angle $\phi$ against the $x$ axis. Thus $| \mu \rangle$ can also be expressed in terms of $Y_{l=1}^m (\theta ,\phi )$, as illustrated in Fig.~\ref{fig:1}. This paper is organized as follows. In Sec.~\ref{sec:3}, we derive the generalized Mermin-Ho relation for a spin-$1$ BEC and express it in terms of the su($3$) generators. In Sec.~\ref{sec:4}, we parametrize the su($3$) generators in terms of the polarization and the direction of the spin vector. We use this parametrization to construct spin-nematic vortices belonging to an su($2$) subalgebra of the su($3$) algebra. In Sec.~\ref{sec:5}, we derive a formula for the mass-current circulations, which is expressed in terms of four independent winding numbers, and apply it to ferromagnetic and polar-core vortices. In Sec.~\ref{sec:6}, we summarize the main results of this paper. The Gell-Mann matrices and the su($3$) structure constants are listed in Appendix~\ref{as1}, and the derivation of the original Mermin-Ho relation from the su($3$) Mermin-Ho relation is given in Appendix~\ref{as2}. \section{\label{sec:3}SU($3$) Mermin-Ho relation} A mean-field state of a spin-$1$ BEC can be expanded in terms of the Cartesian basis $| \mu \rangle$ as \begin{align} | \bm{\psi} (\bm{r} ,t) \rangle = \sum_{\mu} {\psi}_{\mu} (\bm{r} ,t) |\mu \rangle . \label{eq:mcwf} \end{align} Here, ${\psi}_{\mu} (\bm{r} ,t)$ is the $\mu$th component of the order parameter, which gives the density as $\rho (\bm{r} ,t) \equiv \sum_{\mu} |{\psi}_{\mu} (\bm{r} ,t)|^2$. In the mean-field description, the spin and phase degrees of freedom are separable from the density degree of freedom; hence we can define the rescaled order parameter ${\xi}_{\mu} (\bm{r} ,t)$ as ${\xi}_{\mu} (\bm{r} ,t) \equiv {\psi}_{\mu} (\bm{r} ,t) / \sqrt{\rho (\bm{r} ,t)}$. In the following discussion, we omit the spatial and temporal coordinates $(\bm{r} ,t)$. The mass current can be expressed in terms of the rescaled order parameters as \begin{align} \bm{v} \equiv \frac{\hbar}{2Mi} \sum_{\mu} [ {\xi}_{\mu}^* (\nabla {\xi}_{\mu}) - (\nabla {\xi}_{\mu}^*) {\xi}_{\mu}], \label{eq:v} \end{align} and its rotation can be transformed into \begin{align} \nabla \times \bm{v} = \frac{i\hbar}{M} \sum_{\mu ,\nu ,\lambda} {\xi}_{\mu}^* {\xi}_{\nu} (\nabla {\xi}_{\nu}^* {\xi}_{\lambda} ) \times (\nabla {\xi}_{\lambda}^* {\xi}_{\mu} ), \label{eq:rotv1} \end{align} where the suffixes $\mu$, $\nu$, and $\lambda$ run over $x$, $y$, and $z$. Here, ${\xi}_{\mu}^* {\xi}_{\nu}$ in Eq.~(\ref{eq:rotv1}) can be interpreted as the transition amplitude from the state $\nu$ to the state $\mu$. This implies that Eq.~(\ref{eq:rotv1}) can be expressed in terms of the su($3$) roots, which connect two of the three basis states, or equivalently in terms of the Gell-Mann matrices ${\Lambda}_i$ ($i = 1, \cdots ,8$) (see Appendix~\ref{as1} for their explicit representations)~\cite{Barnett,Yukawa}.
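The associated structure constants can be generated numerically from the Gell-Mann matrices. The sketch below is only an illustration (assuming Python with \texttt{numpy} and the standard ordering of the Gell-Mann matrices, which may differ from the ordering adopted in Appendix~\ref{as1}): with the normalization $\mathrm{tr}({\Lambda}_a {\Lambda}_b) = 2\delta_{ab}$ and the convention $[{\Lambda}_i, {\Lambda}_j] = i f_{ijk} {\Lambda}_k$ (so that $f_{123} = 2$, as used in Sec.~\ref{sec:4}), one has $f_{ijk} = \mathrm{tr}([{\Lambda}_i, {\Lambda}_j] {\Lambda}_k)/(2i)$.
\begin{verbatim}
# Illustration: extract su(3) structure constants from the standard
# Gell-Mann matrices (assumes numpy; ordering may differ from Appendix A).
import numpy as np

L = np.zeros((8, 3, 3), dtype=complex)
L[0][0, 1] = L[0][1, 0] = 1                   # lambda_1
L[1][0, 1], L[1][1, 0] = -1j, 1j              # lambda_2
L[2][0, 0], L[2][1, 1] = 1, -1                # lambda_3
L[3][0, 2] = L[3][2, 0] = 1                   # lambda_4
L[4][0, 2], L[4][2, 0] = -1j, 1j              # lambda_5
L[5][1, 2] = L[5][2, 1] = 1                   # lambda_6
L[6][1, 2], L[6][2, 1] = -1j, 1j              # lambda_7
L[7] = np.diag([1, 1, -2]) / np.sqrt(3)       # lambda_8

def f(i, j, k):
    comm = L[i] @ L[j] - L[j] @ L[i]
    return (np.trace(comm @ L[k]) / 2j).real

print(f(0, 1, 2))   # prints 2.0, i.e., f_123 = 2 in this convention
\end{verbatim}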
In fact, as shown in Appendix~\ref{as1}, Eq.~(\ref{eq:rotv1}) can be rewritten as \begin{align} \nabla \times \bm{v} = \frac{\hbar}{8M} \sum_{i,j,k=1}^8 f_{ijk} {\lambda}_i (\nabla {\lambda}_j ) \times (\nabla {\lambda}_k ), \label{eq:rotv2} \end{align} where the su($3$) structure constant $f_{ijk}$ is given in Eq.~(\ref{eq:sf}) of Appendix~\ref{as1} and \begin{align} {\lambda}_i \equiv \sum_{\mu ,\nu} ({\Lambda}_i)_{\mu \nu} {\xi}_{\mu}^* {\xi}_{\nu}, \end{align} which give the physical quantities such as the spin vector $f_{\mu}$ and the observables concerning nematicity $q_{xy}$, $q_{yz}$, $q_{zx}$, $d_{x^2-y^2}$, and $d_{3z^2-f^2}$, as listed in Table~\ref{tab:1}. Here, the spin vector $f_{\mu}$ and the nematicity observables $q_{\mu \nu}$, $d_{x^2-y^2}$, and $d_{3z^2-f^2}$ are defined as follows: \begin{align} &f_{\mu} = \sum_{\nu ,\lambda} (F_{\mu})_{\nu \lambda} {\xi}_{\nu}^* {\xi}_{\lambda}, \label{eq:f} \\ &q_{\mu \nu} = \sum_{\lambda ,\eta} (F_{\mu} F_{\nu} + F_{\nu} F_{\mu} )_{\lambda \eta} {\xi}_{\lambda}^* {\xi}_{\eta}, \label{eq:q} \\ &d_{x^2-y^2} = \sum_{\mu ,\nu} (F_x^2 - F_y^2 )_{\mu \nu} {\xi}_{\mu}^* {\xi}_{\nu}, \label{eq:d} \\ &d_{3z^2-f^2} = \frac{1}{\sqrt{3}} \sum_{\mu ,\nu} (- F_x^2 - F_y^2 + 2 F_z^2)_{\mu \nu} {\xi}_{\mu}^* {\xi}_{\nu}. \label{eq:y} \end{align} Here, $d_{3z^2-f^2}$ corresponds to the hypercharge. Equation~(\ref{eq:rotv2}) is the main result of this paper, which we refer to as the su($3$) Mermin-Ho relation. It is a generalization of the Mermin-Ho relation to an arbitrary spin-$1$ BEC in which the magnitude of the spin can change over space and time. For a fully spin-polarized case, Eq.~(\ref{eq:rotv2}) reduces to the Mermin-Ho relation, as shown in Appendix~\ref{as2}. \begin{table}[b] \caption{\label{tab:1}Correspondence between the expectation values of the Gell-Mann matrices (upper row) and the observables concerning the spin vector and nematicity (lower row). } \begin{ruledtabular} \begin{tabular}{cccccccc} ${\lambda}_1$ & ${\lambda}_2$ & ${\lambda}_3$ & ${\lambda}_4$ & ${\lambda}_5$ & ${\lambda}_6$ & ${\lambda}_7$ & ${\lambda}_8$ \\ \colrule $-q_{xy}$ & $f_z$ & $-d_{x^2-y^2}$ & $-q_{zx}$ & $-f_y$ & $-q_{yz}$ & $f_x$ & $d_{3z^2-f^2}$ \end{tabular} \end{ruledtabular} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[bb = 0 0 2197 452, clip, scale = 0.22]{fig2.eps} \end{center} \caption{(Color Online) (i) Physical meanings of the parameters $\alpha$, $\beta$, and $\gamma$ giving the principal axes ${\bm{e}}_1$ and ${\bm{e}}_2$, and the normal direction ${\bm{e}}_3 = {\bm{e}}_1 \times {\bm{e}}_2$, through ${\bm{e}}_1 = (\cos {\alpha} \cos {\beta}, \sin {\alpha} \cos {\beta}, - \sin {\beta})^T$, ${\bm{e}}_2 = (- \sin {\alpha}, \cos {\alpha}, 0)^T$, and ${\bm{e}}_3 = (\cos {\alpha} \sin {\beta}, \sin {\alpha} \sin {\beta}, \cos {\beta})^T$. The parameters $\alpha$ and $\beta$ represent the azimuth and polar angles of the spin vector $\bm{f} = |\bm{f}| {\bm{e}}_3$, which is indicated by the thick solid arrow parallel to ${\bm{e}}_3$, and $\gamma$ is the rotation angle of the order parameter around ${\bm{e}}_3$. In the left panel, the major and minor principal axes of the order parameter coincide with ${\bm{e}}_1$ and ${\bm{e}}_2$, respectively. In the right panel, the coordinate system is rotated about ${\bm{e}}_3$ through $\gamma$, where ${\bm{e}}_1^{\prime}$ and ${\bm{e}}_2^{\prime}$ give the major and minor principal axes in the rotated frame of reference.
(ii) Parameter $\vartheta$ dependence of the order parameter, where $\vartheta = 0$, $0.15\pi$, and $0.25\pi$ give the unpolarized (polar), partially polarized (broken-axisymmetry), and fully polarized (ferromagnetic) states, respectively. The wire frames are plotted as a guide to the eye. } \label{fig:2} \end{figure*} \section{\label{sec:4}Chechetkin-Anderson-Toulouse and Mermin-Ho vortices and their dual vortices} \subsection{Chechetkin-Anderson-Toulouse and Mermin-Ho vortices} For later discussions, it is convenient to express $\xi_\mu$ in terms of the five parameters $(\varphi ,\alpha ,\beta ,\gamma ,\vartheta )$ as follows: \begin{widetext} \begin{align} \begin{pmatrix} {\xi}_x \\ {\xi}_y \\ {\xi}_z \end{pmatrix} &= e^{i\varphi} \exp {(-i\alpha F_z)} \exp {(-i\beta F_y)} \exp {(-i\gamma F_z)} \begin{pmatrix} \cos {\vartheta} \\ i \sin {\vartheta} \\ 0 \end{pmatrix} \nonumber \\ &= e^{i\varphi} \begin{pmatrix} (\cos {\alpha} \cos {\beta} \cos {\gamma} - \sin {\alpha} \sin {\gamma} ) \cos {\vartheta} - i (\cos {\alpha} \cos {\beta} \sin {\gamma} + \sin {\alpha} \cos {\gamma} ) \sin {\vartheta} \\ (\sin {\alpha} \cos {\beta} \cos {\gamma} + \cos {\alpha} \sin {\gamma} ) \cos {\vartheta} + i (- \sin {\alpha} \cos {\beta} \sin {\gamma} + \cos {\alpha} \cos {\gamma} ) \sin {\vartheta} \\ - \sin {\beta} \cos {\gamma} \cos {\vartheta} + i \sin {\beta} \sin {\gamma} \sin {\vartheta} \end{pmatrix}. \label{eq:wf} \end{align} \end{widetext} Here, the parameter $\varphi$ is the U($1$) phase, $\alpha$ and $\beta$ represent the azimuth and polar angles of the spin vector $\bm{f}$ in the Cartesian coordinates, and $\gamma$ is the rotation angle about the direction of $\bm{f}$, as illustrated in Fig.~\ref{fig:2}(i). The parameter $\vartheta$ determines the polarization of the spin vector as $|\bm{f}| = |\sin {2\vartheta}|$. Thus, $\vartheta = 0$, $\pi / 2$, $\pi$, $\cdots$ correspond to unpolarized states, $\vartheta = \pi / 4$, $3\pi / 4$, $\cdots$ correspond to fully polarized states, and $\vartheta \in (0, \pi / 4)$, $(\pi / 4, \pi / 2)$, $\cdots$ correspond to partially polarized states such as broken-axisymmetry states~\cite{Murata} [see Fig.~\ref{fig:2}(ii)]. Then, the spin vector and the nematicity observables in Eqs.~(\ref{eq:f})-(\ref{eq:y}) can be expressed in terms of $\alpha$, $\beta$, $\gamma$, and $\vartheta$ as \begin{widetext} \begin{align} &\bm{f} = \sin {2\vartheta} \begin{pmatrix} \cos {\alpha} \sin {\beta} \\ \sin {\alpha} \sin {\beta} \\ \cos {\beta} \end{pmatrix}, \label{eq:f_param} \\ &q_{xy} = \frac{1}{2} \{ \sin {2\alpha} \ {\sin}^2 {\beta} - \cos {2\vartheta} [ \sin {2\alpha} (1 + {\cos}^2 {\beta} ) \cos {2\gamma} + 2 \cos {2\alpha} \cos {\beta} \sin {2\gamma} ] \}, \label{eq:qxy_param} \\ &q_{yz} = \sin {\beta} [ \sin {\alpha} \cos {\beta} + \cos {2\vartheta} ( \sin {\alpha} \cos {\beta} \cos {2\gamma} + \cos {\alpha} \sin {2\gamma} ) ], \label{eq:qyz_param} \\ &q_{zx} = \sin {\beta} [ \cos {\alpha} \cos {\beta} + \cos {2\vartheta} (\cos {\alpha} \cos {\beta} \cos {2\gamma} - \sin {\alpha} \sin {2\gamma} ) ], \label{eq:qzx_param} \\ &d_{x^2-y^2} = \frac{1}{2} \{ \cos {2\alpha} \ {\sin}^2 {\beta} - \cos {2\vartheta} [ \cos {2\alpha} ( 1 + {\cos}^2 {\beta} ) \cos {2\gamma} - 2 \sin {2\alpha} \cos {\beta} \sin {2\gamma} ] \}, \label{eq:dxy_param} \\ &d_{3z^2-f^2} = \frac{1}{2\sqrt{3}} \left ( - 1 + 3 {\cos}^2 {\beta} - 3 \cos {2\vartheta} \cos {2\gamma} \, {\sin}^2 {\beta} \right ).
\label{eq:y_param} \end{align} \end{widetext} When the spin is fully polarized, which corresponds, for example, to $\vartheta = \pi / 4$, Eqs.~(\ref{eq:f_param})-(\ref{eq:y_param}) imply that the nematicity observables can be expressed in terms of the spin vector as $q_{\mu \nu} = f_{\mu} f_{\nu}$, $d_{x^2-y^2} = (f_x^2 - f_y^2) / 2$, and $d_{3z^2-f^2} = (- f_x^2 - f_y^2 + 2 f_z^2) / 2\sqrt{3}$, and Eq.~(\ref{eq:rotv2}) reduces to the following Mermin-Ho relation~\cite{Mermin2}: \begin{align} \nabla \times \bm{v} = \frac{\hbar}{2M} {\epsilon}_{\mu \nu \lambda} f_{\mu} (\nabla f_{\nu} ) \times (\nabla f_{\lambda} ). \label{eq:MH} \end{align} Here, the completely antisymmetric tensor ${\epsilon}_{\mu \nu \lambda}$ is nothing but the so($3$) structure constant. The derivation of Eq.~(\ref{eq:MH}) from Eq.~(\ref{eq:rotv2}) is shown in Appendix~\ref{as2}. The Mermin-Ho relation describes the well-known Chechetkin-Anderson-Toulouse (CAT) and Mermin-Ho (MH) vortices shown in Fig.~\ref{fig:3}, and we can obtain their winding numbers from this relation. \begin{figure*}[t] \begin{center} \includegraphics[bb = 0 0 2058 831, clip, scale = 0.24]{fig3.eps} \end{center} \caption{(Color Online) Fully polarized spin textures in CAT (i) and MH (ii) vortices. Each arrow represents the direction of a local spin vector, with the color showing the value of $\beta$ according to the upper right gauge. The projection circle below represents the value of $\alpha$ according to the lower right gauge. In both (i) and (ii), $\alpha$ changes by $2\pi$ along the circumference of the vortex, while $\beta$ changes by $\pi$ in (i) and $\pi/2$ in (ii) in the radial direction. } \label{fig:3} \end{figure*} \subsection{Dual Chechetkin-Anderson-Toulouse and Mermin-Ho vortices} In spin-$1$ BECs, the spin nematicity can also form a texture in a vortex, and we can consider spin-nematic vortices analogous to the CAT and MH vortices. We shall refer to such vortices as dual CAT and MH vortices. When $\alpha = \beta = 0$, Eqs.~(\ref{eq:f_param})-(\ref{eq:y_param}) reduce to \begin{align} &f_x = f_y = 0, \ f_z = \sin {2\vartheta}, \label{eq:sn_fz} \\ &q_{xy} = - \sin {2\gamma} \cos {2\vartheta}, \ q_{yz} = q_{zx} = 0, \label{eq:sn_qxy} \\ &d_{x^2-y^2} = - \cos {2\gamma} \cos {2\vartheta}, \label{eq:sn_dxy} \\ &d_{3z^2-f^2} = \frac{1}{\sqrt{3}}. \end{align} In terms of the new parameters $2 \gamma \equiv {\alpha}^{\prime}$ and $2 \vartheta - \pi / 2 \equiv {\beta}^{\prime}$, the spin-vector and nematic-tensor quantities together can be cast into the form of a unit-length pseudo-spin ${\bm{f}}^{\prime} \equiv (d_{x^2-y^2} ,q_{xy} ,f_z)^T = (\cos {{\alpha}^{\prime}} \sin {{\beta}^{\prime}} , \sin {{\alpha}^{\prime}} \sin {{\beta}^{\prime}} , \cos {{\beta}^{\prime}})^T$. Then, we can construct vortices dual to the CAT and MH vortices as shown in Fig.~\ref{fig:4}. Among these two types of vortices, the spin-nematic vortex shown in Fig.~\ref{fig:4}(i) is identified as the ferromagnetic core vortex in Figs.~6(c) and 6(d) of Ref.~\cite{SKobayashi}. Here, we note that $f_z$, $q_{xy}$, and $d_{x^2-y^2}$ in the pseudo-spin vector ${\bm{f}}^{\prime}$ form an su($2$) subalgebra of su($3$), which is characterized by the structure constant $f_{123} = 2$.
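This identification can be verified numerically. The sketch below is only an illustrative check (assuming Python with \texttt{numpy}); it evaluates $f_z$, $q_{xy}$, and $d_{x^2-y^2}$ from Eqs.~(\ref{eq:f})-(\ref{eq:d}) for an order parameter with $\alpha = \beta = 0$ and confirms that ${\bm{f}}^{\prime}$ is the unit vector with angles ${\alpha}^{\prime} = 2\gamma$ and ${\beta}^{\prime} = 2\vartheta - \pi/2$.
\begin{verbatim}
# Illustrative check of the pseudo-spin parametrization (assumes numpy).
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
Fx, Fy, Fz = (-1j * eps[m] for m in range(3))

g, t = 0.3, 0.2        # arbitrary gamma and vartheta (alpha = beta = 0)
xi = np.array([np.cos(g) * np.cos(t) - 1j * np.sin(g) * np.sin(t),
               np.sin(g) * np.cos(t) + 1j * np.cos(g) * np.sin(t),
               0.0])
ev = lambda A: (xi.conj() @ A @ xi).real
fp = np.array([ev(Fx @ Fx - Fy @ Fy),   # d_{x^2-y^2}
               ev(Fx @ Fy + Fy @ Fx),   # q_{xy}
               ev(Fz)])                 # f_z

a, b = 2 * g, 2 * t - np.pi / 2
assert np.allclose(fp, [np.cos(a) * np.sin(b),
                        np.sin(a) * np.sin(b), np.cos(b)])
assert np.isclose(np.linalg.norm(fp), 1.0)   # unit pseudo-spin
\end{verbatim}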
The Mermin-Ho relation in Eq.~(\ref{eq:rotv2}) for these vortices can be expressed in terms of the pseudo-spin ${\bm{f}}^{\prime}$ as \begin{align} \nabla \times \bm{v} = \frac{\hbar}{4M} {\epsilon}_{\mu \nu \lambda} f_{\mu}^{\prime} (\nabla f_{\nu}^{\prime} ) \times (\nabla f_{\lambda}^{\prime} ). \label{eq:MH_prime} \end{align} The right-hand side of Eq.~(\ref{eq:MH_prime}) is half of that of the original Mermin-Ho relation in Eq.~(\ref{eq:MH}). \begin{figure*}[t] \begin{center} \includegraphics[bb = 0 0 2060 1751, clip, scale = 0.24]{fig4.eps} \end{center} \caption{(Color Online) Spatial distributions of the order parameter (first row) and the spin vector (second row) in spin-nematic vortices which are dual to the Chechetkin-Anderson-Toulouse and Mermin-Ho vortices in Fig.~\ref{fig:3}. In the upper panels, the color of the order parameter indicates the value of $\vartheta$ according to the upper right gauge, and the color on the projection circle of the wave-function texture represents the value of $\gamma$ according to the lower right gauge. In the lower panels, the color maps of the spin texture and its projection circle are the same as those in Fig.~\ref{fig:3}. (i) In a vortex dual to the Chechetkin-Anderson-Toulouse vortex, $\gamma$ and $\vartheta$ change by $\pi$ and $\pi / 2$, respectively. In terms of the pseudo-spin vector ${\bm{f}}^{\prime}$, ${\alpha}^{\prime}$ and ${\beta}^{\prime}$ change by $2\pi$ and $\pi$, as $\alpha$ and $\beta$ do in the CAT vortex. (ii) In a vortex dual to the Mermin-Ho vortex, $\gamma$ and $\vartheta$ change by $\pi$ and $\pi / 4$, which implies that ${\alpha}^{\prime}$ and ${\beta}^{\prime}$ change by $2\pi$ and $\pi / 2$. On the other hand, $\alpha$ and $\beta$ stay constant in both cases. } \label{fig:4} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[bb = 0 0 800 600, clip, scale = 0.25]{fig5.eps} \end{center} \caption{Circumference of a vortex $\mathcal{C}$ (left) decomposed into infinitesimal rectangular loops (upper middle), one of which is enlarged and labeled as $OABC$ (right), with $R_1$ and $R_2$ showing the SO($3$) rotations given in Eqs.~(\ref{eq:R1}) and (\ref{eq:R2}), where $\mathcal{S}$ indicates the area enclosed by $\mathcal{C}$. } \label{fig:5} \end{figure} \subsection{Nonholonomy of dual CAT and MH vortices} We follow Ref.~\cite{Leggett} to analyze the Mermin-Ho relation for the spin-nematic vortices in Eq.~(\ref{eq:MH_prime}) from the viewpoint of nonholonomy. Suppose that $\mathcal{C}$ is a two-dimensional circle enclosing a spin-nematic vortex at its center, as shown in the left panel of Fig.~\ref{fig:5}. The phase that the pseudo-spin vector ${\bm{f}}^{\prime}$ acquires along the circumference of $\mathcal{C}$ can be obtained as follows. First, we decompose the circle into infinitesimal rectangular loops $O \to A \to B\to C \to O$ as shown in Fig.~\ref{fig:5}. Along this loop, we consider two noncommuting SO($3$) rotations \begin{align} & R_1 = \exp {(- i \delta {\phi}_1 {\bm{n}}_1 \cdot {\bm{F}}^{\prime})}, \label{eq:R1} \\ & R_2 = \exp {(- i \delta {\phi}_2 {\bm{n}}_2 \cdot {\bm{F}}^{\prime})}, \label{eq:R2} \end{align} where the unit vector $\bm{n}_i$ and the angle $\delta {\phi}_i$ ($i=1,2$) represent the axis and the angle of the rotation, and the components of ${\bm{F}}^{\prime}$ are given by $F^{\prime}_x \equiv D_{x^2-y^2} / 2= - {\Lambda}_3 / 2$, $F^{\prime}_y \equiv Q_{xy} / 2= - {\Lambda}_1 / 2$, and $F^{\prime}_z \equiv F_z / 2= {\Lambda}_2 / 2$.
The factor $2$ in the denominator in the definition of $F_{\mu}^{\prime}$ is due to the structure constant $f_{123} = 2$ among ${\Lambda}_i$ ($i = 1,2,3$). On the paths $O \to A$, $A \to B$, $B \to C$, and $C \to O$, we apply $R_1$, $R_2$, $R_1^{-1}$, and $R_2^{-1}$, respectively, to ${\bm{f}}^{\prime}$; then the total action on ${\bm{f}}^{\prime}$ is given, to second order in $\delta {\phi}_i$, by \begin{align} R_2^{-1} R_1^{-1} R_2 R_1 = & I_3 + i \delta {\phi}_1 \delta {\phi}_2 ({\bm{n}}_1 \times {\bm{n}}_2 ) \cdot {\bm{F}}^{\prime} \nonumber \\ &+ \mathcal{O} (\delta {\phi}_i^3), \label{eq:operator1} \end{align} where $I_3$ indicates the three-dimensional identity matrix. The infinitesimally small vectors $\delta {\phi}_1 {\bm{n}}_1$ and $\delta {\phi}_2 {\bm{n}}_2$ can be expressed in terms of ${\bm{f}}^{\prime}$ and its spatial derivatives as \begin{align} \delta {\phi}_1 {\bm{n}}_1 = \delta x {\bm{f}}^{\prime} \times ({\nabla}_x {\bm{f}}^{\prime} ), \ \delta {\phi}_2 {\bm{n}}_2 = \delta y {\bm{f}}^{\prime} \times ({\nabla}_y {\bm{f}}^{\prime} ), \end{align} where $\delta x$ and $\delta y$ are the lengths of the sides $OA$ and $AB$, respectively. Then, Eq.~(\ref{eq:operator1}) can be expressed as \begin{align} R_2^{-1} R_1^{-1} R_2 R_1 \simeq I_3 + i \delta x \delta y {\epsilon}_{\mu \nu \lambda} f_{\mu}^{\prime} ({\nabla}_x f_{\nu}^{\prime} ) ({\nabla}_y f_{\lambda}^{\prime}) ({\bm{f}}^{\prime} \cdot {\bm{F}}^{\prime}). \label{eq:operator2} \end{align} In the limit of $\delta x \to 0$ and $\delta y \to 0$, the second term on the right-hand side of Eq.~(\ref{eq:operator2}) can be regarded as the generator of the phase acquired by ${\bm{f}}^{\prime}$, and the phase $\delta \chi$ gained around the loop can be expressed as \begin{align} \delta \chi = \frac{1}{2} \int_{\square} d\bm{S} \cdot [{\epsilon}_{\mu \nu \lambda} f_{\mu}^{\prime} (\nabla f_{\nu}^{\prime} ) \times (\nabla f_{\lambda}^{\prime} )], \end{align} where $\square$ represents the rectangle $OABC$. On the other hand, $\delta \chi$ can be expressed in terms of the local phase $\chi$ as \begin{align} \delta \chi = \oint_{\square} d\bm{l} \cdot (\nabla \chi ), \end{align} which implies \begin{align} \oint_{\square} d\bm{l} \cdot (\nabla \chi ) = \frac{1}{2} \int_{\square} d\bm{S} \cdot [{\epsilon}_{\mu \nu \lambda} f_{\mu}^{\prime} (\nabla f_{\nu}^{\prime} ) \times (\nabla f_{\lambda}^{\prime} )]. \end{align} Summing over all small loops, we obtain the total phase $\Delta \chi$ that ${\bm{f}}^{\prime}$ gains along $\mathcal{C}$ as \begin{align} \Delta \chi = \oint_{\mathcal{C}} d\bm{l} \cdot (\nabla \chi ) = \frac{1}{2} \int_{\mathcal{S}} d\bm{S} \cdot [{\epsilon}_{\mu \nu \lambda} f_{\mu}^{\prime} (\nabla f_{\nu}^{\prime} ) \times (\nabla f_{\lambda}^{\prime} )], \end{align} where $\mathcal{S}$ indicates the area enclosed by $\mathcal{C}$. The Mermin-Ho relation for the spin-nematic vortices in Eq.~(\ref{eq:MH_prime}) then implies that the circulation of the mass current around such a vortex is given by \begin{align} \oint_{\mathcal{C}} d\bm{l} \cdot \bm{v} = \frac{\hbar}{M} \left ( \oint_{\mathcal{C}} d\bm{l} \cdot (\nabla \varphi ) + \frac{1}{2} \Delta \chi \right ). \label{eq:vcirc} \end{align} The first term on the right-hand side of Eq.~(\ref{eq:vcirc}) is nonvanishing when the U($1$) phase is singular at the center of a vortex.
As discussed later in Eq.~(\ref{eq:singularity}), the spin-nematic vortices dual to the CAT and MH vortices have singularities at the center of the vortex, so $\oint_{\mathcal{C}} d\bm{l} \cdot (\nabla \varphi ) = \pi$ in both cases. The second term on the right-hand side of Eq.~(\ref{eq:vcirc}) indicates the nonholonomy of the spinor order parameter. The phases are given by $\Delta \chi = 4\pi$ and $2\pi$, which are the same as for the ordinary CAT and MH vortices~\cite{Leggett}; however, due to the difference in the structure constants and the $\pi$ winding of the U($1$) phase, the circulations of the mass currents are given by $3h/2M$ for the dual CAT vortex and $h/M$ for the dual MH vortex. \section{\label{sec:5}Winding number of a spin-nematic texture} Next, we examine the circulation of the mass current $\bm{v}$. Here, $\bm{v}$ can be expressed in terms of the set of parameters introduced in the preceding section as \begin{align} \bm{v} = \frac{\hbar}{M} \{ (\nabla \varphi ) - [ (\nabla \alpha ) \cos {\beta} + (\nabla \gamma ) ] \sin {2\vartheta} \}. \label{eq:mc} \end{align} Equation~(\ref{eq:mc}) reduces to $\bm{v} = (\hbar / M) \{ [ \nabla (\varphi \mp \gamma ) ] \mp (\nabla \alpha ) \cos {\beta} \}$ [the $-$ ($+$) sign applies for $\vartheta = \pm \pi / 4$ ($\pm 3\pi / 4$)] for the fully polarized case, and to $\bm{v} = (\hbar / M) (\nabla \varphi )$ for the unpolarized case. When the spin polarization is constant, the circulation of $\bm{v}$ around a vortex core can be computed on the basis of these expressions. In general, however, the spin polarization can change over space and time and form a spin-nematic texture. To derive the expression of the mass-current circulation around a vortex with nonuniform polarization, we consider a situation in which spin-$1$ bosons are confined in a two-dimensional disk with unit radius in the $x$-$y$ plane and the mass current $\bm{v}$ flows in the $x$-$y$ plane. The core of the vortex, which is assumed to be located at the center of the disk, should be fully polarized along the $+z$ or $-z$ direction, i.e., $\bm{\xi} \propto (1, i, 0)^{\mathrm{T}}$ or $(1, -i, 0)^{\mathrm{T}}$ ($| \pm 1\rangle$ in the irreducible representation), or unpolarized, i.e., $\bm{\xi} \propto (0, 0, 1)^{\mathrm{T}}$ ($|0 \rangle$ in the irreducible representation), due to the symmetry around the vortex core. Introducing cylindrical coordinates $(R, \Phi )$ with the vortex core at the origin, we write the spatial dependence of the parameters as $\alpha (R, \Phi )$, $\beta (R, \Phi )$, $\gamma (R, \Phi )$, and $\vartheta (R, \Phi )$. We assume \begin{align} & \alpha (R, \Phi ) = n_{\alpha} \Phi , \label{eq:alpha} \\ & \beta (R, \Phi ) = \frac{\pi}{2} ( {\tilde{\beta}}_0 + n_{\beta} R) , \label{eq:beta} \\ & \gamma (R, \Phi ) = \frac{1}{2} n_{\gamma} \Phi , \label{eq:gamma} \\ & \vartheta (R, \Phi ) = \frac{\pi}{4} ( {\tilde{\vartheta}}_0 + n_{\vartheta} R ), \label{eq:theta} \end{align} where ${\tilde{\beta}}_0$ and ${\tilde{\vartheta}}_0$ depend on the state of the vortex core as noted in Table~\ref{tab:2}, and the winding numbers $n_{\alpha}$, $n_{\beta}$, $n_{\gamma}$, and $n_{\vartheta}$ are determined by the boundary conditions on the circumference of the disk.
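Before proceeding, we note that Eq.~(\ref{eq:mc}) itself can be checked numerically against the definition of $\bm{v}$ in Eq.~(\ref{eq:v}). The sketch below is only an illustration (assuming Python with \texttt{numpy} and \texttt{scipy}, in units $\hbar = M = 1$); it differentiates the parametrized order parameter of Eq.~(\ref{eq:wf}) along a one-dimensional path by finite differences and compares the result with the closed form.
\begin{verbatim}
# Illustrative check of the closed form for v (assumes numpy/scipy).
import numpy as np
from scipy.linalg import expm

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
Fx, Fy, Fz = (-1j * eps[m] for m in range(3))

def xi(s):
    # parameters (phi, alpha, beta, gamma, vartheta) vary linearly in s
    phi, al, be, ga, th = 0.7*s, 1.1*s, 0.5*s, 0.9*s, 0.3*s
    core = np.array([np.cos(th), 1j*np.sin(th), 0.0])
    return np.exp(1j*phi) * (expm(-1j*al*Fz) @ expm(-1j*be*Fy)
                             @ expm(-1j*ga*Fz) @ core)

s, h = 0.4, 1e-6
v_def = (xi(s).conj() @ (xi(s+h) - xi(s-h)) / (2*h)).imag  # Im(xi* dxi/ds)
v_formula = 0.7 - (1.1*np.cos(0.5*s) + 0.9)*np.sin(2*0.3*s)
assert np.isclose(v_def, v_formula, atol=1e-5)
\end{verbatim}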
It follows from Table~\ref{tab:2} that the ferromagnetic and polar-core states are respectively given by $e^{i(\varphi - \alpha - \gamma)} (1, i, 0)^T / \sqrt{2}$ and $e^{i\varphi} (-\sin {\alpha} \sin {\gamma} , \cos {\alpha} \sin {\gamma} , \cos {\gamma} )^T$, and that a vortex with a ferromagnetic core can always be filled, whereas a vortex with a polar core can be filled only when $n_{\gamma} = 0$. In Eqs.~(\ref{eq:alpha}) and (\ref{eq:gamma}), we assume that $\alpha (R, \Phi = 0 ) = 0$ and $\gamma (R, \Phi = 0) = 0$ without loss of generality, since a vortex with $\alpha (R, 0 ) \neq 0$ or $\gamma (R, 0) \neq 0$ can be obtained from the vortex in Eqs.~(\ref{eq:alpha}) and (\ref{eq:gamma}) by a uniform rotation of the spin vector or the orientation of the order parameter, leaving the winding number unchanged. Let us consider the mass-current circulation for vortices with the boundary conditions given in Eqs.~(\ref{eq:alpha})-(\ref{eq:theta}). Substituting the boundary conditions into Eq.~(\ref{eq:mc}), we obtain the circulation of $\bm{v}$ as \begin{widetext} \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \frac{\hbar}{M} \oint (\nabla \varphi ) \cdot d \bm{l} + \frac{h}{M} \biggl \{ & n_{\alpha} \left [ \cos {\frac{\pi}{2} {\tilde{\beta}}_0} \sin {\frac{\pi}{2} {\tilde{\vartheta}}_0} - \cos {\frac{\pi}{2} ( {\tilde{\beta}}_0 + n_{\beta} )} \sin {\frac{\pi}{2} ( {\tilde{\vartheta}}_0 + n_{\vartheta})} \right ] \nonumber \\ + & \frac{1}{2} n_{\gamma} \left [ \sin {\frac{\pi}{2} {\tilde{\vartheta}}_0} - \sin {\frac{\pi}{2} ( {\tilde{\vartheta}}_0 + n_{\vartheta} )} \right ] \biggr \} , \label{eq:circv} \end{align} \end{widetext} where the first term on the right-hand side is determined so as to satisfy the single-valuedness of the order parameter. \begin{table}[b] \caption{\label{tab:2}Boundary conditions at the center of a vortex with a ferromagnetic core and that with a polar core.} \begin{ruledtabular} \begin{tabular}{ccc} & Ferromagnetic core & Polar core \\ \colrule \addlinespace[2pt] ${\tilde{\beta}}_0$ & $0$ & $-1$ \\ ${\tilde{\vartheta}}_0$ & $1$ & $0$ \\ \end{tabular} \end{ruledtabular} \end{table} Let us apply Eq.~(\ref{eq:circv}) to the CAT and MH vortices in Fig.~\ref{fig:3} and to their spin-nematic duals in Fig.~\ref{fig:4}, all of which have ferromagnetic cores. In this case, the circulation of the mass current can be obtained from Eq.~(\ref{eq:circv}) and Table~\ref{tab:2} as \begin{widetext} \begin{align} &\oint {\bm{v}} \cdot d\bm{l} = \frac{\hbar}{M} \oint (\nabla \varphi ) \cdot d \bm{l} + \frac{h}{M} \left [ n_{\alpha} \left ( 1 - \cos {\frac{\pi}{2} n_{\beta}} \cos {\frac{\pi}{2} n_{\vartheta}} \right ) + \frac{1}{2} n_{\gamma} \left ( 1 - \cos {\frac{\pi}{2} n_{\vartheta}} \right ) \right ]. \label{eq:circv_f} \end{align} \end{widetext} The winding numbers characterizing the CAT and MH vortices are given by $(n_{\alpha} ,n_{\beta} ,n_{\gamma} ,n_{\vartheta} ) = (1, 2, 0, 0)$ and $(1, 1, 0, 0)$, respectively, and their U($1$) phases $\varphi$ are not singular at the core. Hence their circulations are given by \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \left \{ \begin{array}{ll} \frac{2h}{M} &(\text{CAT}); \\ \frac{h}{M} & (\text{MH}). \end{array} \right .
\end{align} \begin{figure*}[t] \begin{center} \includegraphics[bb = 0 0 2058 1703, clip, scale = 0.23]{fig6.eps} \end{center} \caption{(Color Online) Order-parameter and spin-vector textures in the polar-core vortices in Fig.~1(b) of Ref.~\cite{Kawaguchi} and Figs.~6(b) and 6(d) of Ref.~\cite{SKobayashi}, displayed in the same manner as in Fig.~\ref{fig:4}. Figures (i) and (ii) correspond to the vortices in Refs.~\cite{Kawaguchi} and~\cite{SKobayashi}, respectively. } \label{fig:6} \end{figure*} On the other hand, the spin-nematic vortices dual to the CAT and MH vortices have $(n_{\alpha} ,n_{\beta} ,n_{\gamma} ,n_{\vartheta} ) = (0, 0, 1, 2)$ and $(0, 0, 1, 1)$, respectively, and the U($1$) phase $\varphi$ should increase by $\pi$ along the circumference of the vortex in both cases. This can be confirmed by substituting Eqs.~(\ref{eq:gamma}) and (\ref{eq:theta}) into the rescaled order parameter in Eq.~(\ref{eq:wf}) with $\alpha = \beta = 0$, giving \begin{align} \bm{\xi} = \frac{e^{i\varphi (R, \Phi )}}{\sqrt{2}} \begin{pmatrix} e^{- \frac{i}{2} \Phi} \cos {\frac{\pi}{4} n_{\vartheta} R} - e^{\frac{i}{2} \Phi} \sin {\frac{\pi}{4} n_{\vartheta} R} \\ i (e^{- \frac{i}{2} \Phi} \cos {\frac{\pi}{4} n_{\vartheta} R} + e^{\frac{i}{2} \Phi} \sin {\frac{\pi}{4} n_{\vartheta} R}) \\ 0 \end{pmatrix}. \label{eq:wf_sn} \end{align} Equation~(\ref{eq:wf_sn}) should be single-valued at $\Phi = 0$ and $2\pi$, that is to say, \begin{align} &\bm{\xi} (R, 0) = \frac{e^{i\varphi (R, 0)}}{\sqrt{2}} \begin{pmatrix} \cos {\frac{\pi}{4} n_{\vartheta} R} - \sin {\frac{\pi}{4} n_{\vartheta} R} \\ i (\cos {\frac{\pi}{4} n_{\vartheta} R} + \sin {\frac{\pi}{4} n_{\vartheta} R}) \\ 0 \end{pmatrix} \nonumber \\ = \ & \bm{\xi} (R, 2\pi ) = - \frac{e^{i\varphi (R, 2\pi )}}{\sqrt{2}} \begin{pmatrix} \cos {\frac{\pi}{4} n_{\vartheta} R} - \sin {\frac{\pi}{4} n_{\vartheta} R} \\ i (\cos {\frac{\pi}{4} n_{\vartheta} R} + \sin {\frac{\pi}{4} n_{\vartheta} R}) \\ 0 \end{pmatrix}, \label{eq:singularity} \end{align} which implies that $\varphi (R, 2\pi ) - \varphi (R, 0) = \pi$. In this case, however, the core of the vortex is filled because it is ferromagnetic and has the spin-gauge symmetry, which implies that the U($1$) phase at the center can always be taken as ${\varphi}^{\prime} (0, \Phi ) \equiv \varphi (0, \Phi ) - \gamma (0, \Phi )= 0$. Thus, the circulations for the spin-nematic vortices are given by \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \left \{ \begin{array}{ll} \frac{3h}{2M} & (\text{dual CAT}); \\ \frac{h}{M} & (\text{dual MH}). \end{array} \right . \end{align} The mass-current circulation around the dual CAT vortex takes a half-quantized value, unlike that around the CAT vortex. On the other hand, the mass-current circulation around the dual MH vortex coincides with that of the MH vortex, in spite of the differences in the structure constants and the U($1$) phase. The circulation of the mass current around a polar-core vortex can also be calculated from Eq.~(\ref{eq:circv}). In this case, ${\tilde{\beta}}_0 = -1$ and ${\tilde{\vartheta}}_0 = 0$ according to Table~\ref{tab:2}, and Eq.~(\ref{eq:circv}) becomes \begin{widetext} \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \frac{\hbar}{M} \oint (\nabla \varphi ) \cdot d \bm{l} + \frac{h}{M} \left ( n_{\alpha} \sin {\frac{\pi}{2} n_{\beta}} - \frac{1}{2} n_{\gamma} \right ) \sin {\frac{\pi}{2} n_{\vartheta}}. \label{eq:circv_p} \end{align} \end{widetext} Let us take the polar-core vortices in Refs.~\cite{Kawaguchi,SKobayashi} as examples.
The polar-core vortex in Fig.~1(b) of Ref.~\cite{Kawaguchi} can be characterized by the spin vector lying in the $xy$ plane, whose azimuth angle $\alpha$ changes by $2\pi$ along the circumference. The polarization $\vartheta$ changes by $\pi / 2$ in the radial direction. On the other hand, the polarization parameter $\vartheta$ and the polar angle $\beta$ of the polar-core vortex in Figs.~6(b) and 6(d) of Ref.~\cite{SKobayashi} change by $\pi / 4$ and $\pi / 2$ in the radial direction, respectively, and the azimuth angle $\alpha$ changes by $2\pi$ along the circumference of the vortex. The spin and wave-function textures of these vortices are illustrated in Fig.~\ref{fig:6}. Their winding numbers are given by $(n_{\alpha} ,n_{\beta} ,n_{\gamma} ,n_{\vartheta}) = (1, 0, 0, 2)$ and $(1, 1, 0, 1)$, respectively. The U($1$) phases should satisfy $\varphi (R, 2\pi) - \varphi (R, 0) = 0$ for the cases in Refs.~\cite{Kawaguchi} and~\cite{SKobayashi}. Then, the circulations of the mass currents can be obtained as \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \left \{ \begin{array}{ll} 0 & (\text{Ref.~\cite{Kawaguchi}}); \\ \frac{h}{M} & (\text{Ref.~\cite{SKobayashi}}). \end{array} \right. \end{align} Here, we note that the texture of the polar-core vortex in Ref.~\cite{Kawaguchi} is obtained from the vortex with the boundary conditions in Eqs.~(\ref{eq:alpha})-(\ref{eq:theta}) by a uniform $-\pi /2$ spin rotation $\exp {(i \pi F_z / 2)}$ around the $z$ axis, which does not affect the mass-current circulation. The circulation of polar-core half-quantum vortices can also be obtained from Eq.~(\ref{eq:circv_p}). Let us consider a polar-core vortex characterized by the winding numbers $(n_{\alpha} ,n_{\beta} ,n_{\gamma} ,n_{\vartheta}) = (0, 1, 1, 0)$ shown in Fig.~\ref{fig:7}. The order parameter of this vortex is given by \begin{align} \bm{\xi} = e^{i\varphi (R, \Phi )} \begin{pmatrix} \sin {\frac{\pi R}{2}} \cos { \frac{1}{2} \Phi} \\ \sin {\frac{1}{2} \Phi} \\ \cos {\frac{\pi R}{2}} \cos { \frac{1}{2} \Phi} \end{pmatrix}, \end{align} which implies that the U($1$) phase satisfies the condition $\varphi (R, \Phi + 2\pi ) - \varphi (R, \Phi ) = \pi$, and therefore the mass-current circulation is given by \begin{align} \oint {\bm{v}} \cdot d\bm{l} = \frac{h}{2M}. \end{align} \begin{figure}[t] \begin{center} \includegraphics[bb = 0 0 1024 727, clip, scale = 0.23]{fig7.eps} \end{center} \caption{(Color Online) Order-parameter texture of the polar-core half-quantum vortex characterized by the winding numbers $(n_{\alpha} ,n_{\beta} ,n_{\gamma} ,n_{\vartheta}) = (0, 1, 1, 0)$. The color gauge for the polarization $\vartheta$ and that for the orientation of the order parameter $\gamma$ are the same as those in the upper panels of Fig.~\ref{fig:4}. } \label{fig:7} \end{figure} \section{\label{sec:6}Summary and Discussion} In this paper, we have derived the su($3$) Mermin-Ho relation~(\ref{eq:rotv2}), which is expressed in terms of the Gell-Mann matrices and the su($3$) structure constants, and which reveals the nonholonomic structure of vortices in spin-$1$ BECs. For a BEC with fully polarized spins, Eq.~(\ref{eq:rotv2}) reduces to the original Mermin-Ho relation in Eq.~(\ref{eq:MH}), which is expressed in terms of the so($3$) generators and structure constants. The local isomorphism between SO($3$) and SU($2$) implies that vortices dual to the CAT and MH vortices, both of which are described by Eq.~(\ref{eq:MH}), can be constructed from a set of three generators forming an su($2$) subalgebra of the su($3$) algebra.
We have shown that the three spin and nematicity observables $f_z$, $q_{xy}$, and $d_{x^2-y^2}$ belong to another su($2$) subalgebra, with structure constant $2$, and that they form vortices dual to the spin vortices, i.e., the CAT and MH vortices, as visualized in Figs.~\ref{fig:3} and \ref{fig:4}. We have also derived the formula for the mass-current circulation around a vortex and identified the mass-current circulations of these spin-nematic vortices to be $3h / 2M$ (dual CAT) and $h/M$ (dual MH), respectively. The mass-current circulation for the dual CAT spin-nematic vortex in Fig.~\ref{fig:4}(i) is quantized in units of the half quantum $h/2M$, while the CAT vortex is characterized by the integer mass-current circulation of $2h/M$. On the other hand, the dual MH spin-nematic vortex has the same mass-current circulation as that of the MH vortex, both of which are given by $h/M$. The obtained formula for the mass-current circulation around a vortex is applicable also to polar-core vortices. The vortices discussed here are not necessarily topologically stable in the enlarged order-parameter manifold~\cite{SKobayashi}; however, vortices such as the spin-nematic vortices can be realized by imposing appropriate boundary conditions. Such boundary conditions can be implemented by using a linearly polarized microwave, which can change the magnitude and the sign of the quadratic Zeeman energy~\cite{Gerbier,Leslie}.
{ "timestamp": "2018-04-17T02:13:56", "yymm": "1804", "arxiv_id": "1804.05518", "language": "en", "url": "https://arxiv.org/abs/1804.05518" }
\section*{Guide to using this template on Overleaf} Please note that whilst this template provides a preview of the typeset manuscript for submission, to help in this preparation, it will not necessarily be the final publication layout. For more detailed information please see the \href{http://www.pnas.org/site/authors/format.xhtml}{PNAS Information for Authors}. If you have a question while using this template on Overleaf, please use the help menu (``?'') on the top bar to search for \href{https://www.overleaf.com/help}{help and tutorials}. You can also \href{https://www.overleaf.com/contact}{contact the Overleaf support team} at any time with specific questions about your manuscript or feedback on the template. \subsection*{Author Affiliations} Include department, institution, and complete address, with the ZIP/postal code, for each author. Use lower case letters to match authors with institutions, as shown in the example. Authors with an ORCID ID may supply this information at submission. \subsection*{Submitting Manuscripts} All authors must submit their articles at \href{http://www.pnascentral.org/cgi-bin/main.plex}{PNAScentral}. If you are using Overleaf to write your article, you can use the ``Submit to PNAS'' option in the top bar of the editor window. \subsection*{Format} Many authors find it useful to organize their manuscripts with the following order of sections: Title, Author Affiliation, Keywords, Abstract, Significance Statement, Results, Discussion, Materials and methods, Acknowledgments, and References. Other orders and headings are permitted. \subsection*{Manuscript Length} PNAS generally uses a two-column format averaging 67 characters, including spaces, per line. The maximum length of a Direct Submission research article is six pages and a Direct Submission Plus research article is ten pages including all text, spaces, and the number of characters displaced by figures, tables, and equations. When submitting tables, figures, and/or equations in addition to text, keep the text for your manuscript under 39,000 characters (including spaces) for Direct Submissions and 72,000 characters (including spaces) for Direct Submission Plus. \subsection*{References} References should be cited in numerical order as they appear in text; this will be done automatically via bibtex, e.g.\ \cite{belkin2002using} and \cite{berard1994embedding,coifman2005geometric}. All references should be included in the main manuscript file. \subsection*{Data Archival} PNAS must be able to archive the data essential to a published article. Where such archiving is not possible, deposition of data in public databases, such as GenBank, ArrayExpress, Protein Data Bank, Unidata, and others outlined in the Information for Authors, is acceptable. \subsection*{Language-Editing Services} Prior to submission, authors who believe their manuscripts would benefit from professional editing are encouraged to use a language-editing service (see list at www.pnas.org/site/authors/language-editing.xhtml). PNAS does not take responsibility for or endorse these services, and their use has no bearing on acceptance of a manuscript for publication.
\section{Introduction} Pre- and postselected systems are ubiquitous in quantum mechanics. In many quantum information schemes the intended process is only realized by the interplay of preselection and postselection. The addition of postselection, often together with conditioned transformations, is the basis of protocols such as universal quantum computation within the Knill-Laflamme-Milburn scheme~\cite{Knill2001}, entanglement swapping~\cite{Zukowski1993} and heralding in general~\cite{Zeilinger1997}. The two-state vector formalism (TSVF)~\cite{AVreview} provides a general framework for the description of pre- and postselected systems. It introduces a state evolving backwards in time and thereby treats the postselection on an equal footing with the pre\-selection. The key element of the TSVF is the weak value of an observable. As long as the interaction is sufficiently weak or short, the observable effect on the external system is completely characterized by the weak value~\cite{beyond}. For such interactions, the state of the external systems after the postselection can deviate significantly from the states expected by just considering the coupling to preselected systems~\cite{AAV}. The concept of weak values became the basis of several successful applications in precision measurement techniques~\cite{Hosten2008,Dixon2009}. While there are theoretical controversies about the optimality of weak value-based tomography and precision-measurement methods~\cite{Zilberberg2011,Wu2012,Hofmann2012a,Xu2013,Dressel2014,Jordan2014,Ferrie2014,Knee2014,Magana-Loaiza2014,Pusey2014,Zhang2015a,Piacentini2018}, a plethora of fruitful applications continues to emerge~\cite{Martinez-Rincon2017a,Li2017,Araujo2017,Qiu2017,Liu2017,Chen2018,Kim2018,Zhou2018,Li2018,Qin2018,Ren2018,Huang2018,Fang2018,Li2018a}. We take a step back and investigate the fundamental properties of pre- and postselected systems.
We find that there exists a general universality principle characterizing how the effects of the interactions in one location on a spatially pre- and postselected quantum system are modified as a function of pre- and postselection. All these modifications are specified by a single complex number, the weak value of the spatial projection operator. One of the innovations of our approach is that it does not rely on the specific form of the interaction Hamiltonian. Instead, it expresses the change of the state via the complex amplitude of an orthogonal component, which emerges due to the interaction. If the weak value is a positive number, the size of the changes in every variable is multiplied by this number, and when it is negative, all modifications happen in the opposite direction. If the effect originally changed a particular variable, then in the case of an imaginary weak value the effect occurs in the variable conjugate to the initial one, and when the weak value is a complex number, both effects are combined. This approach allows a formal definition of a quantum particle's presence. Until now, most accounts considered the weak value to be limited to the case of weak interactions, e.g.~\cite{Wu2011,DiLorenzo2012,Kofman2012,Zhang2016,Denkmayr2017}. It is another crucial innovation of our approach, however, that we explicitly apply the formalism to the case of much stronger interactions. We use an expression for the weak value which takes into account changes due to interactions of finite strength in the time interval between pre- and postselection. Besides incorporating the stronger interactions, we also account for decoherence or imperfections in the measurement system. We show experimentally that this weak value can in fact be measured using weakly coupled pointers. An interferometer, especially a Mach-Zehnder type interferometer, can be seen as the iconic example of a pre- and postselected system. The reflectivity/transmittivity of the first beamsplitter together with the phase shifter defines the preselected state of the system. The final beamsplitter together with detection of the particle in one output of the interferometer sets the postselected state. The effect of weak interactions of the particle with external systems, which can be seen as a trace the particle leaves inside the interferometer, is characterized by the weak value of the projection operator on the corresponding arm. Surprisingly, we also find that for Gaussian states of the external system, the weak value characterizes the modification of the trace for arbitrary strength of the interaction. The interferometer enables a straightforward experimental implementation in which we consider a pre- and postselected photon passing through it. We experimentally characterize the various effects of multiple interactions in one of the interferometer's arms, using the mode and the polarization of the propagating photon as the external systems coupled to the photon's path. We find that the modifications of the weak effects on the photon can be described by the weak value of the projection operator on the corresponding arm for various types and strengths of couplings. We can now turn the picture upside down and view any coupling to the external degrees of freedom as being due to misalignment of the interferometer. For example, a tilted mirror in one of the beams now becomes an interaction deflecting the Gaussian mode of the beam from its ideal direction.
This analogy directly leads us to an efficient alignment technique for interferometers, where our analysis provides a simple model for the image observed at the output of an interferometer. More precisely, by measuring the phase-dependent trajectory of the centroid of the output mode on only a single spatially resolving detector we can extract the misalignment parameters in one go. This technique harnesses the benefits of the weak amplification method~\cite{AAV} to improve precision. \section{Weak value of local projection and its connection to the trace} \label{sec::wv_ideal} Let us first consider the effect of a quantum particle on external systems due to all kinds of local interactions in the channel through which it passes. The interactions might be caused by various properties of the particle, e.g., charge, mass, or magnetic moment, but we assume that the particle passing through the channel does not change its quantum state.\footnote{To deal with cases where the states of some degrees of freedom of the particle change, we treat them equivalently to the external degrees of freedom. In fact, this is the case in our experiment, see Sec.~\ref{sec::univ}.} If the quantum particle is not present in the channel, the state of the external systems at a particular time is $| \chi \rangle$. When the quantum particle is localized in the channel as shown in Fig.~\ref{fig::MZI_basic}a, the interactions change the total state of the external systems as \begin{equation}\label{eq::iaSingle} |\chi \rangle \rightarrow|\chi^\prime \rangle \equiv \eta\left( |\chi \rangle + \epsilon |\chi^\perp \rangle \right), \end{equation} where $| \chi^\perp \rangle$ denotes the component of $| \chi^\prime \rangle$ which is orthogonal to $| \chi \rangle$. By definition we choose the phase of $|\chi^\perp\rangle$ such that $\epsilon > 0$. For simplicity, but without loss of generality, we also disregard the global phase and consider the coefficient $\eta$ to be positive, such that $\eta = \langle \chi | \chi^\prime \rangle = \frac{1}{\sqrt{1+\epsilon^2}}$. The trace left by the particle is manifested by the presence of the orthogonal component $| \chi^\perp \rangle$ and is quantified by the parameter $\epsilon$. Next, let this channel be an arm of a Mach-Zehnder interferometer (MZI), see Fig.~\ref{fig::MZI_basic}b. We assume that the arm $B$ of the MZI is ideal, i.e., the particle leaves no trace there.\footnote{This assumption is made for simplicity of presentation. The main results of the paper about the universality of the modification of interactions carry over easily to the case when some weak traces are left in all parts of the interferometer.} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{./basicMZI6_SC_noLine.pdf}\\ \caption{\textbf{Comparison between the effect of the local coupling of the particle when it passes through a single channel and when it passes through an interferometer.} a) The particle interacts with external systems in a single channel, originally in the state $|\chi\rangle$. b) The particle passes through a Mach-Zehnder interferometer with arm $A$ identical to the channel described in (a) with all its local interactions, while it is assumed that there are no local couplings in arm $B$.
} \label{fig::MZI_basic} \end{figure} For the creation of the preselected state $| \psi \rangle$ inside the interferometer, the unbalanced input beam splitter is followed by a phase shifter, resulting in \begin{align}\label{eq::preSelSimp} | \psi \rangle = \cos \alpha | A \rangle + \sin \alpha e^{i\varphi} | B \rangle, \end{align} where $| A \rangle$ and $| B \rangle$ represent the eigenstates of the path degree of freedom, and $\alpha$ and $\varphi$ are the two real parameters of the state. The second beam splitter is balanced, so its operation can be modeled as \begin{subequations} \begin{align} | A \rangle &\rightarrow \frac {1}{\sqrt 2} ( | C \rangle + | D \rangle), \\ | B \rangle &\rightarrow \frac{1}{\sqrt 2} ( | C \rangle - | D \rangle). \end{align} \end{subequations} We collect photons in output port $C$, which corresponds to a postselection of the state \begin{align}\label{eq::postSelSimp} |\phi \rangle = \frac{1}{\sqrt{2}}\left( | A \rangle + | B \rangle \right). \end{align} Accounting for the interactions in arm $A$ (see Fig.~\ref{fig::MZI_basic}b), the composite state $| \Psi \rangle$ of the particle and the external systems before the second beam splitter is \begin{align} \label{eq::stateFinal} |\Psi \rangle &= \cos \alpha | A \rangle | \chi^\prime \rangle + \sin \alpha e^{i\varphi} | B \rangle | \chi \rangle, \end{align} where here and in the rest of the paper we employ a shorthand notation for tensor products with $| A \rangle | \chi^\prime \rangle \equiv | A \rangle \otimes | \chi^\prime \rangle$. After detection of the particle in output port $C$, i.e., postselection of the particle in state [\ref{eq::postSelSimp}], the state of the external systems becomes \begin{align}\label{eq::compPostSel} |\tilde{\chi} \rangle =\mathcal{N} \left( |\chi \rangle + \frac{\eta \epsilon}{\eta + \tan \alpha e^{i\varphi}} |\chi^\perp \rangle \right), \end{align} where $\mathcal{N}$ is the normalization factor. Here and in the rest of the paper we use the accent symbol ``$\sim$'' to denote situations with pre- and postselection. We start by considering interactions which are sufficiently small, with $\epsilon \ll 1$. In the case of a single channel, the particle passing through it leads to the change of the state of the external systems \begin{equation}\label{eq::iaAmp1} | \chi \rangle \rightarrow |\chi^\prime \rangle = |\chi \rangle +\epsilon |\chi^\perp \rangle + \mathcal{O}(\epsilon^2), \end{equation} which is just an expansion of [\ref{eq::iaSingle}] in orders of $\epsilon$. For the particle that has passed through the corresponding MZI and has been detected in $C$, we observe a different change of the state of the external systems. Expanding Eq.~[\ref{eq::compPostSel}] in orders of $\epsilon$, we see that the weak effect of the interaction is modified relative to [\ref{eq::iaAmp1}] by a single parameter, the weak value of the projection on arm $A$, \begin{equation}\label{eq::iaAmp} | \chi \rangle \rightarrow |\tilde{\chi} \rangle=|\chi \rangle +\epsilon \left( \mathbf{P}_A \right)_w |\chi^\perp \rangle + \mathcal{O}(\epsilon^2), \end{equation} where, for defining the weak value, we neglect the coupling to the external systems, \begin{align}\label{eq::wvIdealMZI} \left( {\rm \bf P}_A \right)_w &\equiv \frac{ \langle \phi | {\rm \bf P}_A | \psi \rangle }{\langle \phi | \psi \rangle} = \frac{1}{1 + \tan \alpha ~ e^{i\varphi}}. \end{align} The design of the interferometer allows the full range of weak values of the projection onto arm $A$ by varying the parameters $\tan \alpha$ and $\varphi$.
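As a concreteness check of Eq.~[\ref{eq::wvIdealMZI}], the following short numerical sketch (in Python; the parameter values are arbitrary illustrations, not experimental settings) evaluates the defining ratio in the path basis and compares it with the closed form.
\begin{verbatim}
import numpy as np

alpha, phi = 0.9, 2.0   # illustrative preselection parameters (assumed)

# Path basis: |A> = (1, 0), |B> = (0, 1)
psi = np.array([np.cos(alpha), np.sin(alpha) * np.exp(1j * phi)])
post = np.array([1.0, 1.0]) / np.sqrt(2)   # postselected state |phi>
P_A = np.diag([1.0, 0.0])                  # projector onto arm A

wv_def = (post.conj() @ P_A @ psi) / (post.conj() @ psi)
wv_closed = 1.0 / (1.0 + np.tan(alpha) * np.exp(1j * phi))
print(wv_def, wv_closed)   # the two expressions agree to machine precision
\end{verbatim}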
Note that we did not restrict the number of interactions as long as their combined effect is sufficiently weak. When the trace left in the interferometer is small, ${\epsilon \ll 1}$, the weak value can be calculated neglecting the effect of the interactions, as in [\ref{eq::wvIdealMZI}]. In the next Section we turn towards scenarios with stronger couplings, for which the interactions cannot be neglected. \section{Weak Value considering finite coupling strength and imperfections} \label{sec::wv_finite} Calculating the weak value as in Eq.~[$\ref{eq::wvIdealMZI}$] we have implicitly assumed that it only depends on the pre- and postselection states at the boundaries of the considered time interval. This is correct in the limit of weak coupling, which is considered in most works about weak measurements. Yet, sometimes even in scenarios with coupling of finite strength the weak value has been treated as if there were no coupling, i.e., using formula [$\ref{eq::wvIdealMZI}$] \cite{Wu2011,DiLorenzo2012,Kofman2012,Zhang2016,Denkmayr2017,Piacentini2018,Vaidman2017a}. To correctly account for couplings of finite strength, we turn to the proper definition of the weak value in the framework of the TSVF, which refers to a single point in time $t$, at which the particular forward and backward evolving quantum states have to be evaluated \cite{AV90}. All interactions of finite strength and imperfections of optical devices between preselection and $t$, as well as between $t$ and postselection, must be considered. Thus, Eq.~[\ref{eq::preSelSimp}] correctly describes the forward evolving state only immediately after the first beam splitter and Eq.~[\ref{eq::postSelSimp}] describes the backward evolving state only immediately before the second beam splitter. Since all evolutions due to imperfections or interactions with the different external systems are local, i.e., they have the common eigenstates $|A\rangle$ and $|B\rangle$, the time ordering of the evolutions is of no consequence. Therefore, the weak value $(\textbf{P}_A)_w$ stays constant in time and we are free to choose any moment in time to calculate it. For convenience, we calculate the weak value immediately before postselection on state [\ref{eq::postSelSimp}] and modify only the forward evolving state to account for the evolution due to interactions inside the interferometer. Due to the interactions the system becomes entangled with the external systems as described by Eq.~[\ref{eq::stateFinal}]. Thus, the particle is in the mixed state described by the density matrix in the basis $\left\lbrace | A \rangle, | B \rangle \right\rbrace$ \begin{align}\label{eq::preSelEff} \rho = \begin{pmatrix} \cos^2 \alpha & \cos \alpha \sin \alpha e^{-i\varphi} \eta \\ \cos \alpha \sin \alpha e^{i\varphi} \eta & \sin^2 \alpha \end{pmatrix}. \end{align} The weak value in the case of mixed states has been derived in \cite{beyond} (Eq.~(32) therein), \begin{align} \label{eq::mixedWV} A_w = \frac{\mathrm{Tr} \left( \rho_\text{post} A \rho_\text{pre} \right)}{\mathrm{Tr} \left( \rho_\text{post} \rho_\text{pre} \right)}. \end{align} In our case this formula is not applicable for an arbitrary time between the pre- and postselection, due to entanglement in both the forward and backward evolving states with the same external systems, see Section VI of \cite{beyond}, but it can be used to calculate the weak value immediately before the last beam splitter, since the backward evolving state is not entangled there, see also \cite{Wu2011,Wiseman2002,Silva2014}.
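As a minimal numerical sketch of Eq.~[\ref{eq::mixedWV}] in this setting (Python; the parameter values are assumed for illustration only), one can evaluate the trace formula for the density matrix [\ref{eq::preSelEff}] with $\rho_\text{post} = |\phi \rangle \langle \phi|$ and ${\rm \bf A} = {\rm \bf P}_A$; the result coincides with the closed form given in the next equation.
\begin{verbatim}
import numpy as np

alpha, phi, eta = 0.9, 2.0, 0.95   # assumed illustrative parameters

c, s, t = np.cos(alpha), np.sin(alpha), np.tan(alpha)
rho_pre = np.array([[c**2, c*s*eta*np.exp(-1j*phi)],
                    [c*s*eta*np.exp(1j*phi), s**2]])
post = np.array([1.0, 1.0]) / np.sqrt(2)
rho_post = np.outer(post, post.conj())
P_A = np.diag([1.0, 0.0])

wv = np.trace(rho_post @ P_A @ rho_pre) / np.trace(rho_post @ rho_pre)
wv_closed = (1 + t*eta*np.exp(-1j*phi)) / (1 + t**2 + 2*t*eta*np.cos(phi))
print(wv, wv_closed)   # agree; cf. the closed form in the next equation
\end{verbatim}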
As we explained above, the weak value of the projector $\mathbf{P}_A$ is constant between pre- and postselection, so it can be calculated as \begin{align}\label{eq::wvModified} \left( \mathbf{P}_A \right)_w = \frac{\mathrm{Tr} \left( |\phi \rangle \langle \phi| \mathbf{P}_A \rho \right)}{\mathrm{Tr} \left( |\phi \rangle \langle \phi| \rho \right)} = \frac{1 + \tan \alpha \, \eta e^{-i\varphi}}{1 + \tan^2 \alpha + 2 \tan \alpha \, \eta \cos\varphi}. \end{align} From Eq.~[\ref{eq::preSelEff}] we see that the overlap $\eta$ quantifies the loss of coherence between the two arms of the interferometer due to interactions and imperfections, which consequently leads to a reduction of the maximally achievable weak value. The dependence of the weak value [\ref{eq::wvModified}] on $\eta$ as well as on $\alpha$ and $\varphi$ is presented in Fig.~\ref{fig::wv_parameters}. Figs.~\ref{fig::wv_parameters}a,b show the case with ideal overlap $\eta = 1$, while Figs.~\ref{fig::wv_parameters}c-f illustrate the dependence for the non-ideal case with reduced overlap and thus smaller $\left( {\rm \bf P}_A \right)_w$. \begin{figure*} \includegraphics[width=1\textwidth]{./surfaceplot_20180124.pdf} \caption{\textbf{Exact parameter dependence of the weak value.} Real (upper row) and imaginary (lower row) parts of the weak value of the projection operator on arm $A$ for $\eta=1$, $\eta=0.990$ and $\eta=0.936$. Each plot shows the dependence on the phase $\varphi$ and the amplitude ratio $\tan \alpha$. The highlighted colored lines represent the parameter values that are set in the various measurements, see Fig.~\ref{fig::dataUniversality} and Fig.~\ref{fig::eta_dependence_plot} below. } \label{fig::wv_parameters} \end{figure*} The weak value [\ref{eq::wvModified}], which accounts for multiple and even strong interactions, is not useful for describing the whole of the external systems when inserted into the expansion [\ref{eq::iaAmp}], because $\epsilon$ is large. However, Eq.~[\ref{eq::wvModified}] can be used to describe the modification for those interactions which are weak, even if some of the other interactions, or all of them together, are arbitrarily strong. We show this now. In our scenario we neglect the interactions of the external systems in arm $A$ among themselves. If the interaction between some particular systems cannot be neglected, they are considered as a single composite system. Thus, the interactions [\ref{eq::iaSingle}] in a single channel (Fig.~\ref{fig::MZI_basic}a) can be decomposed as \begin{align}\label{eq::productPointers} |\chi\rangle = \bigotimes_j |\chi_j\rangle \rightarrow |\chi^\prime\rangle = \bigotimes_j \eta_j \left(| \chi_j \rangle + \epsilon_j |\chi^\perp_j \rangle \right). \end{align} Here, as for [\ref{eq::iaSingle}], we absorbed the phases in the definitions of the states, such that $\epsilon_j$ and $\eta_j$ are positive numbers. In the case where the coupling to the, say, $k$-th system is weak, the change of the state of the system can also be expressed in density matrix language in the $\left\lbrace |\chi_k\rangle, |\chi^\perp_k\rangle \right\rbrace$ basis as \begin{align} \label{eq::matOrig} \rho_k =\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \rightarrow \rho^\prime_k = \begin{pmatrix} 1 & \epsilon_k \\ \epsilon_k & 0 \end{pmatrix} + \mathcal{O}(\epsilon^2_k).
\end{align} For a particle passing through the MZI, when both the pre- and the postselection states are superpositions of $|A\rangle$ and $|B\rangle$, several interactions in $A$ (Fig.~\ref{fig::MZI_basic}b) will lead to entanglement between the various external systems. Thus, each of the systems will be described by a mixed state. The modified evolution of the weakly coupled $k$-th system is \begin{align} \label{eq::matOrig1} \rho_k \rightarrow \tilde{\rho}_k = \begin{pmatrix} 1 & \left( {\rm \bf P}_A \right)^\ast_w \epsilon_k \\ \left( {\rm \bf P}_A \right)_w \epsilon_k & 0 \end{pmatrix} + \mathcal{O}(\epsilon^2_k). \end{align} Again, the modification of the effect of the weak interaction is characterized by the weak value $\left( {\rm \bf P}_A \right)_w$. \section{Manifestation of the trace as shifts in pointer states} In the previous sections we described the trace a particle leaves as the appearance of an orthogonal component in the quantum state of external systems. Another language, frequently closer to the experimental evidence, is the change in the expectation values of observables of the external systems. Given the small change due to interactions in Fig.~\ref{fig::MZI_basic}a, expressed in [\ref{eq::iaAmp1}], every observable $O$ of the external system changes its expectation value as \begin{align}\label{eq::shiftOsingle} \delta \langle O \rangle \equiv \langle \chi^\prime | O | \chi^\prime \rangle - \langle \chi | O | \chi \rangle = 2 \epsilon \,\mathrm{Re} \left[ \langle \chi |O| \chi^\perp \rangle \right] + \mathcal{O}(\epsilon^2). \end{align} Then, using [\ref{eq::iaAmp}] (or [\ref{eq::matOrig1}], respectively) we see that for the pre- and postselected particle (Fig.~\ref{fig::MZI_basic}b) the change in the expectation value of $O$ is modified according to \begin{align}\label{eq::shiftOunivers} \tilde{\delta} \langle O \rangle = 2 \epsilon \,\mathrm{Re} \left[ \langle \chi |O| \chi^\perp \rangle \left( {\rm \bf P}_A \right)_w \right] + \mathcal{O}(\epsilon^2). \end{align} This formula is universal: it is valid for every system which was coupled weakly in arm $A$ to the particle passing through the interferometer. Eq.~[\ref{eq::shiftOunivers}] represents a new result in a very general scenario. Let us now focus on the less general but very common measurement situation, which is usually considered when treating weak values \cite{AV90}. There, the single observable $O$ is the pointer variable $Q$, the pointer wavefunction $\chi (Q)$ is real, and the interaction with the particle in the channel shifts the wave function in the pointer variable representation as \begin{align} \chi(Q)\rightarrow \chi^\prime(Q) = \chi(Q-\delta Q). \label{eq::measTypeWvFct} \end{align} Obviously, this also shifts the expectation value, \begin{align} \delta \langle Q \rangle = \delta Q. \label{eq::measTypeExpVal} \end{align} In this scenario $\chi^\perp (Q)$ is also real, as is $\langle \chi |Q| \chi^\perp \rangle$. Then, a positive weak value $\left( {\rm \bf P}_A \right)_w$ just tells us how much the effect of the interaction is amplified or reduced, according to \begin{align}\label{delQ} \tilde{\delta} \langle Q \rangle \approx \delta Q \, \mathrm{Re} [\left( {\rm \bf P}_A \right)_w]. \end{align} If $\left( {\rm \bf P}_A \right)_w$ is negative, it tells us that the pointer will be shifted in the opposite direction. If the weak value is imaginary, the expectation value of the pointer position will not be changed. However, an orthogonal component in the quantum state of the pointer will still appear.
It will manifest itself in the shift of the expectation value of the momentum $P_Q$ conjugate to $Q$, \begin{align}\label{delP} \tilde{\delta} \langle P_Q \rangle \approx 2 \delta Q \ (\Delta P_Q)^2 \; \mathrm{Im}[\left( {\rm \bf P}_A \right)_w], \end{align} where $(\Delta P_Q)^2=\langle \chi | P_Q^2 | \chi \rangle - \langle \chi | P_Q | \chi \rangle^2$ and $\hbar=1$. \begin{figure*} \includegraphics[width=1\textwidth]{./setup_universality5.pdf} \caption{ \textbf{Schematic experimental setup.} The preselection state $| \psi \rangle$ is set using a non-polarizing beam splitter (BS) creating a spatial superposition between arms $A$ and $B$. Two equally oriented polarizers (POL) and a half wave plate ($\mathrm{HWP}_{\mathrm{var}}$) are used to define the relative amplitudes. Angle and position shifts, e.g. $\delta \theta_x$ and $\delta x$, are introduced by moving and tilting optical components, whereas polarization rotations are imposed using a half wave plate (HWP). The postselection is done by considering only one of the output ports ($C$) of the interferometer. Analysis of the polarization degree of freedom is achieved by means of half and quarter wave plates (HWP and QWP), polarizing beam splitters (PBS), and photodiodes (PD), allowing the projection onto the polarization states $1/\sqrt{2}\left(|H\rangle \pm |V\rangle\right)$, $1/\sqrt{2}\left(|H\rangle \pm i |V\rangle\right)$, $|H\rangle$, and $|V\rangle$. Position sensing detectors (PSD) at different $z$-positions allow the determination of position and angle, respectively, in the $x$ and $y$ directions. } \label{fig::MZI_setup} \end{figure*} Eqs.~[\ref{delQ}] and [\ref{delP}] were obtained from [\ref{eq::shiftOsingle}] and [\ref{eq::shiftOunivers}] under the assumption of weak coupling, where higher orders of $\epsilon$ can be neglected. In the measurement situation [\ref{eq::measTypeWvFct}] with a Gaussian pointer, $\chi = e^{- Q^2/4(\Delta Q)^2}$ (we omit normalization), the usual range of validity of the weak value formalism is extended. Even when the coupling is strong and the pointer distribution is significantly distorted during the measurement, the expressions for the shifts of the expectation values of $Q$, [\ref{eq::shiftOsingle}] and [\ref{eq::shiftOunivers}], remain exact, with \begin{subequations} \begin{align} \delta \langle Q \rangle &= 2 \epsilon \mathrm{Re} \left[ \langle \chi |Q| \chi^\perp \rangle \right], \label{eq::gaussianExactStd} \\ \tilde{\delta} \langle Q \rangle &= 2 \epsilon \mathrm{Re} \left[ \langle \chi |Q| \chi^\perp \rangle \left( {\rm \bf P}_A \right)_w \right]. \label{eq::gaussianExactWeakVal} \end{align} \end{subequations} Indeed, for the Gaussian pointer $ \langle \chi |Q| \chi \rangle=0$ and $ \langle \chi^\prime |Q| \chi^\prime \rangle=\delta Q$, and the following expressions are easily calculated: \begin{equation} \eta = \langle \chi | \chi^\prime \rangle = e^{-(\delta Q)^2/8(\Delta Q)^2}, \quad \langle \chi |Q| \chi^\prime \rangle = \langle \chi | \chi^\prime \rangle \frac{\delta Q}{2}. \end{equation} Then [\ref{eq::gaussianExactStd}] is proven by substituting [\ref{eq::measTypeExpVal}] and [\ref{eq::iaSingle}], while additionally including [\ref{eq::compPostSel}] and [\ref{eq::wvModified}] proves [\ref{eq::gaussianExactWeakVal}]. If the pointer is a Gaussian in the position variable $Q$, it is of course also a Gaussian in the conjugate momentum $P_Q$ representation. Therefore, [\ref{delQ}] and [\ref{delP}], in analogy to the above, become exact formulas with $\Delta P_Q= \frac{1}{2 \Delta Q}$.
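The exactness of [\ref{eq::gaussianExactWeakVal}] at finite coupling strength can be checked directly. The following sketch (Python; the parameters are assumed for illustration and correspond to a strong coupling, $\delta Q \sim \Delta Q$) builds the postselected pointer state from Eq.~[\ref{eq::stateFinal}], computes $\langle Q \rangle$ on a grid, and compares it with $\delta Q \, \mathrm{Re}[({\rm \bf P}_A)_w]$ using the weak value [\ref{eq::wvModified}].
\begin{verbatim}
import numpy as np

alpha, phi = 0.8, 2.4   # preselection parameters (assumed)
dQ, DQ = 1.5, 1.0       # strong shift: delta Q comparable to the width
Q = np.linspace(-20, 20, 4001)
chi = np.exp(-Q**2 / (4 * DQ**2))                # Gaussian pointer
chi_shifted = np.exp(-(Q - dQ)**2 / (4 * DQ**2))

# Pointer state after postselection in port C (unnormalized)
out = np.cos(alpha) * chi_shifted + np.sin(alpha) * np.exp(1j*phi) * chi
mean_Q = np.sum(Q * np.abs(out)**2) / np.sum(np.abs(out)**2)

eta = np.exp(-dQ**2 / (8 * DQ**2))               # Gaussian overlap <chi|chi'>
t = np.tan(alpha)
wv = (1 + t*eta*np.exp(-1j*phi)) / (1 + t**2 + 2*t*eta*np.cos(phi))
print(mean_Q, dQ * wv.real)   # agree up to grid accuracy
\end{verbatim}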
There are corresponding exact formulas for the effect of a shift in momentum $\delta P_Q$, \begin{subequations} \begin{align} \tilde{\delta} \langle P_Q \rangle &= \delta P_Q \mathrm{Re} [\left( {\rm \bf P}_A \right)_w], \label{delPP} \\ \tilde{\delta} \langle Q \rangle &= -2 \delta P_Q (\Delta Q)^2 \; \mathrm{Im}[\left( {\rm \bf P}_A \right)_w], \label{delQP} \end{align} \end{subequations} see also \cite{Dressel2012}. Direct substitution shows that, for Gaussians, the expressions remain correct in the regime of strong interactions also for combinations of shifts in $Q$ and $P_Q$, such that \begin{subequations} \begin{align} \tilde{\delta} \langle Q \rangle &= \delta Q \mathrm{Re} [\left( {\rm \bf P}_A \right)_w] - 2\delta P_Q (\Delta Q)^2 \; \mathrm{Im}[\left( {\rm \bf P}_A \right)_w], \label{delQQ+P} \\ \tilde{\delta} \langle P_Q \rangle &= \delta P_Q \mathrm{Re} [\left( {\rm \bf P}_A \right)_w] + \frac{\delta Q}{2 (\Delta Q)^2} \; \mathrm{Im}[\left( {\rm \bf P}_A \right)_w]. \label{delPQ+P} \end{align} \end{subequations} These equations are the basis of the alignment method presented in Section \ref{sec::alignment}. \section{Observing the universality property} \label{sec::univ} We use an optical Mach-Zehnder interferometer to experimentally visualize our central claim, namely, that all kinds of small effects of spatially pre- and postselected systems taking place at a specific location are modified in a universal manner characterized by the weak value of the spatial projection. In the experiment we demonstrate the universal change for three different couplings. In every case the effect is modified in the same manner. There are proposals and actual experiments where the photon couples to other particles in one arm of the interferometer \cite{Simon2011,Feizpour2011,Fu2015,Ben-Israel2017,Steinberg}. In \cite{Steinberg} one arm of the interferometer is a Kerr medium, and the photon passing through this arm changes the quantum state of the pointer by introducing a shift in the relative phase between the wave packets of the pointer photons. As in most weak measurement experiments, instead of coupling to external particles we study interactions of the photon in an arm of the interferometer by observing the effect on other degrees of freedom of the photon itself. We also used a (weak) laser beam, so all the results can be explained using Maxwell's equations (although in a much more complicated way), but the observations would not change by employing single photons, since intensity measurements are in one-to-one correspondence with single-photon probability distributions. The interactions in arm $A$ are realized by introducing controlled changes of the spatial and polarization degrees of freedom. The initial state of the position degree of freedom can be well approximated by a Gaussian along both the $x$ and $y$ coordinates. The interaction is implemented by shifting the center of the Gaussian intensity distribution of the light beam going through arm $A$ by $\delta x$ compared to the beam going through arm $B$, \begin{align}\label{spatialx} \chi_x(x) = e^{-x^2 / w^2_0} \rightarrow \chi^\prime_x(x) = e^{-(x - \delta x)^2 / w^2_0}, \end{align} where $w_0$ denotes the waist of the beam and normalization factors are omitted.
\begin{figure*} \includegraphics[width=0.98\textwidth]{./univ_neg_new_V5_SC.pdf} \caption{\textbf{Observed Universality.} (Upper row) The introduced displacements of arm $A$ in the $x$ direction, the angle around the $x$-axis, and the angle of polarization $\Theta$ ($\delta x$, $\delta \theta_x$, $\delta \Theta$) can be seen from the single red datapoints plotted at an arbitrary phase position. The blue datapoints corresponding to arm $B$ are taken as a reference and thus show zero shift. The axes are scaled such that the readings of $A$ agree for the three external systems. For each of these three, the same behavior of the interference signal (black datapoints) is observed for the shifts of the variables $\tilde{\delta} \langle x \rangle$, $\tilde{\delta} \langle \theta_x \rangle$, and $\tilde{\delta} \langle\Theta\rangle$: the effect seen from the measurement of the single arm is multiplied by the phase-dependent real part of the weak value. (Lower row) The analogous plots for the shift of the respective conjugate variables represented by $\tilde{\delta} \langle \theta_y \rangle$, $\tilde{\delta} \langle y \rangle$, and $\tilde{\delta} \langle\Upsilon\rangle$ nicely show the dependence on the imaginary part of the weak value. The violet theoretical curves represent the rescaled real and imaginary parts of the weak value (no fit). } \label{fig::dataUniversality} \end{figure*} Another degree of freedom is the spatial state of the light beam in the $y$ direction, which we modify by changing the angle of the beam around the $x$ axis; for small angles this corresponds to the momentum shift $\delta p_y = \frac{2\pi}{\lambda} \delta \theta_x$. The resulting modification in arm $A$ can be expressed by \begin{align}\label{spatialy} \chi_y(p_y) = e^{- w^2_0 p_y^2 / 4} \rightarrow \chi^\prime_y(p_y) = e^{- w^2_0 (p_y - \delta p_y)^2 / 4}. \end{align} As a third external system we use the photon polarization. The interaction parameter here is the rotation of the polarization by the angle $\delta \Theta$, \begin{align} |\chi_{\sigma} \rangle = | H \rangle \rightarrow |\chi_{\sigma}^\prime \rangle = \cos \frac{\delta \Theta}{2} |H \rangle +\sin \frac{\delta \Theta}{2} |V \rangle, \end{align} where the states $| H \rangle$ and $| V \rangle$ are defined via $\sigma_z | H \rangle = | H \rangle$ and $\sigma_z | V \rangle = -| V \rangle$ for the Pauli matrix $\sigma_z$. All other properties of the photon are expressed in the state $|\chi_{O} \rangle$. Any imperfections of the interferometer can be understood to lead to a change of the initial state of these properties in arm $A$, $|\chi_{O} \rangle \rightarrow |\chi^\prime_{O} \rangle$. It is a good approximation to assume that there are no interactions between the external degrees of freedom we consider, and thus we can express the quantum state of the photon in arm $B$ just before reaching the final beam splitter of the interferometer as \begin{align}\label{eq::armB} | B \rangle |\chi\rangle = | B \rangle|\chi_x \rangle |\chi_{y} \rangle |\chi_{\sigma} \rangle |\chi_{O} \rangle, \end{align} while in arm $A$ it is \begin{align}\label{eq::armA} | A \rangle |\chi^\prime\rangle = | A \rangle |\chi^\prime_x \rangle |\chi^\prime_{y} \rangle |\chi^\prime_{\sigma} \rangle |\chi^\prime_{O} \rangle.
\end{align} To test the universality of the modifications of effects for various degrees of freedom, one could either perform complete tomographies of the final pointer states [\ref{eq::matOrig}] and [\ref{eq::matOrig1}] or, more clearly, show the modification of the effects of the three couplings according to [\ref{delQ}] and [\ref{delP}]. We follow the second approach. More explicitly, we test the differences between the effects of the interactions on the expectation values in three degrees of freedom when the particle passes through the single arm (expressed by $\delta$) and when the particle passes through both arms (expressed by $\tilde \delta$)\footnote{We chose this method since our measurements of the shifts $\delta \langle x \rangle$, $\delta \langle \theta_x \rangle$, and $\delta \langle \Theta \rangle$ in a single channel are more precise than our control of the shifts $\delta x$, $\delta \theta_x $, and $\delta \Theta $ via manual stages.}. Because of the linear relation between $\theta_y$ and $p_x$ as well as $\theta_x$ and $p_y$, one obtains \begin{eqnarray} \tilde{\delta}\langle x \rangle &=& \delta \langle x \rangle \mathrm{Re} [\left( {\rm \bf P}_A \right)_w], \label{delQx} \\ \tilde{\delta}\langle \theta_y \rangle &=& \frac{\delta \langle x \rangle}{z_R}\mathrm{Im}[\left( {\rm \bf P}_A \right)_w], \label{delPx} \\ \tilde{\delta}\langle \theta_x \rangle &=& \delta \langle \theta_x \rangle\mathrm{Re} [\left( {\rm \bf P}_A \right)_w], \label{delPy} \\ \tilde{\delta}\langle y \rangle &=& -z_R \delta \langle \theta_x \rangle \mathrm{Im}[\left( {\rm \bf P}_A \right)_w]. \label{dely} \end{eqnarray} Here we have used the Rayleigh range $z_R \equiv \frac{\pi w^2_0}{\lambda}$ as the characteristic parameter of the Gaussian beam. \begin{figure*} \includegraphics[width=1\textwidth]{./eta_plot_single_error_V5_SC.pdf} \caption{\textbf{Modification of the weak value due to decoherence.} The colored dots represent the measured values for the modification of the shift $\delta x$ in the interference signal when varying the weak value via the relative amplitudes of the paths $A$ and $B$ ($\tan \alpha$ in Eq.~[\ref{eq::wvModified}]) at fixed $\varphi = \pi$. The four datasets correspond to four different values of the overlap $\eta$, which quantifies the coherence between the states of the external systems from the two arms. The lines are theoretical curves as highlighted by the colored lines in Fig.~\ref{fig::wv_parameters}c,e. Respective average error bars are shown for each $\eta$ on one of the first data points. For comparison, the theoretical line with $\eta = 1$ (Fig.~\ref{fig::wv_parameters}a) is also shown. } \label{fig::eta_dependence_plot} \end{figure*} The conjugate variable to the angle $\Theta$, which defines polarization changes in the $\sigma_x$-$\sigma_z$ plane, is an angle $\Upsilon$ describing polarization rotations in the $\sigma_y$-$\sigma_z$ plane relative to the initial state $| H \rangle$. For small deviations these angles relate linearly to $\langle \sigma_x \rangle$ and $\langle \sigma_y \rangle$, respectively, and are given by \begin{align} \tilde{\delta} \langle \Theta \rangle &= \delta \langle \Theta \rangle \mathrm{Re} [\left( {\rm \bf P}_A \right)_w], \label{delQTheta} \\ \tilde{\delta} \langle \Upsilon \rangle &= - \delta \langle \Theta \rangle \mathrm{Im}[\left( {\rm \bf P}_A \right)_w].
\label{delPEpsilon} \end{align} The test was performed for the full range of $\varphi$ and thus for a large range of values of $\left( {\rm \bf P}_A \right)_w$, see the violet lines on the graphs of Fig.~\ref{fig::wv_parameters}. The parameters for the calculation of $\left( {\rm \bf P}_A \right)_w$ necessary for testing relations [\ref{delQx}] - [\ref{delPEpsilon}] were also obtained from measurements. The signals from the separate arms (when the other arm was blocked) provided $\tan \alpha$. The phase $\varphi$ and the overlap $\eta$ were obtained from the intensity of the interference signal and from visibility measurements, respectively. The relation between the visibility $\mathcal{V}$ and the overlap $\eta$ for the phase-dependent output intensity $\mathcal{I} \propto \langle \phi | \rho |\phi \rangle \propto 1 + \tan^2 \alpha + 2 \tan \alpha \, \eta \cos \varphi$ is given by \begin{align} \mathcal{V} &\equiv \frac{\mathcal{I}_\text{max} - \mathcal{I}_\text{min}}{\mathcal{I}_\text{max} + \mathcal{I}_\text{min}} = \eta \frac{2 \tan \alpha}{1 + \tan^2 \alpha}. \end{align} The experiment is shown schematically in Fig.~\ref{fig::MZI_setup}. After propagation through a single-mode fiber for spatial filtering, the horizontally polarized light from a laser diode ($\lambda=780\,\mathrm{nm}$) is split by a non-polarizing beam splitter. The moduli of the amplitudes of the preselection state [\ref{eq::preSelSimp}] are controlled by rotating the polarization using a half wave plate in arm $A$ followed by a horizontal polarization filter. The relative phase $\varphi$ between the arms is set by an optical trombone system with retroreflecting prisms moved by a piezoelectric crystal (not shown). This setup makes it possible to directly implement the three desired interactions along beam $A$ and to simultaneously measure their effect. The spatial displacement $\delta x$, which is schematically depicted as a shift of the mirror, was achieved by a lateral movement of the prism of the trombone system. Instead of a vertical tilt of this mirror, we incorporate the vertical rotation $\delta \theta_x$ by tilting the second beam splitter. The polarization rotation $\delta \Theta$ is controlled by rotating a half wave plate in arm $A$. Detecting light only from the output port $C$ provides the postselection onto state $|\phi\rangle$, Eq.~[\ref{eq::postSelSimp}]. The photons at port $C$ are distributed onto several detectors using beam splitters for position and polarization analysis. A position sensing detector $\mathrm{PSD}_1$ placed near the interferometer and a detector $\mathrm{PSD}_2$ placed farther away allow the estimation of position and angle in the $x$ and $y$ directions. We perform tomography of the polarization state using half and quarter wave plates in combination with polarizing beam splitters, as shown in Fig.~\ref{fig::MZI_setup}. A measurement run consists of three steps, namely, first a measurement of light propagating in arm $A$ alone, second of arm $B$ alone, and last a measurement of the interference signal. The six expectation values obtained from measurements of arm $B$ are used as a reference for the subsequent analysis. The measurement with only beam $A$ shows the effect of the interactions when the photons pass through a single channel as in Fig.~\ref{fig::MZI_basic}a. The results are indicated in the graphs of Fig.~\ref{fig::dataUniversality} as red dashed horizontal lines since they exhibit no dependence on the phase\footnote{Please contact J.D.
(jan.dziewior@physik.lmu.de) if you desire access to the raw experimental data for this plot as well as for all other plots.}. The universality is clearly shown by the similarity of the results for the three couplings (Fig.~\ref{fig::dataUniversality}). Of course, in all graphs the observed values are different and have different units. For demonstration purposes we arranged the scales of the graphs in the upper row of Fig.~\ref{fig::dataUniversality} such that the signals of all interactions, $\langle x \rangle_A$, $\langle \theta_y \rangle_A$, $\langle \Theta\rangle_A$, have the same size. We tried to avoid shifts in the conjugate variables as much as possible. Our measurement results, the red dashed lines in the plots of the lower row of Fig.~\ref{fig::dataUniversality}, show that the tuning was good, although not perfect. \begin{figure*} \includegraphics[width=1\textwidth]{./run_24_with_fit_alternative_a_SC.pdf} \caption{\textbf{a) Trajectories of beam centroids in output $C$ for a misaligned MZI.} The \textit{blue} and \textit{red} spots correspond to the measurements of the beams from the single arms when the other arm is blocked. While the \textit{blue} spot at the origin corresponds to beam $B$ without interaction, the \textit{red} spot corresponds to the misaligned beam $A$. The elliptic trajectory of the interference pattern is represented by the \textit{black} points. \textbf{b) Fits onto $x$ and $y$ projections of the trajectory.} By fitting the vector function [\ref{eq::alignFormulaCoarse}] to the $x$- and $y$-projections of the interference ellipse we determine the parameters of the misalignment. } \label{fig::trajAlignment} \end{figure*} \begin{figure} \includegraphics[width=0.5\textwidth, height=0.45\textwidth]{./run25_new_wInset_a.pdf} \caption{\textbf{Trajectories of beam centroids after one alignment step.} It can be clearly seen how the size of the ellipse and the distance between the centroids of the single beams $A$ (\textit{red}) and $B$ (\textit{blue}) are significantly reduced in comparison to Fig.~\ref{fig::trajAlignment}a. } \label{fig::trajAfter} \end{figure} The continuous violet lines on these graphs provide theoretical predictions based on the weak value $ \left( {\rm \bf P}_A \right)_w $ given by [\ref{eq::wvModified}], while the interactions measured in the single arms are presented as red and blue dashed lines in the graphs. The intensities obtained by measuring arm $A$ and arm $B$ alone yield $\tan\alpha = 1.3323 \pm 0.0002$. From the visibility measurement, ${\cal V} = 95.09 \pm 0.02\%$, we obtained $\eta=0.9904 \pm 0.0003$. For these parameters we observed amplifications with factors up to $4$ and $-3$. The very good agreement between the experimental data and the theoretical predictions, shown in Fig.~\ref{fig::dataUniversality}, demonstrates the universality of the modification of several fundamentally distinct forms of interactions for couplings with a pre- and postselected system. To evaluate the dependence of the weak value on the coherence between the two arms, parametrized by $\eta$, we measured the effect of the displacement in $x$ on the output beam. For this run we kept the phase fixed at $\varphi = \pi$ and varied the amplitude ratio $\tan \alpha$, covering another region of the parameter space of Fig.~\ref{fig::wv_parameters}. We changed the coherence by varying the polarization misalignment, leading to a smaller overlap between the photon states passing through the two arms.
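For reference, the conversion from the measured visibility to the overlap follows directly from the visibility relation given above; a short numerical sketch (Python; using the values quoted in this section) reads:
\begin{verbatim}
# Overlap eta from visibility: V = eta * 2 tan(alpha) / (1 + tan(alpha)^2)
tan_alpha, V = 1.3323, 0.9509   # values reported in this section
eta = V * (1 + tan_alpha**2) / (2 * tan_alpha)
print(eta)   # ~0.9903, consistent with the reported 0.9904 +/- 0.0003
\end{verbatim}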
The modification of the shift in the $x$ direction presented in Fig.~\ref{fig::eta_dependence_plot} nicely follows the weak value [\ref{eq::wvModified}]. \section{Alignment Method} \label{sec::alignment} In the previous sections we considered a scenario in which the path state of a photon in an arm of an interferometer is coupled to its other degrees of freedom, in particular its spatial degrees of freedom in the $x$ and $y$ directions. This scenario exactly represents a situation encountered in real experimental interferometric setups, namely when the arms of the interferometer are misaligned. The differences in position $\delta \vec{r} \equiv \left( \delta x, \; \delta y \right)$ and angle $\vec{\delta \theta} \equiv \left( \delta \theta_x, \; \delta \theta_y \right)$ between the photons passing through distinct arms of the interferometer can be considered as results of interactions in one arm, which change the initially identical spatial states of the particle. It is well known that the picture generated by the interference of the beams from a misaligned interferometer displays a strong phase dependence. Fig.~\ref{fig::trajAlignment}a shows the centroid trajectory during the phase scan of a misaligned interferometer. We demonstrate that it is possible to quantitatively determine the exact misalignment parameters of the interferometer by analyzing this phase-dependent movement. In fact, the misalignment parameters $\delta \vec{r}$ and $\vec{\delta \theta}$ could already be calculated from the measurements described in the previous section. Disregarding the polarization analysis, that was a measurement of the misalignment parameters based on position measurements of the centroids of the beams on two detectors at different locations. But the method is more powerful and can also be implemented with only a single position-sensitive detector. The basis for our alignment method is provided by Eqs.~[\ref{delQQ+P}] and [\ref{delPQ+P}] which, somewhat surprisingly, remain precise even for large misalignment. The shift observed on the single detector, $\tilde \delta \vec{R}$, is the sum of the position shift $\tilde \delta \vec{r}$ and the position shift due to the shift in direction, $\vec{\delta \theta}\times\vec{L}$, where $\vec{L} = (0,0,z)$ is the vector parallel to the beam with length equal to the distance $z$ along the beam between the waist and the detector. Thus, the position shift of the centroid on the detector $\tilde{\delta} \vec{R}$ is given by \begin{align}\label{eq::alignFormulaCoarse}\nonumber \tilde \delta\vec{R} &= \left(\delta x + z \delta \theta_y,~\delta y - z \delta \theta_x \right) \mathrm{Re}[\left( {\rm \bf P}_A \right)_w] + \\ &\left(\frac{z}{z_R}\delta x - z_R \delta \theta_y,~\frac{z}{z_R} \delta y + z_R \delta \theta_x \right) \mathrm{Im}[\left( {\rm \bf P}_A \right)_w]. \end{align} The weak value is given by [\ref{eq::wvModified}]. The parameters $\tan \alpha$, $\eta$, $z$, and $z_R$ are found experimentally as in the previous section. The function [\ref{eq::alignFormulaCoarse}] corresponds to the trajectory of the beam centroid on the detector, as shown in Fig.~\ref{fig::trajAlignment}a. Even small misalignments which otherwise might be difficult to resolve become detectable due to the effect of weak amplification. Fig.~\ref{fig::trajAlignment}b shows the $x$- and $y$-components of $\tilde \delta\vec{R}$ as functions of $\varphi$. A least squares fit of this function provides the four unknown misalignment parameters $\delta \vec{r}$ and $\vec{\delta \theta}$.
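A minimal version of this fit can be sketched as follows (Python with \texttt{numpy} and \texttt{scipy}; the beam and setup parameters as well as the synthetic misalignment values are placeholders assumed for illustration, not the experimental ones). The model stacks the $x$- and $y$-components of Eq.~[\ref{eq::alignFormulaCoarse}] and fits the four misalignment parameters by least squares.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

tan_a, eta, z, z_R = 1.33, 0.99, 0.5, 1.2   # assumed known parameters

def weak_value(phi):
    return (1 + tan_a*eta*np.exp(-1j*phi)) / \
           (1 + tan_a**2 + 2*tan_a*eta*np.cos(phi))

def centroid(phi, dx, dy, dthx, dthy):
    # Stacked (x, y) centroid trajectory of Eq. [alignFormulaCoarse]
    w = weak_value(phi)
    Rx = (dx + z*dthy)*w.real + (z/z_R*dx - z_R*dthy)*w.imag
    Ry = (dy - z*dthx)*w.real + (z/z_R*dy + z_R*dthx)*w.imag
    return np.concatenate([Rx, Ry])

np.random.seed(0)
phis = np.linspace(0, 2*np.pi, 60)
true = (49e-6, 7e-6, 12.7e-6, 0.2e-6)   # synthetic misalignment (placeholders)
data = centroid(phis, *true) + np.random.normal(0, 1e-7, 2*len(phis))

popt, pcov = curve_fit(centroid, phis, data, p0=(0, 0, 0, 0))
print(popt)   # recovers (dx, dy, dthx, dthy) up to the noise level
\end{verbatim}
In this synthetic example the fit recovers the assumed parameters; in the experiment the same procedure is applied to the measured centroid trajectory.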
It is remarkable that a fit function with so few parameters accurately models the experimental results. For the data shown, the fit provided $\delta \vec{r} = (49 \pm 2,7 \pm 2) \,\mathrm{{\mu}m}$ and $\vec{\delta \theta} = (12.7 \pm 0.4,0.2 \pm 0.4)\,\mathrm{{\mu}rad}$. We have performed corrections according to these parameters and repeated our procedure, see Fig.~\ref{fig::trajAfter}. The stability of the centroid shows excellent alignment, and a subsequent fit procedure provides the parameters $\delta \vec{r} = (-1 \pm 2,2 \pm 2) \,\mathrm{{\mu}m}$ and $\vec{\delta \theta} = (0.2 \pm 0.4,-0.6 \pm 0.4)\,\mathrm{{\mu}rad}$. In our method for obtaining the misalignment parameters we rely on the knowledge of the beam parameters, i.e., the Rayleigh range $z_R$ and the longitudinal position $z$ of the detector relative to the waist. In some situations the reversed task might be of interest: if we control the misalignment parameters, we can also use our algorithm with the fit to obtain the beam parameters. In fact, the general idea of alignment using weak values was already used for aligning the interferometer demonstrating the past of a particle in nested interferometers \cite{Danan}; since then it has been significantly developed and improved \cite{DimaMSc,NimrodMSc}, until it reached the efficiency presented in the current work, where a single scan leads to a very good alignment. \section{Trace and Presence}\label{sec::discussion} A generic property of weak measurements is the possibility to perform several weak measurements on the same system. Thus, we can interpret our experiment as multiple weak measurements of the projection operator which all yield the same result, the weak value of the projection on the arm of the interferometer. However, our result also has a broader meaning with respect to the discussion of the local presence of quantum particles. A classical particle can either be in a particular location or not. The presence of a quantum particle in a certain location, however, is a subtle issue, and its analysis strongly depends on the adopted interpretation of quantum mechanics. To avoid controversial interpretational issues, we do not discuss ontological aspects of the concept of the presence of a particle and instead argue within the operational approach. When the wavefunction of a quantum particle is well localized in a particular location, the trace is specified in a unique way by the local interaction in that location, in analogy to the trace of a classical particle when it is present, see Eq.~[\ref{eq::iaSingle}]. Given that there are only local interactions in nature, there is no trace when the wavefunction vanishes. Similarly, there is no trace in a classical channel when the particle is absent. Scenarios in which the wavefunction does not vanish, but is also not fully localized at this location, are no longer understandable from a classical perspective. The universal relation between the trace in these scenarios and the trace of a fully localized particle, which we found in our work, can be considered as the basis of an operational concept of the presence of a particle. It goes beyond defining the particle as present when it leaves a trace and as not present when it does not \cite{past}. According to our operational approach, the ``presence'' of a pre- and postselected particle in the arm $A$ of an interferometer is defined according to the way it affected the external systems to which it was coupled and is quantified by the complex number $(\mathbf{P}_A)_w$.
This definition yields ``presence'' 1 when the forward evolving wavefunction of the particle is solely inside the arm $A$ independently of the postselection, but ``presence'' 1 can also occur when neither the pre- nor the postselected state is an eigenstate of the local projection on arm $A$. The ``presence'' 0, or ``absence'', of the particle is ensured when the forward evolving wavefunction of the particle vanishes in arm $A$, but this is not a necessary condition. The postselected particle might have been ``absent'' in arm $A$ (no effect on local external systems can be observed) even when the forward evolving wavefunction did not vanish there. Our concept provides a quantification and characterization of presence by describing the modification of the effects of the particle's interactions with external systems. It can be increased, decreased, or changed in a particular, well-defined way, and this change is the same for all local interactions: it is universal. \section{Conclusions} We have analyzed theoretically and experimentally the modifications of the effect of weak interactions on pre- and postselected particles. We have shown that there is a universal description of the modification of these couplings for all weak interactions, given by a single complex number, the weak value of the projection on the corresponding location. Our approach is based on expressing the effect on external systems in terms of the orthogonal components which appear due to the interactions. This allows us to formalize the meaning of the weak value without reference to a specific form of the coupling. The weak value not only modifies the shifts of expectation values, as usual, but also the relative amplitudes of the orthogonal components of all external quantum systems interacting with the particle. The experiment shows for three different couplings that each of the effects is modified in exactly the same way. This is shown not just for a few cases of pre- and postselected particles, but for a continuum of parameters with a large range of weak values of the projection. The approach derives the general expression [\ref{eq::wvModified}], which allows the concept of weak values to be applied to several couplings which are not necessarily weak. These findings enable one to understand seemingly complicated dependencies seen in experiments, for example \cite{Kofman2012}, and can facilitate multi-parameter precision measurements in the future. We define an operational paradigm for the presence of a pre- and postselected particle according to the trace it leaves. It is more intricate than the dichotomic concept of the presence of a classical particle, which can only be present or not. This complexity is surprising in light of the fact that in all scenarios the external systems are in a superposition or a mixture of the undisturbed state with a single particular orthogonal component. Our demonstration of the universality of the modification of the interactions led us to a novel alignment method. Its effectiveness relies on the unexpected robustness of the modification of Gaussian pointers, where the weak value expressions remain precise even for strong couplings. In our method a single phase scan suffices to recover all misalignment parameters from the analysis of the position of the centroid of a single output beam, clearly reducing the effort in an often tedious task, while at the same time potentially harnessing the benefits of weak value amplification. \acknow{This work has been supported in part by the Israel Science Foundation Grant No.
1311/14, the German-Israeli Foundation for Scientific Research and Development Grant No. I-1275-303.14, the DFG Beethoven 2 Project No. WE2541/7-1, and by the German excellence initiative Nanosystems Initiative Munich. J.D. acknowledges support by the International Max Planck Research School for Quantum Science and Technology (IMPRS-QST), L.K. acknowledges support by the international Ph.D. program ExQM from the Elite Network of Bavaria, and J.M. acknowledges support of the LMU research fellowship.} \showacknow{}
{ "timestamp": "2019-04-16T02:28:44", "yymm": "1804", "arxiv_id": "1804.05400", "language": "en", "url": "https://arxiv.org/abs/1804.05400" }
\section{Introduction} \label{intro} The paper is organized as follows. In the current section we introduce the necessary notation and state our main result (Theorem~\ref{Th:irred1}). Its proof is given in Section~3. Then, in Section~4, we introduce the notion of the logarithmic distance of irreducible Weierstrass polynomials and in Theorem~\ref{Th:irred3} restate the main results of the paper in terms of the logarithmic distance. At the end of the paper we show that the Abhyankar-Moh irreducibility criterion follows from Theorem~\ref{Th:irred1}. Throughout the paper $\mathbb{K}$ is an algebraically closed field of characteristic zero. We use the notation $\mathbb{K}[[X]]$ for the ring $\mathbb{K}[[X_1,\dots,X_d]]$ of formal power series in $d$ variables with coefficients in $\mathbb{K}$ and the notation $\mathbb{K}[[X^{1/n}]]$ for the ring $\mathbb{K}[[X_1^{1/n},\dots,X_d^{1/n}]]$. In the one-variable case the elements of this ring are called {\em Puiseux series}. We will use the multi-index notation $X^q:=X_1^{q_1}\cdots X_d^{q_d}$ for $q=(q_1,\dots,q_d)$. Let $f=Y^n+a_{n-1}(X)Y^{n-1}+\cdots+a_0(X)\in\mathbb{K}[[X]][Y]$ be a monic polynomial. Such a polynomial is called \textit{quasi-ordinary} if its discriminant equals $u(X)X^q$ with $u(0)\neq0$. We call $f$ \textit{a Weierstrass polynomial} if $a_i(0)=0$ for all $i=0,\dots, n-1$. The classical Abhyankar-Jung theorem (see \cite{Parusinski-Rond}) states that every quasi-ordinary polynomial $f\in\mathbb{K}[[X]][Y]$ has its roots in $\mathbb{K}[[X^{1/m}]]$ for some positive integer $m$. Hence one can factorize $f$ into the product $\prod_{i=1}^n(Y-\alpha_i)$, where $\alpha_i\in \mathbb{K}[[X^{1/m}]]$. We put $\mathrm{Zer} f=\{\alpha_1,\dots,\alpha_n\}$. Since the discriminant of a monic polynomial is a product of differences of its roots, we have $\alpha_i - \alpha_j=u_{ij}(X)X^{\lambda_{ij}}$ with $u_{ij}(0)\neq 0$. The $d$-tuple $d(\alpha_i,\alpha_j):=\lambda_{ij}$ of non-negative rational numbers will be called the \textit{contact between $\alpha_i$ and $\alpha_j$}. For irreducible $f$ the contacts $d(\alpha,\alpha')$ for $\alpha,\alpha'\in\mathrm{Zer} f$, $\alpha\neq\alpha'$, are called the \textit{characteristic exponents} of~$f$. Let us introduce a partial order in the set $\mathbb{Q}_{\geq0}^d$: $q\leq q'$ if and only if $q'-q\in \mathbb{Q}_{\geq0}^d$. With respect to this order the characteristic exponents can be arranged into the increasing sequence $(h_1,\dots,h_s)$ (see \cite[Lemma~5.6]{Lipman}). We call this sequence the \textit{characteristic} of $f$ and denote it by ${\rm Char}(f)$. With the sequence of characteristic exponents we associate the increasing sequence of lattices $M_0\subset M_1\subset \dots\subset M_s$ defined as follows: $M_0=\mathbb{Z}^d$ and $M_i=\mathbb{Z}^d+\mathbb{Z} h_1+\cdots +\mathbb{Z} h_i$ for $i=1,\dots,s$. We set $n_i=[M_i:M_{i-1}]$ for $i=1,\dots,s$, $n_{s+1}=1$ and $e_i=n_{i+1}\cdots n_{s+1}$ for $i=0,\dots,s$. Then $\deg f=n_1\cdots n_s$ (see \cite[Remark~2.7]{GP}). Finally we set \begin{equation}\label{Eq1} q_{i}=\sum_{j=1}^{i} (e_{j-1}-e_j)h_j +e_{i} h_i \end{equation} for $i=1,\dots,s$. If $f,g\in \mathbb{K}[[X]][Y]$, then the resultant of these polynomials is denoted by $\mathrm{Res}(f,g)$. \medskip We can now formulate our main result. \begin{Theorem}\label{Th:irred1} Let $f\in \mathbb{K}[[X]][Y]$ be a quasi-ordinary irreducible polynomial of characteristic $(h_1,\dots,h_s)$ and let $g\in \mathbb{K}[[X]][Y]$ be a Weierstrass polynomial of degree $\leq n_1\cdots n_k$, where $1\leq k\leq s$.
If all monomials appearing in $\mathrm{Res}(f,g)$ have exponents greater than $(\deg g)q_k$ then \begin{itemize} \item[{\rm (i)}] $g$ is irreducible and quasi-ordinary of degree $n_1\cdots n_k$ and characteristic $(h_1,\dots, h_k)$; \item[{\rm (ii)}] for every $\gamma\in {\rm Zer}\, g$ there exists $\alpha\in {\rm Zer}\, f$ such that $\gamma-\alpha=\sum_{h>h_k}c_{h}X^{h}$. \end{itemize} Moreover, if $X^{(\deg g)q_{k+1}}$ divides $\mathrm{Res}(f,g)$ then \begin{itemize} \item[{\rm (iii)}] $\mathrm{Res}(f,g) = u(X)\, X^{(\deg g)q_{k+1}}$, where $u(0)\neq0$; \item[{\rm (iv)}] for every $\gamma\in {\rm Zer}\, g$ there exists $\alpha \in {\rm Zer}\, f$ such that $\gamma-\alpha=c_{h_{k+1}}X^{h_{k+1}}+\sum_{h>h_{k+1}}c_{h}X^{h}$. \end{itemize} \end{Theorem} \begin{Remark} {\rm In point (iv) of Theorem~\ref{Th:irred1}, the monomial $X^{h_{k+1}}$ does not appear in the power series $\gamma$.} \end{Remark} \begin{Example} {\rm Let $f=Y^4-2X_1^3X_2^2Y^2-4X_1^5X_2^4Y-X_1^7X_2^6+X_1^6X_2^4$. The polynomial $f$ is quasi-ordinary and irreducible in $\mathbb{C}[[X_1,X_2]][Y]$ with the roots \begin{eqnarray*} \alpha_1&=&X_1^{3/2}X_2+X_1^{7/4}X_2^{3/2} \\ \alpha_2&=&X_1^{3/2}X_2-X_1^{7/4}X_2^{3/2} \\ \alpha_3&=&-X_1^{3/2}X_2+\sqrt{-1}X_1^{7/4}X_2^{3/2} \\ \alpha_4&=&-X_1^{3/2}X_2-\sqrt{-1}X_1^{7/4}X_2^{3/2} \end{eqnarray*} and characteristic exponents $h_1=(\frac32,1)$ and $h_2=(\frac74,\frac32)$. Let $g=(Y^2-X_1^3X_2^2)^2-4X_1^5X_2^4Y$. Then $\mathrm{Res}(f,g)=X_1^{28}X_2^{24}$. We have $X^{(\deg g)q_2}=X_1^{26}X_2^{20}$, so according to Theorem~\ref{Th:irred1}, the polynomial $g$ is irreducible and quasi-ordinary of characteristic $(h_1,h_2)$.} \end{Example} \section{Auxiliary results} For $g=\sum_{a}c_a X^a \in\mathbb{K}[[X^{1/m}]]$ we define the {\it Newton polytope} $\Delta(g)$ as the convex hull of the set $\bigcup_{c_a\ne0}(a+\mathbb{R}_{\geq0}^d)$. The Newton polytope $\Delta(f)$ of a polynomial $f\in\mathbb{K}[[X^{1/m}]][Y]$ is the Newton polytope of $f$ treated as an element of the ring $\mathbb{K}[[X_1^{1/m},\dots,X_d^{1/m},Y^{1/m}]]$. In the two-variable case Newton polytopes are called {\em Newton polygons}. Let $T$ be a single variable. The order of a fractional power series $\gamma\in \mathbb{K}[[T^{1/m}]]$ will be denoted ${\rm ord}\, \gamma$. Note that for all $\alpha,\beta,\gamma\in \mathbb{K}[[T^{1/m}]]$ we have ${\rm ord}(\alpha-\beta)\geq \min\{{\rm ord}(\alpha-\gamma),{\rm ord}(\gamma-\beta)\}$. We call this property the {\it strong triangle inequality}. \begin{Lemma}\label{L:comp} Let $g$, $\tilde g\in\mathbb{K}[[T^{1/m}]] [Y]$ be Weierstrass polynomials such that $\Delta(g)=\Delta(\tilde g)$. Then $\{ \mathrm{ord}\, \gamma: \gamma \in \mathrm{Zer}\, g\}=\{ \mathrm{ord}\, \gamma: \gamma \in \mathrm{Zer}\, \tilde g\}$. \end{Lemma} \begin{proof} The Newton polygon of the product $g=\prod_{i=1}^{\deg g}(Y-\gamma_i(T))$ is the Min\-kow\-ski sum of the Newton polygons of its factors, and the shape of the Newton polygon of each factor $Y-\gamma_i(T)$ determines the order of $\gamma_i(T)$. For a more detailed proof see \cite[Theorem~2.1]{Ploski1}. \end{proof} Let $\mathbb{Q}_+$ be the set of positive rational numbers. For a Newton polytope $\Delta\subset \mathbb{R}_{\geq 0}^d$ and $c\in\mathbb{Q}_+^d$ we define the {\em face} $\Delta^c:=\{v\in\Delta:\langle c,v\rangle=\min_{w\in\Delta}\langle c,w\rangle\}$. We will say that a condition depending on $c\in\mathbb{Q}_{+}^d$ is satisfied for generic $c$ if it holds in an open and dense subset of $\mathbb{Q}_{+}^d$.
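Before turning to the auxiliary results, let us spell out how the numerical data of the example above follow from the definitions of the Introduction (a routine verification added for the reader's convenience). Since $h_1=\frac{1}{2}(3,2)$ and $h_2=\frac{1}{4}(7,6)$, the lattices are $M_0=\mathbb{Z}^2$, $M_1=\mathbb{Z}^2+\mathbb{Z} h_1$, $M_2=M_1+\mathbb{Z} h_2$, with $n_1=[M_1:M_0]=2$ and $n_2=[M_2:M_1]=2$, so that $e_0=4$, $e_1=2$, $e_2=1$ and $\deg f=n_1n_2=4$. By~(\ref{Eq1}), $$q_2=(e_0-e_1)h_1+(e_1-e_2)h_2+e_2h_2=2h_1+2h_2=\Bigl(\frac{13}{2},5\Bigr),$$ hence $(\deg g)q_2=(26,20)$, and the only monomial of $\mathrm{Res}(f,g)=X_1^{28}X_2^{24}$ has exponent $(28,24)>(26,20)$, so the hypothesis of Theorem~\ref{Th:irred1} is satisfied with $k=s=2$. The resultant itself is easily confirmed with a computer algebra system; a minimal sketch (ours, not part of the original text) using SymPy:

\begin{verbatim}
from sympy import symbols, factor, resultant

X1, X2, Y = symbols('X1 X2 Y')
f = (Y**4 - 2*X1**3*X2**2*Y**2 - 4*X1**5*X2**4*Y
     - X1**7*X2**6 + X1**6*X2**4)
g = (Y**2 - X1**3*X2**2)**2 - 4*X1**5*X2**4*Y
print(factor(resultant(f, g, Y)))   # -> X1**28*X2**24
\end{verbatim}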
\begin{Lemma}\label{L:polygon} Let $\Delta$ be the Newton polytope of some nonzero fractional power series~$\gamma\in \mathbb{K}[[X^{1/m}]]$. Then for generic $c\in \mathbb{Q}_{+}^d$ the face $\Delta^c$ is a vertex of $\Delta$. \end{Lemma} \begin{proof} Let $V$ be the (finite) set of vertices of $\Delta$. Then the set $$ U=\{c\in \mathbb{Q}_{+}^d: \forall v,w\in V (v\neq w \Rightarrow \langle c, v\rangle \neq \langle c, w\rangle) \} $$ is open and dense in $\mathbb{Q}_{+}^d$ and for every $c\in U$ there is exactly one vertex $v$ of $\Delta$ such that $\langle c, v\rangle = \min \{\langle c, w\rangle: w\in V \}$. \end{proof} With every $c=(c_1,\dots,c_d)\in \mathbb{Q}_{+}^d$ we associate the monomial substitution $(X_1,\dots,X_d)=(T^{c_1},\dots,T^{c_d})$, written $X=T^c$. Applying this substitution to~$f=f(X,Y) \in \mathbb{K}[[X^{1/m}]][Y]$ we define $f^{[c]} := f(T^c,Y)\in \mathbb{K}[[T^{1/Nm}]][Y]$, where $N$ is a common denominator of the coordinates of~$c$. \begin{Lemma}\label{L:comp1} Let $\gamma_1$, $\gamma_2 \in \mathbb{K}[[X^{1/m}]]$ be nonzero fractional power series. If $\mathrm{ord}\,\gamma_1^{[c]} = \mathrm{ord}\, \gamma_2^{[c]}$ for generic $c\in\mathbb{Q}_{+}^d$, then $\Delta(\gamma_1)=\Delta(\gamma_2)$. \label{cor:delta} \end{Lemma} \begin{proof} Suppose that $\Delta(\gamma_1)\neq \Delta(\gamma_2)$. Without loss of generality we may assume that $\Delta(\gamma_1)\setminus \Delta(\gamma_2)$ is nonempty. Since $\Delta(\gamma_2)$ is convex and closed, for any $v\in \Delta(\gamma_1)\setminus \Delta(\gamma_2)$ there exists $c\in\mathbb{R}_{+}^d$ such that $\langle c,v\rangle < \inf_{w\in \Delta(\gamma_2)}\langle c,w\rangle$. Then by Lemma~\ref{L:polygon} there is a vertex $v_0$ of $\Delta(\gamma_1)$ and an open set $U\subset \mathbb{Q}_+^d$ such that $\Delta(\gamma_1)^c=\{v_0\}$ and $\langle c,v_0\rangle < \inf_{w\in \Delta(\gamma_2)}\langle c,w\rangle$ for all $c\in U$. We get $\mathrm{ord}\,\gamma_1^{[c]}=\langle c, v_0 \rangle$ because in the fractional power series $\gamma_1^{[c]}$ there is no cancellation of the terms of order $\langle c, v_0 \rangle$, and $\mathrm{ord}\,\gamma_2^{[c]}>\langle c, v_0 \rangle$ (since all monomials appearing in $\gamma_2^{[c]}$ have orders bigger than $\langle c, v_0 \rangle$). Thus $\mathrm{ord}\,\gamma_1^{[c]} < \mathrm{ord}\, \gamma_2^{[c]}$ for $c\in U$. \end{proof} \begin{Lemma}\label{Wn1} Let $f\in\mathbb{K}[[X^{1/m}]][Y]$ be a nonzero polynomial. Given $c\in \mathbb{Q}_{+}^d$ we define the linear mapping $L_c:\mathbb{R}^d \times \mathbb{R}\to \mathbb{R}^2$, $L_{c}(x,y)=(\langle c, x \rangle,y)$. Then for generic $c\in\mathbb{Q}_{+}^d$ $$\Delta( f^{[c]})=L_{c}(\Delta(f)).$$ \end{Lemma} \begin{proof} Write $f=a_n(X)Y^n+a_{n-1}(X)Y^{n-1}+\cdots+a_0(X)$ and $ f^{[c]}=\bar a_n(T)Y^n+\bar a_{n-1}(T)Y^{n-1}+\cdots+\bar a_0(T)$. By Lemma~\ref{L:polygon}, for generic $c\in \mathbb{Q}_{+}^d$ and for any nonzero $a_i(X)$ the face $\Delta(a_i(X))^{c}$ is a vertex of $\Delta(a_i(X))$. Denote this vertex by $v_i$. Then $\mathrm{ord}\, \bar a_i(T)=\langle c, v_i \rangle$ because in the fractional power series $\bar a_i(T)=a_i(T^{c})$ there is no cancellation of the terms of the lowest order. Thus the vertices of $L_c(\Delta(f))$ belong to $\Delta( f^{[c]})$, which gives the desired equality. \end{proof} \begin{Remark}\label{Galois} {\rm Let $K$ (respectively $L$) be the field of fractions of the ring $\mathbb{K}[[X]]$ (respectively $\mathbb{K}[[X^{1/m}]]$).
Denote by ${\rm Gal}(L/K)$ the Galois group of the extension $K<L$. Then $L$ is normal over $K$ (as the splitting field of the family of polynomials $\{Y^m-X_i\in\mathbb{K}[[X]][Y]:i=1,\dots,d\}$) and every $\sigma\in {\rm Gal}(L/K)$ is given by $$ \sigma\Bigl( \sum_{a\in\mathbb{N}^d} c_a X^{a/m}\Bigr)= \sum_{a\in\mathbb{N}^d}\underline{\varepsilon}^{a}c_a X^{a/m} $$ for some $\underline{\varepsilon} =(\varepsilon_1,\dots,\varepsilon_d)$ with $\varepsilon_l^m=1$. In particular, $\Delta(\sigma(\gamma))=\Delta(\gamma)$ for all nonzero $\gamma\in \mathbb{K}[[X^{1/m}]]$.} \end{Remark} For $\alpha\in \mathbb{K}[[T^{1/m}]]$ and a finite set $A\subset \mathbb{K}[[T^{1/m}]]$ we define the {\it contact between} $\alpha$ and $A$ as ${\rm cont}(A,\alpha):=\max_{\gamma\in A} {\rm ord}(\alpha-\gamma)$. From now on, up to the end of this section, we work under the assumption that $f\in \mathbb{K}[[X]][Y]$ is a quasi-ordinary irreducible polynomial of characteristic $(h_1,\dots,h_s)$ and ${\rm Zer}\, f=\{\alpha_1,\dots,\alpha_n\}$ is the set of its roots. \begin{Theorem}\label{T21} Let $g\in \mathbb{K}[[X]][Y]$ be a Weierstrass polynomial. If $c\in\mathbb{Q}_{+}^d$ is generic, then for any $\beta,\beta'\in\mathrm{Zer} f^{[c]}$ one has ${\rm cont}(\mathrm{Zer}\, g^{[c]},\beta)= {\rm cont}(\mathrm{Zer}\, g^{[c]},\beta')$. \end{Theorem} \begin{proof} For brevity we will write $\overline{p}$ instead of ${p}^{[c]}$ for every $p\in \mathbb{K}[[X^{1/m}]][Y]$. Since $f=\prod_{i=1}^n(Y-\alpha_i)$, we get $\bar f = \prod_{i=1}^n(Y-\bar\alpha_i)$ and consequently $\beta= \bar\alpha_i$, $\beta'=\bar\alpha_j$ for some $\alpha_i, \alpha_j\in\mathrm{Zer} f$. The roots $\alpha_i$, $\alpha_j$ are conjugate by a Galois automorphism. Hence by Remark~\ref{Galois} the Newton polytopes of $g_i=g(Y+\alpha_i)$ and $g_j=g(Y+\alpha_j)$ are equal. By Lemma~\ref{Wn1}, the Newton polygons of $\bar g_i$ and $\bar g_j$ are also equal. If $\mathrm{Zer}\, \bar g=\{\gamma_1,\dots,\gamma_k\}$ then $\mathrm{Zer}\, \bar g_i = \{\gamma_1-\beta,\dots,\gamma_k-\beta\}$ and $\mathrm{Zer}\, \bar g_j= \{\gamma_1-\beta' ,\dots,\gamma_k-\beta'\}$. Hence it follows from Lemma~\ref{L:comp} that ${\rm cont}(\mathrm{Zer}\,\bar g,\beta)={\rm cont}(\mathrm{Zer}\,\bar g,\beta')$. \end{proof} If $A$ is any set, then $\# A$ denotes the cardinality of $A$. \begin{Lemma}[Contact structure of $\mathrm{Zer}\,f$]\label{contact} For every $\tilde\alpha \in\mathrm{Zer} f$ and $i\in\{1,\dots,s\}$ we have $\#\{\alpha \in {\rm Zer}\, f:d(\alpha,\tilde\alpha)> h_i\}=e_i$ and $\# \{\alpha\in {\rm Zer}\, f: d(\alpha,\tilde\alpha)=h_i\}=e_{i-1}-e_i.$ \end{Lemma} \begin{proof} See the proof of Proposition~3.1 from \cite{GP}. \end{proof} Fix $c\in\mathbb{Q}_{+}^d$ and for every $w\in\mathbb{Q}^d$ denote $\overline{w}:=\langle c,w\rangle$. We set $\overline{h_0}=0$, $\overline{h_{s+1}}=+\infty$, $\overline{q_0}=0$ and define a continuous function $\phi_c:[0,+\infty)\to[0,+\infty)$ such that \begin{itemize} \item[{\rm (i)}] $\phi_c(\overline{h_i})=\overline{q_i}$ for $i=0,\dots, s$; \item[{\rm (ii)}] $\phi_c$ is linear in each interval $(\overline{h_i},\overline{h_{i+1}})$ for $i=0,\dots,s$; \item[{\rm (iii)}] the graph of $\phi_c$ has slope 1 over the interval $(\overline{h_s},+\infty)$. \end{itemize} \begin{Lemma}\label{L:ineq} The function $\phi_c:[0,+\infty)\to[0,+\infty)$ is increasing. If $\bar\gamma$ is a~Puiseux series and ${\rm cont}(\mathrm{Zer} f^{[c]},\bar\gamma)=h$ then $\mathrm{ord}\, f^{[c]}(\bar\gamma)=\phi_c(h)$.
\end{Lemma} \begin{proof} By equality~(\ref{Eq1}) we get $\overline{q_{i+1}}=\overline{q_i} + e_{i}(\overline{h_{i+1}}-\overline{h_{i}})$ for $i=0,\dots, s-1$, hence the numbers $\overline{q_i}$ form an increasing sequence. Let $h={\rm cont}(\mathrm{Zer} f^{[c]},\overline{\gamma} )={\rm ord}(\overline{\gamma}-\overline{\alpha})$ for some $\alpha\in \mathrm{Zer} f$. Assume that $h\in(\overline{h_r},\overline{h_{r+1}}]$. Then, by the strong triangle inequality and Lemma~\ref{contact}, we get \begin{eqnarray*} \mathrm{ord}\, f^{[c]}(\overline{\gamma}) &=& \sum_{j=1}^n \mathrm{ord}(\overline{\gamma}-\overline{\alpha_j}) \\ &=& \sum_{\mathrm{ord}(\overline{\alpha_j}-\overline{\alpha})\leq \overline{h_r}} \mathrm{ord}(\overline{\gamma}-\overline{\alpha_j}) + \sum_{\mathrm{ord}(\overline{\alpha_j}-\overline{\alpha})>\overline{h_r}} \mathrm{ord}(\overline{\gamma}-\overline{\alpha_j}) \\ &=& \sum_{\mathrm{ord}(\overline{\alpha_j}-\overline{\alpha})\leq \overline{h_r}} \mathrm{ord}(\overline{\alpha}-\overline{\alpha_j}) + \sum_{\mathrm{ord}(\overline{\alpha_j}-\overline{\alpha})>\overline{h_r}} \mathrm{ord}(\overline{\gamma}-\overline{\alpha}) \\ &=& \sum_{i=1}^{r} (e_{i-1}-e_i)\overline{h_i} + e_{r}h= \overline{q_{r}}+e_r(h-\overline{h_r}). \end{eqnarray*} \end{proof} Let $\alpha \in \mathrm{Zer} f$ and let $h_r$ be a characteristic exponent of $f$. By definition, the $h_r$--{\it truncation} of $\alpha$ is the fractional power series $\mathrm{trunc}_r( \alpha)$ obtained from $\alpha$ by omitting all terms $c_hX^h$ with $h\geq h_r$. We denote by $f_r$ the minimal polynomial of $\mathrm{trunc}_r(\alpha)$ over the field $K$. As we will see in the lemma below, this polynomial does not depend on the choice of $\alpha$. \begin{Lemma} $\quad$ \begin{itemize} \item[{\rm (i)}] ${\rm Zer}\, f_r=\{\,\mathrm{trunc}_r(\alpha_j):j=1,\dots,n\,\}$; \item[{\rm (ii)}] $f_r \in \mathbb{K}[[X]][Y]$ is monic, irreducible and quasi-ordinary; \item[{\rm (iii)}] $\deg f_r=n_1\cdots n_{r-1}$; \item[{\rm (iv)}] ${\rm Char}(f_r)=\{h_1,\dots,h_{r-1}\}$. \end{itemize} \end{Lemma} \begin{proof} Since $\mathrm{trunc}_r(\alpha)\in \mathbb{K}[[X^{1/m}]]$ and $L$ is normal over $K$, all the roots of the~polynomial $ f_{r}$ are elements of $L$. It is easy to see that $\sigma(\mathrm{trunc}_r(\alpha))=\mathrm{trunc}_r(\sigma(\alpha))$ for every $\sigma\in {\rm Gal}(L/K)$. The polynomial $f$ is irreducible over the field $K$, so ${\rm Gal}(L/K)$ acts transitively on the set $\mathrm{Zer} f$ and hence on the set $\mathrm{Zer} f_r$ as well. This implies~(i) and (ii). If $d(\alpha_i,\alpha_j)\leq h_{r-1}$, then $d(\alpha_i,\alpha_j)= d({\rm trunc}_r(\alpha_i),{\rm trunc}_r(\alpha_j))$ and if $d(\alpha_i,\alpha_j) \geq h_r$, then ${\rm trunc}_r(\alpha_i)= {\rm trunc}_r(\alpha_j)$. Thus (iv) holds true and, as a consequence, we also obtain (iii). \end{proof} \section{Proof of Theorem \ref{Th:irred1}} The proof will be organized as a sequence of claims. We denote the roots of $f$ (respectively the roots of $f_{k+1}$) by $\alpha_1$, \dots, $\alpha_n$ (respectively by $\beta_1$, \dots, $\beta_l$, where $l=n_1\cdots n_k$). We will use the bar notation for polynomials and power series after the monomial substitution $X=T^c$. Let $c\in\mathbb{Q}_{+}^d$ be generic in the sense that the conclusion of Theorem~\ref{T21} for the~polynomial $g$ and any $\overline{\beta}$, $\overline{\beta'}\in \mathrm{Zer} \overline{f_{k+1}}$ holds true.
\medskip\noindent \textbf{Claim 1.} For every $\overline{\beta}\in {\rm Zer}\, \overline{f_{k+1}}$ there exists exactly one $\gamma\in {\rm Zer}\,\overline{g}$ such that $\mathrm{ord}(\gamma-\overline{\beta})> \bar h_k$. \begin{proof} By the assumptions of the theorem we get $\mathrm{ord}\, \mathrm{Res}(\overline{f}, \overline{g})> (\deg g)\bar q_k$. If $\mathrm{cont}(\mathrm{Zer}\overline{f},\gamma) \leq \bar h_k$ for all the roots $\gamma$ of $\overline{g}$, then by Lemma~\ref{L:ineq} we obtain $\mathrm{ord}\, \mathrm{Res}(\overline{f},\overline{g}) = \sum_{\gamma\in\mathrm{Zer} \overline{g}} \mathrm{ord}\, \overline{f}(\gamma) \leq (\deg \overline{g})\bar q_k$, a contradiction. It follows that $\mathrm{ord}(\overline{\alpha}-\gamma)> \overline{h_k}$ for some $\gamma\in \mathrm{Zer}\, \overline{g}$ and $\alpha\in\mathrm{Zer} f$. Let $\beta=\mathrm{trunc}_{k+1}(\alpha)$. Since $\mathrm{ord} (\overline{\beta}-\overline{\alpha})>\overline{h_k}$, we get $\mathrm{ord}(\overline{\beta}-\gamma)> \overline{h_k}$ and consequently $\mathrm{cont}(\mathrm{Zer}\,\overline{g},\overline{\beta}) > \overline{h_k}$. It follows from Theorem~\ref{T21} that for every $\overline{\beta'}\in \mathrm{Zer} \overline{f_{k+1}}$ there exists $\gamma\in {\rm Zer}\,\overline{g}$ such that $\mathrm{ord}(\gamma-\overline{\beta'})>\overline{h_k}$. Take any $\overline{\beta},\overline{\beta'}\in\mathrm{Zer}\overline{f_{k+1}}$ and $\gamma,\gamma'\in {\rm Zer}\,\overline{g}$ such that $\mathrm{ord}(\gamma-\overline{\beta})>\overline{h_k}$ and $\mathrm{ord}(\gamma'-\overline{\beta'})>\overline{h_k}$. Assume that $\overline{\beta}\neq \overline{\beta'}$. Then $\gamma\neq\gamma'$. Indeed, if $\gamma=\gamma'$ then $\mathrm{ord}(\overline{\beta}-\overline{\beta'})>\overline{h_k}$ and we arrive at a contradiction. From the above and the assumption $\deg g\leq n_1\cdots n_k=\deg f_{k+1}$ we obtain that $\overline{g}$ has exactly $n_1\cdots n_k$ roots, which completes the proof. \end{proof} Using Claim 1 we may assume, without loss of generality, that $\mathrm{Zer}\,\overline{g}=\{\gamma_1,\dots,\gamma_l\}$, where $\mathrm{ord}(\gamma_i-\overline{\beta_i})> \overline{h_k}$ for all $1\leq i\leq l$. It follows immediately from the strong triangle inequality that \begin{equation}\label{orders} \mathrm{ord}(\gamma_i - \gamma_j)=\mathrm{ord}(\overline{\beta_i}-\overline{\beta_j}) \quad\mbox{for all $1\leq i<j\leq l$}. \end{equation} Hence the orders of the discriminants of the polynomials $\overline{g}$ and $\overline{f_{k+1}}$ are equal. Therefore, by Lemma~\ref{cor:delta}, the Newton polytopes of the discriminants of $g$ and $f_{k+1}$ are equal too, so we conclude that $g$ is quasi-ordinary. \medskip\noindent \textbf{Claim 2.} Let $v$ be a vertex of $\Delta(\gamma - \gamma')$ for $\gamma, \gamma' \in\mathrm{Zer}\, g$. Then $v\in\{h_1,\dots,h_k\}$. \begin{proof} Since $g$ is quasi-ordinary, $\Delta(\gamma-\gamma')$ has only one vertex. Thus for every $c\in\mathbb{Q}_{+}^d$ we have $\mathrm{ord}(\gamma^{[c]}-\gamma'^{[c]})=\langle c,v\rangle$. It follows from~(\ref{orders}) that $\langle c,v\rangle\in \{\langle c,h_i\rangle: 1\leq i\leq k\}$. Observe that if $v\ne h_i$, then the set $\{c'\in\mathbb{Q}_{+}^d: \langle c',v\rangle=\langle c',h_i\rangle \}$ is contained in a finite union of hyperplanes, hence is nowhere dense. This implies that $v=h_i$ for some $i\in\{1,\dots,k\}$, since $c$ is generic.
\end{proof} \noindent \textbf{Claim 3.} Let $v$ be a vertex of $\Delta(\gamma-\beta)$ for $\gamma\in \mathrm{Zer}\,g$ and $\beta\in\mathrm{Zer} f_{k+1}$. Then $v\in\{h_1,\dots,h_k\}$ or $v>h_k$. \begin{proof} By the strong triangle inequality and~(\ref{orders}) we obtain $\mathrm{ord}(\gamma^{[c]}-\beta^{[c]}) \in \{\langle c,h_i\rangle: 1\leq i\leq k\}$ or $\mathrm{ord}(\gamma^{[c]}-\beta^{[c]})>\langle c,h_k\rangle$. Let $v$ be a vertex of $\Delta(\gamma-\beta)$ which is not in $\Delta(X^{h_k})$. Then there exists an open set $U\subset \mathbb{Q}_{+}^d$ such that the face $\Delta(\gamma-\beta)^c=\{v\}$ and $\langle c,v\rangle < \langle c,h_k\rangle$ for all $c\in U$. Hence we have $\mathrm{ord}(\gamma^{[c]}-\beta^{[c]})=\langle c,v\rangle<\langle c,h_k\rangle$. Using the same argument as in the proof of Claim~2, we conclude that $v\in \{h_1,\dots, h_{k}\}$, which completes the proof. \end{proof} \noindent \textbf{Claim 4.} For every $\beta\in \mathrm{Zer} f_{k+1}$ there exists exactly one $\gamma\in \mathrm{Zer}\,g$ such that $\Delta(\gamma-\beta)\subsetneq \Delta(X^{h_k})$. \begin{proof} Let $\Delta=\Delta(\gamma-\beta)$ for $\gamma\in \mathrm{Zer}\,g$ and $\beta\in\mathrm{Zer} f_{k+1}$. Then by Claim 3 two cases are possible: either some $h_i\in \{h_1,\dots, h_k\}$ is a vertex of $\Delta$ and $\mathrm{ord}(\overline{\gamma}-\overline{\beta})=\overline{h_i}$, or $\Delta\subsetneq \Delta(X^{h_k})$ and $\mathrm{ord}(\overline{\gamma}-\overline{\beta})>\overline{h_k}$. To finish the proof it is enough to use Claim~1. \end{proof} We will show that ${\rm Gal}(L/K)$ acts transitively on the set $\mathrm{Zer}\,g$. Indeed, take arbitrary $\gamma, \gamma'\in \mathrm{Zer}\,g$. By Claim~4 there exist unique $\beta,\beta'\in \mathrm{Zer} f_{k+1}$ such that $\Delta(\gamma-\beta)\subsetneq \Delta(X^{h_k})$ and $\Delta(\gamma'-\beta')\subsetneq \Delta(X^{h_k})$. Take $\sigma\in {\rm Gal}(L/K)$ such that $\sigma(\beta)=\beta'$. Then by Remark~\ref{Galois} we have $\Delta(\sigma(\gamma)-\beta')\subsetneq \Delta(X^{h_k})$, hence $\sigma(\gamma)=\gamma'$. It follows from the above that the polynomial $g$ is irreducible. From~(\ref{orders}) and Claim~2 we deduce that $(h_1,\dots, h_k)$ is the characteristic of $g$. Point~(ii) of the theorem follows directly from Claim~4. Now we prove statements (iii) and (iv) of Theorem~\ref{Th:irred1}. Assume that $X^{(\deg g)q_{k+1}}$ divides $\mathrm{Res}(f,g)$. If the monomial $X^{(\deg g)q_{k+1}}$ does not appear in $\mathrm{Res}(f,g)$ then by the first part of the theorem we obtain $\deg g = n_1\cdots n_{k+1}$, which contradicts the assumption $\deg g \leq n_1\cdots n_k$. Thus $\mathrm{Res}(f,g) = u(X)\, X^{(\deg g)q_{k+1}}$, where $u(0)\neq0$. By Claim~4, for every $\gamma\in \mathrm{Zer}\,g$ there exists $\beta\in \mathrm{Zer} f_{k+1}$ such that $\Delta(\gamma-\beta)\subsetneq \Delta(X^{h_k})$. Suppose that the Newton polytope $\Delta=\Delta(\gamma-\beta)$ has a vertex which is not contained in $\Delta(X^{h_{k+1}})$. Then there exists $c\in \mathbb{Q}_{+}^d$ such that $\Delta^c=\{v\}$ and $\langle c,v\rangle<\langle c,h_{k+1}\rangle$. Thus $\mathrm{cont}(\mathrm{Zer} \overline{f},\overline{\gamma})<\overline{h_{k+1}}$. Since for any $\sigma\in {\rm Gal}(L/K)$ we have $\Delta(\gamma-\beta)=\Delta(\sigma(\gamma)-\sigma(\beta))$, the same is true for any $\gamma'\in \mathrm{Zer}\, g$.
Then by Lemma~\ref{L:ineq} we get $\mathrm{ord}(\mathrm{Res}(\overline{f},\overline{g}))<(\deg g)\overline{q_{k+1}}$ and we arrive at a contradiction. We proved that $\Delta(\gamma-\beta)\subseteq \Delta(X^{h_{k+1}})$, which gives (iv) of Theorem~\ref{Th:irred1}. \section{Logarithmic distance} For any irreducible Weierstrass polynomials $f$, $g\in \mathbb{K}[[X]][Y]$ we define the Newton polytope $\mathrm{cont}_A(f,g)=\frac{1}{(\deg f)(\deg g)}\Delta(\mathrm{Res}(f,g))$, called the {\it logarithmic distance} between $f$ and $g$. We introduce the following partial order in the set of Newton polytopes: $\Delta_1\geq \Delta_2$ if and only if $\Delta_1\subset \Delta_2$. For any irreducible quasi-ordinary Weierstrass polynomials $f$, $g$, $h$ the strong triangle inequality $\mathrm{cont}_A(f,g)\geq \inf\{\mathrm{cont}_A(f,h),\mathrm{cont}_A(h,g)\}$ holds true, where $\inf\{A,B\}$ denotes the Newton polytope spanned by the union of $A$ and $B$. Let us prove it now. For $\alpha =\sum_{a\in\mathbb{Q}^d} c_a X^a\in\mathbb{K}[[X^{1/m}]]$ and $\omega\in\mathbb{R}_{+}^d$ we define a {\it weighted order} ${\rm ord}_{\omega}(\alpha):=\min\{\langle \omega,a\rangle:c_a\ne 0\}$ and a {\it weighted contact} between quasi-ordinary polynomials $f,g\in\mathbb{K}[[X]][Y]$ as follows: $${\rm cont}_{\omega}(f,g):=\frac{1}{\deg f \deg g}\sum_{\substack{\alpha\in {\rm Zer}\, f\\ \beta\in {\rm Zer}\, g}} {\rm ord}_{\omega} (\alpha-\beta)=\frac{1}{\deg f \deg g} l(\omega,\Delta(\mathrm{Res}(f,g))),$$ where $l(\omega,\Delta(\mathrm{Res}(f,g))):=\min\{\langle \omega,a\rangle:a\in\Delta(\mathrm{Res}(f,g))\}$. For any irreducible quasi-ordinary polynomials $f,g\in \mathbb{K}[[X]][Y]$ and for any $\gamma,\gamma'\in {\rm Zer}\, g$ we have ${\rm ord}_{\omega} f(\gamma)={\rm ord}_{\omega} f(\gamma')$. Therefore, using the same method as in the proof of \cite[Proposition 2.2]{Pl}, we get for any irreducible quasi-ordinary polynomials $f$, $g$, $h \in \mathbb{K}[[X]][Y]$ the strong triangle inequality ${\rm cont}_{\omega}(f,g)\geq \min\{{\rm cont}_{\omega}(f,h),{\rm cont}_{\omega}(h,g)\}$. \begin{Lemma} Assume that $\Delta_1,\Delta_2,\Delta_3$ are Newton polytopes. If \begin{equation}l(\omega,\Delta_1)\geq \min\{l(\omega,\Delta_2), l(\omega,\Delta_3)\}\label{e1}\end{equation} for all $\omega \in \mathbb{R}_{+}^d$, then $\Delta_1\geq \inf\{\Delta_2,\Delta_3\}$. \end{Lemma} \begin{proof} Suppose that the inequality $\Delta_1\geq \inf\{\Delta_2,\Delta_3\}$ is false. Then there exists $v\in \Delta_1\setminus {\rm conv}(\Delta_2\cup\Delta_3)$ and we can find a~linear form $L$ such that $L(v)<L(x)$ for all $x\in {\rm conv}(\Delta_2\cup \Delta_3)$. It means that $\langle \omega , v\rangle < \langle \omega, x\rangle $, $x\in {\rm conv}(\Delta_2\cup \Delta_3)$, for some $\omega \in \mathbb{R}_{+}^d$. Thus $l(\omega,\Delta_1)<l(\omega,{\rm conv}(\Delta_2\cup\Delta_3))$ and the inequality (\ref{e1}) does not hold. \end{proof} The above lemma immediately implies the strong triangle inequality for the~logarithmic distance of irreducible quasi-ordinary Weierstrass polynomials. Unfortunately, the strong triangle inequality does not extend to a wider class of irreducible Weierstrass polynomials, as the following examples show. \begin{Example} {\rm Let $f=Y$, $g=Y-X_1-X_2^2$, $h=Y^2-(X_1+X_2)Y+2X_1^3+X_2^3$. The polynomials $f$, $g$, $h$ are irreducible in $\mathbb{K}[[X_1,X_2]][Y]$.
We have $\mathrm{Res}(f,g)=-X_1-X_2^2$, $\mathrm{Res}(f,h)=2X_1^3+X_2^3$, $\mathrm{Res}(g,h)=X_1X_2^2-X_1X_2+2X_1^3+X_2^4$, hence there is no strong triangle inequality between ${\rm cont}_A(f,g)$, ${\rm cont}_A(f,h)$ and ${\rm cont}_A(h,g)$, as illustrated in the following picture. } \end{Example} \begin{tikzpicture}[ scale = 1, foreground/.style = { ultra thick }, background/.style = { dashed }, line join=round, line cap=round ] \draw[fill=black, opacity=0.1] (1,0)--(2.9,0)--(2.9,2.9)--(0,2.9)--(0,2)--cycle; \draw[foreground,->] (0,0)--+(3,0); \draw[foreground,->] (0,0)--+(0,3); \draw[thick] (1,-0.1)--+(0,0.15); \draw[thick] (2,-0.1)--+(0,0.15); \draw[thick] (-0.1,1)--+(0.15,0); \draw[thick] (-0.1,2)--+(0.15,0); \draw[thick] (1,0)--(0,2); \node at (1.8,1.5) {$\mathrm{cont}_A(f,g)$}; \draw[fill=black, opacity=0.1] (5.5,0)--(6.9,0)--(6.9,2.9)--(4,2.9)--(4,1.5)--cycle; \draw[foreground,->] (4,0)--+(3,0); \draw[foreground,->] (4,0)--+(0,3); \draw[thick] (5,-0.1)--+(0,0.15); \draw[thick] (6,-0.1)--+(0,0.15); \draw[thick] (3.9,1)--+(0.15,0); \draw[thick] (3.9,2)--+(0.15,0); \draw[thick] (5.5,0)--(4,1.5); \node at (5.8,1.5) {$\mathrm{cont}_A(f,h)$}; \draw[fill=black, opacity=0.1] (9.5,0)--(10.9,0)--(10.9,2.9)--(8,2.9)--(8,2)--(8.5,0.5)--cycle; \draw[foreground,->] (8,0)--+(3,0); \draw[foreground,->] (8,0)--+(0,3); \draw[thick] (9,-0.1)--+(0,0.15); \draw[thick] (10,-0.1)--+(0,0.15); \draw[thick] (7.9,1)--+(0.15,0); \draw[thick] (7.9,2)--+(0.15,0); \draw[thick] (9.5,0)--(8.5,0.5); \draw[thick] (8.5,0.5)--(8,2); \node at (9.8,1.5) {$\mathrm{cont}_A(g,h)$}; \end{tikzpicture} \begin{Example} {\rm Let $f=Y-2X_1^3$, $g=(Y-X_1)(Y-X_1^3-X_1^4)+X_2$, $h=Y-X_1^3$. Then $\mathrm{Res}(f,g)=-2X_1^7+2X_1^6+X_1^5-X_1^4+X_2$, $\mathrm{Res}(f,h)=X_1^3$, $\mathrm{Res}(g,h)=-X_1^7+X_1^5+X_2$.
The polynomials $f$, $g$, $h$ are irreducible, $fh$ is quasi-ordinary and the inequality ${\rm cont}_A(f,g)\geq \inf\{{\rm cont}_A(f,h),{\rm cont}_A(h,g)\}$ does not hold (see the picture below).} \end{Example} \begin{tikzpicture}[ scale = 0.8, foreground/.style = { ultra thick }, background/.style = { dashed }, line join=round, line cap=round ] \draw[fill=black, opacity=0.1] (2,0)--(3.9,0)--(3.9,3.9)--(0,3.9)--(0,0.5)--cycle; \draw[foreground,->] (0,0)--+(4,0); \draw[foreground,->] (0,0)--+(0,4); \draw[thick] (1,-0.1)--+(0,0.15); \draw[thick] (2,-0.1)--+(0,0.15); \draw[thick] (3,-0.1)--+(0,0.15); \draw[thick] (-0.1,1)--+(0.15,0); \draw[thick] (-0.1,2)--+(0.15,0); \draw[thick] (-0.1,3)--+(0.15,0); \draw[thick] (2,0)--(0,0.5); \node at (1.8,2) {$\mathrm{cont}_A(f,g)$}; \draw[fill=black, opacity=0.1] (8,0)--(8.9,0)--(8.9,3.9)--(8,3.9)--cycle; \draw[foreground,->] (5,0)--+(4,0); \draw[foreground,->] (5,0)--+(0,4); \draw[thick] (6,-0.1)--+(0,0.15); \draw[thick] (7,-0.1)--+(0,0.15); \draw[thick] (8,-0.1)--+(0,0.15); \draw[thick] (4.9,1)--+(0.15,0); \draw[thick] (4.9,2)--+(0.15,0); \draw[thick] (4.9,3)--+(0.15,0); \draw[thick] (8,0)--(8,3.9); \node at (6.8,2) {$\mathrm{cont}_A(f,h)$}; \draw[fill=black, opacity=0.1] (12.5,0)--(13.9,0)--(13.9,3.9)--(10,3.9)--(10,0.5)--cycle; \draw[foreground,->] (10,0)--+(4,0); \draw[foreground,->] (10,0)--+(0,4); \draw[thick] (11,-0.1)--+(0,0.15); \draw[thick] (12,-0.1)--+(0,0.15); \draw[thick] (13,-0.1)--+(0,0.15); \draw[thick] (9.9,1)--+(0.15,0); \draw[thick] (9.9,2)--+(0.15,0); \draw[thick] (9.9,3)--+(0.15,0); \draw[thick] (12.5,0)--(10,0.5); \node at (11.8,2) {$\mathrm{cont}_A(g,h)$}; \end{tikzpicture} The results of the first section can be reformulated in terms of the logarithmic distance. \begin{Theorem}\label{Th:irred3} Let $f\in \mathbb{K}[[X]][Y]$ be a quasi-ordinary irreducible Weierstrass polynomial of characteristic $(h_1,\dots,h_s)$ and let $g\in \mathbb{K}[[X]][Y]$ be a Weierstrass polynomial such that $\deg g \leq \deg f_{k+1}$ and $\mathrm{cont}_A(f,g) > \mathrm{cont}_A(f,f_k)$. Then $g$ is an irreducible quasi-ordinary polynomial of characteristic $(h_1,\dots, h_{k})$ and $\deg g = \deg f_{k+1}$. Moreover, if $\mathrm{cont}_A(f,g)\geq \mathrm{cont}_A(f,f_{k+1})$ then $\mathrm{cont}_A(f,g) =\mathrm{cont}_A(f,f_{k+1})$. \end{Theorem} \section{The Abhyankar-Moh irreducibility criterion} In this section we show that our main result is a generalization of the~well-known Abhyankar-Moh irreducibility criterion (see e.g. \cite[Theorem~1.2]{am}). We begin by recalling the classical Weierstrass preparation theorem for the ring $\mathbb{K}[[X,Y]]$. \begin{Theorem}[Weierstrass] Assume that $f=\sum_{i=0}^{\infty}a_i Y^i\in \mathbb{K}[[X,Y]]$ and there exists $m>0$ such that $a_i(0)= 0$ for $i<m$ and $a_m(0)\ne0$.
Then there exist unique $u_f,f_1\in\mathbb{K}[[X,Y]]$ such that $f=u_f f_1$, $u_f(0)\ne 0$ and $f_1$ is a~Weierstrass polynomial with respect to the variable $Y$. \label{We} \end{Theorem} The polynomial $f_1$ from the above theorem is called the {\it Weierstrass polynomial} of $f$. If we assume additionally that $f\in \mathbb{K}[[X,Y]]$ is irreducible, then its Weierstrass polynomial is irreducible in the ring $\mathbb{K}[[X]][Y]$ and the characteristic of this polynomial will be denoted by ${\rm Char}(f)$. For $f,g\in \mathbb{K}[[X,Y]]$ we define the {\it intersection multiplicity} $i_0(f,g)$ as the~dimension of the $\mathbb{K}$--vector space $\mathbb{K}[[X,Y]]/\langle f,g\rangle$. \begin{Theorem}[Abhyankar-Moh] Let $f$, $g\in \mathbb{K}[[X,Y]]$. Assume that $f$ is irreducible, $i_0(f,X)=n<+\infty$ and ${\rm Char}(f)=\{h_1,\dots,h_s\}$. If $i_0(g,X)=n$ and $i_0(f,g)>nq_s$, then $g$ is irreducible and ${\rm Char}(g)={\rm Char}(f)$. \end{Theorem} \begin{proof} Let $f_1$ and $g_1$ be the Weierstrass polynomials of $f$ and $g$. Then $\deg f_1=i_0(f,X)=n$, $\deg g_1=i_0(g,X)=n$ and ${\rm ord}\,\mathrm{Res}(f_1, g_1)=i_0(f,g)>nq_s$. Since $f$ is irreducible, the polynomial $f_1\in\mathbb{K}[[X]][Y]$ is also irreducible. Therefore Theorem \ref{Th:irred1} with $k=s$ implies that $g_1$ is irreducible of characteristic $(h_1,\dots,h_s)$ and the~theorem follows. \end{proof}
{ "timestamp": "2018-04-17T02:10:55", "yymm": "1804", "arxiv_id": "1804.05366", "language": "en", "url": "https://arxiv.org/abs/1804.05366" }
\section{Introduction} Electrides are a special type of ionic crystal. In these fascinating materials the electrons, trapped in cavities, serve as the anions. Due to this characteristic feature, many potentially useful properties are realized, such as high magnetic susceptibility, low work function, strong reducing character, highly variable conductivity, low-temperature thermionic emission, and high hyperpolarizability \cite{dye_electrides:_2009,yanagi_electron_2012,buchammagari_room_2007,xu_structures_2007}. Recently, room-temperature-stable organic and inorganic electrides have been synthesized \cite{satoru_matsuishi_high-density_2003,redko_design_2005}. Most organic electrides are known to be antiferromagnetic (AFM) from their field response \cite{redko_design_2005,dawes_cesium_1991,huang_structure_1997,xie_structure_2000,wagner_cs_2000,ichimura_anisotropic_2002,wagner_[cs+15-crown-518-crown-6e-]6_1995}. In the sense that their magnetic properties presumably originate from the electrons in the cavities, the magnetism of electrides is of unique interest. Although some features of their magnetic interactions have been modeled, such as that the electrons interact via a vacant aisle, and classified accordingly \cite{dye_cavities_1996,dye_electrides:_1997,ryabinkin_solution_2010,ryabinkin_two_2010,ryabinkin_interelectron_2011}, a large part of their fundamental nature still remains elusive. This is largely due to the limitations of conventional ab initio calculation methods. The conventional way of investigating magnetic properties from first principles is to calculate the interaction parameters by comparing multiple total energies corresponding to the ground state and meta-stable magnetic orders. In this way, not only the ground-state spin configuration but also the magnetic interaction strengths are calculated, as shown recently by Dale and Johnson for electrides \cite{dale_explicit_2016}. However, this conventional approach is severely limited when the system size is large, which is indeed the case for many organic electrides. For large systems, it is difficult to calculate the long-range interactions as the supercell contains too many atoms. While the magnetic interaction in solids is typically classified as long-range (e.g., Ruderman–Kittel–Kasuya–Yosida (RKKY) interactions) \cite{ruderman_indirect_1954,yosida_magnetic_1957,kasuya_theory_1956} or short-range (e.g., superexchange interactions) \cite{kanamori_crystal_1960,goodenough_magnetism_1963} in nature, the identification of even such basic character has been hampered by the large unitcell size of organic electrides. On top of the intriguing feature that the interaction path presumably passes through cavity aisles, this practical issue limits ab initio studies. In order to meet this challenge, here we introduce a new approach. First, we employ the so-called ‘magnetic force response theory (MFT)’ \cite{liechtenstein_local_1987,antropov_spin_1996,antropov_exchange_1997,katsnelson_first-principles_2000,han_electronic_2004,yoon_reliability_2018} for calculating magnetic interactions. MFT enables us to calculate all the magnetic interactions residing in a given material within a primitive unitcell and at one time. Thus, without a supercell, one can estimate the magnetic coupling parameter $J$ as a function of distance for both short and long range.
In order to understand the magnetic properties of organic electrides and to demonstrate the capability of our computational scheme, we take four different materials, namely Rb$^{+}$(cryptand[2.2.2])e$^{-}$ (Fig.~\ref{Figure_2}(a)), Li$^{+}$(cryptand[2.1.1])e$^{-}$ (Fig.~\ref{Figure_2}(b)), [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6) (Fig.~\ref{Figure_3}(a)), and K$^{+}$(cryptand[2.2.2])e$^{-}$ (Fig.~\ref{Figure_4}(a)). Our calculations clearly show that the magnetic interactions in these electrides indeed come from the localized electrons serving as anions. Further, we unveil the short-range versus long-range nature of the interactions. In fact, for some electrides, there is an indication of an oscillating $J$, which is a signature of RKKY-type magnetic couplings. In order to make MFT feasible, one has to identify the trapped electron states properly. For this purpose, we further employ the maximally-localized Wannier function (MLWF) technique \cite{marzari_maximally_1997,souza_maximally_2001}. Another difficulty in dealing with electrides within a first-principles framework concerns controlling the magnetic order of the cavity electrons. With a special constrained DFT scheme, we successfully stabilized the magnetic solution of K$^{+}$(cryptand[2.2.2])e$^{-}$ for the first time. Our current work provides useful information for understanding the magnetism of organic electrides. \section{ Computational methods } \subsection{Magnetic force response theory} MFT is a method to calculate magnetic interactions from a given electronic structure. In this method the exchange coupling is estimated as the response to small spin tiltings, taken as a perturbation of the given converged solution \cite{liechtenstein_local_1987,yoon_reliability_2018}: \begin{equation} \label{Eq_Jij_momentumspace} J_{ij}({\bf{q}} ) = \frac{1}{\pi} {\rm{Im}} \int_{}^{} \int_{}^{\epsilon_{\rm{F}}} d{\bf{k}} \, d\epsilon \rm{\, Tr}[ V_{{\bf{k}},i}^{\downarrow \uparrow } {\mathbf{G}}_{{\bf k},ij}^{\uparrow\uparrow}(\epsilon) V_{{\bf{k+q}},j}^{\uparrow \downarrow} {\mathbf{G}}_{{\bf k+q},ji}^{\downarrow\downarrow}(\epsilon) ]. \end{equation} Here $i$ and $j$ are the site indices, and the up and down arrows indicate the spin direction. The Green's function $\mathbf{G}$ is represented as \begin{equation} \label{Eq_green_DFT} \mathbf{G}^{\uparrow \uparrow}_{{\bf{k}},ij}(\epsilon) = \sum_{n}^{} \frac{ \ket{\psi_{{\bf{k}},i}^{\uparrow}} \bra{\psi^{\uparrow}_{{\bf{k}},j}} }{\epsilon-\epsilon^{\uparrow}_{n,{\bf{k}}} + i\eta}, \end{equation} where $\epsilon^{\uparrow}_{n,{\bf{k}}}$ and $\ket{\psi^{\uparrow}_{{\bf{k}},i}}$ refer to the $n$-th eigenvalue and eigenstate, respectively. $\mathbf{V}$ is given by \begin{equation}\label{Eq_Vdef} V_{{\bf{k}},i}^{\downarrow \uparrow } = {\frac{1}{2}}( {\bf{H}}_{{\bf k},i}^{\downarrow \downarrow } - {\bf{H}}_{{\bf k},i}^{\uparrow \uparrow } ), \end{equation} where ${\bf{H}}_{{\bf k},i}^{\uparrow \uparrow (\downarrow \downarrow) }$ is the Kohn-Sham Hamiltonian corresponding to the collinear up (down) spin. The exchange interaction between two sites in real space is obtained by Fourier transformation from ${\bf q}$ to real space. Thus one can take the minimal unitcell, with no need to consider large supercells. Once the localized magnetic sites are well defined, MFT provides the exchange constants, $J$'s, as a function of distance. Note that in MFT we extract all the information from a single self-consistently converged electronic structure.
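To make the last point concrete, the following minimal sketch (our illustration, not the actual implementation used in this work) shows how real-space couplings are recovered from $J_{ij}({\bf q})$ sampled on a regular q-mesh by an inverse discrete Fourier transform; the 1D chain and the coupling values (of the order found for Rb$^{+}$(cryptand[2.2.2])e$^{-}$ in Table~\ref{table1}) are toy assumptions:

\begin{verbatim}
import numpy as np

# Toy 1D chain, lattice constant a = 1: J(q) = 2*J1*cos(q) + 2*J2*cos(2q).
N = 64
q = 2 * np.pi * np.arange(N) / N        # regular q-mesh
J1, J2 = 56.3, 0.17                     # illustrative couplings (kelvin)
J_q = 2 * J1 * np.cos(q) + 2 * J2 * np.cos(2 * q)

# Inverse transform: J(R) = (1/N) * sum_q J(q) * exp(-i q R)
for R in (1, 2, 3):
    J_R = np.sum(J_q * np.exp(-1j * q * R)).real / N
    print(R, round(J_R, 3))             # -> 56.3, 0.17, 0.0
\end{verbatim}

No supercell enters at any point: the q-mesh lives in the primitive cell, yet arbitrarily distant couplings are obtained.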
Further details of our implementation and results for some classical examples can be found in our previous studies \cite{han_electronic_2004,yoon_reliability_2018}. \subsection{Calculation details} We perform density functional theory (DFT) calculations within the generalized gradient approximation proposed by Perdew \textit{et al.} (GGA-PBE) \cite{perdew_generalized_1996} by employing the LCPAO (linear combination of pseudo-atomic orbitals) method \cite{ozaki_variationally_2003,ozaki_numerical_2004} as implemented in our ‘OpenMX’ software package \cite{openmx}. It should be noted that the limitation of GGA in describing the correlation effect can cause an overestimation of magnetic couplings or an underestimation of the charge occupation \cite{han_electronic_2004,solovyev_effective_1998,johnson_extreme_2013}. A 3×3×3 k-point mesh and a 500 Ry energy cutoff are used for the numerical integration. Poisson equations are solved by using fast Fourier transformations, and the projector expansion method is used to accurately calculate three-center integrals associated with the deep neutral-atom potential \cite{ozaki_efficient_2005}. \begin{figure}[h] \begin{center} \includegraphics[width=8.6cm,angle=0]{Figure1} \caption{ (Color online) (a), (b) The calculated band dispersion of (a) Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and (b) Li$^{+}$(cryptand[2.1.1])e$^{-}$ (blue lines). The calculated MLWF band dispersion is shown in red. \label{Figure_1}} \end{center} \end{figure} In order to describe the characteristic feature of electrides, namely the electrons trapped in the cavity space, we employ the ‘empty atom’ technique for all of our systems. The empty atom technique, also called the `ghost atom' or `empty sphere' technique, has been used, for example, to correctly estimate the basis set superposition error (BSSE) \cite{latajka_dissection_1989,simon_how_1996} and to treat large void spaces \cite{doi:10.2138/am-1996-11-1201,kalpana_electronic_1996,ossicini_selfconsistent_1989,sque_transfer_2007} within local basis schemes. In this method, an empty atom is represented as an atom with a nuclear charge of zero, which acts as a basis for describing the wave function of the empty space. We used two $s$, one $p$, and one $d$ orbitals with a cutoff radius of 11 a.u.~as the basis of the empty atom. We determine the number of empty atoms by comparing the band structure with the plane-wave result \cite{dale_density-functional_2014}. MLWFs \cite{marzari_maximally_1997,souza_maximally_2001} are also used to identify the magnetic bands as localized states. We found that the positions of the Wannier functions are in good agreement with the distances known from experiments \cite{huang_structure_1997,xie_structure_2000,wagner_[cs+15-crown-518-crown-6e-]6_1995,dye_electrides:_1990}. This combination of the empty-atom and MLWF techniques enables us to perform the MFT calculation to estimate the magnetic couplings. The $s$-wave symmetry is adopted for the Wannier functions describing the cavity-electron states, since these states are clearly of $s$-orbital character, as seen from the spin density plots in Figs.~\ref{Figure_2}(a) and \ref{Figure_2}(b), for example. Figures~\ref{Figure_1}(a) and \ref{Figure_1}(b) show the calculated band dispersions of Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and Li$^{+}$(cryptand[2.1.1])e$^{-}$, respectively (blue lines). The MLWF bands are shown in red. The excellent overlap of the two sets of bands (blue and red) shows that the MLWFs correctly identify the electronic states in the cavities.
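The quality of such a Wannier fit can also be quantified rather than judged by eye; a minimal sketch (ours, with hypothetical file names) comparing the two dispersions evaluated on the same k-path:

\begin{verbatim}
import numpy as np

# Hypothetical files: one band energy (eV) per k-point on a common path.
eps_dft = np.loadtxt('band_dft.dat')
eps_mlwf = np.loadtxt('band_mlwf.dat')
print('max |deviation| (eV):', np.abs(eps_dft - eps_mlwf).max())
print('rms deviation   (eV):', np.sqrt(np.mean((eps_dft - eps_mlwf)**2)))
\end{verbatim}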
MFT calculations are conducted on the basis of the previously-known magnetic phase \cite{dale_explicit_2016}, namely, the G-type AFM order (in which all directly connected neighbors have opposite spins), except for [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6). For [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6), we consider the intra-ring AFM order (see Fig.~\ref{Figure_3}(a)). In order to obtain the AFM phase for all the other electrides except K$^{+}$(cryptand[2.2.2])e$^{-}$, an initial spin polarization is applied to the surrounding hydrogen atoms, as in the previous calculation \cite{dale_explicit_2016}. For K$^{+}$(cryptand[2.2.2])e$^{-}$, however, even such an ad hoc technique does not work, as reported in the previous theoretical study \cite{dale_explicit_2016}. As a result, the magnetic properties of this electride have never been theoretically addressed. Here, we use a special constrained DFT technique to stabilize the magnetic order. In this scheme, the initial spin density is assigned to the ‘empty atom’ sites, which means that the constraint is imposed directly on the cavity. We further confirmed that this magnetic solution is not just consistent with experiment but also robust through the further self-consistent steps: it eventually converges to a magnetic solution without any constraint. Through this procedure, we successfully obtained a well-stabilized self-consistent magnetic solution of K$^{+}$(cryptand[2.2.2])e$^{-}$ for the first time, on top of which MFT can be conducted. \section{RESULTS AND DISCUSSION}\label{sec:parameter_J} \subsection{Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and Li$^{+}$(cryptand[2.1.1])e$^{-}$} Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and Li$^{+}$(cryptand[2.1.1])e$^{-}$ are electrides with ‘ladder-like’ channels \cite{huang_structure_1997,xie_structure_2000}. Experimentally, their magnetic properties have been studied with electron paramagnetic resonance (EPR) spectroscopy as a function of temperature, with the data fitted to a certain exchange model. For these two materials, the EPR data are well fitted by the ‘FN (first-neighbor)-1D Heisenberg’ model. However, a recent computational study \cite{dale_explicit_2016} calls this conclusion into question \cite{comment1}. The $J$ calculated from the total energy difference and mapped onto the same FN-1D model is found to be quite different from the experimental value \cite{huang_structure_1997,xie_structure_2000}. It can be argued that this difference is attributable to the next- and longer-range magnetic couplings \cite{dale_explicit_2016}. Importantly, however, a solid conclusion could not be reached because the next-neighbor interactions were not accessible within the conventional computational scheme. \begin{figure}[t] \begin{center} \includegraphics[width=8.6cm,angle=0]{Figure2.png} \caption{ (Color online) (a), (b) The calculated spin density of (a) Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and (b) Li$^{+}$(cryptand[2.1.1])e$^{-}$, where red and green spheres represent the up- and down-spin density, respectively. We used an isosurface value of 0.00075 in atomic units (a.u.). $J_{1}$, $J_{2}$, and $J_{3}$ refer to the first, second, and third neighbor interactions in (a) and (b). (c), (d) The calculated magnetic coupling parameters for (c) Rb$^{+}$(cryptand[2.2.2])e$^{-}$ and (d) Li$^{+}$(cryptand[2.1.1])e$^{-}$. Our calculation results by MFT (dark blue circles) are compared with the previous calculation by total energy difference (green triangles; Ref. \onlinecite{dale_explicit_2016}) and experiment (magenta squares; Ref.
\onlinecite{huang_structure_1997,xie_structure_2000}). Note that both short- and long-range interactions are calculated from MFT, while only nearest-neighbor values can be obtained from experiments and from the total-energy-based computational scheme. } \label{Figure_2} \end{center} \end{figure} Here, using MFT combined with MLWF, we calculate the exchange coupling constants as a function of distance, from the nearest-neighbor to the long-range interactions. The results are presented in Figs.~\ref{Figure_2}(a)-(d). For Rb$^{+}$(cryptand[2.2.2])e$^{-}$ (Fig.~\ref{Figure_2}(a) and \ref{Figure_2}(c)), the MFT calculation shows that the first-neighbor interaction is dominant, $J_{1}/k_{B}$=56.3 K, while the second- and third-neighbor interactions are much smaller, $J_{2}/k_{B}$=0.17 K and $J_{3}/k_{B}$= $-$4.56 K (see Table \ref{table1}). Thus our results confirm that Rb$^{+}$(cryptand[2.2.2])e$^{-}$ is well classified as a FN-1D Heisenberg system. This is not the case, on the other hand, for Li$^{+}$(cryptand[2.1.1])e$^{-}$ (Fig.~\ref{Figure_2}(b) and \ref{Figure_2}(d)). The calculated first-neighbor interaction is $J_{1}/k_{B}$=51.7 K, and the second- and third-neighbor values are quite significant, $J_{2}/k_{B}$=6.69 K and $J_{3}/k_{B}$=14.0 K (see Table \ref{table1}). Note that these correspond to 12.9\% and 27.1\% of $J_{1}$, respectively. Thus this material can hardly be regarded as a FN-1D spin system, contrary to the previous study. Our results demonstrate the usefulness of the current computational scheme for magnetic electrides. MFT in combination with MLWF provides the long-range interaction parameters, and therefore one can obtain a reliable picture of the magnetism even if the system size is too large to be treated by the conventional total-energy method. Further, our calculation confirms that the magnetic properties measured in the experiments indeed come from the localized electron states at the cavity sites. This is because MFT estimates the magnetic force between two sites, and in the current case these ‘sites’ are defined as the cavity electrons by means of the empty-atom and MLWF techniques. We also emphasize that, in our scheme, one does not need to build any a priori model considering only a few neighbor interactions. The exchange parameters are calculated as the response to spin tilting. This feature is advantageous when one needs to consider any type of model building or to justify a certain model. \subsection{[Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6)} \begin{figure}[!ht] \begin{center} \includegraphics[width=8.6cm,angle=0]{Figure3.png} \caption{ (Color online) (a) The calculated spin density of [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6), where red and green spheres represent the up- and down-spin density, respectively. We used an isosurface value of 0.00075 a.u.. The spin order in this material can be referred to as the ‘intra-ring antiferromagnetic (intra-ring AFM)’ configuration: black lines show the connections of the intra-vacant spaces of the ‘six-membered rings’, while yellow lines show the connections between different ‘six-membered rings’. Yellow points represent connections which are invisible in the current view of the figure. Note that each site is connected by two black lines and two yellow lines or points. In terms of distance, the black lines correspond to the first-neighbor interactions ($J_{1}$) and the yellow dots/lines to the second neighbors ($J_{2}$). The stacking sequence of the six-membered rings is A-B-C-A-… from the side view.
(b) The calculated exchange interactions for [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6). We estimated the $J$ values based on two different spin density configurations (see the main text for more details), and the results are presented as dark-blue circles and pink diamonds. For comparison, the previous calculation (Ref. \onlinecite{dale_explicit_2016}) and experimental (Ref. \onlinecite{wagner_[cs+15-crown-518-crown-6e-]6_1995}) data are also presented. The two insets show the schematic spin configurations on which the MFT calculations of $J$ are based, namely, the intra-ring AFM and ferromagnetic (FM) orders. } \label{Figure_3} \end{center} \end{figure} [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6) is an electride having a ‘six-membered ring’ cluster structure, as shown in Fig.~\ref{Figure_3}(a), and its magnetic susceptibility was reported by Wagner and Dye \cite{wagner_[cs+15-crown-518-crown-6e-]6_1995}. We note that, in this experimental study, the measured data were analyzed based on the simple model assumption that the intra-ring couplings are the only magnetic interactions \cite{wagner_[cs+15-crown-518-crown-6e-]6_1995}. From this analysis, it was concluded that this electride has strong AFM interactions. The theoretical study was also conducted under the same assumption that only the first-neighbor interaction is important \cite{dale_explicit_2016}, presumably because the computational cost of calculating multiple total energies within an enlarged supercell geometry is too large. In fact, in order to simulate the theoretically-estimated G-type spin ground state, one needs to calculate a 4080-atom supercell. In Ref.~\onlinecite{dale_explicit_2016}, a cell of 510 atoms was used, within which only the nearest-neighbor $J$ is accessible. This result is plotted in Fig.~\ref{Figure_3}(b) (green triangle), showing that the difference between the previous calculation and experiment is about 395 K. The origin of this relatively large difference is unclear. It was indeed concluded in Ref.~\onlinecite{dale_explicit_2016} that this difference is attributable to the supercell-size effect. In this context, MFT can give useful insight since it does not require any supercell calculation but still provides the longer-range interactions. Fig.~\ref{Figure_3}(b) shows our MFT results for the exchange couplings as a function of distance (dark-blue circles and pink diamonds). Since the magnetic ground state cannot be represented within the structural primitive cell, we considered two different magnetic solutions which can be realized within this primitive cell, namely, FM (pink diamonds) and intra-ring AFM (dark-blue circles; see the inset of Fig.~\ref{Figure_3}(b)). We first note that the two sets of results based on the FM and AFM spin densities are quite similar; the difference is less than 6.5 K. This not only indicates that the spin-polarized electron states in the cavities are basically well localized, as previously discussed \cite{yoon_reliability_2018}, but also implies that we do not need the real spin ground-state density in order to estimate the $J$ values \cite{yoon_reliability_2018}.
\begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline \backslashbox{ $J_{n}/k_{B} $}{ Name} & \multicolumn{3}{c|}{ Rb$^{+}$(cryptand[2.2.2])e$^{-}$} & \multicolumn{3}{c|}{ Li$^{+}$(cryptand[2.1.1])e$^{-}$} & \multicolumn{3}{c|}{ [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6)} & \multicolumn{3}{c|}{ K$^{+}$(cryptand[2.2.2])e$^{-}$} \\ \hline & \enspace MFT \enspace & \enspace $\Delta$E\enspace & Exp & \enspace MFT \enspace & \enspace $\Delta$E\enspace & Exp & \enspace MFT \enspace & \enspace $\Delta$E\enspace & Exp & \enspace MFT \enspace & \enspace $\Delta$E\enspace & Exp \\ \hline $J_1$ & 56.3 & 78.2 & 30 & 51.7 & 177 & 54 & 19.4 & 15.2 & 410 & 9.6 & - & 440 \\ \hline $J_2$ & 0.2 & - & - & 6.7 & - & - & 24.9 & - & - & 5.7 & - & - \\ \hline $J_3$ & $-$4.6 & - & - & 14.0 & - & - & 6.6 & - & - & 148 & - & - \\ \hline $J_7$ & 0.1 & - & - & 0.02 & - & - & $-$1.3 & - & - & $-$33.5 & - & - \\ \hline \end{tabular} \caption{\label{table1} Exchange parameters of four different organic electrides in units of K. Our calculation results by MFT are compared with the previous calculations ($\Delta$E; Ref.~\cite{dale_explicit_2016}) and experiments (Exp). The experimental data can be found in Ref.~\cite{xie_structure_2000}, Ref.~\cite{huang_structure_1997}, Ref.~\cite{wagner_[cs+15-crown-518-crown-6e-]6_1995} and Ref.~\cite{ichimura_anisotropic_2002} for Rb$^{+}$(cryptand[2.2.2])e$^{-}$, Li$^{+}$(cryptand[2.1.1])e$^{-}$, [Cs$^{+}$(15C5)(18C6)e$^{-}$]$_6$(18C6) and K$^{+}$(cryptand[2.2.2])e$^{-}$, respectively. } \end{table*} The sum of all the magnetic interactions is $J_{tot}/k_{B}=(J_{1}+J_{2}+\cdots)/k_{B}=53$ K, which is notably smaller than the experimental value of 410 K (magenta square) \cite{wagner_[cs+15-crown-518-crown-6e-]6_1995} (see Table \ref{table1}). This means that the difference between the previous calculation and experiment does not originate from the longer-range interactions or supercell-size effects. Comparing our MFT result with the previous total-energy-based estimation, the first-neighbor $J_{1}$ values are in good agreement with each other, within 4.2 K. Importantly, the second-neighbor $J_{2}$ is comparable with, and slightly larger than, $J_{1}$. Namely, the inter-ring interaction is sizable and the nearest-neighbor spin model is not relevant for this material. Our calculations clearly show that the inter-ring coupling needs to be taken into account. Further, our calculation confirms that the spin ground state of this material is indeed G-type. As mentioned above, this material was speculated to have a G-type AFM ground state \cite{dale_explicit_2016}. However, there has been no experimental or theoretical evidence for this. Theoretical confirmation has been hampered by the large supercell size of 4080 atoms. Our MFT results of Fig.~\ref{Figure_3}(b) clearly show that $J_1$ and $J_2$ are the two dominant couplings and that both of them are AFM. \subsection{K$^{+}$(cryptand[2.2.2])e$^{-}$} \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm,angle=0]{Figure4.png} \caption{ (Color online) (a) The calculated spin density of K$^{+}$(cryptand[2.2.2])e$^{-}$, where the red and green spheres represent the up and down spins, respectively. We used an isosurface value of 0.0003 a.u.. (b) The calculated magnetic interactions as a function of distance. The magenta square shows the experimental result (Ref. \onlinecite{ichimura_anisotropic_2002}), while the dark-blue circles are our calculation results.
For this material, there is no previous calculation result because of the difficulty in stabilizing the magnetic solution (see the main text for more details). } \label{Figure_4} \end{center} \end{figure}

Our final example, K$^{+}$(cryptand[2.2.2])e$^{-}$, is also known to be an AFM-ordered organic electride \cite{ichimura_anisotropic_2002}. For this material, however, there has been no successful calculation that obtains the magnetic solution. The conventional DFT calculation gives the paramagnetic spin ground state as the converged solution even when starting from a spin-polarized initial condition. Even with an ad hoc treatment in which initial spins are assigned to the hydrogen atoms around the vacant space, the magnetic solution is hardly achieved \cite{dale_explicit_2016}. Here, we successfully obtained the magnetic solution by applying initial spins directly on the vacant space. We developed a constrained DFT scheme for assigning the initial spin moment to the `empty spheres'. This process is not straightforward in the sense that electron spins need to be polarized within the empty spheres. We applied the magnetic constraint during the initial $\sim$20 self-consistent steps, through which the empty spheres become occupied by polarized electrons. After that, the usual DFT self-consistent calculations are performed with the constraint turned off (a toy sketch of this two-stage loop is given after the acknowledgments below). With this scheme, we successfully generated a well-stabilized AFM solution, as shown in Fig.~\ref{Figure_4}(a). On top of this spin density we performed the MFT calculation and estimated the magnetic interactions for the first time. Our results are summarized in Fig.~\ref{Figure_4}(b). First of all, our calculation confirms that this magnetic phase is indeed the G-type AFM ground state speculated in the previous study \cite{dale_explicit_2016}. It is also consistent with experiment \cite{ichimura_anisotropic_2002}. The largest interaction is the third-neighbor $J_{3}$, which is notably larger than $J_{1}$, $J_{2}$, and the others. Interestingly, the second-largest interaction is $J_{7}$, which is about 25\% of $J_{3}$. Our results are consistent with the experiment \cite{ichimura_anisotropic_2002} in the sense that the magnetic property of this material can be described with two parameters in two dimensions. The total sum of all our $J/k_{B}$'s is about 110 K (see Table~\ref{table1}). \section{Summary}\label{sec:MFT_metastable} A new theoretical approach is applied to study magnetic electrides. Spin-polarized electrons trapped in the cavity are identified by the empty-atom and MLWF methods, and their interactions are calculated within MFT. The usefulness of this scheme is demonstrated by calculating four different organic electrides for which the validity of the pre-assumed models had remained unclear. The long-range magnetic interaction profile as a function of distance is calculated and compared, which has not been accessible to conventional total-energy calculations. For K$^{+}$(cryptand[2.2.2])e$^{-}$, we apply a constrained DFT method to stabilize the magnetic solution and calculate the magnetic interactions for the first time. Our study provides useful insights for understanding magnetic electrides and related materials. \section{Acknowledgments} This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1A2B2005204).
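As referenced in the K$^{+}$(cryptand[2.2.2])e$^{-}$ subsection above, the two-stage constrained scheme can be illustrated with a runnable toy. In the Python sketch below, \texttt{scf\_step} is a stand-in contraction map, not the self-consistency cycle of any actual DFT code; the point is only the constrain-then-release logic.

\begin{verbatim}
import numpy as np

# Toy illustration (not actual DFT) of the two-stage constrained scheme:
# the "empty-sphere moment" m is pinned for the first 20 self-consistent
# steps and then released. The map below has an unstable paramagnetic
# fixed point at m = 0 (mirroring the conventional DFT behavior of
# staying unpolarized) and a stable magnetic fixed point at the nonzero
# root of m = tanh(3m), approximately m = 0.995.

def scf_step(m):
    return 0.5 * m + 0.5 * np.tanh(3.0 * m)

m, n_constrained, tol = 0.0, 20, 1e-10   # unpolarized start
for step in range(500):
    if step < n_constrained:
        m = 1.0                          # constraint: pin the moment
    m_new = scf_step(m)
    if step >= n_constrained and abs(m_new - m) < tol:
        break
    m = m_new
print("converged moment:", m)            # nonzero: magnetic solution kept
\end{verbatim}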
{ "timestamp": "2018-06-12T02:13:01", "yymm": "1804", "arxiv_id": "1804.05244", "language": "en", "url": "https://arxiv.org/abs/1804.05244" }
\section{Introduction} The electric dipole moments (EDMs) of atoms due to violations of time-reversal (T) and parity (P) symmetries are among the leading table-top probes of physics beyond the Standard Model of particle interactions \cite{S.M.Barr, M. Pospelov}, and they are sensitive to new physics at the TeV scale \cite{N.Yamanaka}. The EDMs of diamagnetic atoms are primarily sensitive to the nuclear Schiff moment (NSM) and the electron-nucleus tensor-pseudotensor (T-PT) interaction, which arise from hadronic and semi-leptonic T or CP violation, respectively \cite{N.Yamanaka}. A number of experiments are currently under way to observe such EDMs \cite{arXiv:1710.02504, arXiv:1803.06821}. The current best EDM limit comes from Hg, which is a diamagnetic atom \cite{B. Graner}. Three EDM experiments on another atom of this class, $^{129}$Xe, are in progress, and new results are expected in the foreseeable future \cite{arXiv:1710.02504, arXiv:1803.06821}. These new experimental results for $^{129}$Xe, in combination with atomic many-body calculations of the ratios of the $^{129}$Xe EDM to the NSM and to the coupling constant of the T-PT interaction $(C_T)$, will yield limits on the NSM and $C_T$ separately. It is necessary to assess the quality of the atomic many-body calculations of the quantities related to the $^{129}$Xe EDM mentioned above. One important step in this direction is to perform calculations of the ground-state electric dipole polarizability of $^{129}$Xe, which has the same rank and parity as the EDM mentioned above; therefore both quantities depend on the same physical effects. The theoretical result obtained for the $^{129}$Xe polarizability can be compared with its experimental value, which has been measured to high accuracy \cite{Hohm U}. These calculations must be relativistic in character, as $^{129}$Xe is a heavy atom with 54 electrons. Furthermore, it is necessary to use a many-body theory that can capture the correlation effects to as high an order as possible in an atom with a large number of electrons. Taking these two points into consideration, it is appropriate to use the relativistic coupled-cluster (RCC) theory, which is arguably the gold standard for the relativistic theory of atoms and molecules \cite{H. S. Nataraj, V. S. Prasannaa}. One important virtue of this theory is that it takes into account correlation effects to all orders in perturbation at every level of particle-hole excitation \cite{R. F. Bishop}. Furthermore, it is size-extensive \cite{R. F. Bishop}. In the present paper, we have performed rigorous calculations of the electric dipole polarizability of the ground state of $^{129}$Xe using a self-consistent RCC method (RCCM) \cite{arXiv:1801.07045} and the relativistic normal coupled-cluster method (RNCCM) \cite{arXiv:1801.07045}. This is the first application of the latter method to the calculation of the electric dipole polarizability of the ground state of $^{129}$Xe. The next section gives the salient features of these two methods and some key aspects of the calculations. This is followed by a presentation and discussion of our results, and finally, we present our conclusions. \section{THEORY AND METHOD OF CALCULATIONS} The static polarizability in a uniform dc electric field $\bm{E}$ is defined by \begin{equation} \langle \bm{D} \rangle = \alpha {\bm E}, \label{eq:eq_ed} \end{equation} where $\langle \bm{D} \rangle = \langle \Psi_0| \bm{D} |\Psi_0 \rangle$ is the induced electric dipole moment of the state $| \Psi_0 \rangle$ of the atom.
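Before developing the many-body machinery, the defining relation above can be checked on a toy model. The Python sketch below compares the finite-field polarizability $-d^2E_0/dE^2$ of a two-level system with the sum-over-states value that follows from the perturbative expression derived later in this section; the model parameters are arbitrary illustrative choices, not related to Xe.

\begin{verbatim}
import numpy as np

# Toy check of <D> = alpha*E for a two-level system: compare the
# finite-field value alpha = -d^2 E0/dF^2 with the sum-over-states
# result alpha = 2|<1|D|0>|^2/(E1 - E0). Parameters are illustrative.
omega, d = 1.0, 0.3                # excitation energy, dipole element
H0 = np.diag([0.0, omega])
D = np.array([[0.0, d], [d, 0.0]])

def ground_energy(F):
    return np.linalg.eigvalsh(H0 - F * D)[0]

F = 1e-3
alpha_fd = -(ground_energy(F) - 2*ground_energy(0.0) + ground_energy(-F))/F**2
alpha_sos = 2 * d**2 / omega
print(alpha_fd, alpha_sos)         # agree to O(F^2)
\end{verbatim}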
To first order in the perturbation, $|\Psi_0 \rangle$ can be expressed as \begin{equation} |\Psi_0 \rangle = |\Psi_0^{(0)}\rangle +\lambda |\Psi_0^{(1)} \rangle, \label{eq:psi_eq} \end{equation} where $\lambda$ is the perturbation parameter, the unperturbed Dirac-Coulomb (DC) Hamiltonian is given by \begin{equation} H_0^{(DC)} = \sum_{i}^{N_e} [c\bm{\alpha}\cdot \bm{p}_i + m c^2 \bm{\beta}+ V_N (r_i) ] +\cfrac{1}{2} \sum_{i \neq j} \cfrac{1}{r_{ij}}, \end{equation} and the superscripts (0) and (1) represent the unperturbed and first-order perturbed wave functions, respectively. In more explicit form, $| \Psi_0^{(1)}\rangle$ can be written as \begin{eqnarray} |\Psi_0^{(1)}\rangle &=& \sum_{I} |\Psi_I^{(0)}\rangle \frac{\langle \Psi_I^{(0)} | H_{int}| \Psi_0^{(0)}\rangle}{{E}_0 - { E}_I} \nonumber \\ &=& -\sum_{I} |\Psi_I^{(0)}\rangle \frac{\langle{ \Psi_I^{(0)} |D| \Psi_0^{(0)}}\rangle}{{E}_0 - {E}_I}, \label{eq:psi_1_1} \end{eqnarray} where $|\Psi_I^{(0)}\rangle$ represents an excited state of $H^{(DC)}_0$, ${E}_0$ and ${E}_I$ are the energies of the ground and excited states, respectively, $\lambda H_{int} = - \bm{D} \cdot \bm{E} $ is the perturbing Hamiltonian, and $\bm{D}$ is the electric-dipole operator. In the above equation, we have written $\lambda H_{int} = - \bm{D} \cdot \bm{E} = -DE$, where $D$ is the component of $\bm{D}$ along $\bm{E}$; i.e., $H_{int} = -D$ and $\lambda = E$. Using Eqs. (\ref{eq:psi_eq}) and (\ref{eq:psi_1_1}), $\langle \bm{D} \rangle = \langle \Psi_0|\bm{D}|\Psi_0 \rangle$ is written as \begin{eqnarray} \langle {D}\rangle &\simeq& \langle \Psi_0^{(0)}|{D}|\Psi_0^{(0)} \rangle + 2 \lambda \langle \Psi_0^{(0)}|{D}|\Psi_0^{(1)} \rangle \nonumber \\ &=& -2\sum_{I} \cfrac{ \langle{ \Psi_0^{(0)}| {D} |\Psi_I^{(0)} }\rangle \langle{ \Psi_I^{(0)} | {D} | \Psi_0^{(0)} }\rangle }{{E}_0 - { E}_I}\,E, \label{eq:eq_d} \end{eqnarray} where the first term does not contribute since the electric dipole operator ${D}$ is an odd-parity operator. From Eqs. (\ref{eq:eq_ed}) and (\ref{eq:eq_d}), $\alpha$ is given by \begin{equation} \alpha = -2\sum_{I} \frac{ |\langle \Psi_I^{(0)}|{D}|\Psi_0^{(0)} \rangle|^2}{{E}_0 - { E}_I}. \end{equation} \subsection{Unperturbed wave function of the Coupled Cluster Method (CCM)} \, In the CCM, the unperturbed wave function $|\Psi_0^{(0)}\rangle$ for closed-shell atoms can be expressed as \cite{I. Shavitt} \begin{equation} |\Psi_0^{(0)}\rangle= e^{T^{(0)}} |\Phi_0\rangle, \label{eq:psi_0} \end{equation} where $|\Phi_0\rangle$ is the Dirac-Fock (DF) wave function, which is determined in the mean-field approximation, and $T^{(0)}$ is the sum of all particle-hole excitation operators. In the coupled-cluster singles and doubles (CCSD) approximation, the excitation operator is $T^{(0)} = T_1^{(0)} + T_2^{(0)}$. In second-quantized notation, these operators can be written as \begin{equation} T_1^{(0)} = \sum_{a,i} {t^a_i a_a^{\dagger} a_i} \mbox{~~ and ~~} T_2^{(0)} = \frac{1}{4}\sum_{a,b,i,j} {t^{ab}_{ij} a_a^{\dagger}a_b^{\dagger}a_j a_i}, \end{equation} where $t^a_i $ and $t^{ab}_{ij}$ are the particle-hole cluster amplitudes, $a_n^{\dagger}$ and $a_n$ are the creation and annihilation operators, respectively, and the indices $a,b$ and $i,j$ label virtual and occupied orbitals, respectively. To obtain the $T^{(0)}$ amplitudes, we solve the following equations \cite{I. Shavitt}: \begin{equation} \langle{ \Phi_0^*|({H_N^{DC} e^{T^{(0)}}})_{con}|\Phi_0}\rangle = 0.
\label{eq:jaco} \end{equation} Here $|\Phi_0^*\rangle$ represents an excited determinantal state with respect to the reference state $|\Phi_0\rangle$, $H_N^{DC}$ is the normal-ordered Hamiltonian, and we use the relation $ e^{-T^{(0)}}H _N^{DC}e^{T^{(0)}} = ({H_N^{DC} e^{T^{(0)}}})_{con}$, with the subscript ``con" denoting connected terms \cite{I. Shavitt}. In the present work, we have used the Jacobi iterative method to solve Eq. (\ref{eq:jaco}) numerically \cite{https:}. \subsection{First-order perturbed wave function for the Coupled Cluster Method} In the presence of a uniform dc electric field, the atomic Hamiltonian is given by \begin{equation} H = H_0^{(DC)} + \lambda H_{int}, \end{equation} where the perturbation $\lambda H_{int} = - \bm{D} \cdot \bm{E}$ has been defined earlier. The first-order perturbation equation can be expressed as \begin{eqnarray} ( H_0^{(DC)} + \lambda H_{int}) (|\Psi_0^{(0)}\rangle +\lambda |\Psi_0^{(1)}\rangle) \nonumber \\ = (E^{(0)}+\lambda E^{(1)}) (|\Psi_0^{(0)}\rangle +\lambda |\Psi_0^{(1)}\rangle), \end{eqnarray} where $E^{(0)}$ and $E^{(1)}$ are the unperturbed and the first-order perturbed energies, respectively. Keeping only the first-order terms in $\lambda$, we get \begin{eqnarray} (H_0^{(DC)} - E^{(0)} ) |\Psi_0^{(1)}\rangle &=& - H_{int} |\Psi_0^{(0)}\rangle + E^{(1)} |\Psi_0^{(0)}\rangle \nonumber \\ &=& D | \Psi_0^{(0)} \rangle + \langle \Psi_0^{(0)} |H_{int}| \Psi_0^{(0)}\rangle |\Psi_0^{(0)}\rangle \nonumber \\ &=& D | \Psi_0^{(0)} \rangle, \label{eq:eq_1} \end{eqnarray} where $E^{(1)}$ vanishes because $D$ has odd parity. Using the CCM ansatz for closed-shell atoms, we can express the total wave function $|\Psi_0\rangle$ as \begin{equation} |\Psi_0\rangle = e^{T}|\Phi_0\rangle, \label{eq:psi_def} \end{equation} where we define \begin{equation} T = T^{(0)} + \lambda T^{(1)}, \label{eq:t_def} \end{equation} with $T^{(1)}$ the first-order excitation operator due to $H_{int}$. Substituting Eq. (\ref{eq:t_def}) in Eq. (\ref{eq:psi_def}), we get \begin{equation} |\Psi_0\rangle = e^{T^{(0)}+\lambda T^{(1)}} |\Phi_0\rangle = e^{T^{(0)}}(1+\lambda T^{(1)})|\Phi_0\rangle, \end{equation} where only terms up to linear order in $T^{(1)}$ have been kept. Comparing the above equation with Eq. (\ref{eq:psi_eq}), it is clear that the first-order wave function can be written as \cite{J. Cizek} \begin{equation} |\Psi_0^{(1)}\rangle = e^{T^{(0)}}T^{(1)} |\Phi_0\rangle. \label{eq:psi_1} \end{equation} To obtain the $T^{(1)}$ amplitudes, we substitute Eq. (\ref{eq:psi_1}) in Eq. (\ref{eq:eq_1}) and get \begin{eqnarray} \langle \Phi_0^*|e^{-T^{(0)}}H _N^{DC}e^{T^{(0)}} T^{(1)}|\Phi_0 \rangle &=& \langle \Phi_0^*|e^{-T^{(0)}} D e^{T^{(0)}}|\Phi_0\rangle \nonumber \\ \langle \Phi_0^*|\overline{H_N^{DC}}T^{(1)}|\Phi_0 \rangle &=& \langle \Phi_0^*|\overline{D}|\Phi_0 \rangle , \end{eqnarray} where we have used the relation $\bar{A} = e^{-T^{(0)}}Ae^{T^{(0)}}= ({A e^{T^{(0)}}})_{con}$ for an operator $A$ \cite{R. F. Bishop}. \subsection{CCM expression for polarizability} \, Using Eqs. (\ref{eq:psi_0}) and (\ref{eq:psi_1}), the expression for the polarizability in the CCM can be written as \cite{R. F. Bishop}
\begin{eqnarray} \alpha &=& \frac{1}{\lambda}\,\frac{\langle \Psi_0|D|\Psi_0 \rangle} {\langle \Psi_0|\Psi_0 \rangle } = 2\frac{\langle \Psi_0^{(0)}|D|\Psi_0^{(1)}\rangle}{\langle \Psi_0^{(0)}|\Psi_0^{(0)} \rangle }\nonumber \\ &=& 2\langle \Phi_0|({D^{(0)}} {T^{(1)}})_{con}|\Phi_0 \rangle, \label{eq:alpha_ccm} \end{eqnarray} where we define ${D^{(0)}} = e^{{T^{(0)}}^{\dagger}}D e^{T^{(0)}}$. In the above equation, we use the connected form of the expectation value for a closed-shell atom \cite{I. Shavitt}, which is non-terminating. Therefore, in order to calculate the expectation value given in Eq. (\ref{eq:alpha_ccm}), we have used a self-consistent coupled-cluster approach in which the combined power of ${T^{(0)}}^{\dagger}$ and $T^{(0)}$ is systematically increased until the result for $\alpha$ converges. \subsection{Unperturbed wave function of the Normal Coupled Cluster Method} Using the NCCM ansatz, the bra state corresponding to the unperturbed ket $|\Psi_0^{(0)}\rangle$ can be written as \begin{equation} \langle \widetilde{\Psi}_0^{(0)}| = \langle \Phi_0 |(1+\widetilde{T}^{(0)})e^{-T^{(0)}}, \label{eq:psi_chi} \end{equation} where $T^{(0)}$ contains the excitation operators defined earlier and $\widetilde{T}^{(0)}$ is the sum of de-excitation operators, analogous to ${T^{(0)}}^{\dagger}$. Using Eqs. (\ref{eq:psi_0}) and (\ref{eq:psi_chi}), we get \begin{eqnarray} \langle \widetilde{\Psi}_0^{(0)}|\Psi_0^{(0)} \rangle &=& \langle \Phi_0|(1+\widetilde{T}^{(0)})e^{-T^{(0)}}e^{T^{(0)}}|\Phi_0\rangle \nonumber \\ &=& \langle \Phi_0|\Phi_0\rangle \nonumber \\ &=& 1. \end{eqnarray} Using the above bra state, the expectation value of a general one-body operator $\hat{A}$ can be expressed as \begin{equation} \langle \hat{A} \rangle = \langle \Phi_0 | (1+\widetilde{T}^{(0)})e^{-T^{(0)}} \hat{A} e^{T^{(0)}} | \Phi_0\rangle. \label{eq:aki} \end{equation} The presence of $e^{-T^{(0)}} \hat{A} e^{T^{(0)}}$ ensures that the expression on the right-hand side of Eq. (\ref{eq:aki}) terminates. An important attribute of the NCCM is that it satisfies the Hellmann-Feynman theorem \cite{R. F. Bishop}. To obtain the $\widetilde{T}^{(0)}$ amplitudes, we solve the following equation: \begin{eqnarray} \langle \Phi_0|(1+\widetilde{T}^{(0)})[(He^{T^{(0)}})_{con} , C^+_I]|\Phi_0 \rangle = 0, \end{eqnarray} where we express $T^{(0)} = \sum_{I=1}^{N_e} t_I^{(0)} C_I^{+} $, with $t_I^{(0)}$ the excitation amplitudes and $C_I^{+}$ a string of creation and annihilation operators corresponding to a given level of particle-hole excitation \cite{R. F. Bishop}. \subsection{First-order perturbed wave function for the NCCM} \, In analogy with $T^{(1)}$, we express the perturbed bra state as \begin{eqnarray} \langle \widetilde{\Psi}_0 | &=&\langle \Phi_0 | (1+\widetilde{T}^{(0)} + \lambda \widetilde{T}^{(1)})e^{-T^{(0)}-\lambda T^{(1)}}. \label{eq:nccm_psi} \end{eqnarray} In the above expression only terms up to linear order in $\lambda$ are retained, and $ \widetilde{T}^{(1)}$ is given by $ \widetilde{T}^{(1)} = \sum_{I=1}^{N_e} \widetilde{t}_I^{(1)} C_I$. To obtain the amplitudes of $\widetilde{T}^{(1)}$, we solve the following equations: \begin{eqnarray} \langle \Phi_0|[\widetilde{T}^{(1)}, \overline{H_N} ]|\Phi_0^*\rangle + \langle \Phi_0|(1+\widetilde{T}^{(0)} )\overline{H_N}|\Phi^*_0 \rangle \nonumber \\ = - { \langle \Phi_0| [\overline{H_N},(1+\widetilde{T}^{(0)} )T^{(1)}] | \Phi^*_0 \rangle } , \end{eqnarray} where $\overline{H_N} = e^{-T^{(0)}}H_N e^{T^{(0)}}$.
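Both the unperturbed and the perturbed amplitude equations above are solved with the Jacobi-type iteration mentioned in connection with Eq.~(\ref{eq:jaco}). The Python sketch below shows the structure of such an iteration on a generic, diagonally dominant linear system; in the actual coupled-cluster equations the role of the diagonal is played by the orbital-energy denominators, and the random matrix here is merely a stand-in.

\begin{verbatim}
import numpy as np

# Minimal sketch of a Jacobi-type iteration: solve A t = b by repeatedly
# dividing the residual by the diagonal of A. In coupled-cluster theory
# the diagonal corresponds to the orbital-energy denominators
# (eps_i + eps_j - eps_a - eps_b); A below is a generic stand-in.
rng = np.random.default_rng(0)
n = 50
A = 0.1 * rng.normal(size=(n, n)) + np.diag(rng.uniform(5.0, 10.0, size=n))
b = rng.normal(size=n)

t = np.zeros(n)
diag = np.diag(A).copy()
for it in range(200):
    residual = b - A @ t
    if np.linalg.norm(residual) < 1e-10:
        break
    t += residual / diag          # Jacobi update
print(it, np.linalg.norm(A @ t - b))   # converged residual norm
\end{verbatim}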
\subsection{NCCM expression for polarizability} Using Eqs. (\ref{eq:psi_def}) and (\ref{eq:nccm_psi}), the NCCM expression for the polarizability can be written as \begin{eqnarray} \alpha &=& \langle \widetilde{\Psi}^{(0)}_0 |D| \Psi^{(1)}_0 \rangle + \langle \widetilde{\Psi}^{(1)}_0 |D| \Psi^{(0)}_0 \rangle \nonumber \\ &=& \langle \Phi_0|\widetilde{T}^{(1)} \overline{D}|\Phi_0\rangle + \langle \Phi_0 | (1+\widetilde{T}^{(0)})\overline{D}T^{(1)} |\Phi_0 \rangle, \label{eq:polarizability} \end{eqnarray} where we have used the relations ${T^{(n)}}^{\dagger}|\Phi_0 \rangle = 0 $ and $\langle \Phi_0 | {T^{(n)}}=0$ for integer $n$. It is clear from the above expression that the polarizability terminates naturally. The NCCM is more versatile than another coupled-cluster approach to properties, proposed by Monkhorst \cite{Int. J}. The calculation of atomic polarizabilities by the latter method is less straightforward than by the NCCM, as it entails the computation of the double derivative of the energy with respect to the electric field; this, in turn, requires knowledge of complicated perturbed coupled-cluster amplitudes \cite{A. Shukla}. \begin{table*}[t] \caption{The $\alpha_0$ and $\beta_0$ parameters of the GTOs used in the present calculations.} \begin{tabular}{c||ccccccccc} \hline\hline \\ Orbital & $s_{1/2}$ & $p_{1/2}$ & $p_{3/2}$ & $d_{3/2}$ & $d_{5/2}$ & $f_{5/2}$ & $f_{7/2}$ & $g_{7/2}$ & $g_{9/2}$ \\\\ \hline \\ $\alpha_0$ & 0.020422 & 0.042695 & 0.042695 & 0.024227 & 0.024227 & 0.00084 & 0.00084 & 0.0082 & 0.0082 \\\\ $\beta_0$ & 2.016 & 2.025 & 2.025 & 2.02 & 2.02 & 2.25 & 2.25 & 2.23 & 2.23\\\\ \hline\hline \end{tabular} \label{tab:alpha_beta} \end{table*} \subsection{Error estimate from triples excitations}\label{sec:Error} In the present work, the contributions to the polarizability of atomic Xe from three-particle-three-hole (triples) and higher-order excitations have not been included. In order to estimate the size of these neglected effects, we define the following approximate triples RCC amplitudes in a perturbative manner: \begin{equation} T_3^{(0),pert}= \frac{1}{3!}\sum_{ijk , abc} \frac{ ( H_0^{DC} T_2^{(0)})_{ijk}^{abc }}{{\epsilon}_i + {\epsilon}_j+{\epsilon}_k-{\epsilon}_a -{\epsilon}_b -{\epsilon}_c} \label{eq:t30} \end{equation} and \begin{eqnarray} T_3^{(1),pert}= \frac{1}{3!}\sum_{{ijk,abc}} \frac{ ( H_0^{DC} T_2^{(1)})_{ijk}^{abc}}{\epsilon_i+ \epsilon_j+\epsilon_k-\epsilon_a-\epsilon_b -\epsilon_c}, \label{eq:t31} \end{eqnarray} with the subscripts $i,j,k$ and $a,b,c$ denoting occupied and unoccupied orbitals, respectively, and $\epsilon$ representing the orbital energies. The contribution of $T_3^{(0),pert}$ will be larger than that of $T_3^{(1),pert}$, as $T_2^{(0)}$ contains physical effects arising at lower orders of perturbation. In a similar way, the $T_1^{(1)}$ contributions will dominate over those from $T_1^{(0)}$. Based on these considerations, the dominant uncertainty due to the neglected triples excitations is estimated by evaluating the expression \begin{equation} \Delta \alpha = 2 \langle \Phi_0 | T_3^{\dagger (0),pert} D T_2^{(0)} T_1^{(1)} | \Phi_0 \rangle. \end{equation} \section{Results and Discussion} In atomic relativistic many-body calculations, the commonly used basis sets are Gaussian-type orbitals (GTOs). In our present work on the polarizability of the xenon atom, we use a two-parameter Fermi nuclear distribution \cite{M. K. Advani}.
For a finite-size nucleus, the GTOs can represent the natural behavior of the relativistic wave functions \cite{Ishikawa Y}. The radial part of the relativistic wave function expanded in GTOs is given by \begin{equation} G_{k}^{L/S} = C_{k}^{L/S} r^{k} e^{-\alpha_k r^2}, \end{equation} where the index $k = 0,1,2,\cdots$ corresponds to $s,p,d,\cdots$ type orbital symmetries, and the superscript $L$ ($S$) denotes the large (small) component of the relativistic wave function. Using the kinetic balance condition, we obtain the radial part of the small component of the wave function from the large component \cite{K. G. Dyall}. We have considered 9 relativistic symmetries in the present calculations, with 40 basis functions for $s_{1/2}$, 39 for both $p_{1/2}$ and $p_{3/2}$, 38 for both $d_{3/2}$ and $d_{5/2}$, 37 for both $f_{5/2}$ and $f_{7/2}$, and 36 for both $g_{7/2}$ and $g_{9/2}$ symmetries. We have used the even-tempered condition, for which the exponents can be expressed as $\alpha_i = \alpha_0 \beta_0^{i-1}$ \cite{ALPHA}. In our calculation, the values of $\alpha_0$ and $\beta_0$ are unique for orbitals of a given symmetry. The accuracies of the DF and CCM results depend on these values (especially $\beta_0$). The DF equations in matrix form are solved for given values of these two parameters, which are suitably varied so that the energies and the expectation values of $r$, $1/r$, and $1/r^2$ of the occupied orbitals match those obtained from the numerical GRASP2 code \cite{K.G.Dyall et al}. Keeping this value of $\alpha_0$ fixed, the optimal value of $\beta_0$ is obtained by minimizing the DF energy, as the latter is derived from the Rayleigh-Ritz variational principle. This leads to \begin{equation} \frac{\partial E_{DF}}{\partial\beta_0} = 0, \end{equation} where $E_{DF}$ is the total energy at the DF level. In the present work we have carried out the aforementioned minimization using the gradient descent method \cite{https://www.benfrederickson.com/numerical-optimization/}. The $\alpha_0$ and $\beta_0$ values obtained from this approach are listed in Table \ref{tab:alpha_beta}. \begin{table}[t] \caption{Results for the static dipole polarizability of $^{129}$Xe in $[ e a_0^3 ]$.} \begin{tabular}{clcll} & Method & Our work & Others \\ \hline & DF & 26.865 & 26.87 \cite{arXiv:1710.10946v1}, 26.918 \cite{singh}, 26.97 \cite{Latha} \\ & CPDF & 26.973 & 26.98 \cite{arXiv:1710.10946v1}, 26.987 \cite{singh}, 27.7 \cite{Latha} \\ & LPRCCSD \footnotemark [1] & & 26.432 \cite{Chattopadhyay} \\ & RCCSD(SC) & 28.115 & 28.13 \cite{arXiv:1710.10946v1} \\ & RNCCSD & 27.508 & \\ & Experiment & 27.815(27) \cite{Hohm U} & \\ \hline \end{tabular} \footnotetext[1]{Linearized perturbed RCCSD} \label{tb:result} \caption{Contributions to the polarizability of $^{129}$Xe in $[ e a_0^3 ]$ from different terms in the RCCSD.} \begin{tabular}{l|c} \hline Leading contributions & $\alpha$ \\ \hline $(D {T^{(1)}_1} + c.c. )_{con} $ & 30.416 \\ $ ({T_1^{(0)}}^{\dagger} D T^{(1)}_1 + c.c.)_{con}$ & $-$0.376 \\ $ ({T_1^{(0)}}^{\dagger}D T_2^{(1)} + c.c.)_{con} $ & 0.115 \\ $( {T^{(0)}_2}^{\dagger} D T^{(1)}_1+ c.c. )_{con}$ & $-$3.408 \\ $({T_2^{(0)}}^{\dagger}D T_2^{(1)} + c.c.)_{con}$ & 1.268 \\ \hline \end{tabular} \label{tb:cont} \end{table} \begin{figure} \includegraphics[width=8.5cm, height=8.0cm]{Diagram_alpha.pdf} \caption{Decomposition of the $DT_1^{(1)}$ coupled-cluster diagram into DF and many-body perturbation theory diagrams.
Here, $D$ and $H^{(DC)}_0$ refer to the dipole operator and the Dirac-Coulomb (DC) Hamiltonian, which are shown as dotted and dashed lines, respectively. } \label{fig:t_1} \end{figure} We have performed our polarizability calculations for $^{129}$Xe in the relativistic self-consistent CCSD (RCCSD(SC)) framework and also, separately, using the relativistic NCCSD (RNCCSD). The idea behind the first approach was stated briefly in the previous section. In order to make it more transparent, we express Eq. (\ref{eq:alpha_ccm}) as \begin{eqnarray} \alpha &=& 2\langle \Phi_0|({D^{(0)}} {T^{(1)}})_{con}|\Phi_0 \rangle \nonumber \\ &=& 2 \langle \Phi_0 | [ (D+( D T^{(0)}+ c.c) +\cdots) {T^{(1)}} ]_{con} |\Phi_0\rangle \label{eq:alpha_m} \end{eqnarray} in increasing powers of $T^{(0)}$. In the self-consistent method, $\alpha$ is calculated by successively increasing the combined powers of ${T^{(0)}}^{\dagger}$ and $T^{(0)}$ until self-consistency is achieved. The result of the calculations by this method is given in Table \ref{tb:result}. The leading contributions from the terms in Eq. (\ref{eq:alpha_m}) are listed in Table \ref{tb:cont}. In Fig. \ref{fig:t_1}, $D T^{(1)}_1$ has been decomposed in terms of the DF and some low-order many-body perturbation theory diagrams. This illustrates that a CCM diagram subsumes diagrams corresponding to different physical effects to all orders in perturbation of the residual Coulomb interaction. Figs. \ref{fig:t_1}(b) and \ref{fig:t_1}(f) represent typical core-polarization and pair-correlation effects, respectively. From the viewpoint of many-body physics, the terms in Table \ref{tb:cont} correspond to various kinds of interplay between the core-polarization and pair-correlation effects. The relativistic coupled Hartree-Fock method, i.e., the coupled-perturbed Dirac-Fock (CPDF) method, contains the core-polarization effects to all orders in the residual Coulomb interaction. Our DF and CPDF results are given in Table \ref{tb:result} and compared with those of other calculations that were carried out using the same approximations. They are in very good agreement with the results of Refs. \cite{arXiv:1710.10946v1} and \cite{singh}. However, our CPDF result differs from that of Ref. \cite{Latha} by about two and a half percent. The reason for this seems to be the different number of basis functions, and the different values of the parameters in them, chosen for the two calculations. All the polarizability results given in this paper are in atomic units $[ e a_0^3 ]$. In Table \ref{tb:result}, we also give the results of different full-fledged relativistic coupled-cluster calculations. Our RCCSD(SC) result is very close to that of another calculation using the same method \cite{arXiv:1710.10946v1}, but with somewhat different single-particle GTO basis functions. The result of our RNCCSD calculation is also given in Table \ref{tb:result}. The dominant contributions to $\alpha$ come from $DT^{(1)}_1$ and $\widetilde{T}^{(1)}_1D$, which arise from $\overline{D}T^{(1)}$ and $\widetilde{T}^{(1)} \overline{D}$, respectively. These values are 15.208 a.u. $(DT^{(1)}_1)$ and 13.180 a.u. $(\widetilde{T}^{(1)}_1D)$. The remaining contribution ($-0.88$ a.u.) is due to higher-order correlation effects present in the terms given in Eq.~(\ref{eq:polarizability}). The differences in the contributions between the individual terms of the RCCSD(SC) and their counterparts in the RNCCSD are not negligible.
However, the final results for the two methods given in Table \ref{tb:result} differ by only two percent. Both of them are in reasonable agreement with an earlier calculation using the RCCSD method, which took into account only lower-order ${T^{(0)}}^{\dagger}$ and $T^{(0)}$ terms and for which the result is 27.744 a.u. \cite{singh}. But they differ from a calculation based on a linearized perturbed relativistic coupled-cluster singles and doubles (LPRCCSD) approach \cite{Chattopadhyay} by about 5\%. An important reason for this appears to be the non-inclusion, in the latter work, of the correlation effects characterized by the nonlinear terms in the RCC wave function. We identify the three-particle-three-hole (triples) excitations and the Breit interaction \cite{.P Grant} as the major sources of uncertainty in our polarizability calculations. The error due to the former can be estimated by calculating the perturbative triples excitations as explained earlier in Sec. \ref{sec:Error}; the absolute value of this contribution was found in the present case to be 0.105 a.u. Given the closeness of the values of the $^{129}$Xe polarizability at the CPDF and the different coupled-cluster levels (see Table \ref{tb:result}), the Breit contribution for the latter cases can be estimated by calculating the contribution of this interaction in the CPDF approximation; the absolute value obtained for it is 0.051 a.u. The net uncertainty estimated for the $^{129}$Xe polarizability calculated by the two variants of RCC theory employed in our present work comes from the two above-mentioned uncertainties, whose combined absolute value is 0.156 a.u. for RCCSD(SC). It is reasonable to assume that the uncertainties associated with our RCCSD(SC) and RNCCSD calculations are approximately of the same size, i.e., about 0.6\% of the total values in the two cases. \section{Conclusion} \, The results of our calculations of the electric dipole polarizability of $^{129}$Xe using the self-consistent relativistic coupled-cluster theory and the relativistic normal coupled-cluster theory have been presented and discussed. They are within two percent of each other and differ from the measured value by only about one percent. The role of correlation effects has been highlighted, and the neglected contributions of these effects, together with the higher-order relativistic effects, are estimated to be about 0.6\% of the total values for both relativistic coupled-cluster methods. The present work paves the way for high-precision studies of the electric dipole moment of $^{129}$Xe using the two above-mentioned relativistic coupled-cluster methods.
{ "timestamp": "2018-04-17T02:14:23", "yymm": "1804", "arxiv_id": "1804.05547", "language": "en", "url": "https://arxiv.org/abs/1804.05547" }
\section*{Acknowledgments} This research was supported in part by the National Science Foundation award IIS-1723943. We thank Brandon Araki and Kiran Vodrahalli for valuable discussions and helpful suggestions. We would also like to thank Kasper Green Larsen, Alexander Mathiasen, and Allan Gronlund for pointing out an error in an earlier formulation of Lemma~\ref{lem:order-statistic-sampling}. \subsection{Analytical Results for Section~\ref{sec:analysis_empirical} (Empirical Sensitivity)} \label{app:analysis_empirical} Recall that the sensitivity $\s[j]$ of an edge $j \in \mathcal{W}$ is defined as the maximum (approximate) relative importance over a subset $\SS \subseteq \PP \stackrel{i.i.d.}{\sim} {\mathcal D}^n$, $\abs{\SS} = n'$ (Definition~\ref{def:empirical-sensitivity}). We now establish a technical result that quantifies the accuracy of our approximations of edge importance. \subsection{Order Statistic Sampling} \begin{lemma} \label{lem:order-statistic-sampling} Let $C \ge 2$ be a constant and let ${\mathcal D}$ be a distribution with CDF $F(\cdot)$ satisfying $F(\nicefrac{M}{C}) \leq \exp(-1/K)$, where $K \in \Reals_+$ is a universal constant and $M = \min \{x \in [0,1] : F(x) = 1\}$. Let $\SS = \{X_1, \ldots, X_n\}$ be a set of $n = |\SS|$ independent and identically distributed (i.i.d.) samples drawn from ${\mathcal D}$, and let $X_{n+1} \sim {\mathcal D}$ be an additional i.i.d. sample. Then, \begin{align*} \Pr \left(C \, \max_{X \in \SS} X \leq X_{n+1} \right) \leq \exp(-n/K). \end{align*} \end{lemma} \begin{proof} Let $X_\mathrm{max} = \max_{X \in \SS} X$ and let ${\mathcal F}$ denote the failure event $C \, X_\mathrm{max} \leq X_{n+1}$. Then, \begin{align*} \Pr({\mathcal F}) &= \Pr(C \, X_\mathrm{max} \leq X_{n+1}) \\ &= \int_{0}^M \Pr(X_\mathrm{max} \leq \nicefrac{x}{C} \mid X_{n+1} = x) \, p(x) \, dx \\ &= \int_{0}^M \Pr\left(X \leq \nicefrac{x}{C} \right)^n \, p(x) \, dx &\text{since $X_1, \ldots, X_n$ are i.i.d.} \\ &= \int_{0}^M F(\nicefrac{x}{C})^n \, p(x) \, dx &\text{where $F(\cdot)$ is the CDF of $X \sim {\mathcal D}$} \\ &\leq F(\nicefrac{M}{C})^n \int_{0}^M p(x) \, dx &\text{by monotonicity of $F$} \\ &= F(\nicefrac{M}{C})^n \\ &\leq \exp(-n/K), &\text{by the CDF assumption,} \end{align*} and this completes the proof. \end{proof}
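The tail bound of Lemma~\ref{lem:order-statistic-sampling} is straightforward to verify numerically. In the Python sketch below we take ${\mathcal D} = \mathrm{Uniform}[0,1]$, for which $M = 1$ and $F(x) = x$, so that the CDF condition holds with, e.g., $C = 4$ and $K = 1$ (since $1/4 \leq e^{-1}$); these constants are illustrative choices, not the ones used by the algorithm.

\begin{verbatim}
import numpy as np

# Monte Carlo check of the order-statistic lemma for D = Uniform[0,1]:
# M = 1 and F(x) = x, so the condition F(M/C) <= exp(-1/K) holds with
# C = 4, K = 1. The lemma bounds Pr(C * max_i X_i <= X_{n+1}) by
# exp(-n/K); the exact value here is 1/(C^n (n+1)).
rng = np.random.default_rng(1)
C, K, n, trials = 4.0, 1.0, 3, 200_000

X = rng.uniform(size=(trials, n + 1))
failures = C * X[:, :n].max(axis=1) <= X[:, n]
print("empirical:", failures.mean())             # ~ 1/(4^3 * 4) = 0.0039
print("lemma bound exp(-n/K):", np.exp(-n / K))  # 0.0498
\end{verbatim}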
\subsection{Case of Positive Weights} In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. The subsequent subsection will then relax this assumption to conclude that a neuron's value can be approximated well even when the weights are not all positive. Let $C \ge 9K$ be a constant. We would like to apply Lemma~\ref{lem:order-statistic-sampling} in conjunction with Assumption~\ref{asm:cdf} to conclude that a subsample $\SS \subseteq \PP$ whose size is logarithmic in $1/\delta$ and $\eta \cdot \eta^*$ suffices to obtain $\Pr_{x \sim {\mathcal D}}(C \, \s < \gHat{x}) \leq \delta$. However, Assumption~\ref{asm:cdf} is defined with respect to the CDF of $\g{x}$, denoted by $\cdf{\cdot}$, and \emph{not} that of $\gHat{x}$, denoted by $\cdfHat{\cdot}$. To bridge this gap, we establish the following technical result, which relates the CDF of $\gHat{x}$ to that of $\g{x}$, provided that $\hat a^{\ell -1}(x) \in (1 \pm \epsilon) a^{\ell-1}(x)$. \begin{lemma} \label{lem:cdf-relationship} Let $\epsilon \in (0,1/2)$ and $\ell \in \br{2,\ldots,L}$. Let $\Input \sim {\mathcal D}$ be a randomly drawn point and assume that $\hat a^{\ell-1}(\Input) \in (1 \pm \epsilon) a^{\ell-1}(\Input)$. Then, for all $j \in \mathcal{W} \subseteq [\eta^{\ell-1}]$ and for any constant $\gamma \in [0,1]$, \begin{align*} \cdf{\gamma/3} \leq \cdfHat{\gamma} \leq \cdf{3\gamma}. \end{align*} \end{lemma} \begin{proof} Let $\Input \sim {\mathcal D}$ be a randomly drawn point and let $j \in \mathcal{W}$ be arbitrary. By the definitions of the CDF and of $\g{x}$, we have \begin{align*} \cdfHat{\gamma} &= \Pr \left( \gHat{x} \leq \gamma \right) \\ &= \Pr \Big( \gHatDef{x} \leq \gamma \Big) \\ &= \Pr \Big( \WWRowCon_j \, \hat a_{j} (\Input) \leq \gamma \, \sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(\Input) \Big) \\ &= \Pr \Big( \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input) + \gamma \WWRowCon_j \, \hat a_{j}(\Input) \Big). \end{align*} Define $\hat{\Sigma}_{(-j)} = \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input)$ and $\Sigma_{(-j)} = \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, a_{k}(\Input)$ for notational brevity. Note that since $\hat a^{\ell-1}(\Input) \in (1 \pm \epsilon) a^{\ell-1}(\Input)$, we have \begin{align*} \hat{\Sigma}_{(-j)} &= \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, \hat a_{k}(\Input) \\ &\leq (1 +\epsilon) \sum_{k \in \mathcal{W} \, : \, k \neq j} \WWRowCon_k \, a_{k}(\Input) \\ &= (1 + \epsilon) \Sigma_{(-j)}. \end{align*} Equipped with this inequality, we continue from above by rearranging the expression \begin{align*} \cdfHat{\gamma} &= \Pr \Big( (1 - \gamma) \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, \hat{\Sigma}_{(-j)} \Big) \\ &\leq \Pr \Big( (1 - \gamma) \WWRowCon_j \, \hat a_{j}(\Input) \leq \gamma \, (1 + \epsilon) \Sigma_{(-j)} \Big) \\ &\leq \Pr \Big( (1 - \gamma) (1- \epsilon) \WWRowCon_j \, a_{j}(\Input) \leq \gamma \, (1 + \epsilon) \Sigma_{(-j)} \Big), \end{align*} where in the last inequality we used the fact that $(1-\epsilon) a_{j} \leq \hat a_{j}$ by the assumption of the lemma. Moreover, since $\epsilon \in (0,1/2)$, the ratio satisfies $\nicefrac{1 + \epsilon}{1 - \epsilon} \leq 3$. Dividing both sides by $1 - \epsilon$ in the expression above and applying this inequality, we obtain \begin{align*} \cdfHat{\gamma} &\leq \Pr \Big( (1 - \gamma) \WWRowCon_j \, a_{j}(\Input) \leq 3 \gamma \, \Sigma_{(-j)} \Big) \\ &= \Pr \Big( \WWRowCon_j \, a_{j}(\Input) \leq 3 \gamma \Sigma_{(-j)} + \gamma \WWRowCon_j \, a_{j}(\Input) \Big) \\ &\leq \Pr \Big( \WWRowCon_j \, a_{j}(\Input) \leq 3 \gamma \left( \Sigma_{(-j)} + \WWRowCon_j \, a_{j}(\Input) \right)\Big) \\ &= \Pr(\g{x} \leq 3 \gamma), \end{align*} and this concludes the proof of the upper bound. The argument for the lower bound is symmetric: it uses the lower bound $\hat{\Sigma}_{(-j)} \ge (1 - \epsilon) \Sigma_{(-j)}$ and the upper bound $(1+\epsilon) a_{j}(\Input) \ge \hat a_{j}(\Input)$, in conjunction with the fact that $\nicefrac{1-\epsilon}{1 + \epsilon} \ge 1/3$ for $\epsilon \in (0,1/2)$. \end{proof}
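Lemma~\ref{lem:cdf-relationship} can likewise be probed empirically by perturbing the activations entry-wise by factors in $(1 \pm \epsilon)$ and comparing the empirical CDFs of $\g{x}$ and $\gHat{x}$. In the Python sketch below, the positive weights and lognormal activations are random stand-ins rather than quantities produced by an actual network.

\begin{verbatim}
import numpy as np

# Empirical probe of the CDF sandwich: with hat_a in (1 +/- eps) * a
# entry-wise, the CDF of ghat_j should satisfy
# F(gamma/3) <= Fhat(gamma) <= F(3*gamma). Weights and activations are
# random stand-ins, not produced by an actual network.
rng = np.random.default_rng(2)
eps, n_edges, n_points, j = 0.4, 20, 100_000, 0

w = rng.uniform(0.1, 1.0, size=n_edges)                  # positive weights
a = rng.lognormal(size=(n_points, n_edges))              # activations a_k(x)
a_hat = a * rng.uniform(1 - eps, 1 + eps, size=a.shape)  # (1 +/- eps) * a

g = (w[j] * a[:, j]) / (a @ w)            # relative importance g_j(x)
g_hat = (w[j] * a_hat[:, j]) / (a_hat @ w)

for gamma in [0.02, 0.05, 0.1]:
    F = lambda t: np.mean(g <= t)         # empirical CDF of g
    Fhat = np.mean(g_hat <= gamma)        # empirical CDF of ghat at gamma
    print(gamma, F(gamma / 3) <= Fhat <= F(3 * gamma))   # expect True
\end{verbatim}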
We now combine Lemmas~\ref{lem:order-statistic-sampling} and \ref{lem:cdf-relationship} to establish our main result of the section. \begin{theorem}[Empirical Sensitivity Approximation] \label{thm:sensitivity-approximation} Let $\epsilon \in (0,1/2)$, $\delta \in (0,1)$, and $\ell \in \br{2,\ldots,L}$, and consider a set $\SS = \{\Input_1, \ldots, \Input_n\} \subseteq \PP$ of size $|\SS| = \ceil*{\kPrime \logTerm }$ such that $\hat a^{\ell-1}(\Input') \in (1 \pm \epsilon) a^{\ell-1}(\Input')$ for all $\Input' \in \SS$. Then, $$ \Pr_{\Input \sim {\mathcal D}} \left(\exists{j \in \mathcal{W}} : C \, \s < \gHat{x} \right) \leq \frac{\delta |\mathcal{W}|} {4 \eta \, \eta^*}, $$ where $C = \Cdef \ge 9 K$ and $\mathcal{W} \subseteq [\eta^{\ell-1}]$. \end{theorem} \begin{proof} Consider an arbitrary $j \in \mathcal{W}$ and $x' \in \SS$ corresponding to $\g{x'}$ with CDF $\cdf{\cdot}$, and recall that $M = \min \{x \in [0,1] : \cdf{x} = 1\}$ as in Assumption~\ref{asm:cdf}. Let $\hat{M} = \min \{x \in [0,1] : \cdfHat{x} = 1\}$ be the analogous bound for the CDF associated with our relative importance approximation $\gHat{x}$. Invoking Lemma~\ref{lem:cdf-relationship}, we have $ \cdfHat{3 \, M} \geq \cdf{M} = 1$. Thus, $\hat{M} \leq 3 M$ by the definition of $\hat{M}$. Now, we have \begin{align*} \cdfHat{\nicefrac{\hat{M}}{C}} &\leq \cdfHat{\nicefrac{3 \, M}{C}} &\text{since $\hat{M} \leq 3 M$} \\ &\leq \cdf{\nicefrac{9 \, M}{C}} &\text{by the upper bound of Lemma~\ref{lem:cdf-relationship}} \\ &\leq \cdf{\nicefrac{M}{K}} &\text{since $C \ge 9 K$} \\ &\leq \exp(-1/K') &\text{by Assumption~\ref{asm:cdf}}. \end{align*} Thus, we have shown that the random variables $\gHat{x'}$ for $x' \in \SS$ satisfy the CDF condition required by Lemma~\ref{lem:order-statistic-sampling} (with constant $K'$). Invoking Lemma~\ref{lem:order-statistic-sampling}, we obtain \begin{align*} \Pr(C \, \s < \gHat{x} ) &\le \Pr \left(C \, \max_{\Input' \in \SS} \gHat{x'} \leq \gHat{x} \right) \\ &\leq \exp(-|\SS|/K'). \end{align*} Since our choice of $j \in \mathcal{W}$ was arbitrary, the bound applies to any $j \in \mathcal{W}$. Thus, we have by the union bound \begin{align*} \Pr(\exists{j \in \mathcal{W}} \,: C \, \s < \gHat{x}) &\leq \sum_{j \in \mathcal{W}} \Pr(C \, \s < \gHat{x} ) \\ &\leq \abs{\mathcal{W}} \exp(-|\SS|/K') \\ &\leq \left(\frac{|\mathcal{W}|}{\eta^*} \right) \frac{\delta}{4 \eta}, \end{align*} and this concludes the proof. \end{proof} In practice, the set $\SS$ referenced above is chosen to be a subset of the original data points, i.e., $\SS \subseteq \PP$ (see Alg.~\ref{alg:main}, Line~\ref{lin:s-construction}). We henceforth assume that the number of input points $|\PP|$ is large enough (or the specified parameter $\delta \in (0,1)$ is sufficiently large) so that $|\PP| \ge |\SS|$. \begin{lemma}[Empirical $\Delta_\neuron^{\ell}$ Approximation] \label{lem:delta-hat-approx} Let $\delta \in (0,1)$, $\lambda_* = \lambdamax$, and define $$ \DeltaNeuronHat = \DeltaNeuronHatDef, $$ where $\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTerm \right)$ and $\SS \subseteq \PP$ is as in Alg.~\ref{alg:main}. Then, $$ \Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] \leq \DeltaNeuronHat \right) \ge 1 - \frac{\delta}{4 \eta}.
$$ \end{lemma} \begin{proof} Define the random variables $\mathcal{Y}_{\Input'} = \E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']$ for each $\Input' \in \SS$ and consider the sum $$ \mathcal{Y} = \sum_{\Input' \in \SS} \mathcal{Y}_{\Input'} = \sum_{\Input' \in \SS} \left(\E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']\right). $$ Each random variable $\mathcal{Y}_{\mathbf{\Input}'}$ satisfies $\E[\mathcal{Y}_{\mathbf{\Input}'}] = 0$ and, by Assumption~\ref{asm:subexponential}, is subexponential with parameter $\lambda \leq \lambda_*$. Thus, $\mathcal{Y}$ is a sum of $|\SS|$ independent, zero-mean, $\lambda_*$-subexponential random variables, which implies that $\E[\mathcal{Y}] = 0$ and that we can readily apply Bernstein's inequality for subexponential random variables~\cite{vershynin2016high} to obtain, for $t \ge 0$, $$ \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \leq \exp \left(-|\SS| \, \min \left \{\frac{t^2}{4 \, \lambda_*^2}, \frac{t}{2 \, \lambda_*} \right\} \right). $$ Since $|\SS| = \ceil*{\kPrime \logTerm } \ge 2 \lambda_* \log \left(\logTermInside / \delta \right)$, we have for $t = \sqrt{2 \lambda_*}$, \begin{align*} \Pr \left(\E[\DeltaNeuron[\Input]] - \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] \ge t \right) &= \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \\ &\leq \exp \left( -|\SS| \frac{t^2}{4 \lambda_*^2} \right) \\ &\leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) \\ &= \frac{\delta}{8 \, \eta \, \eta^* }. \end{align*} Moreover, for a single $\Input \sim {\mathcal D}$, by the equivalent definition of a subexponential random variable~\cite{vershynin2016high}, we have for $u \ge 0$ $$ \Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left(-\min \left \{\frac{u^2}{4 \, \lambda_*^2}, \frac{u}{2 \, \lambda_*} \right\} \right). $$ Thus, for $u = 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$ we obtain $$ \Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) = \frac{\delta}{ 8 \, \eta \, \eta^* }. $$ Therefore, by the union bound, we have with probability at least $1 - \frac{\delta}{4 \eta \, \eta^*}$: \begin{align*} \DeltaNeuron[\Input] &\leq \E[\DeltaNeuron[\Input]] + u \\ &\leq \left(\frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} \DeltaNeuron[\Input'] + t \right) + u \\ &= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \left(\sqrt{2 \lambda_*} + 2 \lambda_* \, \log \left(\logTermInside / \delta \right) \right) \\ &= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \kappa \\ &\leq \DeltaNeuronHat, \end{align*} where the last inequality follows by the definition of $\DeltaNeuronHat$. Thus, by the union bound, we have \begin{align*} \Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] > \DeltaNeuronHat \right) &= \Pr \left(\exists{i \in [\eta^\ell]}: \DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\ &\leq \sum_{i \in [\eta^{\ell}]} \Pr \left(\DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\ &\leq \eta^{\ell} \left(\frac{\delta}{4 \eta \, \eta^*} \right) \\ &\leq \frac{\delta}{4 \, \eta}, \end{align*} where the last line follows from the definition of $\eta^* \ge \eta^{\ell}$.
\end{proof} \subsection{Amplification} \label{sec:analysis-amplification} In the context of Lemma~\ref{lem:neuron-approx}, define the relative error of a (randomly generated) row vector $\WWHatRowCon^\ell = \WWHatRowCon^{\ell +} - \WWHatRowCon^{\ell -} \in \Reals^{1 \times \eta^{\ell-1}}$ with respect to a realization $\mathbf{\Input}$ of a point $\Input \sim {\mathcal D}$ as $$ \err{\WWHatRowCon^\ell} = \left |\frac{\dotp{\WWHatRowCon^\ell}{ \hat{a}^{\ell-1}(\Point)}}{\dotp{\WWRowCon^\ell}{a^{\ell-1}(\Point)}} - 1 \right|. $$ Consider a set $\TT \subseteq \PP \setminus \SS$ of size $\abs{\TT}$ such that $\TT \stackrel{i.i.d.}{\sim} {\mathcal D}^{|\TT|}$, and let $$ \err[\TT]{\WWHatRowCon^\ell} = \frac{1}{|\TT|} \sum_{\Input \in \TT} \err[\Input]{\WWHatRow^\ell}. $$ When the layer $\ell$ is clear from the context, we will write $\err[\Input]{\WWHatRowCon^\ell}$ simply as $\err[\Input]{\WWHatRowCon}$. \begin{restatable}[Expected Error]{lemma}{lemexpectederror} \label{lem:expected-error} Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $i \in [\eta^\ell]$. Conditioned on the event $\mathcal{E}^{\ell-1}$, \textsc{CoreNet} generates a row vector $\WWHatRowCon = \WWHatRowCon^{ +} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ such that $$ \E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] \leq \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + k \epsilonLayer \right)}{\eta} \, \E_{\Input \sim {\mathcal D}} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right], $$ where $k = 2 \, (\ell -1)$. \end{restatable} We now state the advantageous effect of amplification, i.e., constructing multiple approximations for each neuron's incoming edges and then picking the best one, as formalized below. \begin{restatable}[Amplification]{theorem}{thmamplification} \label{thm:amplification} Given $\epsilon, \delta \in (0,1)$ such that $\frac{\delta}{\eta}$ is sufficiently small, let $\ell \in \br{2,\ldots,L}$ and $i \in [\eta^{\ell}]$. Let $\tau = \ceil*{\frac{\log(4 \, \eta / \delta)}{\log(10/9)}}$ and consider the reparameterized variant of Alg.~\ref{alg:main} where we instead have \begin{enumerate} \item $\SS \subseteq \PP$ of size $|\SS| \ge \ceil*{\logTermAmplif \kPrime}$, \item $\DeltaNeuronHat = \DeltaNeuronHatDef$ as before, but $\kappa$ is instead defined as $$ \kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTermAmplif \right), \qquad \text{and} $$ \item $m \ge \SampleComplexityAmplif$ in the sample complexity in \textsc{SparsifyWeights}. \end{enumerate} Among the $\tau$ approximations $(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau$ generated by Alg.~\ref{alg:sparsify-weights}, let $$ \WWHatRow^* = \argmin_{\WWHatRow^\ell \in \{(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau\}} \err[\TT]{\WWHatRow^\ell}, $$ and let $\TT \subseteq (\PP \setminus \SS)$ be a subset of points of size $|\TT| = \ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }$. Then, $$ \Pr_{\WWHatRow^*, \hat{a}^{\ell-1}(\cdot)} \left( \E_{\Input | \WWHatRow^*} \, [\err[\Input]{\WWHatRow^*} \, \mid \, \WWHatRow^*, \mathcal{E}^{\ell-1}] \leq k \epsilonLayer[\ell + 1] \, \mid \, \mathcal{E}^{\ell-1} \right) \ge 1 - \frac{\delta}{\eta}, $$ where $k = 2 \, (\ell -1)$. \end{restatable} \section{Analysis} \label{sec:analysis} In this section, we establish the theoretical guarantees of our neural network compression algorithm (Alg.~\ref{alg:main}).
The full proofs of all the claims presented in this section can be found in the Appendix. \subsection{Preliminaries} \label{sec:analysis_empirical} Let $\Input \sim {\mathcal D}$ be a randomly drawn input point. We explicitly refer to the pre-activation and activation values at layer $\ell \in \{2, \ldots, L\}$ with respect to the input $x \in \mathrm{supp}({\mathcal D})$ as $z^{\ell}(\Input)$ and $a^{\ell}(\Input)$, respectively. The values of $z^{\ell}(\Input)$ and $a^{\ell}(\Input)$ at each layer $\ell$ will depend on whether or not we compressed the previous layers $\ell' \in \{2, \ldots, \ell\}$. To formalize this interdependency, we let $\hat z^{\ell}(x)$ and $\hat a^{\ell}(x)$ denote the respective quantities of layer $\ell$ when we replace the weight matrices $W^2, \ldots, W^{\ell}$ in layers $2, \ldots, \ell$ by $\hat{W}^2, \ldots, \hat{W}^{\ell}$, respectively. For the remainder of this section (Sec.~\ref{sec:analysis}) we let $\ell \in \br{2,\ldots,L}$ be an arbitrary layer and let $i \in [\eta^\ell]$ be an arbitrary neuron in layer $\ell$. For purposes of clarity and readability, we will omit the variables denoting the layer $\ell \in \br{2,\ldots,L}$, the neuron $i \in [\eta^\ell]$, and the incoming edge index $j \in [\eta^{\ell-1}]$, whenever they are clear from the context. For example, when referring to the intermediate value of a neuron $i \in [\eta^\ell]$ in layer $\ell \in \br{2,\ldots,L}$, $z_i^\ell (\Input) = \dotp{\WWRow^\ell}{ a^{\ell-1}(\Input)} \in \Reals$ with respect to a point $\Input$, we will simply write $z(\Input) = \dotp{\WWRowCon}{a(\Input)} \in \Reals$, where $\WWRowCon := \WWRow^\ell \in \Reals^{1 \times \eta^{\ell -1}}$ and $a(\Input) := a^{\ell-1}(\Input) \in \Reals^{\eta^{\ell-1} \times 1}$. Under this notation, the weight of an incoming edge $j$ is denoted by $\WWRow[j] \in \Reals$. \subsection{Importance Sampling Bounds for Positive Weights} \label{sec:analysis_positive} In this subsection, we establish approximation guarantees under the assumption that the weights are positive. Moreover, we will also assume that the input, i.e., the activation from the previous layer, is non-negative (entry-wise). The subsequent subsection will then relax these assumptions to conclude that a neuron's value can be approximated well even when the weights and activations are not all positive and non-negative, respectively. Let $\mathcal{W} = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\} \subseteq [\eta^{\ell-1}]$ be the set of indices of incoming edges with strictly positive weights. To sample the incoming edges to a neuron, we quantify the relative importance of each edge as follows. \begin{definition}[Relative Importance] The importance of an incoming edge $j \in \mathcal{W}$ with respect to an input $\Input \in \supp$ is given by the function $\g{\Input}$, where $ \g{\Input} = \gDef{x} \quad \forall{j \in \mathcal{W}}. $ \end{definition} Note that $\g{x}$ is a function of the random variable $x \sim {\mathcal D}$. We now present our first assumption, which pertains to the Cumulative Distribution Function (CDF) of the relative importance random variable. \begin{assumption} \label{asm:cdf} There exist universal constants $K, K' > 0 $ such that for all $j \in \mathcal{W}$, the CDF of the random variable $\g{x}$ for $\Input \sim {\mathcal D}$, denoted by $\cdf{\cdot}$, satisfies $$ \cdf{\nicefrac{M_j}{K}} \leq \exp\left(-\nicefrac{1}{K'}\right), $$ where $M_j = \min \{y \in [0,1] : \cdf{y} = 1\}$.
\end{assumption} Assumption~\ref{asm:cdf} is a technical assumption on the ratio of the weighted activations that enables us to rule out pathological problem instances, in which the relative importance of each edge cannot be well-approximated using a small number of data points $\SS \subseteq \PP$. Henceforth, we consider a uniformly drawn (without replacement) subsample $\SS \subseteq \PP$ as in Line~\ref{lin:sample-s} of Alg.~\ref{alg:main}, where $\abs{\SS} = \ceil*{\kPrime \logTerm }$, and define the sensitivity of an edge as follows. \begin{definition}[Empirical Sensitivity] \label{def:empirical-sensitivity} Let $\SS \subseteq \PP$ be a subset of distinct points from $\PP \stackrel{i.i.d.}{\sim} {\mathcal D}^{n}$. Then, the sensitivity over positive edges $j \in \mathcal{W}$ directed to a neuron is defined as $ \s[j] \, = \, \max_{\Input \in \SS} \g{x}. $ \end{definition} Our first lemma establishes a core result that relates the weighted sum with respect to the sparse row vector $\WWHatRowCon$, $\sum_{k \in \mathcal{W}} \WWHatRowCon_k \, \hat a_{k}(x)$, to the value of the weighted sum with respect to the ground-truth row vector $\WWRowCon$, $\sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(x)$. We remark that there is randomness with respect to the randomly generated row vector $\WWHatRow^\ell$, a randomly drawn input $\Input \sim {\mathcal D}$, and the function $\hat{a}(\cdot) = \hat{a}^{\ell-1}(\cdot)$ defined by the randomly generated matrices $\hat{W}^2, \ldots, \hat{W}^{\ell-1}$ of the previous layers. Unless otherwise stated, we will henceforth use the shorthand notation $\Pr(\cdot)$ to denote $\Pr_{\WWHatRowCon^\ell, \, \Input, \, \hat{a}^{\ell-1}} (\cdot)$. Moreover, for ease of presentation, we will first condition on the event $\mathcal{E}_{\nicefrac{1}{2}}$ that $ \hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input) $ holds. This conditioning simplifies the preliminary analysis and will be removed in our subsequent results. \begin{restatable}[Positive-Weights Sparsification]{lemma}{lemposweightsapprox} \label{lem:pos-weights-approx} Let $\epsilon, \delta \in (0,1)$, and $\Input \sim {\mathcal D}$. \textsc{Sparsify}$(\mathcal{W}, \WWRowCon, \epsilon, \delta, \SS, a(\cdot))$ generates a row vector $\WWHatRowCon$ such that \begin{align*} \Pr \left(\sum_{k \in \mathcal{W}} \WWHatRowCon_k \, \hat a_{k}(x) \notin (1 \pm \epsilon) \sum_{k \in \mathcal{W}} \WWRowCon_k \, \hat a_{k}(x) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) &\leq \frac{3 \delta}{8 \eta}, \end{align*} where $ \nnz{\WWHatRowCon} \leq \SampleComplexity[\epsilon] $ and $S = \sum_{j \in \mathcal{W}} \s[j]$. \end{restatable} \subsection{Importance Sampling Bounds} \label{sec:analysis_sampling} We now relax the requirement that the weights are strictly positive and instead consider the following index sets that partition the weighted edges: $\mathcal{W}_+ = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\}$ and $\mathcal{W}_- = \{\edge \in [\eta^{\ell-1}]: \WWRow[\edge] < 0 \}$. We still assume that the incoming activations from the previous layers are positive (this assumption can be relaxed as discussed in Appendix~\ref{app:negative}). We define $\DeltaNeuron[\Input]$ for a point $\Input \sim {\mathcal D}$ and neuron $i \in [\eta^\ell]$ as $ \DeltaNeuron[\Input] = \DeltaNeuronDef[\Input].
$ The following assumption serves a similar purpose as does Assumption~\ref{asm:cdf}, in that it enables us to approximate the random variable $\DeltaNeuron[\Input]$ via an empirical estimate over a small-sized sample of data points $\SS \subseteq \PP$. \begin{assumption}[Subexponentiality of ${\DeltaNeuron[\Input]}$] \label{asm:subexponential} There exists a universal constant $\lambda > 0$, $\lambda < K'/2$,\footnote{where $K'$ is as defined in Assumption~\ref{asm:cdf}} such that for any layer $\ell \in \br{2,\ldots,L}$ and neuron $i \in [\eta^\ell]$, the centered random variable $\Delta = \DeltaNeuron[\Input] - \E_{\Input \sim {\mathcal D}}[\DeltaNeuron[\Input]]$ is subexponential~\citep{vershynin2016high} with parameter $\lambda$, i.e., $ \E[\exp \left(s \Delta \right)] \leq \exp(s^2 \lambda^2) \quad \forall{|s| \leq \frac{1}{\lambda}}$. \end{assumption} For $\epsilon \in (0,1)$ and $\ell \in \br{2,\ldots,L}$, we let $\epsilon' = \frac{\epsilon}{\epsilonDenomContant \, (L-1)}$ and define $ \epsilonLayer[\ell] = \epsilonLayerDef = \epsilonLayerDefWordy, $ where $\DeltaNeuronHat = \DeltaNeuronHatDef$. To formalize the interlayer dependencies, for each $i \in [\eta^\ell]$ we let $\mathcal{E}^\ell_i$ denote the (desirable) event that $\hat{z}_i^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}_i (\Input)$ holds, and let $\mathcal{E}^\ell = \cap_{i \in [\eta^\ell]} \, \mathcal{E}_{i}^\ell$ be the intersection over the events corresponding to each neuron in layer $\ell$. \begin{restatable}[Conditional Neuron Value Approximation]{lemma}{lemneuronapprox} \label{lem:neuron-approx} Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, $i \in [\eta^\ell]$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a row vector $\WWHatRow^\ell = \WWHatRow^{\ell +} - \WWHatRow^{\ell -} \in \Reals^{1 \times \eta^{\ell-1}}$ such that \begin{align} \label{eq:neuronapprox} \Pr \big(\, \mathcal{E}_i^\ell\, \, \mid \, \mathcal{E}^{\ell -1}\big) = \Pr \left( \hat{z}_i^\ell(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z_i^\ell(\Input) \, \mid \, \mathcal{E}^{\ell -1} \right) \ge 1 - \nicefrac{\delta}{\eta}, \end{align} where $\epsilonLayer = \epsilonLayerDef$ and $ \nnz{\WWHatRow^\ell} \leq \SampleComplexity + 1, $ where $S = \sum_{j \in \Wplus} \s[j] + \sum_{j \in \Wminus} \s[j]$. \end{restatable} The following core result establishes unconditional layer-wise approximation guarantees and culminates in our main compression theorem. \begin{restatable}[Layer-wise Approximation]{lemma}{lemlayer} \label{lem:layer} Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a sparse weight matrix $\hat{W}^\ell \in {\REAL}^{\eta^\ell \times \eta^{\ell-1}}$ such that, for $\hat{z}^\ell(\Input) = \hat{W}^\ell \hat a^{\ell-1}(\Input)$, $$ \Pr_{(\hat{W}^2, \ldots, \hat{W}^\ell), \, \Input } (\mathcal{E}^{\ell}) = \Pr_{(\hat{W}^2, \ldots, \hat{W}^\ell), \, \Input } \left(\hat z^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^\ell (\Input) \right) \geq 1 - \frac{\delta \, \sum_{\ell' = 2}^{\ell} \eta^{\ell'}}{\eta}.
$$ \end{restatable} \begin{restatable}[Network Compression]{theorem}{thmmain} \label{thm:main} For $\epsilon, \delta \in (0, 1)$, Algorithm~\ref{alg:main} generates a set of parameters $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ of size \begin{align*} \size{\hat{\theta}} &\leq \sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} \left( \ceil*{\frac{32 \, (L-1)^2 \, (\DeltaNeuronHatLayers)^2 \, S_\neuron^\ell \, \kmax \, \log (8 \, \eta / \delta) }{\epsilon^2}} + 1\right) \end{align*} in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right)$ time such that $\Pr_{\hat{\theta}, \, \Input \sim {\mathcal D}} \left(f_{\paramHat}(x) \in (1 \pm \epsilon) f_\param(x) \right) \ge 1 - \delta$. \end{restatable} We note that we can obtain a guarantee for a set of $n$ randomly drawn points by invoking Theorem~\ref{thm:main} with $\delta' = \delta / n$ and union-bounding over the failure probabilities, while only increasing the sampling complexity logarithmically, as formalized in Corollary~\ref{cor:generalized-compression}, Appendix~\ref{app:analysis_sampling}. \ificlr \else \input{analysis_amplification} \fi \subsection{Generalization Bounds} As a corollary to our main results, we obtain novel generalization bounds for neural networks in terms of empirical sensitivity. Following the terminology of~\cite{arora2018stronger}, the expected margin loss of a classifier $f_\param:\Reals^d \to \Reals^k$ parameterized by $\theta$ with respect to a desired margin $\gamma > 0$ and distribution ${\mathcal D}$ is defined by $ L_\gamma(f_\param) = \Pr_{(x,y) \sim {\mathcal D}_{\mathcal{X},\mathcal{Y}}} \left(f_\param(x)_y \leq \gamma + \max_{i \neq y} f_\param(x)_i\right)$. We let $\hat{L}_\gamma$ denote the empirical estimate of the margin loss. The following corollary follows directly from the argument presented in~\cite{arora2018stronger} and Theorem~\ref{thm:main}. \begin{corollary}[Generalization Bounds] \label{cor:generalization-bounds} For any $\delta \in (0,1)$ and margin $\gamma > 0$, Alg.~\ref{alg:main} generates weights $\hat{\theta}$ such that with probability at least $1 - \delta$, the expected error $L_0(f_{\paramHat})$ with respect to the points in $\PP \subseteq \mathcal{X}$, $|\PP| = n$, is bounded by \begin{align*} L_0(f_{\paramHat}) &\leq \hat{L}_\gamma(f_\param) + \widetilde{\Bigo} \left(\sqrt{\frac{\max_{\Input \in \PP} \norm{f_\param (x)}_2^2 \, L^2 \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \sum_{i=1}^{\eta^\ell} S_\neuron^\ell }{\gamma^2 \, n}} \right). \end{align*} \end{corollary} \subsection{Analytical Results for Section~\ref{sec:analysis-amplification} (Theorem~\ref{thm:amplification}, Amplification)} \label{app:analysis-amplification} In the context of the notation introduced in Section~\ref{sec:analysis-amplification}, recall that the relative error of a (randomly generated) row vector $\WWHatRowCon := \WWHatRowCon^{\ell} = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ with respect to a point $\Input \in \supp$ is defined as \begin{align*} \err[\Input]{\WWHatRowCon} &= \errDef = \abs{\frac{\dotp{\WWHatRowCon^\ell}{ \hat{a}^{\ell-1}(\Point)}}{\dotp{\WWRowCon^\ell}{a^{\ell-1}(\Point)}} - 1}, \end{align*} where $\WWHatRowCon$ is shorthand for $\WWHatRow^\ell$ as before. Similarly define \begin{align*} \errPlus &= \errPlusDef = \abs{\errRatioPlus - 1} \quad \text{and} \\ \errMinus &= \errMinusDef = \abs{\errRatioMinus - 1}.
\end{align*} The following lemma establishes the expected performance of a randomly constructed coreset with respect to the distribution of points ${\mathcal D}$ conditioned on coreset constructions for previous layers $(\hat{W}^2, \ldots, \hat{W}^{\ell-1})$ that define the realization $\mathbf{\hat a}(\cdot)$ of $\hat{a}^{\ell-1}(\cdot)$. \lemexpectederror* \begin{proof} For clarity of exposition, we will omit explicit references to the layer $\ell$ and neuron $i$, as they are assumed to be arbitrary. In the context of this definition, let $\WWHatRowCon = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$ and let $\Input \sim {\mathcal D}$. The proof outline is to bound the overall error term $\err{\WWHatRowCon}$ by bounding $\err{\WWHatRowCon^{+}}$ and $\err{\WWHatRowCon^{-}}$. Let $\mathcal{E}_\Delta(\Input)$ denote the event that the inequality $$ \DeltaNeuron[\Input] \leq \DeltaNeuronHat $$ holds, and recall that we condition on the event $\mathcal{E}^{\ell-1}$ occurring, as in the premise of the lemma. Let $k = 2 \, (\ell - 2)$ and let $\xi = k \, \epsilonLayer[\ell]$. We begin by observing that conditioned on the events $\mathcal{E}_\Delta$ and $\mathcal{E}^{\ell-1}$, for any constant $u \ge 0$, the inequality $$ \max \{\errPlus, \errMinus\} \leq \frac{u \, \xi}{1 + \xi } := \epsilon_* $$ implies that $$ \err{\WWHatRowCon} \leq k \, \epsilonLayer[\ell + 1] \left(u + 1 \right). $$ Henceforth, we will at times omit the variable $\Input$ when referring to the point-specific variables for clarity of exposition, with the understanding that the results hold for any arbitrary $\Input$. To see the previous implication explicitly, observe that conditioning on $\mathcal{E}^{\ell-1}$ implies that we have $\hat a^{\ell-1}(\Input) \in \left(1 \pm 2 \, (\ell - 2) \, \epsilonLayer[\ell] \right) a^{\ell-1} (\Input) = (1 \pm \xi) a^{\ell-1} (\Input)$, which yields by the triangle inequality \begin{align} \abs{\tilde z - z} &\leq \abs{\tilde z^+ - z^+} + \abs{\tilde z^- - z^-} \nonumber = \abs{ \sum_{k \in \Wplus} \WWRow[k] \, (\hat a_k - a_k)} + \abs{ \sum_{k \in \Wminus} (-\WWRow[k]) \, (\hat a_k - a_k)} \nonumber \\ &\leq \sum_{k \in \Wplus} \WWRow[k] \, \abs{\hat a_k - a_k} + \sum_{k \in \Wminus} (-\WWRow[k]) \, \abs{ \hat a_k - a_k} \nonumber \\ &\leq \sum_{k \in \Wplus} \WWRow[k] \, \xi \, a_k + \sum_{k \in \Wminus} (-\WWRow[k]) \, \xi \, a_k \nonumber \\ &=\xi \, (z^+ + z^-). \label{eqn:z-tilde-ineq} \end{align} Moreover, via a similar triangle-inequality type argument and the premise $\max \{\errPlus, \errMinus\} \leq \epsilon_*$, we obtain \begin{align} \abs{\hat z - \tilde z} &\leq \abs{\hat z^+ - \tilde z^+} + \abs{\hat z^- - \tilde z^-} \nonumber \\ &\leq \epsilon_* \left( \tilde z^+ + \tilde z^- \right) \nonumber \\ &\leq \epsilon_* ((1 + \xi)z^+ + (1 + \xi)z^-) &\text{By event $\mathcal{E}^{\ell-1}$} \nonumber \\ &= \epsilon_* \, (1 + \xi) (z^+ + z^-).
\label{eqn:z-bar-ineq} \end{align} Combining the inequalities~\eqref{eqn:z-tilde-ineq} and \eqref{eqn:z-bar-ineq}, we obtain \begin{align*} \abs{\hat z - z} &\leq \abs{\hat z - \tilde z} + \abs{\tilde z - z} \\ &\leq \epsilon_* (1 + \xi) \, (z^+ + z^-) + \xi (z^+ + z^-) \\ &= |z| \, \DeltaNeuron[\Input] \left( \epsilon_* \, ( 1 + \xi) + \xi \right) \\ &= |z| \, \DeltaNeuron[\Input] \left(u \xi + \xi \right) \\ &\leq |z| \, \DeltaNeuronHat \xi \left(u + 1 \right) &\text{By event $\mathcal{E}_\Delta$} \\ &= |z| \, \DeltaNeuronHat k \, \epsilonLayer \, \left(u + 1 \right) &\text{By definition of $\xi = k \epsilonLayer$} \\ &= |z| \, k \, \epsilonLayer[\ell + 1] \left(u + 1 \right), &\text{By definition of $\epsilonLayer[\ell + 1] = \DeltaNeuronHat \, \epsilonLayer$} \end{align*} and dividing both sides by $|z|$ yields the bound on $\err{\WWHatRowCon}$. Let ${\mathcal Z} \subseteq \supp$ denote the set of \emph{well-behaved} points, i.e., the set of points that satisfy the sensitivity inequality with respect to edges in both $\mathcal{W}_+$ and $\mathcal{W}_-$: $$ {\mathcal Z} = \left\{x' \in \supp \, : \, \gHat{x'} \leq C \s \quad \forall{j \in \mathcal{W}_+ \cup \mathcal{W}_-} \right \}, $$ and let $\mathcal{E}_{{\mathcal Z}(\Input)}$ denote the event $\Input \in {\mathcal Z}$. Let $\mathcal{E}_{\GG(\Input)} = \mathcal{E}_{{\mathcal Z}(\Input)} \cap \mathcal{E}_{\Delta(\Input)}$ denote the \emph{good} event that both of the events $\mathcal{E}_{{\mathcal Z}(\Input)}$ and $\mathcal{E}_{\Delta(\Input)}$ occur. Note that since $\err[\Input]{\WWHatRowCon} \ge 0$ for all $\Input$ and $\WWHatRowCon$, we obtain by the equivalent formulation of the expectation of non-negative random variables \begin{align} \E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ] &= \int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge v \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, dv \nonumber \\ &\leq k \, \epsilonLayer[\ell + 1] + \int_{k \, \epsilonLayer[\ell + 1]}^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge v \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, dv \nonumber \\ &= k \, \epsilonLayer[\ell + 1] \left( 1 + \int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, du \right) \label{eqn:err-integral} \end{align} where the last equality follows by the change of variable $v = k \, \epsilonLayer[\ell + 1] (u + 1)$. Recall that by the argument presented in the proof of Lemma~\ref{lem:pos-weights-approx}, we have by Bernstein's inequality for $t \ge 0$ \begin{align*} \Pr \left (\errPlus \ge t \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) &\leq 2 \exp \left(-\frac{3 t^2 m}{ S \, C \left(6 + 2 t \right)} \right) \\ &\leq 2 \exp \left(- \frac{8 \, \log(8 \eta / \delta) }{6 + 2 t} \cdot \left(\frac{t}{\epsilonLayer}\right)^2 \right) \\ &= 2 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right) \end{align*} where we define $$ a := \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2} $$ in the last equality, and where the second inequality follows by the definition of $m = \SampleComplexity$ and the fact that $C = 3 \, \kmax$. Via the same reasoning, we have for $\errMinus$: $$ \Pr \left (\errMinus \ge t \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 2 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right).
$$ Hence, combining the implication established at the beginning of the proof with the bounds established above, we invoke the union bound to obtain \begin{align*} \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) &\leq \Pr \left(\max \{\errPlus, \errMinus\} > \frac{u \, \xi}{1 + \xi } \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \\ &\leq \min \left \{ 4 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right), 1 \right\}, \end{align*} where $t = \frac{u \, \xi}{1 + \xi}$ and, as before, $a = \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2}$. From the expression above, we see that for a value of $t$ satisfying $$ t \ge \frac{2 \sqrt{a \log 8 + \log^2 2} + \log 4}{a}, $$ we have $ 4 \exp \left(- \frac{a \, t^2}{6 + 2 t} \right) \leq 1$. Bounding the expression above via elementary computations, we have \begin{align*} \frac{2 \sqrt{a \log 8 + \log^2 2} + \log 4}{a} &\leq \frac{2 \sqrt{2 a \log 8} + \log 4}{a} \\ &\leq \frac{3 \sqrt{2 a \log 8}}{a} \\ &\leq \frac{7}{\sqrt{a}} \\ &:= t^*. \end{align*} Now note that for $t \ge t^*$, we have \begin{align*} \exp \left(- \frac{a \, t^2}{6 + 2 t} \right) &\leq \exp \left(- \frac{a \, t^2}{6 \, (t / t^*)+ 2 t} \right) \\ &= \exp \left(- \frac{a \, t \, t^*}{6 + 2 t^*} \right). \end{align*} Let $$ b = \frac{\xi}{1 + \xi} $$ and recall that $t = \frac{u \, \xi}{1 + \xi} = ub$. Letting $$ u^* = \frac{t^*}{b} = \frac{7}{b \sqrt{a}}, $$ we reformulate the bound above in terms of $u$ and $u^*$, \begin{align*} \exp \left(- \frac{a \, t \, t^*}{6 + 2 t^*} \right) &= \exp \left(- \frac{a b^2 u^* \, u}{6 + 2 u^* b} \right) \\ &= \exp \left(- \left(\frac{a b^2 u^* }{6 + 2 u^* b}\right) u \right) \\ &= \exp(-c \, u), \end{align*} where $$ c = \frac{a b^2 u^* }{6 + 2 u^* b}. $$ This implies that for $u \ge u^*$, we have $$ \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 4 \exp(-c u), $$ and for $u \in [0, u^*]$, we trivially have $$ \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \leq 1. $$ Putting it all together, we bound the integral from \eqref{eqn:err-integral} as follows \begin{align*} \int_0^\infty \Pr \left(\err[\Input]{\WWHatRowCon} \ge k \epsilonLayer[\ell + 1] (u + 1) \given \condAmplif \cap \mathcal{E}^{\ell-1} \right) \, du &\leq \int_0^{u^*} 1 \, du + 4\, \int_{u^*}^\infty \exp(- c u) \, du \\ &= u^* + \frac{4 \exp(-c u^*)}{c} \\ &\leq u^* + \frac{4 \exp(-2)}{c} \\ &\leq u^* + \frac{80 \exp(-2)}{49} \, u^* \\ &\leq 2 u^*, \end{align*} where the first inequality follows from the definitions of $u^*$ and $c$, which imply that $u^* b= 7 / \sqrt{a}$ and so by straightforward simplification, \begin{align*} c \, u^* &= \left(\frac{ab (u^* b)}{6 + 2 (u^*b)}\right) \, u^* = \left(\frac{7 a b}{6 \sqrt{a} + 14}\right) \, u^* \\ &= \frac{49 \sqrt{a}}{6 \sqrt{a}+ 14} \\ &\ge \frac{49 \sqrt{a}}{6 \sqrt{a}+ 14\sqrt{a}} = \frac{49}{20} > 2, \end{align*} where we used the inequality $a = \frac{8 \log(8 \eta / \delta)}{\epsilonLayer^2} \ge 1$. This implies that $\exp(-cu^*) \leq \exp(-2)$. Similarly, the second inequality follows from the calculations above and the definition of $u^*$: \begin{align*} \frac{1}{c} &\leq \frac{20}{7 b \, \sqrt{a}} = \frac{20}{49} \, u^*.
\end{align*} Plugging this bound on the integral back into our bound on the conditional expectation \eqref{eqn:err-integral}, we establish \begin{align*} \E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ] &\leq k \, \epsilonLayer[\ell + 1] \left( 1 + 2 \, u^*\right). \end{align*} To bound the conditional expectation given the event $ (\mathcal{E}_{\GG(\Input)})^\mathsf{c}$, we first observe that since $\WWHatRowCon^+$ and $\WWHatRowCon^-$ are unbiased estimators, we have $$ \E[\dotp{\WWHatRowCon^+}{\cdot} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \mathbf{\Input}] = \dotp{\WWRowCon^+}{\cdot} \quad \text{and} \quad \E[\dotp{\WWHatRowCon^-}{\cdot} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \mathbf{\Input}] = \dotp{\WWRowCon^-}{\cdot}. $$ Moreover, note that conditioning on event $\mathcal{E}^{\ell-1}$ implies that for any $\Input \in \supp$ \begin{align*} \abs{\hat{z}(\Input)} &\leq \abs{\tilde z(\Input)} + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right), \end{align*} where $\xi = k \, \epsilonLayer[\ell]$ as before. Thus, invoking the triangle inequality and applying the definition of $\DeltaNeuron[\Input]$, we bound $\err[\Input]{\WWHatRowCon}$ as \begin{align*} \err[\Input]{\WWHatRowCon} &= \abs{\frac{\hat z(\Input)}{z(\Input)} - 1} \leq \abs{\frac{\hat z(\Input)}{z(\Input)}} + 1 \\ &\leq \DeltaNeuron[\Input] \, \frac{\abs{\tilde z(\Input)} + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right)}{z^+(\Input) + z^-(\Input)} + 1 \\ &\leq \DeltaNeuron[\Input] \, \frac{\tilde z^+(\Input) + \tilde z^-(\Input) + \xi \left(\tilde z^+ (\Input) + \tilde z^- (\Input) \right)}{z^+(\Input) + z^-(\Input)} + 1 \\ &= \left(1 + \xi \right) \DeltaNeuron[\Input] \, \frac{\tilde z^+ (\Input) + \tilde z^- (\Input)}{z^+(\Input) + z^-(\Input)} + 1. \end{align*} Since the above bound holds for any arbitrary $\Input$, we obtain by monotonicity of expectation, the law of iterated expectation, and the unbiasedness of our estimators \begin{align*} &\E[\err[\Input]{\WWHatRowCon} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} ] \\ &\quad \leq \left(1 + \xi \right) \, \E \left[ \DeltaNeuron[\Input] \, \frac{\tilde z^+ (\Input) + \tilde z^- (\Input)}{z^+(\Input) + z^-(\Input)} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\ &\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\frac{\DeltaNeuron[\Input]}{z^+(\Input) + z^-(\Input)} \, \E \left[\tilde z^+ (\Input) + \tilde z^- (\Input) \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} , \Input \right] \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\ &\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\frac{\DeltaNeuron[\Input]}{z^+(\Input) + z^-(\Input)} \, \left(z^+(\Input) + z^-(\Input) \right) \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\ &\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} \right] + 1 \\ &\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \mathcal{E}_{\GG(\Input)}^\mathsf{c} \right] + 1 \\ &\quad = \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] + 1 \\ &\quad \leq 2 \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right].
\end{align*} By the law of total expectation, we obtain \begin{align*} \E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] &= \underbrace{\E[\err[\Input]{\WWHatRowCon} \given \condAmplif \cap \mathcal{E}^{\ell-1} ]}_{=A} \Pr(\mathcal{E}_{\GG(\Input)}) + \underbrace{\E[\err[\Input]{\WWHatRowCon} \given \condAmplif^\compl \cap \mathcal{E}^{\ell-1} ]}_{=B} \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \\ &= A \, \left(1 - \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \right) + B \, \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \\ &\leq A_\mathrm{max} \, \left(1 - \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) \right) + B_\mathrm{max} \, \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}), \end{align*} where $A_\mathrm{max}$ and $B_\mathrm{max}$ are the upper bounds on the conditional expectations as established above: $$ A_\mathrm{max} = k \, \epsilonLayer[\ell + 1] \left( 1 + 2 \, u^*\right) \qquad \text{and} \qquad B_\mathrm{max} = 2 \left(1 + \xi \right) \, \E_{\Input} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right]. $$ We now bound $\Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c})$ by the union bound and applications of Lemma~\ref{lem:sensitivity-approximation} (twice, once for the positive and once for the negative weights) and Lemma~\ref{lem:delta-hat-approx}: \begin{align*} \Pr(\mathcal{E}_{\GG(\Input)}^\mathsf{c}) &\leq \Pr(\mathcal{E}_{{\mathcal Z}(\Input)}^\mathsf{c}) + \Pr(\mathcal{E}_{\Delta(\Input)}^\mathsf{c}) \\ &\leq \left( \frac{\delta}{8 \eta} + \frac{\delta}{8 \eta} \right) + \frac{\delta}{4 \eta} \\ &= \frac{\delta}{2 \eta}. \end{align*} Moreover, by the definitions of $a$, $\xi$, and $u^*$, we have \begin{align*} A_\mathrm{max} &= \xi \DeltaNeuronHat \left(1 + 2 u^*\right) \\ &= \DeltaNeuronHat \left(\xi + \frac{14 (1 + \xi)}{\sqrt{a}} \right) \\ &\leq \DeltaNeuronHat \left(\xi + \frac{5 \, \epsilonLayer \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right) \\ &= \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right). \end{align*} Putting it all together, we establish \begin{align*} \E[\err[\Input]{\WWHatRowCon} \, \mid \, \mathcal{E}^{\ell-1}] &\leq \left(1 - \frac{\delta}{2 \eta}\right) A_\mathrm{max} + \frac{\delta}{2\eta} B_\mathrm{max} \\ &\leq A_\mathrm{max} + \frac{\delta}{2\eta} B_\mathrm{max} \\ &\leq \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + \xi)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + \xi \right)}{\eta} \, \E_{\Input } \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] \\ &= \epsilonLayer \, \DeltaNeuronHat \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{\delta \, \left(1 + k \epsilonLayer \right)}{\eta} \, \E_{\Input } \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] \end{align*} and this concludes the proof. \end{proof} Next, consider $\tau \in {\mathbb N}_+$ coreset constructions corresponding to the approximations $\{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}$ generated as in Alg.~\ref{alg:sparsify-weights} for layer $\ell \in \br{2,\ldots,L}$ and neuron $i \in [\eta^{\ell}]$. We overload the $\mathrm{err}_\mathcal C(\cdot)$ function so that the error with respect to the set $\TT$ is defined as the mean error, i.e., \begin{equation} \err[\TT]{\WWHatRowCon^\ell} = \frac{1}{|\TT|} \sum_{\Input \in \TT} \err[\Input]{\WWHatRowCon^\ell}. \end{equation} Equipped with this definition, we proceed to prove Theorem~\ref{thm:amplification}.
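Before giving the proof, we illustrate the selection rule analyzed in Theorem~\ref{thm:amplification} with a minimal Python sketch. The sketch is ours and purely illustrative: the helpers \texttt{sparsify} and \texttt{rel\_error} are hypothetical stand-ins for \textsc{Sparsify} and the point-wise relative error $\err[\Input]{\cdot}$, respectively, and are not part of Alg.~\ref{alg:sparsify-weights}. Out of $\tau$ independently constructed candidates, we keep the one attaining the smallest mean relative error on the held-out set $\TT$.
\begin{verbatim}
import numpy as np

def amplify(tau, sparsify, rel_error, T_holdout):
    # Construct tau candidate sparsifications and keep the one whose
    # mean relative error on the holdout set T is smallest (err_T above).
    best, best_err = None, float('inf')
    for _ in range(tau):
        w_hat = sparsify()  # one independent coreset construction
        err = np.mean([rel_error(w_hat, x) for x in T_holdout])
        if err < best_err:
            best, best_err = w_hat, err
    return best             # the empirical minimizer over the tau trials
\end{verbatim}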
\thmamplification* \begin{proof} Let $$ \xi = k \epsilonLayer[\ell+1]. $$ We observe that the reparameterization above enables us to invoke Lemma~\ref{lem:neuron-approx} with $\delta' = \frac{\delta}{4 |\PP| \tau}$ to obtain \begin{align*} \Pr(\err[\Input]{\WWHatRowCon} \ge \xi \, \mid \, \mathcal{E}^{\ell-1}) \leq \frac{\delta}{4 \, |\PP| \, \tau \, \eta}. \end{align*} Now let $\BB$ denote the event that the inequality $$ \max_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \, \max_{\mathbf{\Input} \in \TT } \, \err{\WWHatRowCon} < \xi $$ holds, where $\TT \subseteq (\PP \setminus \SS)$ is a set of size $\ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }$. By the probabilistic inequality established above, we have by the union bound \begin{align*} \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) &= \Pr \left(\max_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \, \max_{\mathbf{\Input} \in \TT } \, \err{\WWHatRowCon} \ge \xi \, \mid \, \mathcal{E}^{\ell-1} \right) \\ & \leq \sum_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \sum_{\mathbf{\Input} \in \TT} \Pr \left(\err{\WWHatRowCon^\ell} \ge \xi \, \mid \, \mathcal{E}^{\ell-1} \right) \\ &\leq \frac{\tau \, |\TT| \, \delta }{4 \, |\PP| \, \tau \, \eta} \\ &\leq \frac{\delta}{4 \, \eta}, \end{align*} where the last inequality follows from the fact that $|\TT| \leq |\PP|$. Conditioning on $\BB$ enables us to reason about the bounded random variables $\err{\WWHatRowCon^\ell}$ for each $\Input \in \TT$ via Hoeffding's inequality to establish that for any $\WWHatRowCon^\ell \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}$ \begin{align*} \Pr \left(|\err[\TT]{\WWHatRowCon^\ell} - \E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \ge \frac{\xi}{4} \, \mid \, \BB \cap \mathcal{E}^{\ell-1} \right) &\leq 2 \, \exp \left( - \frac{(\xi \, |\TT|)^2}{ 8 \, (\xi)^2 |\TT|} \right) \\ &= 2 \, \exp \left( - \frac{|\TT|}{ 8 } \right) \end{align*} where, as stated earlier, we implicitly condition on the realization $\mathbf{\hat a}(\cdot)$ of $\hat{a}^{\ell-1}(\cdot)$ in the expression above and in the subsequent parts of the proof, since it can be marginalized out and does not affect our bounds. Applying the union bound, we further obtain \begin{align} &\Pr \left(\max_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \, \, |\err[\TT]{\WWHatRowCon^\ell} -\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \ge \frac{\xi}{4} \, \mid \, \BB \cap \mathcal{E}^{\ell-1} \right) \nonumber \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\leq 2 \, \tau \exp \left( - \frac{|\TT|}{ 8 } \right) \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \leq \frac{\delta}{4 \, \eta} \label{eqn:hoeffding-bound}, \end{align} where the last inequality follows by our choice of $|\TT|$: $$ |\TT| = \ceil*{8 \log \left( 8 \, \tau \, \eta / \, \delta\right) }. $$ Let $\LLL$ denote the event that $$ \max_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \, \, |\err[\TT]{\WWHatRowCon^\ell} -\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \BB \cap \mathcal{E}^{\ell-1}]| \leq \frac{\xi}{4}.
$$ By the law of total probability, we have \begin{align*} \Pr( \LLL^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) &= \Pr( \LLL^\mathsf{c} \, | \, \BB, \mathcal{E}^{\ell-1}) \Pr( \BB \, \mid \, \mathcal{E}^{\ell-1}) + \Pr( \LLL^\mathsf{c} \, | \, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}) \Pr( \BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\ &\leq \frac{\delta}{4 \, \eta} \left(1 - \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \right) + \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\ &= \frac{\delta}{4 \, \eta} + \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \left(1 - \frac{\delta}{4 \, \eta} \right) \\ &\leq \frac{\delta}{4 \, \eta} + \frac{\delta}{4 \, \eta}\\ &= \frac{\delta}{2 \, \eta}. \end{align*} Now let $\WWHatRowCon^{\dagger}$ denote the \emph{true} minimizer of $\E_{\Input | \WWHatRowCon^\ell} \, [\err[\Input]{\WWHatRowCon^\ell} \, \mid \, \WWHatRowCon^\ell, \mathcal{E}^{\ell-1}]$ among $\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}$ (note that it is not necessarily the case that $\WWHatRowCon^{\dagger} = \WWHatRowCon^*$), i.e., $$ \WWHatRowCon^{\dagger} = \argmin_{\WWHatRowCon \in \{(\WWHatRowCon)_1, \ldots, (\WWHatRowCon)_\tau\}} \E_{\Input | \WWHatRowCon} \, [\err[\Input]{\WWHatRowCon} \, \mid \, \WWHatRowCon, \mathcal{E}^{\ell-1}]. $$ For each constructed $(\WWHatRowCon)_t, \, t \in [\tau]$, invoking Markov's inequality and the result of Lemma~\ref{lem:expected-error} corresponding to the adjusted size of $\SS$ and sample complexity $m$ yields \begin{align*} &\Pr \left(\E_{\Input | (\WWHatRowCon)_t} \, [\err[\Input]{(\WWHatRowCon)_t} \, \mid \, (\WWHatRowCon^\ell)_t, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1} \right) \\ &\qquad \qquad \leq \frac{4 \, \E [\mathrm{err}_{(\WWHatRowCon^\ell)_t}(\Input) \, \mid \, \mathcal{E}^{\ell-1}]}{\xi} \\ &\qquad \qquad \leq \frac{4 \epsilonLayer \, \DeltaNeuronHat}{\xi} \left(k + \frac{5 \, (1 + k \epsilonLayer)}{\sqrt{\log(8 \eta/\delta)}} \right) + \frac{4 \, \delta \, \left(1 + k \epsilonLayer \right)}{\xi \, \eta} \, \E_{\Input \sim {\mathcal D}} \left[\DeltaNeuron[\Input] \, \mid \, \DeltaNeuron[\Input] > \DeltaNeuronHat \right] \\ &\qquad \qquad \leq \frac{9}{10}, \end{align*} where the last inequality follows for $\frac{\delta}{\eta}$ small enough. Thus, the event $\E[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, \mid \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \ge \xi/4$ occurs if and only if we fail (i.e., exceed $\xi/4$ expected error) in \emph{all} $\tau$ trials. This implies that \begin{align*} \Pr \left(\E_{\Input | \WWHatRowCon^{\dagger}}[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1}\right) &= \Pr \left(\forall{t \in [\tau]} : \, \E_{\Input | (\WWHatRowCon^\ell)_t} \, [\err[\Input]{(\WWHatRowCon^\ell)_t} \, \mid \, (\WWHatRowCon^\ell)_t, \mathcal{E}^{\ell-1}] \ge \frac{\xi}{4} \, \mid \, \mathcal{E}^{\ell-1} \right) \\ &\leq \left(\frac{9}{10}\right)^\tau \\ &\leq \frac{\delta}{4 \, \eta}, \end{align*} where the last inequality follows by our choice of $\tau$: $$ \tau = \ceil*{\frac{\log(4 \, \eta / \delta)}{\log(10/9)}}.
$$ Let $\GG$ denote the event that $\E_{\Input | \WWHatRowCon^{\dagger}}[\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] \leq \frac{\xi}{4}$ and recall that $$ \WWHatRowCon^* = \argmin_{\WWHatRowCon^\ell \in \{(\WWHatRowCon^\ell)_1, \ldots, (\WWHatRowCon^\ell)_\tau\}} \err[\TT]{\WWHatRowCon^\ell}. $$ If events $\BB, \LLL$, and $\GG$ all occur, i.e., $\BB \cap \LLL \cap \GG \neq \emptyset$, then we obtain \begin{align*} &\E_{\Input | \WWHatRowCon^*} \, [\mathrm{err}_{\WWHatRowCon^*}(\Input) \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] \\ &\quad = \E_{\TT | \WWHatRowCon^*}[\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] & \\ &\quad= \E[\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] \Pr(\BB \, \mid \, \mathcal{E}^{\ell-1}) \\ &\quad\quad\quad\quad\quad \quad + \E [\err[\TT]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}] \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \\ &\quad \leq \E [\mathrm{err}_{\WWHatRowCon^*}(\TT) \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] + \E \, [\mathrm{err}_{\WWHatRowCon^*}(\Input) \, \mid \, \WWHatRowCon^*, \BB^\mathsf{c}, \mathcal{E}^{\ell-1}] \, \left(\frac{\delta}{4 \, \eta} \right) & \\ &\quad \leq \E \, [\mathrm{err}_{\WWHatRowCon^*}(\TT) \, \mid \, \WWHatRowCon^*, \BB, \mathcal{E}^{\ell-1}] + \frac{\xi}{4} &\text{for $\frac{\delta}{\eta}$ small enough} \,\, \\ &\quad \leq \mathrm{err}_{\WWHatRowCon^*}(\TT) + \frac{\xi}{2} &\text{By $\BB \cap \LLL \neq \emptyset$} \\ &\quad \leq \mathrm{err}_{\WWHatRowCon^{\dagger}}(\TT) + \frac{\xi}{2} &\text{By definition of $\WWHatRowCon^*$} \\ &\quad \leq \E_{\TT | \WWHatRowCon^\dagger} \, [\mathrm{err}_{\WWHatRowCon^{\dagger}}(\TT) \, | \, \WWHatRowCon^{\dagger}, \, \BB, \mathcal{E}^{\ell-1}] + \frac{3 \, \xi}{4} &\text{By $\BB \cap \LLL \neq \emptyset$} \\ &\quad \leq \E_{\Input \, \mid \, \WWHatRowCon^{\dagger}} [\mathrm{err}_{\WWHatRowCon^{\dagger}}(\Input) \, | \, \WWHatRowCon^{\dagger}, \mathcal{E}^{\ell-1}] + \frac{3 \, \xi}{4} & \\ &\quad \leq \xi &\text{By $\GG \neq \emptyset$}, \end{align*} where in the second-to-last inequality, we used the fact that conditioning on $\BB$ leads to a decrease in the expected value relative to the unconditional expectation. By the union bound over the failure events, we have that the sequence of inequalities above holds with probability at least $1- \delta/ \eta$: \begin{align*} \Pr \left( \E_{\Input | \WWHatRowCon^*} \, [\err[\Input]{\WWHatRowCon^*} \, \mid \, \WWHatRowCon^*, \mathcal{E}^{\ell-1}] \leq \xi \, \mid \, \mathcal{E}^{\ell-1} \right) &\ge \Pr( \BB \cap \LLL \cap \GG \, \mid \, \mathcal{E}^{\ell-1}) \\ &= 1 - \Pr \left( (\BB \cap \LLL \cap \GG)^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1} \right) \\ &\geq 1 - \left( \Pr(\BB^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) + \Pr(\LLL^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) + \Pr(\GG^\mathsf{c} \, \mid \, \mathcal{E}^{\ell-1}) \right) \\ &\geq 1 - \left( \frac{\delta}{4 \, \eta} + \frac{\delta}{2 \, \eta} + \frac{\delta}{4 \, \eta} \right) \\ &= 1 - \frac{\delta}{\eta}, \end{align*} and this establishes the theorem. \end{proof} \section{Proofs of the Analytical Results in Section~\ref{sec:analysis}} \label{sec:appendix} This section includes the full proofs of the technical results given in Sec.~\ref{sec:analysis}.
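As a concrete reference point for the quantities manipulated in the proofs that follow, the self-contained Python sketch below (ours, on toy data; it is not part of Alg.~\ref{alg:main}) computes the empirical sensitivities $\s$ of Definition~\ref{def:empirical-sensitivity}, their sum $S$, and the induced importance sampling distribution $\qPM{j} = \s / S$ for a single neuron with positive weights and positive activations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(5)        # positive weights of one neuron (toy example)
A = rng.random((16, 5))  # positive activations a_j(x) for 16 points of S

contrib = A * w                                   # w_j * a_j(x)
g = contrib / contrib.sum(axis=1, keepdims=True)  # g_j(x): rel. importance
s = g.max(axis=0)        # empirical sensitivity s_j = max over x in S
S = s.sum()              # sensitivity sum (S >= 1, as shown in the proofs)
q = s / S                # importance sampling distribution over the edges
\end{verbatim}
Sampling $m$ edges i.i.d. from $q$ and reweighting each sampled edge by $1/(m \, \qPM{j})$ then yields the unbiased estimator analyzed in the proof of Lemma~\ref{lem:pos-weights-approx} below.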
\input{appendix_empirical} \input{appendix_sampling} \ificlr \else \input{appendix_amplification} \fi \subsection{Analytical Results for Section~\ref{sec:analysis_positive} (Importance Sampling Bounds for Positive Weights)} \label{app:analysis_empirical} \subsubsection{Order Statistic Sampling} We now establish a couple of technical results that quantify the accuracy of our approximations of edge importance (i.e., sensitivity). \begin{lemma} \label{lem:order-statistic-sampling} Let $K, K' > 0$ be universal constants and let ${\mathcal D}$ be a distribution with CDF $F(\cdot)$ satisfying $F(\nicefrac{M}{K}) \leq \exp(-1/K')$, where $M = \min \{x \in [0,1] : F(x) = 1\}$. Let $\PP = \{X_1, \ldots, X_n\}$ be a set of $n = |\PP|$ i.i.d. samples each drawn from the distribution ${\mathcal D}$. Let $X_{n+1} \sim {\mathcal D}$ be an i.i.d. sample. Then, \begin{align*} \Pr \left(K \, \max_{X \in \PP} X < X_{n+1} \right) \leq \exp(-n/K'). \end{align*} \end{lemma} \begin{proof} Let $X_\mathrm{max} = \max_{X \in \PP} X$; then, \begin{align*} \Pr(K \, X_\mathrm{max} < X_{n+1}) &= \int_{0}^M \Pr(X_\mathrm{max} < \nicefrac{x}{K} | X_{n+1} = x) \, d \Pr(x) \\ &= \int_{0}^M \Pr\left(X < \nicefrac{x}{K} \right)^n \, d \Pr(x) &\text{since $X_1, \ldots, X_n$ are i.i.d.} \\ &\leq \int_{0}^M F(\nicefrac{x}{K})^n \, d \Pr(x) &\text{where $F(\cdot)$ is the CDF of $X \sim {\mathcal D}$} \\ &\leq F(\nicefrac{M}{K})^n \int_{0}^M \, d \Pr(x) &\text{by monotonicity of $F$} \\ &= F(\nicefrac{M}{K})^n \\ &\leq \exp(-n/K') &\text{CDF Assumption}, \end{align*} and this completes the proof. \end{proof} We now proceed to establish that the notion of empirical sensitivity is a good approximation for the relative importance. For this purpose, let the relative importance $\gHat{x}$ of an edge $j$ after the previous layers have already been compressed be $$ \gHat{x} = \gHatDef{x}. $$ \begin{lemma}[Empirical Sensitivity Approximation] \label{lem:sensitivity-approximation} Let $\epsilon \in (0,1/2)$, $\delta \in (0,1)$, and $\ell \in \br{2,\ldots,L}$, and consider a set $\SS = \{\Input_1, \ldots, \Input_n\} \subseteq \PP$ of size $|\SS| \ge \ceil*{\kPrime \logTerm }$. Then, conditioned on the event $\mathcal{E}_{\nicefrac{1}{2}}$ occurring, i.e., $\hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input)$, $$ \Pr_{\Input \sim {\mathcal D}} \left(\exists{j \in \mathcal{W}} : C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) \leq \frac{\delta} {8 \, \eta}, $$ where $C = 3 \, \kmax$ and $\mathcal{W} \subseteq [\eta^{\ell-1}]$. \end{lemma} \begin{proof} Consider an arbitrary $j \in \mathcal{W}$ and $x' \in \SS$ corresponding to $\g{x'}$ with CDF $\cdf{\cdot}$, and recall that $M = \min \{x \in [0,1] : \cdf{x} = 1\}$ as in Assumption~\ref{asm:cdf}. Note that by Assumption~\ref{asm:cdf}, we have $$ F(\nicefrac{M}{K}) \leq \exp(-1/K'), $$ and so the random variables $\g{x'}$ for $x' \in \SS$ satisfy the CDF condition required by Lemma~\ref{lem:order-statistic-sampling}. Now let $\mathcal{E}$ be the event that $K \, \s < \g{x}$ holds. Applying Lemma~\ref{lem:order-statistic-sampling}, we obtain \begin{align*} \Pr( \mathcal{E}) &= \Pr(K \, \s < \g{x} ) = \Pr \left(K \, \max_{\Input' \in \SS} \g{x'} < \g{x} \right) \leq \exp(-|\SS|/K'). \end{align*} Now let $\hat{\mathcal{E}}$ denote the event that the inequality $C \s < \gHat{x} = \gHatDef{x}$ holds and note that the right side of the inequality is defined with respect to $\gHat{x}$ and not $\g{x}$.
Observe that since we conditioned on the event $\mathcal{E}_{\nicefrac{1}{2}}$, we have that $ \hat{a}(\Input) \in (1 \pm \nicefrac{1}{2}) a(\Input). $ Now assume that event $\hat{\mathcal{E}}$ holds and note that by the implication above, we have \begin{align*} C \, \s < \gHat{x} &= \gHatDef{x} \leq \frac{ (1 + \nicefrac{1}{2}) \WWRowCon_j \, a_{j}(x)}{(1 - \nicefrac{1}{2}) \sum_{k \in \mathcal{W}} \WWRowCon_k \, a_{k}(x) } \\ &\leq 3 \cdot \gDef{x} = 3 \, \g{x}, \end{align*} where the second inequality follows from the fact that $\nicefrac{1 + 1/2}{1 - 1/2} \leq 3$. Moreover, since we know that $C \ge 3 K$, we conclude that if event $\hat{\mathcal{E}}$ occurs, we obtain the inequality $$ 3 \, K \, \s < 3 \, \g{x} \Leftrightarrow K \, \s < \g{x}, $$ which is precisely the definition of event $\mathcal{E}$. Thus, we have shown the conditional implication $\big(\hat{\mathcal{E}} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \big) \Rightarrow \mathcal{E}$, which implies that \begin{align*} \Pr(\hat{\mathcal{E}} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) &= \Pr(C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) \leq \Pr(\mathcal{E}) \\ &\leq \exp(-|\SS|/K'). \end{align*} Since our choice of $j \in \mathcal{W}$ was arbitrary, the bound applies for any $j \in \mathcal{W}$. Thus, we have by the union bound \begin{align*} \Pr(\exists{j \in \mathcal{W}} \,: C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) &\leq \sum_{j \in \mathcal{W}} \Pr(C \, \s < \gHat{x} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}}) \leq \abs{\mathcal{W}} \exp(-|\SS|/K') \\ &\leq \left(\frac{|\mathcal{W}|}{\eta^*} \right) \frac{\delta}{8 \eta} \leq \frac{\delta}{8 \eta}. \end{align*} \end{proof} In practice, the set $\SS$ referenced above is chosen to be a subset of the original data points, i.e., $\SS \subseteq \PP$ (see Alg.~\ref{alg:main}, Line~\ref{lin:s-construction}). Thus, we henceforth assume that the size of the input points $|\PP|$ is large enough (or the specified parameter $\delta \in (0,1)$ is sufficiently large) so that $|\PP| \ge |\SS|$. \subsubsection{Proof of Lemma~\ref{lem:pos-weights-approx}} We now state the proof of Lemma~\ref{lem:pos-weights-approx}. In this subsection, we establish approximation guarantees under the assumption that the weights are strictly positive. The next subsection will then relax this assumption to conclude that a neuron's value can be approximated well even when the weights are not all positive. \lemposweightsapprox* \begin{proof} Let $\epsilon, \delta \in (0,1)$ be arbitrary. Moreover, let $\mathcal C$ be the coreset with respect to the weight indices $\mathcal{W} \subseteq [\eta^{\ell-1}]$ used to construct $\WWHatRowCon$. Note that as in \textsc{Sparsify}, $\mathcal C$ is a multiset sampled from $\mathcal{W}$ of size $ m = \SampleComplexity[\epsilon], $ where $S = \sum_{j \in \mathcal{W}} \s $ and $\mathcal C$ is sampled according to the probability distribution $q$ defined by $$ \qPM{j} = \frac{\s}{S} \qquad \forall{j \in \mathcal{W}}. $$ Let $\mathbf{\hat a}(\cdot)$ be an arbitrary realization of the random variable $\hat{a}^{\ell-1}(\cdot)$, let $\mathbf{\Input}$ be a realization of $\Input \sim {\mathcal D}$, and let $$ \hat{z} = \sum_{k \in \mathcal{W}} \WWHatRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input}) $$ be the approximate intermediate value corresponding to the sparsified matrix $\WWHatRowCon$ and let $$ \tilde z = \sum_{k \in \mathcal{W}} \WWRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input}).
$$ Now define $\mathcal{E}$ to be the (favorable) event that $\hat z$ $\epsilon$-approximates $\tilde z$, i.e., $\hat z \in (1 \pm \epsilon) \tilde z$. We will now show that the complement of this event, $\mathcal{E}^\mathsf{c}$, occurs with sufficiently small probability. Let ${\mathcal Z} \subseteq \supp$ be the set of \emph{well-behaved} points (implicitly parameterized by the neuron $i \in [\eta^\ell]$ and realization $\mathbf{\hat a}$), defined as follows: $$ {\mathcal Z} = \left\{x' \in \supp \, : \, \gHat{x'} \leq C \s \quad \forall{j \in \mathcal{W}} \right \}, $$ where $C = 3 \, \kmax$. Let $\mathcal{E}_{{\mathcal Z}}$ denote the event that $\mathbf{\Input} \in {\mathcal Z}$ where $\mathbf{\Input}$ is a realization of $\Input \sim {\mathcal D}$. \paragraph{Conditioned on $\mathcal{E}_{\mathcal Z}$, event $\mathcal{E}^\mathsf{c}$ occurs with probability $\leq \frac{\delta}{4 \eta}$:} Let $\mathbf{\Input}$ be a realization of $\Input \sim {\mathcal D}$ such that $\mathbf{\Input} \in {\mathcal Z}$ and let $\mathcal C = \{c_1, \ldots, c_{m}\}$ be $m$ samples from $\mathcal{W}$ with respect to distribution $\qPM{}$ as before. Define $m$ random variables $\T[c_1], \ldots, \T[c_m]$ such that for all $j \in \mathcal C$ \begin{align} \label{eqn:tplu-defn} \T[j] &= \frac{\WWRow[j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \qPM{j}}= \frac{S \, \WWRow[ j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \s[j]}. \end{align} For any $j \in \mathcal C$, we have for the conditional expectation of $\T[j]$: \begin{align*} \E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] &= \sum_{k \in \mathcal{W}} \frac{\WWRow[k] \, \mathbf{\hat a}_{k} (\mathbf{\Input})}{m \, \qPM{k}} \cdot \qPM{k} \\ &= \sum_{k \in \mathcal{W}} \frac{\WWRow[k] \, \mathbf{\hat a}_k (\mathbf{\Input})}{m} \\ &= \frac{\tilde z}{m}, \end{align*} where we use the expectation notation $\E[\cdot]$ with the understanding that it denotes the conditional expectation $\E \nolimits_{\CC \given \hat a^{l-1}(\cdot), \, \Point}\,[\cdot]$. Moreover, we also note that conditioning on the event $\mathcal{E}_{\mathcal Z}$ (i.e., the event that $\mathbf{\Input} \in {\mathcal Z}$) does not affect the expectation of $\T[j]$. Let $\T = \sum_{j \in \mathcal C} \T[j] = \hat z$ denote our approximation and note that by linearity of expectation, $$ \E[\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} ] = \sum_{j \in \mathcal C} \E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} ] = \tilde z. $$ Thus, $\hat z = \T$ is an unbiased estimator of $\tilde z$ for any realization $\mathbf{\hat a}(\cdot)$ and $\mathbf{\Input}$; we will henceforth refer to $\E[\T \, \mid \, \mathbf{\hat a}(\cdot), \, \mathbf{\Input} ]$ as simply $\tilde z$ for brevity. For the remainder of the proof we will assume that $\tilde z > 0$, since otherwise, $\tilde z = 0$ if and only if $\T[j] = 0$ for all $j \in \mathcal C$ almost surely, which follows by the fact that $\T[j] \ge 0$ for all $j \in \mathcal C$ by definition of $\mathcal{W}$ and the non-negativity of the ReLU activation.
Therefore, in the case that $\tilde z = 0$, it follows that $$ \Pr (|\hat{z} - \tilde z| > \epsilon \tilde z \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) = \Pr(\hat{z} > 0 \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) = \Pr( 0 > 0) = 0, $$ which trivially yields the statement of the lemma, where in the above expression, $\Pr(\cdot)$ is short-hand for the conditional probability $\Pr_{\WWHatRowCon \, \mid \, \hat a^{l-1}(\cdot), \, \Input}(\cdot)$. We now proceed with the case where $\tilde z > 0$ and leverage the fact that $\mathbf{\Input} \in {\mathcal Z}$\footnote{Since we conditioned on the event $\mathcal{E}_{\mathcal Z}$.} to establish that for all $j \in \mathcal{W}$: \begin{align} C \s &\ge \gHat{\mathbf{\Input}} = \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\sum_{k \in \mathcal{W}} \WWRow[ k] \, \mathbf{\hat a}_k(\mathbf{\Input})} = \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\tilde z} \nonumber \\ \Leftrightarrow \quad \frac{\WWRow[j] \, \mathbf{\hat a}_{j}(\mathbf{\Input})}{\s[j]} &\leq C \, \tilde z. \label{eqn:sens-inequality} \end{align} Utilizing the inequality established above, we bound the conditional variance of each $\T[j], \, j \in \mathcal C$ as follows \begin{align*} \Var(\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &\leq \E[(\T[j])^2 \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] \\ &= \sum_{k \in \mathcal{W}} \frac{(\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}))^2}{(m \, \qPM{k})^2} \cdot \qPM{k} \\ &= \frac{S}{m^2} \, \sum_{k \in \mathcal{W}} \frac{(\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}))^2}{\s[k]} \\ &\leq \frac{S}{m^2} \, \left(\sum_{k \in \mathcal{W}}\WWRow[k] \, \mathbf{\hat a}_{k}(\mathbf{\Input}) \right) C\, \tilde z \\ &= \frac{S \, C \, \tilde z^2}{m^2}, \end{align*} where $\Var(\cdot)$ is short-hand for $\Var \nolimits_{\CC \given \hat a^{l-1}(\cdot), \, \Point}\, (\cdot)$. Since $\T$ is a sum of (conditionally) independent random variables, we obtain \begin{align} \label{eqn:varplu-bound} \Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &= m \Var(\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \\ &\leq \frac{S \, C \, \tilde z^2}{m}. \nonumber \end{align} Now, for each $j \in \mathcal C$ let $$ \TTilde[j] = \T[j] - \E [\T[j] \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}] = \T[j] - \frac{\tilde z}{m}, $$ and let $\TTilde = \sum_{j \in \mathcal C} \TTilde[j]$. Note that by the fact that we conditioned on the realization $\mathbf{\Input}$ of $\Input$ such that $\mathbf{\Input} \in {\mathcal Z}$ (event $\mathcal{E}_{\mathcal Z}$), we obtain by definition of $\T[j]$ in \eqref{eqn:tplu-defn} and the inequality \eqref{eqn:sens-inequality}: \begin{equation} \label{eqn:tplu-bound} \T[j] = \frac{S \, \WWRow[j] \, \mathbf{\hat a}_{j} (\mathbf{\Input}) }{m \, \s[j]} \leq \frac{S \, C \, \tilde z}{m}. \end{equation} We also have that $S \ge 1$ by definition.
More specifically, using the fact that the maximum over a set is greater than the average and rearranging sums, we obtain \begin{align*} S &= \sum_{j \in \mathcal{W}} \s = \sum_{j \in \mathcal{W}} \max_{\mathbf{\Input}' \in \SS} \,\, \g{\mathbf{\Input}'} \\ &\ge \frac{1}{|\SS|} \sum_{j \in \mathcal{W}} \sum_{\mathbf{\Input}' \in \SS} \g{\mathbf{\Input}'} = \frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} \sum_{j \in \mathcal{W}} \g{\mathbf{\Input}'} \\ &= \frac{1}{|\SS|} \sum_{\mathbf{\Input}' \in \SS} 1 = 1. \end{align*} Thus, combining the inequality established in \eqref{eqn:tplu-bound} with the fact that $S \ge 1$, we obtain an upper bound on the absolute value of the centered random variables: \begin{equation} \label{eqn:tplutilde-bound} |\TTilde[j]| = \left|\T[j] - \frac{\tilde z}{m}\right| \leq \frac{S \, C \, \tilde z}{m} = M, \end{equation} which follows from the two cases below: \paragraph{if $\T[j] \ge \frac{\tilde z}{m}$:} Then, by our bound in \eqref{eqn:tplu-bound} and the fact that $\frac{\tilde z}{m} \ge 0$, it follows that \begin{align*} \abs{\TTilde[j]} &= \T[j] - \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m} - \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m}. \end{align*} \paragraph{if $\T[j] < \frac{\tilde z}{m}$:} Then, using the fact that $\T[j] \ge 0$ and $S \ge 1$, we obtain \begin{align*} \abs{\TTilde[j]} &= \frac{\tilde z}{m} - \T[j] \leq \frac{\tilde z}{m} \leq \frac{S \, C \, \tilde z}{m}. \end{align*} Applying Bernstein's inequality to both $\TTilde$ and $-\TTilde$, we have by symmetry and the union bound, \begin{align*} \Pr (\mathcal{E}^\mathsf{c} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) &= \Pr \left(\abs{\T - \tilde z} \ge \epsilon \tilde z \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}} \right) \\ &\leq 2 \exp \left(-\frac{\epsilon^2 \tilde z^2}{2 \Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input}) + \frac{2 \, \epsilon \, \tilde z M}{3}}\right) \\ &\leq 2 \exp \left(-\frac{\epsilon^2 \tilde z^2}{ \frac{2 S C \, \tilde z^2}{m} + \frac{2 S \, C \, \tilde z^2}{3 m}} \right) \\ &= 2 \exp \left(-\frac{3 \, \epsilon^2 \, m}{8 S \, C } \right) \\ &\leq \frac{\delta}{4 \eta }, \end{align*} where the second inequality follows by our upper bounds on $\Var(\T \given \mathbf{\hat a}(\cdot), \mathbf{\Input})$ and $\abs{\TTilde[j]}$ and the fact that $\epsilon \in (0,1)$, and the last inequality follows by our choice of $m = \SampleComplexity[\epsilon]$. This establishes that for any realization $\mathbf{\hat a}(\cdot)$ of $\hat a^{l-1}(\cdot)$ and a realization $\mathbf{\Input}$ of $\Input$ satisfying $\mathbf{\Input} \in {\mathcal Z}$, the event $\mathcal{E}^\mathsf{c}$ occurs with probability at most $\frac{\delta}{4 \eta}$.
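For concreteness, the last inequality is simply the sample-size requirement made explicit: requiring $ 2 \exp \left(-\frac{3 \, \epsilon^2 \, m}{8 S \, C } \right) \leq \frac{\delta}{4 \eta } $ is equivalent to $$ m \ge \frac{8 \, S \, C}{3 \, \epsilon^2} \, \log \left( \frac{8 \, \eta}{\delta} \right), $$ which is consistent with the choice of $m = \SampleComplexity[\epsilon]$ above.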
\paragraph{Removing the conditioning on $\mathcal{E}_{\mathcal Z}$:} We have by the law of total probability \begin{align*} \Pr (\mathcal{E} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) &\ge \int_{\mathbf{\Input} \in {\mathcal Z}} \Pr(\mathcal{E} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \Pr_{\Input \sim {\mathcal D}} (\Input = \mathbf{\Input} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \, d \mathbf{\Input} \\ &\ge \left(1 - \frac{\delta}{4 \eta }\right) \int_{\mathbf{\Input} \in {\mathcal Z}} \Pr_{\Input \sim {\mathcal D}} (\Input = \mathbf{\Input} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \, d \mathbf{\Input} \\ &= \left(1 - \frac{\delta}{4 \eta }\right) \Pr_{\Input \sim {\mathcal D}} (\mathcal{E}_{\mathcal Z} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}}) \\ &\ge \left(1 - \frac{\delta}{4 \eta }\right) \left(1 - \frac{\delta } {8 \eta }\right) \\ &\ge 1 - \frac{3 \delta}{8 \eta} \end{align*} where the second inequality follows from the fact that $\Pr (\mathcal{E}^\mathsf{c} \given \mathbf{\hat a}(\cdot), \mathbf{\Input}, \mathcal{E}_{\mathcal Z}, \mathcal{E}_{\nicefrac{1}{2}}) \leq \frac{\delta}{4 \eta }$ as was established above, and the second-to-last inequality follows by Lemma~\ref{lem:sensitivity-approximation}. \paragraph{Putting it all together:} Finally, we marginalize out the random variable $\hat a^{\ell -1}(\cdot)$ to establish \begin{align*} \Pr(\mathcal{E} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) &= \int_{\mathbf{\hat a}(\cdot)} \Pr (\mathcal{E} \, \mid \, \mathbf{\hat a}(\cdot), \, \mathcal{E}_{\nicefrac{1}{2}} ) \Pr(\mathbf{\hat a}(\cdot) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) \, d \mathbf{\hat a}(\cdot) \\ &\ge \left(1 - \frac{3 \delta}{8 \eta}\right) \int_{\mathbf{\hat a}(\cdot)} \Pr(\mathbf{\hat a}(\cdot) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) \, d \mathbf{\hat a}(\cdot) \\ &= 1 - \frac{3 \delta}{8 \eta}. \end{align*} Consequently, \begin{align*} \Pr(\mathcal{E}^\mathsf{c} \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} ) &\leq 1 - \left(1 - \frac{3 \delta}{8 \eta}\right) = \frac{3 \delta}{8 \eta}, \end{align*} and this concludes the proof. \end{proof} \section{Additional Results} \label{app:results} In this section, we give more details on the evaluation of our compression algorithm on popular benchmark data sets and varying fully-connected neural network configurations. In the experiments, we compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network to that of uniform sampling and the singular value decomposition (SVD). All algorithms were implemented in Python using the PyTorch library~\citep{paszke2017automatic} and simulations were conducted on a computer with a 2.60 GHz Intel i9-7980XE processor (18 cores total) and 128 GB RAM. For training and evaluating the algorithms considered in this section, we used the following off-the-shelf data sets: \begin{itemize} \setlength\itemsep{0.25em} \item \textit{MNIST}~\citep{lecun1998gradient} --- $70,000$ images of handwritten digits between 0 and 9 in the form of $28 \times 28$ pixels per image. \item \textit{CIFAR-10}~\citep{krizhevsky2009learning} --- $60,000$ $32 \times 32$ color images, each depicting an object from one of 10 classes, e.g., airplanes.
\item \textit{FashionMNIST}~\citep{xiao2017} --- A recently proposed drop-in replacement for the MNIST data set that, like MNIST, contains $60,000$, $28 \times 28$ grayscale images, each associated with a label from 10 different categories. \end{itemize} We considered a diverse set of network configurations for each of the data sets. We varied the number of hidden layers between 2 and 5 and used either a constant width across all hidden layers between 200 and 1000 or a linearly decreasing width (denoted by ``Pyramid'' in the figures). Training was performed for 30 epochs on the normalized data sets using an Adam optimizer with a learning rate of 0.001 and a batch size of 300. The test accuracies were roughly 98\% (MNIST), 45\% (CIFAR-10), and 96\% (FashionMNIST), depending on the network architecture. To account for the randomness in the training procedure, for each data set and neural network configuration, we averaged our results across 4 trained neural networks. \subsection{Details on the Compression Algorithms} We evaluated and compared the performance of the following algorithms on the aforementioned data sets. \begin{enumerate} \setlength\itemsep{0.25em} \item \textit{Uniform (Edge) Sampling} --- A uniform distribution is used, rather than our sensitivity-based importance sampling distribution, to sample the incoming edges to each neuron in the network. Note that like our sampling scheme, uniformly sampling edges generates an unbiased estimator of the neuron value. However, unlike our approach, which explicitly seeks to minimize estimator variance using the bounds provided by empirical sensitivity, uniform sampling is prone to exhibiting large estimator variance. \item \textit{Singular Value Decomposition} (SVD) --- The (truncated) SVD is used to generate a low-rank (rank-$r$) approximation for each of the weight matrices $(\hat{W}^2, \ldots, \hat{W}^L)$ to obtain the corresponding parameters $\hat{\theta} = (\hat{W}^2_r, \ldots, \hat{W}^L_r)$ for various values of $r \in {\mathbb N}_+$. Unlike the compared sampling-based methods, SVD does not sparsify the weight matrices. Thus, to achieve fair comparisons of compression rates, we compute the size of the rank-$r$ matrices constituting $\hat{\theta}$ as $$ \size{\hat{\theta}} = \sum_{\ell = 2}^L \sum_{i = 1}^r \left( \nnz{u_i^\ell} + \nnz{v_i^\ell} \right), $$ where $W^\ell = U^\ell \Sigma^\ell (V^\ell)^\top$ for each $\ell \in \br{2,\ldots,L}$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\eta^{\ell-1}}$, and $u_i^\ell$ and $v_i^\ell$ denote the $i$th columns of $U^\ell$ and $V^\ell$, respectively. \item \textit{$\ell_1$ Sampling~\citep{achlioptas2013matrix}} --- An entry-wise sampling distribution based on the ratio between the absolute value of a single entry and the (entry-wise) $\ell_1$-norm of the weight matrix is computed, and the weight matrix is subsequently sparsified by sampling accordingly. In particular, entry $w_{ij}$ of some weight matrix $W$ is sampled with probability $$ p_{ij} = \frac{\abs{w_{ij}}}{\norm{W}_{\ell_1}}, $$ and reweighted to ensure the unbiasedness of the resulting estimator. \item \textit{$\ell_2$ Sampling~\citep{drineas2011note}} --- The entries $(i,j)$ of each weight matrix $W$ are sampled with distribution \[ p_{ij} = \frac{w_{ij}^2}{\norm{W}_F^2}, \] where $\norm{\cdot}_F$ is the Frobenius norm of $W$, and reweighted accordingly.
\item \textit{$\frac{\ell_1 + \ell_2}{2}$ Sampling~\citep{kundu2014note}} --- The entries $(i,j)$ of each weight matrix $W$ are sampled with distribution \[ p_{ij} = \frac{1}{2} \left(\frac{w_{ij}^2}{\norm{W}_F^2} + \frac{|w_{ij}|}{\norm{W}_{\ell_1}} \right), \] where $\norm{\cdot}_F$ is the Frobenius norm of $W$, and reweighted accordingly. We note that~\cite{kundu2014note} constitutes the current state-of-the-art in data-oblivious matrix sparsification algorithms. \item \textit{CoreNet} (Edge Sampling) --- Our core algorithm for edge sampling shown in Alg.~\ref{alg:sparsify-weights}, but without the neuron pruning procedure. \item \textit{CoreNet}\verb!+! (CoreNet \& Neuron Pruning) --- Our algorithm shown in Alg.~\ref{alg:main} that includes the neuron pruning step. \item \textit{CoreNet}\verb!++! (CoreNet\verb!+! \& Amplification) --- In addition to the features of \textit{CoreNet}\verb!+!, multiple coresets $\mathcal C_1, \ldots, \mathcal C_\tau$ are constructed over $\tau \in {\mathbb N}_+$ trials, and the best one is picked by evaluating the empirical error on a subset $\TT \subseteq \PP \setminus \SS$ (see Sec.~\ref{sec:method} for details). \end{enumerate} \subsection{Preserving the Output of a Neural Network} We evaluated the accuracy of our approximation by comparing the output of the compressed network with that of the original one and computing the $\ell_1$-norm of the relative error vector. We computed the error metric for both the uniform sampling scheme as well as our compression algorithm (Alg.~\ref{alg:main}). Our results were averaged over 50 trials, where for each trial, the relative approximation error was averaged over the entire test set. In particular, for a test set $\PP_\mathrm{test} \subseteq \Reals^d$ consisting of $d$ dimensional points, the average relative error of the $f_{\paramHat}$ generated by each compression algorithm was computed as $$ \mathrm{error}_{\PP_\mathrm{test}}(f_{\paramHat}) = \frac{1}{|\PP_\mathrm{test}|} \sum_{\Input \in \PP_\mathrm{test}} \norm{f_{\paramHat}(\Input) - f_\param(\Input)}_1. $$ Figures~\ref{fig:mnist_error},~\ref{fig:cifar_error}, and~\ref{fig:fashion_error} depict the average performance of the compared algorithms for various network architectures trained on MNIST, CIFAR-10, and FashionMNIST, respectively. Our algorithm is able to compress networks trained on MNIST and FashionMNIST to about 10\% of their original size without significant loss of accuracy. On CIFAR-10, a compression rate of 50\% yields classification results comparable to that of uncompressed networks. The shaded region corresponding to each curve represents the values within one standard deviation of the mean. \subsection{Preserving the Classification Performance} We also evaluated the accuracy of our approximation by computing the loss of prediction accuracy on a test data set, $\PP_{\mathrm{test}}$. In particular, let $\mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param)$ be the average accuracy of the neural network $f_\param$, i.e., $$ \mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param) = \frac{1}{|\PP_\mathrm{test}|} \sum_{\Input \in \PP_\mathrm{test}} \1 \left( \argmax_{i \in [\eta^L]} f_\param(\Input)_i = y(\Input) \right), $$ where $y(\Input)$ denotes the (true) label associated with $\Input$. Then the drop in accuracy is computed as $$ \mathrm{acc}_{\PP_{\mathrm{test}}}(f_\param) - \mathrm{acc}_{\PP_{\mathrm{test}}}(f_{\paramHat}).
$$ Figures~\ref{fig:mnist_acc},~\ref{fig:cifar_acc}, and~\ref{fig:fashion_acc} depict the average performance of the compared algorithms for various network architectures trained on MNIST, CIFAR-10, and FashionMNIST, respectively. The shaded region corresponding to each curve represents the values within one standard deviation of the mean. \subsection{Preliminary Results with Retraining} We compared the performance of our approach with that of the popular weight thresholding heuristic -- henceforth denoted by WT -- of~\cite{Han15} when retraining was allowed after the compression, i.e., pruning, procedure. Our comparisons with retraining for the networks and data sets mentioned in Sec.~\ref{sec:results} are as follows. For MNIST, WT required 5.8\% of the number of parameters to obtain the classification accuracy of the original model (i.e., 0\% drop in accuracy), whereas for the same percentage (5.8\%) of the parameters retained, CoreNet++ incurred a classification accuracy drop of 1\%. For CIFAR-10, the approach of~\cite{Han15} matched the original model's accuracy using roughly 3\% of the parameters, whereas CoreNet++ reported an accuracy drop of 9.5\% for 3\% of the parameters retained. Finally, for FashionMNIST, the corresponding numbers were 4.1\% of the parameters to achieve 0\% loss for WT, and a loss of 4.7\% in accuracy for CoreNet++ with the same percentage of parameters retained. \subsection{Discussion} As indicated in Sec.~\ref{sec:results}, the simulation results presented here validate our theoretical results and suggest that empirical sensitivity can lead to effective, more informed sampling compared to other methods. Moreover, we are able to outperform networks that are compressed via state-of-the-art matrix sparsification algorithms. We also note that there is a notable difference in the performance of our algorithm across the different data sets. In particular, the difference in performance of our algorithm compared to the other methods for networks trained on FashionMNIST and MNIST is much more significant than for networks trained on CIFAR-10. We conjecture that this is partially due to considering only fully-connected networks: these networks perform fairly poorly on CIFAR-10 (around~45\% classification accuracy), and thus the edges have more uniformly distributed sensitivity, as the information content in the network is limited. We envision that extending our guarantees to convolutional neural networks may enable us to further reason about the performance on data sets such as CIFAR-10. \input{appendix_figures} \subsection{Analytical Results for Section~\ref{sec:analysis_sampling} (Importance Sampling Bounds)} \label{app:analysis_sampling} We begin by establishing an auxiliary result that we will need for the subsequent lemmas. \subsubsection{Empirical $\Delta_\neuron^{\ell}$ Approximation} \begin{lemma}[Empirical $\Delta_\neuron^{\ell}$ Approximation] \label{lem:delta-hat-approx} Let $\delta \in (0,1)$, let $\lambda_* = K'/2 \ge \lambda$, where $K'$ is from Asm.~\ref{asm:cdf}, and define $$ \DeltaNeuronHat = \DeltaNeuronHatDef, $$ where $\kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTerm \right)$ and $\SS \subseteq \PP$ is as in Alg.~\ref{alg:main}. Then, $$ \Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] \leq \DeltaNeuronHat \right) \ge 1 - \frac{\delta}{4 \eta}.
$$ \end{lemma} \begin{proof} Define the random variables $\mathcal{Y}_{\Input'} = \E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']$ for each $\Input' \in \SS$ and consider the sum $$ \mathcal{Y} = \sum_{\Input' \in \SS} \mathcal{Y}_{\Input'} = \sum_{\Input' \in \SS} \left(\E[\DeltaNeuron[\Input']] - \DeltaNeuron[\Input']\right). $$ Each random variable $\mathcal{Y}_{\Input'}$ satisfies $\E[\mathcal{Y}_{\Input'}] = 0$ and, by Assumption~\ref{asm:subexponential}, is subexponential with parameter $\lambda \leq \lambda_*$. Thus, $\mathcal{Y}$ is a sum of $|\SS|$ independent, zero-mean, $\lambda_*$-subexponential random variables, which implies that $\E[\mathcal{Y}] = 0$ and that we can readily apply Bernstein's inequality for subexponential random variables~\citep{vershynin2016high} to obtain for $t \ge 0$ $$ \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \leq \exp \left(-|\SS| \, \min \left \{\frac{t^2}{4 \, \lambda_*^2}, \frac{t}{2 \, \lambda_*} \right\} \right). $$ Since $|\SS| = \ceil*{\kPrime \logTerm } \ge 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$, we have for $t = \sqrt{2 \lambda_*}$, \begin{align*} \Pr \left(\E[\DeltaNeuron[\Input]] - \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] \ge t \right) &= \Pr \left(\frac{1}{|\SS|} \mathcal{Y} \ge t\right) \\ &\leq \exp \left( -|\SS| \frac{t^2}{4 \lambda_*^2} \right) \\ &\leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) \\ &= \frac{\delta}{8 \, \eta \, \eta^* }. \end{align*} Moreover, for a single $\mathcal{Y}_{\Input'}$, we have by the equivalent definition of a subexponential random variable~\citep{vershynin2016high} that for $u \ge 0$ $$ \Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left(-\min \left \{\frac{u^2}{4 \, \lambda_*^2}, \frac{u}{2 \, \lambda_*} \right\} \right). $$ Thus, for $u = 2 \lambda_* \, \log \left(\logTermInside / \delta \right)$ we obtain $$ \Pr(\DeltaNeuron[\Input] - \E[\DeltaNeuron[\Input]] \ge u) \leq \exp \left( - \log \left(\logTermInside / \delta \right) \right) = \frac{\delta}{ 8 \, \eta \, \eta^* }. $$ Therefore, by the union bound, we have with probability at least $1 - \frac{\delta}{4 \eta \, \eta^*}$: \begin{align*} \DeltaNeuron[\Input] &\leq \E[\DeltaNeuron[\Input]] + u \\ &\leq \left(\frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + t \right) + u \\ &= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \left(\sqrt{2 \lambda_*} + 2 \lambda_* \, \log \left(\logTermInside / \delta \right) \right) \\ &= \frac{1}{|\SS|} \sum_{\Input' \in \SS} \DeltaNeuron[\Input'] + \kappa \\ &\leq \DeltaNeuronHat, \end{align*} where the last inequality follows by definition of $\DeltaNeuronHat$. Thus, by the union bound, we have \begin{align*} \Pr_{\Input \sim {\mathcal D} } \left(\max_{i \in [\eta^\ell]} \DeltaNeuron[\Input] > \DeltaNeuronHat \right) &= \Pr \left(\exists{i \in [\eta^\ell]}: \DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\ &\leq \sum_{i \in [\eta^{\ell}]} \Pr \left(\DeltaNeuron[\Input] > \DeltaNeuronHat \right) \\ &\leq \eta^{\ell} \left(\frac{\delta}{4 \eta \, \eta^*} \right) \\ &\leq \frac{\delta}{4 \, \eta}, \end{align*} where the last line follows by definition of $\eta^* \ge \eta^{\ell}$.
\end{proof} \subsubsection{Notation for the Subsequent Analysis} Let $\WWHatRow^{\ell +}$ and $\WWHatRow^{\ell -}$ denote the sparsified row vectors generated when \textsc{Sparsify} is invoked with its first two arguments corresponding to $(\Wplus, \WWRow^\ell)$ and $(\Wminus, -\WWRow^\ell)$, respectively (Alg.~\ref{alg:main}, Line~\ref{lin:pos-sparsify-weights}). We will at times omit the indices for the neuron $i$ and the layer $\ell$ in the proofs for clarity of exposition, and, for example, refer to $\WWHatRow^{\ell +}$ and $\WWHatRow^{\ell -}$ simply as $\WWHatRowCon^+$ and $\WWHatRowCon^-$, respectively. Let $\Input \sim {\mathcal D}$ and define $$ \hat{z}^{+}(\Input) = \sum_{k \in \Wplus} \WWHatRow[k]^+ \, \hat a_k(\Input) \ge 0 \qquad \text{and} \qquad \hat{z}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWHatRow[k]^-) \, \hat a_k(\Input) \ge 0 $$ be the approximate intermediate values corresponding to the sparsified matrices $\WWHatRowCon^{+}$ and $\WWHatRowCon^{-}$; let $$ \tilde z^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, \hat a_k(\Input) \ge 0 \qquad \text{and} \qquad \tilde z^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, \hat a_k(\Input) \ge 0 $$ be the corresponding intermediate values with respect to the original row vector $\WWRowCon$; and finally, let $$ z^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_k(\Input) \ge 0 \qquad \text{and} \qquad z^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_k(\Input) \ge 0 $$ be the true intermediate values corresponding to the positive and negative valued weights. Note that in this context, we have by definition \begin{align*} \hat{z}_i^\ell (\Input) &= \dotp{\WWHatRowCon}{ \hat a(\Input)} = \hat{z}^{+}(\Input) - \hat{z}^{-}(\Input), \\ \tilde{z}_i^{\ell}(\Input) &= \dotp{\WWRowCon}{\hat a(\Input)} = \tilde z^+(\Input) - \tilde z^{-}(\Input), \quad \text{and} \\ z_i^{\ell}(\Input) &= \dotp{\WWRowCon}{a(\Input)} = z^+(\Input) - z^{-}(\Input), \end{align*} where we used the fact that $\WWHatRowCon = \WWHatRowCon^{+} - \WWHatRowCon^{-} \in \Reals^{1 \times \eta^{\ell-1}}$. \subsubsection{Proof of Lemma~\ref{lem:neuron-approx}} \lemneuronapprox* \begin{proof} Let $\epsilon, \delta \in (0,1)$ be arbitrary and let $\Wplus = \{\edge \in [\eta^{\ell-1}] : \WWRow[\edge] > 0\}$ and $\Wminus = \{\edge \in [\eta^{\ell-1}]: \WWRow[\edge] < 0 \}$ as in Alg.~\ref{alg:main}. Let $\epsilonLayer$ be defined as before, $ \epsilonLayer = \epsilonLayerDef , $ where $\DeltaNeuronHatLayers = \DeltaNeuronHatLayersDef$ and $\DeltaNeuronHat = \DeltaNeuronHatDef$. Observe that $\WWRow[j] > 0$ for all $j \in \Wplus$ and, similarly, $-\WWRow[j] > 0$ for all $j \in \Wminus$. That is, each of the index sets $\Wplus$ and $\Wminus$ corresponds to strictly positive entries in the arguments $\WWRow^\ell$ and $-\WWRow^\ell$, respectively, passed into \textsc{Sparsify}.
Observe that since we conditioned on the event $\mathcal{E}^{\ell-1}$, we have \begin{align*} 2 \, (\ell - 2) \, \epsilonLayer[\ell] &\leq 2 \, (\ell - 2) \, \epsilonLayerDefWordy[\ell] \\ &\leq \frac{\epsilon}{\DeltaNeuronHatLayersDef} \\ &\leq \frac{\epsilon}{2^{L - \ell + 1}} &\text{Since $\DeltaNeuronHat[k] \ge 2 \quad \forall{k \in \{\ell, \ldots, L\}}$} \\ &\leq \frac{\epsilon}{2}, \end{align*} where the inequality $\DeltaNeuronHat[k] \ge 2$ follows from the fact that \begin{align*} \DeltaNeuronHat[k] &= \DeltaNeuronHatDef[\mathbf{\Input}'] \\ &\ge 1 + \kappa &\text{Since $\DeltaNeuron[\mathbf{\Input}'] \ge 1 \quad \forall{\mathbf{\Input}' \in \supp}$ by definition} \\ &\ge 2. \end{align*} Consequently, we obtain that $\hat{a}(\Input) \in (1 \pm \nicefrac{\epsilon}{2}) a(\Input)$, where, as before, $\hat{a}$ and $a$ are shorthand notations for $\hat{a}^{\ell-1} \in \Reals^{\eta^{\ell -1} \times 1}$ and $a^{\ell-1} \in \Reals^{\eta^{\ell -1} \times 1}$, respectively. This implies that $\mathcal{E}^{\ell - 1} \Rightarrow \mathcal{E}_{\nicefrac{1}{2}}$, and since $m = \SampleComplexity[\epsilon]$ in Alg.~\ref{alg:sparsify-weights}, we can invoke Lemma~\ref{lem:pos-weights-approx} with $\epsilon = \epsilonLayer$ on each of the \textsc{Sparsify} invocations to conclude that $$ \Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \leq \Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \, \mid \, \mathcal{E}_{\nicefrac{1}{2}} \right) \leq \frac{3 \delta}{8 \eta}, $$ and $$ \Pr \left(\hat{z}^-(\Input) \notin (1 \pm \epsilonLayer) \tilde z^-(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \leq \frac{3 \delta}{8 \eta}. $$ Therefore, by the union bound, we have \begin{align*} \Pr \left(\hat{z}^+(\Input) \notin (1 \pm \epsilonLayer) \tilde z^+(\Input) \text{ or } \hat{z}^-(\Input) \notin (1 \pm \epsilonLayer) \tilde z^-(\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) &\leq \frac{3 \delta}{8 \eta} + \frac{3 \delta}{8 \eta} = \frac{3 \delta}{4 \eta}. \end{align*} Moreover, by Lemma~\ref{lem:delta-hat-approx}, we have with probability at most $\frac{\delta}{4 \eta }$ that $$ \DeltaNeuron[\Input] > \DeltaNeuronHat. $$ Thus, by the union bound over the failure events, we have with probability at least $1 - \left(\nicefrac{3 \delta}{4 \eta} + \nicefrac{\delta}{4 \eta }\right) = 1 - \nicefrac{\delta}{\eta}$ that \textbf{both} of the following events occur \begin{fleqn} \begin{align} \hspace{\parindent} & \text{1.} \quad \hat{z}^+(\Input) \in (1 \pm \epsilonLayer) \tilde z^+(\Input) \ \ \text{and} \ \ \hat{z}^-(\Input) \in (1 \pm \epsilonLayer) \tilde z^-(\Input) \label{eqn:event1} \\ \hspace{\parindent} & \text{2.} \quad \DeltaNeuron[\Input] \leq \DeltaNeuronHat \label{eqn:event2} \end{align} \end{fleqn} Recall that $\epsilon' = \frac{\epsilon}{\epsilonDenomContant \, (L-1)}$, $ \epsilonLayer[\ell] = \epsilonLayerDef$, and that event $\mathcal{E}^\ell_i$ denotes the (desirable) event that $$ \hat{z}_i^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}_i (\Input) $$ holds, and similarly, $\mathcal{E}^\ell = \cap_{i \in [\eta^\ell]} \, \mathcal{E}_{i}^\ell$ denotes the vector-wise analogue where $$ \hat{z}^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell}(\Input).
$$ Let $k = 2 \, (\ell - 2)$ and note that, by conditioning on the event $\mathcal{E}^{\ell-1}$, we have by definition \begin{align*} \hat{a}^{\ell-1}(\Input) &\in (1 \pm 2 \, (\ell - 2) \epsilonLayer[\ell]) a^{\ell-1}(\Input) = (1 \pm k \, \epsilonLayer[\ell]) a^{\ell-1}(\Input), \end{align*} which follows by definition of the ReLU function. Recall that our overarching goal is to establish that $$ \hat{z}_i^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell + 1]\right) z_i^\ell(\Input), $$ which would immediately imply by definition of the ReLU function that $$ \hat{a}_i^{\ell}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell + 1]\right) a_i^\ell(\Input). $$ Having clarified the conditioning and our objective, we will once again drop the index $i$ from the expressions moving forward. Proceeding from above, we have with probability at least $1 - \nicefrac{\delta}{\eta}$ \begin{align*} \hat{z} (\Input) &= \hat{z}^+(\Input) - \hat{z}^-(\Input) \\ &\leq (1 + \epsilonLayer) \, \tilde z^+(\Input) - (1 - \epsilonLayer) \, \tilde z^-(\Input) &\text{By Event~\eqref{eqn:event1} above}\\ &\leq (1 + \epsilonLayer) (1 + k \, \epsilonLayer[\ell]) \, z^+(\Input) - (1 - \epsilonLayer) (1 - k \, \epsilonLayer[\ell]) \, z^-(\Input) &\text{Conditioning on event $\mathcal{E}^{\ell-1}$} \\ &=\left(1 + \epsilonLayer (k + 1) + k \epsilonLayer^2\right) z^+(\Input) + \left(-1 + (k+1) \epsilonLayer - k \epsilonLayer^2 \right) z^-(\Input) \\ &= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer \left(z^+(\Input) + z^-(\Input)\right) \\ &= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \, \epsilon'}{ \DeltaNeuronHatLayersDef} \, \left(z^+(\Input) + z^-(\Input)\right) \\ &\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \, \epsilon'}{\DeltaNeuron[\Input] \, \DeltaNeuronHatLayersDef[\ell+1]} \, \left(z^+(\Input) + z^-(\Input)\right) &\text{By Event~\eqref{eqn:event2} above} \\ &= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + \frac{(k+1) \,\epsilon'}{ \DeltaNeuronHatLayersDef[\ell+1]} \, \left|z(\Input)\right| &\text{By $\DeltaNeuron[\Input] = \frac{z^+(\Input) + z^-(\Input)}{|z(\Input)|}$} \\ &= \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)|. \end{align*} To upper bound the last expression above, we begin by observing that $k \epsilonLayer^2 \leq \epsilonLayer$, which follows from the fact that $\epsilonLayer \leq \frac{1}{2 \, (L-1)} \leq \frac{1}{k}$ by definition. Moreover, we also note that $\epsilonLayer[\ell] \leq \epsilonLayer[\ell + 1]$, since $\DeltaNeuronHat \ge 1$. Now, we consider two cases. \paragraph{Case of $z(\Input) \ge 0$:} In this case, we have \begin{align*} \hat{z}(\Input) &\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)| \\ &\leq (1 + \epsilonLayer) z(\Input) + (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\ &\leq (1 + \epsilonLayer[\ell + 1]) z(\Input) + (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\ &= \left(1 + (k+2) \, \epsilonLayer[\ell + 1]\right) z(\Input) \\ &= \left(1 + 2 \, (\ell - 1) \epsilonLayer[\ell+1]\right) z(\Input), \end{align*} where the last line follows since $k = 2 \, (\ell - 2)$ implies that $k + 2 = 2( \ell - 1)$. Thus, this establishes the desired upper bound in the case that $z(\Input) \ge 0$.
\paragraph{Case of $z(\Input) < 0$:} Since $z(\Input)$ is negative, we have that $\left(1 + k \, \epsilonLayer^2\right) z(\Input) \leq z(\Input)$ and $|z(\Input)| = -z(\Input)$, and thus \begin{align*} \hat{z}(\Input) &\leq \left(1 + k \, \epsilonLayer^2\right) z(\Input) + (k+1) \, \epsilonLayer[\ell + 1] \, |z(\Input)| \\ &\leq z(\Input) - (k + 1) \epsilonLayer[\ell + 1] z(\Input) \\ &\leq \left(1 - (k + 1) \epsilonLayer[\ell + 1] \right) z(\Input) \\ &\leq \left(1 - (k + 2) \epsilonLayer[\ell + 1] \right) z(\Input) \\ &= \left(1 - 2 \, (\ell - 1) \epsilonLayer[\ell+1]\right) z(\Input), \end{align*} which establishes the upper bound for the case of $z(\Input)$ being negative. Putting the results of the case-by-case analysis together, we obtain the upper bound $\hat{z}(\Input) \leq z(\Input) + 2 \, (\ell - 1) \epsilonLayer[\ell+1] |z(\Input)|$. The proof for establishing the lower bound for $\hat{z}(\Input)$ is analogous to that given above, and yields $\hat{z}(\Input) \ge z(\Input) - 2 \, (\ell - 1) \epsilonLayer[\ell+1] |z(\Input)|$. Putting both the upper and lower bound together, we have that with probability at least $1 - \frac{\delta}{\eta}$: $$ \hat{z}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \epsilonLayer[\ell+1] \right) z(\Input), $$ and this completes the proof. \end{proof} \subsubsection{Remarks on Negative Activations} \label{app:negative} We note that up to now we have assumed that the input $a(x)$, i.e., the vector of activations from the previous layer, is nonnegative. For layers $\ell \in \{3, \ldots, L\}$, this is indeed true due to the nonnegativity of the ReLU activation function. For layer $2$, the input is $a(x) = x$, which can be decomposed as $a(x) = a_\mathrm{pos}(x) - a_\mathrm{neg}(x)$, where $a_\mathrm{pos}(x), a_\mathrm{neg}(x) \in \Reals^{\eta^{\ell - 1}}$ are entry-wise nonnegative. Furthermore, we can define the sensitivity over the set of points $\{a_\mathrm{pos}(x),\, a_\mathrm{neg}(x) \, \mid \, x \in \SS\}$ (instead of $\{a(x) \, \mid \, x \in \SS\}$), and thus maintain the required nonnegativity of the sensitivities. Then, in the terminology of Lemma~\ref{lem:neuron-approx}, we let $$ z_\mathrm{pos}^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_{\mathrm{pos}, k}(\Input) \ge 0 \qquad \text{and} \qquad z_\mathrm{neg}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_{\mathrm{neg}, k}(\Input) \ge 0 $$ be the corresponding positive parts, and $$ z_\mathrm{neg}^{+}(\Input) = \sum_{k \in \Wplus} \WWRow[k] \, a_{\mathrm{neg}, k}(\Input) \ge 0 \qquad \text{and} \qquad z_\mathrm{pos}^{-}(\Input) = \sum_{k \in \Wminus} (-\WWRow[k]) \, a_{\mathrm{pos}, k}(\Input) \ge 0 $$ be the corresponding negative parts of the preactivation of the considered layer, such that $$ z^{+}(\Input) = z_\mathrm{pos}^{+}(\Input) + z_\mathrm{neg}^{-}(\Input) \qquad \text{and} \qquad z^{-}(\Input) = z_\mathrm{neg}^{+}(\Input) + z_\mathrm{pos}^{-}(\Input). $$ We also let $$ \DeltaNeuron[\Input] = \frac{z^+(\Input) + z^-(\Input)}{|z(\Input)|} $$ be as before, with $z^+(\Input)$ and $z^-(\Input)$ defined as above. Equipped with the above definitions, we can rederive Lemma~\ref{lem:neuron-approx} analogously in this more general setting, i.e., with potentially negative activations. We also note that we require a slightly larger sample size now, since we have to take a union bound over the failure probabilities of all four approximations (i.e.,
$\hat z_\mathrm{pos}^{+}(\Input)$, $\hat z_\mathrm{neg}^{-}(\Input)$, $\hat z_\mathrm{neg}^{+}(\Input)$, and $\hat z_\mathrm{pos}^{-}(\Input)$) to obtain the desired overall failure probability of $\nicefrac{\delta}{\eta}$. \subsubsection{Proof of Theorem~\ref{thm:main}} The following corollary immediately follows from Lemma~\ref{lem:neuron-approx} and establishes a layer-wise approximation guarantee. \begin{restatable}[Conditional Layer-wise Approximation]{corollary}{corlayerwise} \label{cor:approx-layer} Let $\epsilon, \delta \in (0,1)$, $\ell \in \br{2,\ldots,L}$, and $\Input \sim {\mathcal D}$. \textsc{CoreNet} generates a sparse weight matrix $\hat{W}^\ell = \big(\WWHatRow[1]^\ell, \ldots, \WWHatRow[\eta^\ell]^\ell \big)^\top \in {\REAL}^{\eta^\ell \times \eta^{\ell-1}}$ such that \begin{equation} \label{eqn:coreset-property-neuron} \Pr (\mathcal{E}^\ell \, \mid \, \mathcal{E}^{\ell-1}) = \Pr \left( \hat z^\ell (\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^{\ell} (\Input) \, \mid \, \mathcal{E}^{\ell-1} \right) \ge 1 - \frac{\delta \, \eta^\ell }{\eta}, \end{equation} where $\epsilonLayer = \epsilonLayerDef$, $\hat{z}^\ell(\Input) = \hat{W}^\ell \hat a^{\ell-1}(\Input)$, and $z^\ell(\Input) = W^\ell a^{\ell-1}(\Input)$. \end{restatable} \begin{proof} Since~\eqref{eq:neuronapprox} established by Lemma~\ref{lem:neuron-approx} holds for any neuron $i \in [\eta^\ell]$ in layer $\ell$ and since $(\mathcal{E}^\ell)^\mathsf{c} = \cup_{i \in [\eta^\ell]} (\mathcal{E}_i^\ell)^\mathsf{c}$, it follows by the union bound over the failure events $(\mathcal{E}_i^\ell)^\mathsf{c}$ for all $i \in [\eta^\ell]$ that with probability at least $1 - \frac{\eta^\ell \delta}{\eta}$ \begin{align*} \hat z^\ell(\Input) &= \hat{W}^\ell \hat a^{\ell-1}(\Input) \in \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) W^\ell a^{\ell-1}(\Input) = \left(1 \pm 2 \, (\ell - 1) \, \epsilonLayer[\ell + 1] \right) z^\ell (\Input). \end{align*} \end{proof} The following lemma removes the conditioning on $\mathcal{E}^{\ell-1}$ and explicitly considers the (compounding) error incurred by generating coresets $\hat{W}^2, \ldots, \hat{W}^\ell$ for multiple layers. \lemlayer* \begin{proof} Invoking Corollary~\ref{cor:approx-layer}, we know that for any layer $\ell' \in \br{2,\ldots,L}$, \begin{align} \Pr_{\hat{W}^{\ell'}, \, \Input, \, \hat{a}^{\ell'-1}(\cdot)} (\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell'-1}) \ge 1 - \frac{\delta \, \eta^{\ell'}}{\eta}.
\label{eqn:cor-ineq} \end{align} We also have by the law of total probability that \begin{align} \Pr(\mathcal{E}^{\ell'}) &= \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' - 1}) \Pr(\mathcal{E}^{\ell' - 1}) + \Pr(\mathcal{E}^{\ell'} \, \mid \, (\mathcal{E}^{\ell' - 1})^\mathsf{c}) \Pr ((\mathcal{E}^{\ell' - 1})^\mathsf{c} ) \nonumber \\ &\ge \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' - 1}) \Pr(\mathcal{E}^{\ell' - 1}). \label{eqn:repeated-invocation} \end{align} Repeated applications of \eqref{eqn:cor-ineq} and \eqref{eqn:repeated-invocation}, in conjunction with the observation that $\Pr(\mathcal{E}^1) = 1$\footnote{Since we do not compress the input layer.}, yield \begin{align*} \Pr(\mathcal{E}^\ell) &\ge \Pr(\mathcal{E}^{\ell} \, \mid \, \mathcal{E}^{\ell - 1}) \Pr(\mathcal{E}^{\ell - 1}) \\ &\,\,\, \vdots & \text{Repeated applications of \eqref{eqn:repeated-invocation}} \\ &\ge \prod_{\ell'=2}^\ell \Pr(\mathcal{E}^{\ell'} \, \mid \, \mathcal{E}^{\ell' -1}) \\ &\ge \prod_{\ell'=2}^\ell \left(1 - \frac{\delta \, \eta^{\ell'}}{\eta}\right) &\text{By \eqref{eqn:cor-ineq}} \\ &\ge 1 - \frac{\delta}{\eta} \sum_{\ell'=2}^\ell \eta^{\ell'}, \end{align*} where the last inequality follows by the Weierstrass Product Inequality\footnote{The Weierstrass Product Inequality~\citep{doerr2018probabilistic} states that for $p_1, \ldots, p_n \in [0,1]$, $$\prod_{i=1}^n (1 - p_i) \ge 1 - \sum_{i=1}^n p_i.$$}, and this establishes the lemma. \end{proof} Appropriately invoking Lemma~\ref{lem:layer}, we can now establish the approximation guarantee for the entire neural network. This is stated in Theorem~\ref{thm:main} and the proof can be found below. \thmmain* \begin{proof} Invoking Lemma~\ref{lem:layer} with $\ell = L$, we have that for $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$, \begin{align*} \Pr_{\hat{\theta}, \, \Input} \left(f_{\paramHat}(\Input) \in \left(1 \pm 2 \, (L - 1) \, \epsilonLayer[L + 1]\right) f_\param(\Input) \right) &= \Pr_{\hat{\theta}, \, \Input } \left(\hat z^{L}(\Input) \in \left(1 \pm 2 \, (L - 1) \, \epsilonLayer[L + 1]\right) z^L (\Input)\right) \\ &= \Pr(\mathcal{E}^{L}) \\ &\ge 1 - \frac{\delta \, \sum_{\ell' = 2}^{L} \eta^{\ell'}}{\eta} \\ &= 1 - \delta, \end{align*} where the last equality follows by definition of $\eta = \sum_{\ell = 2}^L \eta^\ell$. Note that by definition, \begin{align*} \epsilonLayer[L+1] &= \epsilonLayerDefWordy[L+1] \\ &= \frac{\epsilon}{\epsilonDenomContant \, (L-1)}, \end{align*} where the last equality follows from the fact that the empty product $\DeltaNeuronHatLayersDef[L+1]$ is equal to 1. Thus, we have \begin{align*} 2 \, (L-1) \epsilonLayer[L+1] &= \epsilon, \end{align*} and so we conclude $$ \Pr_{\hat{\theta}, \, \Input} \left(f_{\paramHat}(\Input) \in (1 \pm \epsilon) f_\param(\Input) \right) \ge 1 - \delta, $$ which, along with the sampling complexity of Alg.~\ref{alg:sparsify-weights} (Line~\ref{lin:beg-sampling}), establishes the approximation guarantee provided by the theorem. For the computational time complexity, we observe that the most time-consuming operation per iteration of the loop on Lines~\ref{lin:beg-main-loop}-\ref{lin:end-main-loop} is the weight sparsification procedure.
The asymptotic time complexity of each $\textsc{Sparsify}$ invocation for each neuron $i \in [\eta^\ell]$ in layers $\ell \in \br{2,\ldots,L}$ (Alg.~\ref{alg:main}, Line~\ref{lin:pos-sparsify-weights}) is dominated by the computation of the relative importances of the incoming edges (Alg.~\ref{alg:sparsify-weights}, Lines~\ref{lin:beg-sensitivity}-\ref{lin:end-sensitivity}). This can be done by evaluating $\WWRow[ik]^\ell a_{k}^{\ell-1}(x)$ for all $k \in \mathcal{W}$ and $x \in \SS$, for a total computation time that is bounded above by $\Bigo\left(|\SS| \, \eta^{\ell-1} \right)$, since $|\mathcal{W}| \leq \eta^{\ell-1}$ for each $i \in [\eta^\ell]$. Thus, $\textsc{Sparsify}$ takes $\Bigo\left(\abs{\SS}\, \eta^{\ell-1} \right)$ time. Summing the computation time over all layers and all neurons in each layer, we obtain an asymptotic time complexity of $\Bigo \big(\abs{\SS} \, \sum_{\ell = 2}^L \eta^{\ell-1} \eta^{\ell}\big) \subseteq \Bigo \left(\abs{\SS} \, \eta^* \, \eta \right)$. Since $\abs{\SS} \in \Bigo(\log (\eta \, \eta^* / \delta))$, we conclude that the computational complexity of our neural network compression algorithm is \begin{equation} \label{eqn:computation-time} \Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right). \end{equation} \end{proof} \subsubsection{Proof of Theorem~\ref{thm:instance-independent-main}} In order to ensure that the established sampling bounds are non-vacuous in terms of the sensitivity, i.e., not linear in the number of incoming edges, we show that the sum of sensitivities per neuron, $S$, is small. The following lemma establishes that the sum of sensitivities can be bounded, independently of the problem instance, by a term that is logarithmic in roughly the total number of edges ($\eta \cdot \eta^*$). \begin{lemma}[Sensitivity Bound] \label{lem:sens-bound} For any $\ell \in \br{2,\ldots,L}$ and $i \in [\eta^{\ell}]$, the sum of sensitivities $S = S_+ + S_-$ is bounded by $$ S \leq 2 \, |\SS| = 2 \, \ceil*{\kPrime \logTerm }. $$ \end{lemma} \begin{proof} Consider $S_+$ for an arbitrary $\ell \in \{2, \ldots, L\}$ and $i \in [\eta^{\ell}]$. For each $j \in \mathcal{W}$, we have the following bound on the sensitivity of the single edge $j$: \begin{align*} \s &= \max_{\Input \in \SS} \,\, \g{x} \leq \sum_{\Input \in \SS} \,\, \g{x} = \sum_{\Input \in \SS} \, \, \gDef{x}, \end{align*} where the inequality follows from the fact that the maximum can be upper bounded by the sum over $\Input \in \SS$, since $\g{x} \ge 0$ for all $j \in \mathcal{W}$. Thus, \begin{align*} S_+ &= \sum_{j \in \mathcal{W}} \s \leq \sum_{j \in \mathcal{W}} \sum_{\Input \in \SS} \, \, \g{x} \\ &=\sum_{\Input \in \SS} \frac{\sum_{j \in \mathcal{W}} \WWRow[j] \, a_{j}(\Input)}{\sum_{k \in \mathcal{W}}\WWRow[ k]\, a_{k}(\Input) } = |\SS|, \end{align*} where we swapped the order of summation, which is valid since all summands are nonnegative and the sums are finite. Using the same argument, we obtain $S_- = \sum_{j \in \mathcal{W}_-} \s \leq |\SS|$, which establishes the lemma. \end{proof} Note that the sampling complexities established above have a linear dependence on the sum of sensitivities, $\sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} S_\neuron^\ell$, which is instance-dependent, i.e., it depends on the sampled $\SS \subseteq \PP$ and the actual weights of the trained neural network. By applying Lemma~\ref{lem:sens-bound}, we obtain a bound on the size of the compressed network that is independent of the sensitivity.
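Before stating the theorem, we illustrate Lemma~\ref{lem:sens-bound} numerically (a quick sanity check in Python with synthetic values; the array shapes are arbitrary and the snippet is not part of the formal argument). The sum of sensitivities cannot exceed $|\SS|$ because, for each $\Input \in \SS$, the relative contributions of the edges sum to one:
\begin{verbatim}
import numpy as np

# Synthetic check of the Sensitivity Bound: for nonnegative contributions,
# sum_j max_x g_j(x) <= sum_x sum_j g_j(x) = |S|, since each row of g sums to 1.
rng = np.random.default_rng(1)
A = rng.random((8, 5))        # |S| = 8 cached activations, 5 incoming edges
w = rng.random(5)             # hypothetical positive weights of one neuron
g = (A * w) / (A * w).sum(axis=1, keepdims=True)  # relative contributions g_j(x)
S_plus = g.max(axis=0).sum()  # sum of per-edge sensitivities
assert S_plus <= A.shape[0]   # Lemma: S_+ <= |S|
\end{verbatim}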
\begin{restatable}[Sensitivity-Independent Network Compression]{theorem}{thminstanceindependentmain} \label{thm:instance-independent-main} For any given $\epsilon, \delta \in (0, 1)$, our sampling scheme (Alg.~\ref{alg:main}) generates a set of parameters $\hat{\theta}$ of size \begin{align*} \size{\hat{\theta}} \in \Bigo \left( \frac{ \log(\eta / \delta) \, \log ( \eta \, \eta^* / \delta) \log^2(\kmaxInsideLog) \, \eta \, L^2}{ \epsilon^2} \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \right), \end{align*} in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^*/ \delta \big) \right)$ time, such that $\Pr_{\hat{\theta}, \, \Input \sim {\mathcal D}} \left(f_{\paramHat}(\Input) \in (1 \pm \epsilon) f_\param(\Input) \right) \ge 1 - \delta$. \end{restatable} \begin{proof} Combining Lemma~\ref{lem:sens-bound} and Theorem~\ref{thm:main} establishes the theorem. \end{proof} \subsubsection{Generalized Network Compression} Theorem~\ref{thm:main} gives an approximation guarantee with respect to one randomly drawn point $\Input \sim {\mathcal D}$. The following corollary extends this approximation guarantee to any set of $n$ randomly drawn points via a union bound argument, which enables approximation guarantees for, e.g., a test data set composed of $n$ i.i.d.\ points drawn from the distribution. We note that the sampling complexity only increases by roughly a logarithmic factor in $n$. \begin{corollary}[Generalized Network Compression] \label{cor:generalized-compression} For any $\epsilon, \delta \in (0,1)$ and a set of i.i.d.\ input points $\PP'$ of cardinality $|\PP'| \in \mathbb{N}_+$, i.e., $\PP' \stackrel{i.i.d.}{\sim} {\mathcal D}^{|\PP'|}$, consider the reparameterized version of Alg.~\ref{alg:main} with \begin{enumerate} \item $\SS \subseteq \PP$ of size $|\SS| \ge \ceil*{\logTermGeneral \kPrime}$, \item $\DeltaNeuronHat = \DeltaNeuronHatDef$ as before, but with $\kappa$ instead defined as $$ \kappa = \sqrt{2 \lambda_*} \left(1 + \sqrt{2 \lambda_*} \logTermGeneral \right), \qquad \text{and} $$ \item $m \ge \SampleComplexityGeneralConcise$ as the sample complexity in \textsc{Sparsify}. \end{enumerate} Then, Alg.~\ref{alg:main} generates a set of neural network parameters $\hat{\theta}$ of size at most \begin{align*} \size{\hat{\theta}} &\leq \sum_{\ell = 2}^{L} \sum_{i=1}^{\eta^\ell} \left( \ceil*{\frac{32 \, (L-1)^2 \, (\DeltaNeuronHatLayers)^2 \, S_\neuron^\ell \, \kmax \, \log (16 \, |\PP'| \, \eta / \delta) }{\epsilon^2}} + 1\right) \\ &\in \Bigo \left( \frac{ K \, \log ( \eta \, |\PP'| / \delta) \, L^2}{ \epsilon^2} \, \sum_{\ell = 2}^{L} (\DeltaNeuronHatLayers)^2 \, \sum_{i=1}^{\eta^\ell} S_\neuron^\ell \, \right), \end{align*} in $\Bigo \left( \eta \, \, \eta^* \, \log \big(\eta \, \eta^* \, |\PP'| / \delta \big) \right)$ time, such that $$ \Pr_{\hat{\theta}, \, \Input} \left(\forall{\Input \in \PP'}: f_{\paramHat}(\Input) \in (1 \pm \epsilon) f_\param(\Input) \right) \ge 1 - \frac{\delta}{2}. $$ \end{corollary} \begin{proof} The reparameterization enables us to invoke Theorem~\ref{thm:main} with $\delta' = \nicefrac{\delta}{2 \, |\PP'|}$; applying the union bound over all $|\PP'|$ i.i.d.\ samples in $\PP'$ establishes the corollary. \end{proof} \section{Conclusion} \label{sec:conclusion} We presented a coreset-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network's output.
Our method and analysis extend traditional coreset constructions to the application of compressing parameters, which may be of independent interest. Our work distinguishes itself from prior approaches in that it establishes theoretical guarantees on the approximation accuracy and size of the generated compressed network. As a corollary to our analysis, we obtain generalization bounds for neural networks, which may provide novel insights on the generalization properties of neural networks. We empirically demonstrated the practical effectiveness of our compression algorithm on a variety of neural network configurations and real-world data sets. In future work, we plan to extend our algorithm and analysis to compress Convolutional Neural Networks (CNNs) and other network architectures. We conjecture that our compression algorithm can be used to reduce the storage requirements of neural network models and enable fast inference in practical settings. \section{Introduction} \label{sec:introduction} Within the past decade, large-scale neural networks have demonstrated unprecedented empirical success in high-impact applications such as object classification, speech recognition, computer vision, and natural language processing. However, with the ever-increasing size of state-of-the-art neural networks, the storage and computational requirements of these models are becoming increasingly prohibitive. Recently proposed architectures for neural networks, such as those in~\cite{Alex2012,Long15,SegNet15}, contain millions of parameters, rendering them difficult to deploy on resource-constrained platforms, e.g., embedded devices, mobile phones, or small-scale robotic platforms. In this work, we consider the problem of sparsifying the parameters of a trained fully-connected neural network in a principled way so that the output of the compressed neural network is approximately preserved. We introduce a neural network compression approach based on identifying and removing weighted edges with low relative importance via coresets, small weighted subsets of the original set that approximate the pertinent cost function. Our compression algorithm hinges on extensions of the traditional sensitivity-based coresets framework~\citep{langberg2010universal,braverman2016new}, and to the best of our knowledge, is the first to apply coresets to parameter downsizing. In this regard, our work aims to simultaneously introduce a practical algorithm for compressing neural network parameters with provable guarantees and close the research gap in prior coresets work, which has predominantly focused on compressing input data points. In particular, this paper contributes the following: \begin{enumerate} \item A coreset approach to compressing problem-specific parameters based on a novel, empirical notion of sensitivity that extends state-of-the-art coreset constructions. \item An efficient neural network compression algorithm, CoreNet, based on our extended coreset approach that sparsifies the parameters via importance sampling of weighted edges. \item Extensions of the CoreNet method, CoreNet+ and CoreNet++, that improve upon the edge sampling approach by additionally performing neuron pruning and amplification. \item Analytical results establishing guarantees on the approximation accuracy, size, and generalization of the compressed neural network.
\item Evaluations on real-world data sets that demonstrate the practical effectiveness of our algorithm in compressing neural network parameters and validate our theoretical results. \end{enumerate} \section{Method} \label{sec:method} In this section, we introduce our neural network compression algorithm as depicted in Alg.~\ref{alg:main}. Our method is based on an importance sampling scheme that extends traditional sensitivity-based coreset constructions to the application of compressing parameters. \subsection{CoreNet} Our method (Alg.~\ref{alg:main}) hinges on the insight that a validation set of data points $\PP \stackrel{i.i.d.}{\sim} {\mathcal D}^n$ can be used to approximate the relative importance, i.e., sensitivity, of each weighted edge with respect to the input data distribution ${\mathcal D}$. For this purpose, we first pick a subsample of the data points $\SS \subseteq \PP$ of appropriate size (see Sec.~\ref{sec:analysis} for details), cache each neuron's activation, and compute a neuron-specific constant that is used to determine the required edge sampling complexity (Lines~\ref{lin:sample-s}-\ref{lin:cache-activations}). \input{pseudocode} Subsequently, we apply our core sampling scheme to sparsify the set of incoming weighted edges to each neuron in all layers (Lines~\ref{lin:beg-main-loop}-\ref{lin:end-main-loop}). For technical reasons (see Sec.~\ref{sec:analysis}), we perform the sparsification on the positive and negative weighted edges separately and then consolidate the results (Lines~\ref{lin:weight-sets}-\ref{lin:consolidate}). By repeating this procedure for all neurons in every layer, we obtain a set $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ of sparse weight matrices such that the output of each layer and of the entire network is approximately preserved, i.e., $\hat{W}^{\ell} \hat a^{\ell-1}(\Input) \approx W^\ell a^{\ell-1}(\Input)$ and $f_{\paramHat}(\Input) \approx f_\param(\Input)$, respectively\footnote{$\hat a^{\ell -1}(\Input)$ denotes the approximation from previous layers for an input $\Input \sim {\mathcal D}$; see Sec.~\ref{sec:analysis} for details.}. \subsection{Sparsifying Weights} The crux of our compression scheme lies in Alg.~\ref{alg:sparsify-weights} (invoked twice on Line~\ref{lin:pos-sparsify-weights}, Alg.~\ref{alg:main}) and, in particular, in the importance sampling scheme used to select a small subset of edges of high importance. The cached activations are used to compute the \emph{sensitivity}, i.e., relative importance, of each considered incoming edge $j \in \mathcal{W}$ to neuron $i \in [\eta^\ell]$, $\ell \in \br{2,\ldots,L}$ (Alg.~\ref{alg:sparsify-weights}, Lines~\ref{lin:beg-sensitivity}-\ref{lin:end-sensitivity}). The relative importance of each edge $j$ is computed as the maximum (over $\Input \in \SS$) ratio of the edge's contribution to the sum of contributions of all edges. In other words, the sensitivity $\sPM$ of an edge $j$ captures the highest (relative) impact $j$ had on the output of neuron $i \in [\eta^\ell]$ in layer $\ell$ across all $\Input \in \SS$. The sensitivities are then used to compute an importance sampling distribution over the incoming weighted edges (Lines~\ref{lin:beg-sampling-distribution}-\ref{lin:end-sampling-distribution}). The intuition behind the importance sampling distribution is that if $\sPM$ is high, then edge $j$ is likely to have a high impact on the output of neuron $i$, and therefore we should keep edge $j$ with a higher probability.
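To make this step concrete, the following is a minimal Python sketch of the per-neuron sampling procedure for a row of strictly positive weights. The function name and its arguments are illustrative and not part of Alg.~\ref{alg:sparsify-weights}; we also assume here that every cached point activates at least one of the considered edges, so the normalization is well defined.
\begin{verbatim}
import numpy as np

def sparsify_row(w, A, m, rng=np.random.default_rng(0)):
    # w: (d,) strictly positive weights of one neuron's considered edges
    # A: (|S|, d) cached nonnegative activations of the previous layer
    # m: number of edges to sample with replacement
    contrib = A * w                                   # w_j * a_j(x) per x in S
    g = contrib / contrib.sum(axis=1, keepdims=True)  # relative contributions
    s = g.max(axis=0)                                 # sensitivities (max over S)
    q = s / s.sum()                                   # sampling distribution
    idx = rng.choice(len(w), size=m, p=q)             # m draws with replacement
    w_hat = np.zeros_like(w)
    np.add.at(w_hat, idx, w[idx] / (m * q[idx]))      # reweight: E[w_hat] = w
    return w_hat
\end{verbatim}
Sampling proportionally to the sensitivities and reweighting each draw by $1/(m \, q_j)$ keeps the estimator unbiased while concentrating the samples on edges with high worst-case relative impact.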
Then, $m$ edges are sampled with replacement (Lines~\ref{lin:beg-sampling}-\ref{lin:end-sampling}) and the sampled weights are reweighted to ensure the unbiasedness of our estimator (Lines~\ref{lin:beg-reweigh}-\ref{lin:end-reweigh}). \subsection{Extensions: Neuron Pruning and Amplification} In this subsection, we outline two improvements to our algorithm that do not violate any of our theoretical properties and may improve compression rates in practical settings. \textbf{Neuron pruning (CoreNet+)} Similar to removing redundant edges, we can use the empirical activations to gauge the importance of each neuron. In particular, if the maximum activation (over all evaluations $\Input \in \SS$) of a neuron is equal to 0, then the neuron -- along with all of its incoming and outgoing edges -- can be pruned without significantly affecting the output, with reasonable probability. This intuition can be made rigorous under the assumptions outlined in Sec.~\ref{sec:analysis}. \textbf{Amplification (CoreNet++)} Coresets that provide stronger approximation guarantees can be constructed via \emph{amplification} -- the procedure of constructing multiple approximations (coresets) $(\WWHatRow^\ell)_1, \ldots, (\WWHatRow^\ell)_\tau$ over $\tau$ trials and picking the best one. To evaluate the quality of each approximation, a different subset $\TT \subseteq \PP \setminus \SS$ can be used to infer performance. In practice, amplification entails constructing multiple approximations by executing Line~\ref{lin:pos-sparsify-weights} of Alg.~\ref{alg:main} and picking the one that achieves the lowest relative error on $\TT$. \section{Problem Definition} \label{sec:problem-definition} \subsection{Fully-Connected Neural Networks} A feedforward fully-connected neural network with $L \in \mathbb{N}_+$ layers and parameters $\theta$ defines a mapping $f_\param: \mathcal{X} \to \mathcal{Y}$ for a given input $x \in \mathcal{X} \subseteq \Reals^d$ to an output $y \in \mathcal{Y} \subseteq \Reals^k$ as follows. Let $\eta^\ell \in \mathbb{N}_+$ denote the number of neurons in layer $\ell \in [L]$, where $[L] = \{1, \ldots, L \}$ denotes the index set, and where $\eta^1 = d$ and $\eta^{L} = k$. Further, let $\eta = \sum_{\ell = 2}^L \eta^\ell$ and $\eta^* = \max_{\ell \in \br{2,\ldots,L}} \eta^\ell$. For layers $\ell \in \br{2,\ldots,L}$, let $W^\ell \in \Reals^{\eta^\ell \times \eta^{\ell-1}}$ be the weight matrix for layer $\ell$ with entries denoted by $\WWRow[ij]^\ell$, rows denoted by $\WWRow^\ell \in \Reals^{1 \times \eta^{\ell -1}}$, and $\theta = (W^2,\ldots,W^{L})$. For notational simplicity, we assume that the bias is embedded in the weight matrix. Then for an input vector $x \in \Reals^d$, let $a^1 = x$ and $z^{\ell} = W^{\ell} a^{\ell-1} \in \Reals^{\eta^{\ell}}$, $\forall \ell \in \br{2,\ldots,L}$, where $a^{\ell-1} = \relu{z^{\ell-1}} \in \Reals^{\eta^{\ell-1}}$ denotes the activation. We consider the activation function to be the Rectified Linear Unit (ReLU) function, i.e., $\relu{\cdot} = \max \{\cdot\,, 0\}$ (entry-wise, if the input is a vector). The output of the network for an input $x$ is $f_\param(x) = z^L$, and in particular, for classification tasks the prediction is $ \argmax_{i \in [k]} f_\param(x)_i = \argmax_{i \in [k]} z^L_i.
$ \subsection{Neural Network Coreset Problem} Consider the setting where a neural network $f_\param(\cdot)$ has been trained on a training set of independent and identically distributed (i.i.d.)\ samples from a joint distribution on $\mathcal{X} \times \mathcal{Y}$, yielding parameters $\theta = (W^2,\ldots,W^{L})$. We further denote the input points of a validation data set as $\PP = \br{x_i}_{i=1}^n \subseteq \mathcal{X}$ and the marginal distribution over the input space $\mathcal{X}$ as ${\mathcal D}$. We define the size of the parameter tuple $\theta$, $\size{\theta}$, to be the sum of the number of non-zero entries in the weight matrices $W^2,\ldots,W^{L}$. For any given $\epsilon, \delta \in (0,1)$, our overarching goal is to generate a reparameterization $\hat{\theta}$, yielding the neural network $f_{\paramHat}(\cdot)$, using a randomized algorithm, such that $\size{\hat{\theta}} \ll \size{\theta}$, and the neural network output $f_\param(\Input)$, $\Input \sim {\mathcal D}$, can be approximated up to $1 \pm \eps$ multiplicative error with probability greater than $1- \delta$. We define the $1 \pm \epsilon$ multiplicative error between two $k$-dimensional vectors $a, b \in \Reals^k$ as the following entry-wise bound: $ a \in (1 \pm \epsilon)b \, \Leftrightarrow \, a_i \in (1 \pm \epsilon) b_i \, \forall{i \in [k]}, $ and formalize the definition of an $(\epsilon, \delta)$-coreset as follows. \begin{definition}[$(\epsilon, \delta)$-coreset] Given user-specified $\eps, \delta \in (0,1)$, a set of parameters $\hat{\theta} = (\hat{W}^2, \ldots, \hat{W}^L)$ is an $(\eps, \delta)$-coreset for the network parameterized by $\theta$ if for $x \sim {\mathcal D}$, it holds that $$ \Pr_{\hat{\theta}, \Input} (f_{\paramHat} (x) \in (1 \pm \eps) f_\param(x)) \ge 1 - \delta, $$ where $\Pr_{\hat{\theta}, \Input}$ denotes a probability measure with respect to a random data point $\Input$ and the output $\hat{\theta}$ generated by a randomized compression scheme. \end{definition} \section{Related Work} \label{sec:related-work} Our work builds upon the following prior work in coresets and compression approaches.
\textbf{Coresets} Coreset constructions were originally introduced in the context of computational geometry \citep{agarwal2005geometric} and subsequently generalized for applications to other problems via an importance sampling-based, \emph{sensitivity} framework~\citep{langberg2010universal,braverman2016new}. Coresets have been used successfully to accelerate various machine learning algorithms such as $k$-means clustering~\citep{feldman2011unified,braverman2016new}, graphical model training~\citep{molina2018core}, and logistic regression~\citep{huggins2016coresets} (see the surveys of~\cite{bachem2017practical} and \cite{munteanu2018coresets} for a complete list). In contrast to prior work, we generate coresets for reducing the number of parameters -- rather than data points -- via a novel construction scheme based on an efficiently-computable notion of sensitivity. \textbf{Low-rank Approximations and Weight-sharing} \citet{Denil2013} were among the first to empirically demonstrate the existence of significant parameter redundancy in deep neural networks. A predominant class of compression approaches consists of using low-rank matrix decompositions, such as Singular Value Decomposition (SVD)~\citep{Denton14}, to approximate the weight matrices with their low-rank counterparts. Similar works entail the use of low-rank tensor decomposition approaches applicable both during and after training~\citep{jaderberg2014speeding, kim2015compression, tai2015convolutional, ioannou2015training, alvarez2017compression, yu2017compressing}. Another class of approaches uses feature hashing and weight sharing~\citep{Weinberger09, shi2009hash, Chen15Hash, Chen15Fresh, ullrich2017soft}. Building upon the idea of weight-sharing, quantization~\citep{Gong2014, Wu2016, Zhou2017} or regular structure of weight matrices was used to reduce the effective number of parameters~\citep{Zhao17, sindhwani2015structured, cheng2015exploration, choromanska2016binary, wen2016learning}. Despite their practical effectiveness in compressing neural networks, these works generally lack performance guarantees on the quality of their approximations and/or the size of the resulting compressed network. \textbf{Weight Pruning} Similar to our proposed method, weight pruning~\citep{lecun1990optimal} hinges on the idea that only a few dominant weights within a layer are required to approximately preserve the output. Approaches of this flavor have been investigated by~\cite{lebedev2016fast,dong2017learning}, e.g., by embedding sparsity as a constraint~\citep{iandola2016squeezenet, aghasi2017net, lin2017runtime}. Another related approach is that of~\cite{Han15}, which considers a combination of weight pruning and weight sharing methods. Nevertheless, prior work in weight pruning lacks rigorous theoretical analysis of the effect that the discarded weights can have on the compressed network. To the best of our knowledge, our work is the first to introduce a practical, sampling-based weight pruning algorithm with provable guarantees. \textbf{Generalization} The generalization properties of neural networks have been extensively investigated in various contexts~\citep{dziugaite2017computing, neyshabur2017pac, bartlett2017spectrally}. However, as was pointed out by~\cite{neyshabur2017exploring}, current approaches to obtaining non-vacuous generalization bounds do not fully or accurately capture the empirical success of state-of-the-art neural network architectures.
Recently, \cite{arora2018stronger} and \cite{zhou2018compressibility} highlighted the close connection between compressibility and generalization of neural networks. \cite{arora2018stronger} presented a compression method based on the Johnson-Lindenstrauss (JL) Lemma~\citep{johnson1984extensions} and proved generalization bounds based on succinct reparameterizations of the original neural network. Building upon the work of~\cite{arora2018stronger}, we extend our theoretical compression results to establish novel generalization bounds for fully-connected neural networks. Unlike the method of~\cite{arora2018stronger}, which exhibits guarantees of the compressed network's performance only on the set of training points, our method's guarantees hold (probabilistically) for any random point drawn from the distribution. In addition, we establish that our method approximates the neural network output neuron-wise, i.e., entry-wise up to $1 \pm \epsilon$ multiplicative error, which is stronger than the norm-based guarantee of~\cite{arora2018stronger}. In contrast to prior work, this paper addresses the problem of compressing a fully-connected neural network while \emph{provably} preserving the network's output. Unlike previous theoretically-grounded compression approaches -- which provide guarantees in terms of the normed difference -- our method provides the stronger entry-wise approximation guarantee, even for points outside of the available data set. As our empirical results show, ensuring that the output of the compressed network entry-wise approximates that of the original network is critical to retaining high classification accuracy. Overall, our compression approach remedies the shortcomings of prior approaches in that it (i) exhibits favorable theoretical properties, (ii) is computationally efficient, e.g., does not require retraining of the neural network, (iii) is easy to implement, and (iv) can be used in conjunction with other compression approaches -- such as quantization or Huffman coding -- to obtain further improved compression rates. \section{Results} \label{sec:results} In this section, we evaluate the practical effectiveness of our compression algorithm on popular benchmark data sets (\textit{MNIST}~\citep{lecun1998gradient}, \textit{FashionMNIST}~\citep{xiao2017}, and \textit{CIFAR-10}~\citep{krizhevsky2009learning}) and varying configurations of trained fully-connected neural networks: 2 to 5 hidden layers, 100 to 1000 hidden units per layer, and either fixed or decreasing (denoted by \emph{pyramid} in the figures) hidden layer sizes. We further compare the effectiveness of our sampling scheme in reducing the number of non-zero parameters of a network, i.e., in sparsifying the weight matrices, to that of uniform sampling, Singular Value Decomposition (SVD), and current state-of-the-art sampling schemes for matrix sparsification~\citep{drineas2011note,achlioptas2013matrix,kundu2014note}, which are based on matrix norms -- $\ell_1$ and $\ell_2$ (Frobenius). The details of the experimental setup and the results of additional evaluations may be found in Appendix~\ref{app:results}. \paragraph{Experiment Setup} We evaluate three variations of our compression algorithm: (i) sole edge sampling (CoreNet), (ii) edge sampling with neuron pruning (CoreNet+), and (iii) edge sampling with neuron pruning and amplification (CoreNet++).
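For reference, a minimal Python sketch of the two metrics reported below, i.e., the average $\ell_1$ error in output and the drop in classification accuracy (the helper name and array shapes are illustrative):
\begin{verbatim}
import numpy as np

def evaluate(out, out_hat, y):
    # out, out_hat: (n, k) outputs of the original and compressed networks
    # y: (n,) true labels of the test points
    err = np.abs(out_hat - out).sum(axis=1).mean()  # average l1 error in output
    acc = (out.argmax(axis=1) == y).mean()          # accuracy of the original net
    acc_hat = (out_hat.argmax(axis=1) == y).mean()  # accuracy of the compressed net
    return err, acc - acc_hat                       # error and drop in accuracy
\end{verbatim}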
For comparison, we evaluated the average relative error in output ($\ell_1$-norm) and the average drop in classification accuracy relative to the accuracy of the uncompressed network. Both metrics were evaluated on a previously unseen test set. \paragraph{Results} Results for varying architectures and datasets are depicted in Figures~\ref{fig:classification} and~\ref{fig:error} for the average drop in classification accuracy and the relative error ($\ell_1$-norm), respectively. As apparent from Figure~\ref{fig:classification}, we are able to compress networks to about 15\% of their original size without significant loss of accuracy for networks trained on \textit{MNIST} and \textit{FashionMNIST}, and to about 50\% of their original size for \textit{CIFAR}. \begin{figure}[htb!] \centering \includegraphics[width=0.325\textwidth]{figures/acc/MNIST_l3_h1000_pyramid} \includegraphics[width=0.325\textwidth]{figures/acc/CIFAR_l3_h1000_pyramid} \includegraphics[width=0.325\textwidth]{figures/acc/FashionMNIST_l3_h1000_pyramid}% \caption{Evaluation of drop in classification accuracy after compression against the \textit{MNIST}, \textit{CIFAR}, and \textit{FashionMNIST} datasets with varying number of hidden layers ($L$) and number of neurons per hidden layer ($\eta^*$). Shaded region corresponds to values within one standard deviation of the mean.} \label{fig:classification} \end{figure}% \begin{figure}[htb!] \centering \includegraphics[width=0.325\textwidth]{figures/error/MNIST_l3_h1000_pyramid} \includegraphics[width=0.325\textwidth]{figures/error/CIFAR_l3_h1000_pyramid} \includegraphics[width=0.325\textwidth]{figures/error/FashionMNIST_l3_h1000_pyramid}% \caption{Evaluation of relative error after compression against the \textit{MNIST}, \textit{CIFAR}, and \textit{FashionMNIST} datasets with varying number of hidden layers ($L$) and number of neurons per hidden layer ($\eta^*$).} \label{fig:error} \end{figure}% \paragraph{Discussion} The simulation results presented in this section validate our theoretical results established in Sec.~\ref{sec:analysis}. In particular, our empirical results indicate that we are able to outperform networks compressed via competing matrix sparsification methods across all considered experiments and trials. The results presented in this section further suggest that empirical sensitivity can effectively capture the relative importance of neural network parameters, leading to a more informed importance sampling scheme. Moreover, the relative performance of our algorithm tends to increase as we consider deeper architectures. These findings suggest that our algorithm may also be effective in compressing modern convolutional architectures, which tend to be very deep. \FloatBarrier
{ "timestamp": "2019-05-21T02:05:30", "yymm": "1804", "arxiv_id": "1804.05345", "language": "en", "url": "https://arxiv.org/abs/1804.05345" }
\section{Introduction} Evolutionary algorithms, which employ evaluation of multiple candidate solutions simultaneously and independently, are often seen as a natural choice for solving search and optimization problems in parallel and distributed environments. However, most evolutionary algorithms have two interleaving stages: the evaluation phase, where the fitness of the current population is evaluated, and the phase for selection and reproduction, where decisions are taken based on all the evaluated fitness values. This design has two problems. First, if the evaluation phase takes considerable time, and the time needed to evaluate a single individual can vary significantly, then a large portion of the available computational resources remains idle while the algorithm waits for the last individuals to be evaluated. Second, all the resources dedicated to fitness evaluation are idle during the second phase, which can also be noticeable if this phase is computationally expensive. The second issue is pronounced in algorithms with non-trivial state update procedures, especially if these procedures scale asymptotically worse than linearly with the population size. Most contemporary evolutionary multiobjective algorithms belong to this class, since they contain superlinear procedures related to the maintenance of Pareto-optimal sets and layers~\cite{nsga-ii,nsga-iii,spea2,pesa-ii}, the evaluation of indicators~\cite{ibea,hype-algorithm}, or the classification of points towards reference vectors~\cite{moea-d, nsga-iii}. Asynchronous fitness evaluation, as performed by steady-state evolutionary algorithms, is often seen as a practical solution to these issues. Apart from this, steady-state algorithms often have a better convergence speed than generational ones, since each individual is sampled from a strictly better distribution than the previous one. Studies on the steady-state variant of the NSGA-II algorithm suggest noticeable improvements over the classic generational variant on a number of standard benchmark multiobjective problems~\cite{nsga-ii-steady-state}. The asynchronous implementation of the steady-state NSGA-II has also demonstrated better performance, in terms of both running time and diversity, on certain real-world combinatorial optimization problems~\cite{sync-async-moea}. Several papers also suggest that asynchronous steady-state algorithms have an advantage over generational ones on problems with heterogeneous evaluation times, either random or increasing towards the Pareto front~\cite{nebro-durillo-master-slave-nsga-ii, async-moea-heterocosts, async-master-slave-moea-heterocosts}. In several cases, within a fixed number of evaluations, generational algorithms performed slightly better in terms of the hypervolume indicator~\cite{hypervolume}, but they took considerably more time to do so than the asynchronous algorithms~\cite{nebro-durillo-master-slave-nsga-ii}. With bigger population sizes, however, steady-state multiobjective evolutionary algorithms tend to spend more time in the update phase, because their update procedures scale at least a linear factor worse than those of generational algorithms. This problem does not manifest itself when small population sizes are used and fitness evaluation is expensive: for instance, in~\cite{sync-async-moea} the population size ranged from 24 to 40, so the non-dominated sorting procedure from NSGA-II runs almost instantly.
However, when the population size grows, steady-state algorithms often scale worse: for instance, in the experiments from~\cite{nsga-ii-steady-state}, the steady-state NSGA-II with a population size of 100 ran almost 10 times slower than the classic one. The only part of NSGA-II which has a relatively high computation complexity is the non-dominated sorting. This procedure is also used in the descendants of this algorithm, such as NSGA-III~\cite{nsga-iii}, and similar procedures maintain the archive of the best solutions in algorithms such as SPEA2~\cite{spea2}. While a run of non-dominated sorting is done once for the whole population on every iteration of a generational algorithm, in a steady-state algorithm one has to run non-dominated sorting every time a new individual is evaluated, which is $\Theta(n)$ times slower with the population size equal to $n$. This forced several research groups to investigate ways to adapt non-dominated sorting algorithms to support incremental operations. Li~et~al.~were the first with their ENLU approach~\cite{deb-enlu-14}, see also the journal version~\cite{deb-enlu}. ENLU, or Efficient Non-domination Level Update, handles the point addition by finding the level of the new point, comparing all points within that level to the new one, and pushing those that are dominated to the next level. In the next level, the points being moved are compared to all points of that level, and the new set of moving points is formed. The worst case of one such operation is still $\Theta(n^2 k)$ for $n$ points and dimension $k$; however, the algorithm typically runs much faster in practice. A slight improvement to one of the cases where ENLU deteriorates was subsequently proposed in~\cite{mishra-non-dominated-level-update}. Another line of research was initiated by~\cite{incremental-nds-cec15}, where a faster update procedure was proposed for the case $k=2$. Its complexity is $O(n)$ in the worst case, and it quickly reaches $O(\log n)$ once the optimization manages to condense most of the points into an at most constant number of levels. This procedure is based on maintaining the levels as binary search trees that can be cut or merged in $O(\log n)$. The support for $k > 2$ arrived much later~\cite{yakupovB-gecco17-inds}, where the algorithm is based on calling the offline non-dominated sorting with the complexity $O(n \cdot (\log n)^{k-1})$ on two subsequent levels to push the moving points forward. The fact that the ranks of the sorted points are known a priori made it possible to prove an improved $O(n \cdot (\log n)^{k-2})$ worst-case bound. The reported running times were also competitive compared to ENLU and often better. The mentioned algorithms are not yet ready to support asynchronous multi-objective optimization without introduction of a global lock. However, since each update accesses levels in a sequential order, it is possible and desirable to enrich the implementation so that many threads can simultaneously introduce changes in unrelated parts of the data structure. This paper investigates several ways to do it. We chose the algorithm from~\cite{yakupovB-gecco17-inds} as the basic algorithm and developed several modifications: apart from the obvious modification that introduces a global lock on the entire data structure, we considered an implementation based on the compare-and-set concurrency primitives, as well as an implementation which uses finer-grained level-based locks.
These implementations are evaluated on synthetic datasets generated by the asynchronous NSGA-II. \section{Preliminaries} In this section, we briefly introduce the notation we use and the core concepts necessary to understand this paper. \subsection{Notation} Without loss of generality, we consider multiobjective minimization problems. Since in large parts of this paper we do not consider particular optimization problems or fitness functions, we typically do not differentiate between genotypes and phenotypes, so we treat individuals as points in the $k$-dimensional objective space. A point $p$ is said to \emph{strictly dominate} a point $q$, denoted as $p \prec q$, if in every coordinate $p$ is not greater than $q$, and there exists a coordinate where it is strictly smaller: \begin{equation*} p \prec q \leftrightarrow \begin{cases} \forall i, 1 \le i \le k, p_i \le q_i;\\ \exists i, 1 \le i \le k, p_i < q_i. \end{cases} \end{equation*} There also exists a \emph{weak domination} relation, denoted as $p \preceq q$, which removes the second condition. We use the term \emph{domination} for strict domination if not said otherwise. \emph{Non-dominated sorting} is a procedure that takes a set of points $P$ and assigns each point a \emph{rank}. The points from $P$ that are not dominated by any other points from $P$ receive rank 0. All points that are dominated only by points of rank 0 receive rank 1. Similarly, all points that are dominated only by points of rank $\le i$ receive rank $i + 1$. A set of points with the same rank is called a \emph{non-domination level}, or simply a \emph{level}. The first picture in Figure~\ref{inds-demo} shows an example of four non-domination levels of white points in two dimensions. \emph{Incremental non-dominated sorting} is a procedure that updates ranks of a set of points when a new point is inserted or deleted. There are several algorithms to perform incremental non-dominated sorting~\cite{deb-enlu,incremental-nds-cec15,yakupovB-gecco17-inds}, of which the one from~\cite{yakupovB-gecco17-inds} currently has the best performance among those supporting arbitrary dimension $k$. \emph{Crowding distance} is the quantity used for diversity management within a non-domination level in NSGA-II~\cite{nsga-ii}. For a point $p$, the crowding distance is equal to: \begin{equation*} CD(p) = \sum_{i=1}^{k} \frac{p^{\text{right}}_i - p^{\text{left}}_i}{P^{\max}_i - P^{\min}_i}, \end{equation*} where $P^{\max}_i$ is the maximum among the $i$-th coordinates in the population (similarly $P^{\min}_i$ is the minimum), and $p^{\text{right}}$ is the point from the population with the $i$-th coordinate just above $p_i$ (similarly $p^{\text{left}}$ is the point just below; in other words, when the population is sorted by the $i$-th coordinate, $p^{\text{right}}$ and $p^{\text{left}}$ are the neighbors of $p$). If at least one of the neighboring points is absent, then $CD(p) \gets \infty$. \subsection{Concurrency Primitives} \begin{figure}[!t] \centering \scalebox{0.75}{ \begin{tikzpicture}[scale=0.28] \TikZPictureLevel \end{tikzpicture} } \caption{The working principles of incremental non-dominated sorting. On each phase, a set of moving points is considered, which initially consists of a single point that is inserted. The points that are dominated by the nadir of the inserted points are selected, and the offline non-dominated sorting is performed on the union.
The points that get rank 0 remain in the current front, while others become the next moving points.}\label{inds-demo} \end{figure} There exists a number of different concurrency primitives to ensure certain ordering on operations in a multithreaded environment. Perhaps the simplest one is the \emph{lock}. It is mostly used to surround a so-called \emph{critical section}: a region of the code that is intended to be executed by a single thread only. Locks basically support two operations: \emph{acquire} and \emph{release}. If the lock is not acquired by any thread, the first thread that calls \emph{acquire} obtains it and can proceed. Any subsequent thread that calls \emph{acquire} will be suspended until the first thread releases the lock (by calling \emph{release}). When the lock is released, one of the threads waiting for this lock will resume and acquire it. A simple Java code example below illustrates the usage of locks. \begin{lstlisting}[language=Java]
Lock lock = new Lock();

void procedureUsingLock() {
    callSomethingThreadSafe();  // safe to run concurrently
    lock.acquire();             // enter the critical section
    callBySingleThreadOnly();
    lock.release();             // leave the critical section
}
\end{lstlisting} Some programming languages, including Java, introduce a more complex concept called \emph{monitors}. However, when the special methods \texttt{wait()}, \texttt{notify()} and \texttt{notifyAll()} are not used, they are similar to locks. One can \texttt{synchronize} on an object, which is similar to acquiring a lock associated with that object and subsequently releasing it. A method of an object can be marked as \texttt{synchronized}, which is essentially equivalent to synchronizing on this object for the course of the entire method. There is also a number of finer primitives, which are not associated with critical sections of code, but instead guard the order in which a certain dedicated memory area is accessed or modified. One of them is called \emph{compare-and-set}. In simple words, one can access a variable, test it for equality to a reference value, and only if these values are equal, set the variable to another specified value, as if all of this were a single uninterrupted instruction, that is, \emph{atomically}. In Java notation, it is roughly equivalent to: \begin{lstlisting}[language=Java]
int value;
Object lock = new Object();

boolean compareAndSet(int ref, int newVal) {
    synchronized (lock) {
        if (value == ref) {   // observed the expected value...
            value = newVal;   // ...so install the new one
            return true;
        } else {
            return false;     // someone else changed it first
        }
    }
}
\end{lstlisting} but is typically much faster. Many modern processors do indeed provide a similar instruction, such as the compare-and-exchange (\texttt{CMPXCHG}) instruction in the x86 family. The compare-and-set functions can be used to implement \emph{non-blocking} algorithms, in particular \emph{lock-free} and \emph{wait-free} algorithms, which, unlike the ones using locks or monitors, do not force threads to wait for one another. Such algorithms can theoretically scale better than the lock-based ones when the number of processors is growing. However, a non-blocking algorithm is not guaranteed to be better, since it can perform much more unnecessary work if not designed properly. For instance, an efficient wait-free queue was proposed as recently as 2011~\cite{wait-free-queues}. \subsection{Incremental Non-dominated Sorting} Here we briefly describe the core principles of the incremental non-dominated sorting algorithm from~\cite{yakupovB-gecco17-inds}. They are also illustrated in Figure~\ref{inds-demo}.
The algorithm maintains the levels in separate lists, ordered lexicographically from the first objective to the last one. On insertion of a point $p$, first the maximum level number $\ell$ for which some point of $L_{\ell}$ dominates $p$ is found. Then, a set of moving points $M$ is formed, initially $M = \{p\}$, which contains a subset of the points that increase their rank. An algorithm for offline non-dominated sorting is then run on $L_{\ell+1} \cup M$. Since $M$ and $L_{\ell+1}$ are both non-dominating sets, and no point from $L_{\ell+1}$ can dominate a point from $M$, the rank of each point will be either 0 or 1. The points of rank 0 form the new level $L_{\ell+1}$, rank 1 forms the new $M$, and then the process continues with $\ell \gets \ell + 1$. The existence of only two ranks, 0 or 1, may improve the performance of non-dominated sorting: for instance, the algorithm from~\cite{jensen,buzdalov-nds-2014}, which normally runs in $O(n \cdot (\log n)^{k-1})$, speeds up to $O(n \cdot (\log n)^{k-2})$, because the $O(n \log n)$ procedures that form the base of its divide-and-conquer degenerate to $O(n)$ in the presence of only two ranks. Together with the fact that points from $L_{\ell+1}$ can never dominate points from $M$, this also enables directly calling the internal procedure of this algorithm, which assigns ranks to inferior points given that superior points are fully evaluated (this procedure is often called \textsc{HelperB} following the notation of the paper which introduced the methodology~\cite{jensen}). One more insight that further improves the performance is that we can first exclude those points from $L_{\ell+1}$ that are not dominated by the coordinatewise minimum, or the \emph{nadir}, of points from $M$. \section{Introducing Concurrency} In this section, we show two major ways of introducing concurrency into the incremental non-dominated sorting. Note that there also exists a simple and inefficient way, namely, to put all procedures that can update the levels under a single lock. In Java, one would modify a class which represents the collection of levels by putting the \texttt{synchronized} modifiers on all methods which query or modify the levels. This is, however, still a valid baseline method for subsequent comparisons, along with the single-threaded evolutionary algorithm. \subsection{The Compare-And-Set Approach}\label{cas1} In the approach based on compare-and-set primitives, we optimistically let the threads do their work on updating the levels in their local memory areas and publish the results of their computations in case no other thread has updated this level in the meantime. Each level is stored in its own dedicated memory area that is updated atomically (for this purpose, in Java we use \texttt{AtomicReference} of an object that contains the points of the level along with the necessary metadata), so we ensure that the threads can work with point sets that are internally consistent (for instance, each level consists of points that do not dominate each other). When using this logic, however, we can no longer rely on the fact that the set of moving points $M$ and the level $L_{\ell+1}$ we are inserting these points into are related in such a way that no point $p \in L_{\ell+1}$ can dominate a point $m \in M$. Indeed, since the moment the current thread formed the set $M$ and left the previous level $L_{\ell}$ in a consistent state, another thread might have updated the front $L_{\ell+1}$.
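The resulting publish-or-retry pattern, detailed in the rest of this subsection, can be sketched as follows; the type \texttt{Level}, the record \texttt{SortResult} and the helper \texttt{sortUnion} are hypothetical names of ours, not the actual implementation: \begin{lstlisting}[language=Java]
// Hypothetical sketch of the optimistic per-level update (names are ours).
AtomicReference<Level> ref = levels.get(ell + 1);
while (true) {
    Level observed = ref.get();                  // internally consistent snapshot
    SortResult r = sortUnion(observed, moving);  // offline sorting of L_{l+1} and M
    if (ref.compareAndSet(observed, r.newLevel())) {
        moving = r.newMoving();                  // publication succeeded:
        ell++;                                   // move on to the next level
        break;
    }
    // otherwise another thread replaced L_{l+1}; recompute from a fresh snapshot
}
\end{lstlisting} The \texttt{compareAndSet} succeeds only if the level object is still the very one the thread read before sorting, which is exactly the consistency check discussed below.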
Since every such update makes the level closer to the Pareto front by any sensible measure (such as the hypervolume~\cite{hypervolume}), some points can appear in $L_{\ell+1}$ that dominate some points in $M$. Given this fact, we have to resort to the full-blown offline non-dominated sorting to determine the new contents of $L_{\ell+1}$. We can use, however, the fact that the set of points to be sorted is formed by a union of two sets, $M$ and $L_{\ell+1}$, each of which is non-dominating. It follows from this fact that the ranks will be either 0 or 1 again, and, by induction, the next $M$ will also be non-dominating. As a consequence, the runtime of non-dominated sorting will be $O(n \cdot (\log n)^{k-2})$ for $k > 2$. Once a thread has computed the candidate new values of $L_{\ell+1}$ and $M$, it performs the compare-and-set on the atomic variable holding the actual value of $L_{\ell+1}$. If $L_{\ell+1}$ at this time is exactly the same as before the sorting, then the update succeeds and the thread moves on with a new $M$ to another level ($\ell \gets \ell + 1$). Otherwise, some other thread has changed $L_{\ell+1}$ before the current one, so it has to perform the process again until it succeeds. In this implementation, we use one lock to guard the relatively infrequent situations when a new level is added or the last level is removed. We also have to quit using the heuristic which stops propagation of levels and creates a new one once the set $M$ dominates the set $L_{\ell+1}$ entirely. \subsection{A Time-Stamping Modification}\label{cas2} To use the benefits offered by the faster merging of levels in~\cite{yakupovB-gecco17-inds}, we introduce time-stamping of levels. In this modification, each level has an associated integer number, which is increased at the beginning of each point insertion, and also on creation of a new version of a level. In the latter case this increased value is associated with this new version. The timestamps originate from a single atomic integer variable global to the particular set of levels, which can be atomically incremented when in use by multiple threads. While performing operations associated with insertion of a certain point, we keep the time-stamp $\tau$ corresponding to the moment when this insertion is started. Whenever we perform the merging of the set of moving points $M$ and the currently modified level $L_{\ell+1}$, and the time-stamp $T(L_{\ell+1})$ is less than $\tau$, it means that this level has not been modified since the current insertion started. In this case, the invariant that no point $p \in L_{\ell+1}$ can dominate any point $m \in M$ holds, since $M$ consists of the points that are \emph{at least as good}, in terms of domination, as the points from $L_{\ell}$ at the time $\tau$. Note that the above holds even if for this particular insertion there were previously several levels for which the time-stamp was greater than $\tau$. This can indeed happen since several insertions running in parallel could terminate earlier than the current one, or the current thread could be given a time slot large enough to overtake other threads. \subsection{The Approach with Finer-Grained Locks}\label{lock} We have also implemented a version which has a lock associated with each level. When performing an update of the level $L_{\ell+1}$ by a set of moving points $M$, the thread acquires a lock $K_{\ell+1}$ associated with the updated level. By this it ensures that no other thread will modify the level $L_{\ell+1}$ before it is done with the sorting.
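This protocol is the classic hand-over-hand, or lock-coupling, traversal. A minimal sketch of one insertion under this scheme is given below (again with hypothetical names of ours; \texttt{locks} holds one lock per level, and \texttt{Lock}, \texttt{SortResult} and \texttt{sortUnion} are as in the earlier sketches); the order in which the two locks are exchanged is the key detail, explained next. \begin{lstlisting}[language=Java]
// Hypothetical sketch of lock coupling over the levels (names are ours).
Lock current = locks.get(ell + 1);
current.acquire();                    // L_{l+1} is now ours to modify
while (true) {
    SortResult r = sortUnion(levels.get(ell + 1), moving);
    levels.set(ell + 1, r.newLevel());
    moving = r.newMoving();
    if (moving.isEmpty()) {
        current.release();            // nothing left to propagate
        break;
    }
    Lock next = locks.get(ell + 2);
    next.acquire();                   // take the next level BEFORE releasing
    current.release();                // the current one, so threads cannot overtake
    current = next;
    ell++;
}
\end{lstlisting}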
Just before the lock $K_{\ell+1}$ is released, the thread acquires a lock $K_{\ell+2}$ associated with the next level $L_{\ell+2}$ if the new set of moving points $M$ is not empty. By doing this, the thread ensures that no other thread can overtake it. In turn this also ensures the condition that points from $L_{\ell+2}$ cannot dominate points from the new version of $M$. When the locks are used in this way, threads which update the levels always follow each other in an unchanged order in the direction of increasing level indices. This property greatly simplifies reasoning about the algorithm, as well as formal proofs. However, this also results in many threads competing for the lock of the last level, since a thread typically not only adds a point, but also removes the worst point, which is located in the last level. To partially overcome this, we do not delete points unless the number of points in all levels exceeds $1.2 \cdot n$ for the desired population size $n$. Once this threshold is reached, the extra $0.2 \cdot n$ worst points are removed. Since this process can require removal of a large number of levels, a separate lock to handle this process was also introduced. \subsection{Recomputation of the Crowding Distance} When the algorithms for incremental non-dominated sorting are used within the NSGA-II algorithm, they need to support querying of a point, along with its rank and crowding distance, by its ordinal (that is, by its index in some arbitrary but predefined order). This is mostly trivial except for the crowding distance. Since the crowding distance requires the knowledge of the coordinate-wise span of the level in which the point resides, as well as the neighbors of this point in every coordinate, the information needed to compute the crowding distance is not local. This presents an issue in the realm of incremental non-dominated sorting, since it typically performs small changes to the levels. In particular, the size of the moving set $M$ is often much smaller than the size of the levels, and the subset of the level $L_{\ell+1}$ which is dominated by the coordinate-wise minimum of $M$ is often also small. The computationally complex non-dominated sorting is performed only on these small parts, while the remaining part of the level $L_{\ell+1}$ is processed using a routine with the complexity $O(nk)$. The complexity of the crowding distance computation, if performed on the entire level, is $O(nk \log n)$, which would dominate the running time of the entire algorithm. We propose a way to reduce this part of the running time to $O(nk + \tilde{n}k \log \tilde{n})$, where $\tilde{n}$ is the size of the small parts from above. One of the ways to do it is to maintain, in each level and for each coordinate, a list of points contained in this level sorted in that coordinate. After an update, for the newly inserted points the lists sorted in each coordinate are constructed, and then these lists are merged with the lists stored in the level in $O(n)$ time each. During these merges, the entries corresponding to the just removed points are also removed from the lists, and the crowding distance is recomputed for every point.
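As an illustration of the recomputation step, a sketch of the final crowding distance pass over these per-coordinate sorted lists follows (a simplified, non-incremental version under our own naming; the \texttt{Point} type with fields \texttt{x} and \texttt{crowding} is hypothetical, and a nonzero coordinate span and at least two points per level are assumed): \begin{lstlisting}[language=Java]
// Hypothetical sketch: recompute crowding distances of one level, given
// the level's points sorted by each coordinate.
static void recomputeCrowding(Point[][] sortedByCoord, int k) {
    int n = sortedByCoord[0].length;
    for (Point p : sortedByCoord[0]) p.crowding = 0;  // every point occurs once here
    for (int i = 0; i < k; i++) {
        Point[] s = sortedByCoord[i];                 // level sorted by coordinate i
        double span = s[n - 1].x[i] - s[0].x[i];      // P^max_i - P^min_i
        s[0].crowding = Double.POSITIVE_INFINITY;     // boundary points get infinity
        s[n - 1].crowding = Double.POSITIVE_INFINITY;
        for (int j = 1; j < n - 1; j++)
            s[j].crowding += (s[j + 1].x[i] - s[j - 1].x[i]) / span;
    }
}
\end{lstlisting}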
\section{Experiments} For the experimental evaluation, we have considered the algorithms mentioned above: \begin{itemize} \item INDS: the incremental non-dominated sorting algorithm from~\cite{yakupovB-gecco17-inds}; \item Sync: the same algorithm with all public methods annotated with \texttt{synchronized}; \item CAS1: the modification of INDS according to Section~\ref{cas1}; \item CAS2: the modification of CAS1 according to Section~\ref{cas2}; \item Lock: the lock-based modification of INDS according to Section~\ref{lock}. \end{itemize} All these algorithms were implemented in Java (OpenJDK with the runtime version 1.8.0\_141), and their performance was evaluated on a 64-core machine with four AMD Opteron\texttrademark\ 6380 processors clocked at 2.5 GHz running a 64-bit GNU/Linux OS (kernel version 3.16.0). We evaluated the algorithms on the well-known benchmark problems DTLZ1--DTLZ4 and DTLZ7~\cite{dtlz}, as well as ZDT1--ZDT4 and ZDT6~\cite{zdt}. For the ZDT problems, we kept $k=2$. For all DTLZ problems, we performed experiments with $k=3$, and additionally we ran the DTLZ1 and DTLZ2 problems with $k \in \{4,6,8,10\}$. The datasets were synthesized for each problem as follows. First, a random population of size 5000 was created. Then, a steady-state NSGA-II was run for the next 1000 iterations, creating 1000 points to be inserted. The initial population, as well as the inserted points, were recorded for use in benchmarking. For each DTLZ problem, three datasets were synthesized in this way, while for the ZDT problems the number of datasets was two. Each of the algorithms was then run on the datasets, and their running time was measured by the Java Microbenchmark Harness framework (JMH, version 1.17.2) with four warm-up and four measurement iterations, each at least one second long, using two independent forks of the Java Virtual Machine. Every run consisted of the initialization of the algorithm on the initial population from the dataset, which was not counted towards the running time, and the insertion of the 1000 points from the dataset, together with the subsequent deletion of the worst point, which was measured. For all algorithms except INDS, which is sequential by its nature, a number of threads was used to insert the points. The number of threads was taken from $\{3, 6, 12, 24\}$. The points to be inserted were evenly and randomly distributed between the threads. Figure~\ref{zdt} shows the results for all ZDT problems, which are two-dimensional. Figure~\ref{dtlz3d} shows the results for all three-dimensional DTLZ problems. Figure~\ref{dtlz1} is dedicated to DTLZ1 with different values of $k$, while Figure~\ref{dtlz2} does the same for DTLZ2. \subsection{ZDT, Two Dimensions} The results on the ZDT problems reveal that one cannot generally benefit from having an asynchronous algorithm when the average insertion time is very small (it is typically around $10^{-1.6} \approx 0.025$ seconds, as Figure~\ref{zdt} suggests). The Sync version shows that thread contention introduces slowdowns that are orders of magnitude worse than the running time of the algorithm itself. These results also show that the algorithms based on the compare-and-set mechanism scale rather well in these conditions. There is a stable and distinct trend for the running time of CAS1 to decrease while the number of threads increases. CAS2, due to its optimizations, is initially rather fast and somewhat competitive with the single-threaded INDS.
A minimum located somewhere between 6 and 12 threads can be observed for CAS2. The Lock algorithm, similar to the Sync one, degrades as the number of threads grows; however, its performance is much better than that of Sync, and in particular it stays competitive with INDS when three threads are used. This behavior is generally expected from lock-based algorithms. \subsection{DTLZ, Three and More Dimensions} Things, however, change in three dimensions, where the cost of a single insertion rises towards approximately $0.07$ seconds. In these settings, the performance of Sync is still much worse than that of INDS, but not on the scale observed on the ZDT problems. The performance of CAS1, however, becomes much worse, even compared to Sync. It retains the trend towards better scaling with the number of threads; however, it is worse than Sync even when both use 24 threads. The key problem with CAS1 seems to be that it often spends much time in sorting, which gets more time-consuming in three dimensions, only to find that some other thread has overwritten the target level. CAS2, on the other hand, remains relatively efficient; however, it still does not exceed INDS in performance. In this setting, it demonstrates a trend towards increasing its running time with the number of threads. It looks like, even with the improvements introduced in CAS2, the amount of work every thread wastes recomputing a level insertion after some other thread has overtaken it grows with the number of threads and is not compensated by the reduction in idle time. The biggest surprise is the Lock algorithm, which demonstrates roughly the same performance as in two dimensions and thus outperforms INDS. Figure~\ref{dtlz3d} also shows a consistently better behavior with 6 threads. The same trends are demonstrated also in higher dimensions on the DTLZ problems, which suggests that the lock-based algorithm is the algorithm of choice in concurrent environments, at least for this number of points, threads and dimensions. Its behavior regarding the number of threads seems to be quite robust, although there is indeed a slight trend towards increasing the running time when the number of threads grows. A local minimum around six threads is observed for $k \le 4$, while this behavior tends to disappear for larger values of $k$. A possible explanation for such good behavior of Lock is that, after a short initial phase, the threads start to follow each other at a short distance, in the same order, for long periods of time. It remains an open question whether this is true, and whether the picture is going to change with heterogeneous times of fitness evaluation. \section{Conclusion} We have made the first step towards efficient data structures for large-scale asynchronous steady-state multiobjective algorithms based on non-dominated sorting. Our experiments suggest that an asynchronous implementation of incremental non-dominated sorting with fine-grained level-based locking is a viable choice already at population sizes of several thousand points with dimensions starting from $k = 3$. We should, however, notice that the benefits from using more threads for insertion of points are not very clear, although the algorithm seems to tolerate our tested maximum of 24 threads pretty well. It also looks like more advanced approaches, such as the algorithms based on the compare-and-set primitives, are more difficult to make practical, at least with the chosen design of such algorithms.
With this paper, we did \emph{not} prove that work-efficient lock-free algorithms for incremental non-dominated sorting do not exist. We only showed that a particular design, namely, comparing-and-setting entire levels, is probably not very efficient. Doing this on the level of single individuals, however, does not sound promising either, since it is very likely that this will multiply the computation costs of non-dominated sorting itself by the overhead of compare-and-set primitives. An approach based on locking of individual levels seems to be somewhat natural, as it ensures that threads walk the levels one after another. However, it is still an open question whether access to a single level by multiple threads, operating on different non-intersecting parts of that level, can be efficiently implemented for reasonable problem sizes. It can possibly be done by checking in $O(nk)$ whether the regions dominated by two different sets of moving points, manipulated by different threads, intersect in the current level: if they do not, then this level, and all subsequent levels, can be processed by these two threads with only minor resource sharing, since the most expensive parts will operate on non-intersecting sets of points. Yet another possibility, which may find its use in heterogeneous computing systems (where the internals of an evolutionary algorithm are run on different computation resources than fitness evaluation), is a special flavor of asynchronous algorithm which, on the arrival of a fitness thread, hands it the next task immediately, and only then inserts the evaluated point into the data structure. This should reduce the idle rate of fitness-related computation resources, which are typically more expensive. However, the impact of this design on the convergence of an algorithm, compared to the one implemented in this paper, may be non-trivial. As our future work, we plan to investigate the performance of the asynchronous algorithms in more realistic settings, such as working within a real evolutionary multiobjective algorithm, as well as with heterogeneous times of fitness evaluation. An extension of this approach to other types of diversity measures, such as the reference-point-based measure of NSGA-III, is also worth investigating. \section{Acknowledgments} This work was supported by the Russian Science Foundation under agreement No.~17-71-20178. \bibliographystyle{abbrv}
{ "timestamp": "2018-04-17T02:06:09", "yymm": "1804", "arxiv_id": "1804.05208", "language": "en", "url": "https://arxiv.org/abs/1804.05208" }
\section{Topology of $\VV(X)$ and $L(X)$} Following \cite{GM}, the {\em free topological vector space} $\VV(X)$ over a Tychonoff space $X$ is a pair consisting of a topological vector space $\VV(X)$ and a continuous map $i=i_X: X\to \VV(X)$ such that every continuous map $f$ from $X$ to a topological vector space $E$ gives rise to a unique continuous linear operator ${\bar f}: \VV(X) \to E$ with $f={\bar f} \circ i$. Theorem 2.3 of \cite{GM} shows that for all Tychonoff spaces $X$, $\VV(X)$ exists, is unique up to isomorphism of topological vector spaces, is Hausdorff, and the mapping $i$ is a homeomorphism of the topological space $X$ onto its image in $\VV(X)$. Let $L(X)$ be the free locally convex space over $X$, and denote by $\pmb{\mu}_X$ and $\pmb{\nu}_X$ the topologies of $\VV(X)$ and $L(X)$, respectively. So $\VV(X) =(\VV_X, \pmb{\mu}_X)$ and $L(X)=(\VV_X, \pmb{\nu}_X)$, where $\VV_X$ is a vector space with a basis $X$. A description of the topology $\pmb{\mu}_X$ of $\VV(X)$ for a uniform space $X$ is given in Section 5 of \cite{BL}. In the next theorem we give a similar construction of the topology $\pmb{\mu}_X$ of $\VV(X)$ for a Tychonoff space $X$. First we explain our construction. Assume that $X$ is an arbitrary Tychonoff space and take a balanced and absorbent neighborhood $W$ of zero in $\VV(X)$. Take a sequence $\{ W_n\}_{n\in\NN}$ of balanced and absorbent neighborhoods of zero in $\VV(X)$ such that $W_1 +W_1 \subseteq W$ and $W_{n+1}+W_{n+1} \subseteq W_n$ for $n\in \NN$, where $\NN:=\{ 1,2,\dots\}$. For every $n\in\NN$, choose a function $\phi_n \in \IR^X_{>0}$ such that $W_n$ contains a subset of the form \[ S_n :=\bigg\{ t x: x \in X \mbox{ and } |t| \leq \frac{1}{\phi_n(x)} \bigg\}. \] Then $W$ contains a subset of the form \begin{equation} \label{equ:topology-V-L-1} \begin{aligned} \sum_{n\in\NN} \frac{1}{\phi_n} X & =\sum_{n\in\NN} S_n := \bigcup_{m\in\NN} \big( S_1 +\cdots +S_m\big)\\ & =\bigcup_{m\in\NN} \left\{ \sum_{n=1}^m t_n x_n: x_n \in X \mbox{ and } |t_n| \leq \frac{1}{\phi_n(x_n)} \mbox{ for all } n\leq m \right\}. \end{aligned} \end{equation} If the space $X$ is discrete, Protasov showed in \cite{Prot} that the family $\mathcal{N}_X$ of all subsets of $\VV_X$ of the form $\sum_{n\in\NN} \frac{1}{\phi_n} X$ is a base at zero $\mathbf{0}$ for $\pmb{\mu}_X$, and the family $\mathcal{\widehat{N}}_X :=\{ \conv(V): V\in \mathcal{N}_X\}$ is a base at $\mathbf{0}$ for $\pmb{\nu}_X$ (where $\conv(V)$ is the convex hull of $V$). If $X$ is arbitrary, observe that every $W_n$ defines an entourage $V_n :=\{ (x,y): x-y \in W_n\}$ of the universal uniformity $\UU_X$ of the uniform space $X$. Therefore $W$ contains a subset of the form \begin{equation} \label{equ:topology-V-L-2} \sum_{n\in\NN} V_{n} := \bigcup_{m\in\NN} \left\{ \sum_{n=1}^m t_n (x_n-y_n): |t_n|\leq 1 \mbox{ and } (x_n,y_n)\in V_{n} \mbox{ for all } n\leq m\right\}. \end{equation} Combining (\ref{equ:topology-V-L-1}) and (\ref{equ:topology-V-L-2}) we obtain that every balanced and absorbent neighborhood $W$ of zero in $\VV(X)$ contains a subset of the form $ \sum_{n\in\NN} V_{n} + \sum_{n\in\NN} \frac{1}{\phi_n} X, $ where $\{V_{n}\}_{n\in\NN}\in \UU_X^\NN$ and $\{\phi_n\}_{n\in\NN}\in \IR^X_{>0}$. It turns out that the converse is also true.
\begin{theorem} \label{t:topology-V(X)} The family \[ \mathcal{B}=\left\{ \sum_{n\in\NN} V_{n} + \sum_{n\in\NN} \frac{1}{\phi_n} X : \{V_{n}\}_{n\in\NN}\in \UU_X^\NN ,\; \{\phi_n\}_{n\in\NN}\in \IR^X_{>0}\right\} \] forms a neighbourhood base at zero of $\VV(X)$, and the family \[ \mathcal{B}_L=\{ \conv(W): W\in\mathcal{B}\}, \] where $\conv(W)$ is the convex hull of $W$, is a base at zero of $L(X)$. \end{theorem} \begin{proof} We prove the theorem in two steps. \smallskip {\em Step 1. We claim that the family $\mathcal{B}$ is a base of some vector topology $\TTT$ on $\VV_X$.} Indeed, by construction each set $W\in\mathcal{B}$ is balanced and absorbent. So, by Theorem 4.5.1 of \cite{NaB}, we have to check only that for every $W=\sum_{n\in\NN} V_{n} + \sum_{n\in\NN} \frac{1}{\phi_n} X\in\mathcal{B}$ there is a $W'=\sum_{n\in\NN} V'_{n} + \sum_{n\in\NN} \frac{1}{\phi'_n} X\in\mathcal{B}$ such that $W' +W' \subseteq W$. For every $n\in\NN$, choose $V'_n\in \UU_X$ such that $V'_{n} \subseteq V_{2n-1} \cap V_{2n}$ and $\phi'_n\in\IR^X_{>0}$ such that $\phi'_n \geq\max\{\phi_{2n-1}, \phi_{2n}\}$. Then for every $m\in\NN$ we obtain the following: if $|t_n|,|s_n|\leq 1 $ and $(x_n,y_n), (u_n,v_n)\in V'_{n}$, then \[ \begin{split} \sum_{n=1}^m t_n (x_n-y_n) & + \sum_{n=1}^m s_n (u_n-v_n) = t_1 (x_1-y_1) + s_1 (u_1-v_1)+\cdots + t_m (x_m-y_m) +s_m (u_m-v_m)\\ & \in \left\{ \sum_{n=1}^{2m} t_n (x_n-y_n): |t_n|\leq 1 \mbox{ and } (x_n,y_n)\in V_{n} \mbox{ for all } n\leq 2m\right\}, \end{split} \] and if $x_n,y_n\in X$, $|t_n| \leq \frac{1}{\phi'_n(x_n)}$ and $|s_n| \leq \frac{1}{\phi'_n(y_n)}$, then \[ \begin{split} \sum_{n=1}^m t_n x_{n} + \sum_{n=1}^m s_n y_{n} & = t_1 x_{1} + s_1 y_{1} +\cdots + t_m x_{m} +s_m y_{m}\\ & \in \left\{ \sum_{n=1}^{2m} t_n x_{n}: x_{n} \in X \mbox{ and } |t_n| \leq \frac{1}{\phi_n(x_n)} \mbox{ for all } n\leq 2m\right\}. \end{split} \] These inclusions easily imply $W' + W' \subseteq W$. \smallskip {\em Step 2. We claim that $\TTT=\pmb{\mu}_X$.} Indeed, if $x\in X$ and $W=\sum_{n\in\NN} V_{n} + \sum_{n\in\NN} \frac{1}{\phi_n} X\in \mathcal{B}$, then $x+W$ contains the neighborhood $V_{1}[x]:=\{ y\in X: (x,y)\in V_{1}\}$ of $x$ in $X$. Hence the identity map $\delta: X\to (\VV_X,\TTT), \delta(x)=x,$ is continuous. Therefore $\TTT\leq \pmb{\mu}_X$ by the definition of $\pmb{\mu}_X$. We show that $\TTT\geq \pmb{\mu}_X$. Given any balanced and absorbent neighborhood $U$ of zero in $\pmb{\mu}_X$, choose symmetric neighborhoods $U_0,U_1,\dots$ of zero in $\pmb{\mu}_X$ such that $[-1,1]U_0 +[-1,1]U_0\subseteq U$ and \[ [-1,1]U_k + [-1,1]U_k + [-1,1]U_k \subseteq U_{k-1}, \; k\in\NN. \] Since $\UU_X$ is the universal uniformity and $X$ is a subspace of $\VV(X)$ by Theorem 2.3 of \cite{GM}, for every $n\in\NN$ we can choose $V_n\in\UU_X$ such that $y-x\in U_n$ for every $(x,y)\in V_{n}$. For every $n\in\NN$ and each $x\in X$ choose $\lambda(n,x)>0$ such that \[ [-\lambda(n,x),\lambda(n,x)]x \subseteq U_n, \] and set $\phi_n (x):= [1/\lambda(n,x)]+1$. Then $\phi_n \in\IR_{>0}^X$ for every $n\in\NN$, and for every $m\in \NN$ we obtain the following: if $|t_n|\leq 1 \mbox{ and } (x_n,y_n)\in V_{n} \mbox{ for all } n\leq m$, then \[ \sum_{n=1}^m t_n (x_n-y_n) \in [-1,1]U_1 + \cdots + [-1,1]U_m \subseteq U_0, \] and if $ |t_n| \leq \frac{1}{\phi_n(x_n)}$ for $n=1,\dots,m$, then \[ \sum_{n=1}^m t_n x_{n} \in [-1,1]U_1 +\cdots + [-1,1]U_m \subseteq U_0. \] Therefore $\sum_{n\in\NN} V_{n} + \sum_{n\in\NN} \frac{1}{\phi_n} X\subseteq U$. Thus $\TTT \geq \pmb{\mu}_X$ and hence $\TTT=\pmb{\mu}_X$.
Finally, the definition of the topology $\pmb{\nu}_X$ of $L(X)$ and Proposition 5.1 of \cite{GM} imply that the family $\mathcal{B}_L$ is a base at zero of $\pmb{\nu}_X$. \end{proof} \begin{remark} {\em In Theorem \ref{t:topology-V(X)} we consider arbitrary functions $\phi\in \IR_{>0}^X$. However, these functions can be chosen from the poset $C_\omega(X)$ of all $\omega$-continuous real-valued functions on a uniform space $X$. We refer the reader to Section 5 of \cite{BL} for details.} \end{remark}
{ "timestamp": "2018-04-17T02:05:38", "yymm": "1804", "arxiv_id": "1804.05199", "language": "en", "url": "https://arxiv.org/abs/1804.05199" }
\section{Introduction} \label{intro} Majorana states in condensed matter have been a hot topic for a few years now \cite{Nayak,Qi,Alicea,StanescuREV,Beenakker,Franz,Elliott,Aguado,Lutrev,Lutchyn,Oreg}. Different experiments have been carried out in order to demonstrate the actual existence of such topological states. Majorana modes are characterized by being chargeless and spinless edge states; hence most of the experiments aiming at their detection are based on identifying characteristic signatures in the electrical conductance of devices attached to them \cite{Mourik,HaoZ,Deng,Das,He294}. To obtain Majorana states one needs the presence of superconductivity; therefore the typical scenario requires a contact between a normal lead and a hybrid proximity-coupled semiconductor-superconductor. As topological states, Majorana modes are separated by an energy gap that protects them from other normal states and local sources of noise, a robustness that might allow the use of such states for topological quantum computing. In many ways Majoranas can be understood as non-local split fermions. In this sense there are two kinds of Majorana states: non-propagating Majorana states appearing at the ends of (quasi) 1D nanowires and propagating chiral Majorana states formed along the edges of 2D-like hybrid structures. In this work we will focus on the second kind. We refer, more specifically, to devices similar to those of Refs.\ \cite{He294,Qi06,Qi10,Chung,Wang15,Lian16,Kalad} consisting of a quantum Hall (QH) or quantum anomalous Hall (QAH) insulator proximity coupled with a superconductor (QAH+S). In particular, we will consider a simple model of QAH+S that does not need the presence of external magnetic fields. In this kind of systems, chiral Majorana modes propagate along the edges in a clockwise or anticlockwise manner (depending on device parameters) for finite systems. An open infinite nanowire like the one depicted in the inset of Fig.\ \ref{F1} may hold two pairs of counterpropagating Majorana channels, one pair at each edge of the device. In general, it has been reported that each chiral Majorana channel contributes $0.5 e^2 / h$ to the linear conductance of a device. However, in this work we will show that for the infinite nanowire with only one normal contact the conductance remains $e^2/h$ independently of the number of active Majorana modes (one or two), even with a finite transmission probability to the Majorana channel of $\approx0.5$. The reason for this is that we consider a single normal contact connected to a semi-infinite Majorana device, instead of the two usual contacts in a normal-superconductor-normal arrangement. When only one normal contact (left) is present, only half of the possible Majorana channels are active, the outgoing ones. Ingoing Majorana modes into the junction would necessarily require a second (right) normal contact, and therefore they do not contribute in our arrangement. We use a method based on the evaluation of the (complex) wave numbers allowed on each side of the junction, which yields the detailed spatial distribution patterns of densities and currents. In addition, we study how the spatial distribution of the Majorana modes is affected by magnetic orbital effects, on top of the already present QAH physics. We show how the spatial coupling between Majorana and non-Majorana states at both sides of the junction modifies the transmission and reflection processes, and thus also the conductance. This article is divided into five parts.
Sections \ref{sec:1} and \ref{sec:2} present the model and the resolution method used to determine ingoing and outgoing modes of the junction. Next, in Secs.\ \ref{sec:3} and \ref{sec:4} we present the results without and with orbital effects of the magnetic field, respectively. Finally, a summary and outlook of the work is given in Sec.\ \ref{sec:5}. \section{Model} \label{sec:1} Our main objective is to study the distribution of currents and the conductance present in a N-(QAH+S) junction where chiral Majoranas may be present. We start using a simplified model of QAH+S Hamiltonian similar to the one devised in Refs.\ \cite{Qi06,Qi10}, \begin{equation} h_{\it BdG}({\bf p})=m({\bf p}) \sigma_z - \alpha\,(p_x \sigma_y - p_y \sigma_x)\tau_z + \Delta(x)\,\tau_+ + \Delta(x)^*\, \tau_-\; , \label{E1} \end{equation} where $m({\bf p})=m_0+m_1 {\bf p}^2$, with $m_0$ and $m_1$ known material parameters. As usual, the $\sigma$'s and $\tau$'s represent Pauli matrices for spin and isospin, respectively. We will consider $\alpha$ a known parameter related to the quasi-particle mass, governing the shape of the Dirac cone for energies near its apex. In this work we set $\alpha\equiv 1$ as our unit for practical reasons. We will assume superconductivity achieved by proximity coupling between the QAH semiconductor and a metallic superconductor. The junction between a superconducting and a non-superconducting region will be achieved through the spatial variation of the superconductor coupling constant $\Delta(x)$. The numerical results of this work will be presented in natural units of the problem, i.e., taking $2m_1$, $\hbar$ and $\alpha$ as unit values. That is, our length and energy units are $L_U\equiv L_{so}=2m_1\hbar^2/\alpha$ and $E_U=\alpha^2 /2m_1\hbar^2$. This model provides two phase boundaries at critical values of the $m_0$ parameter, $m_{0}^{(c)}=\pm |\Delta|$. For large positive values of $m_0$ the device will be in a trivial phase, while for large negative ones a phase of Chern number ${\cal C}=2$ will arise, with two chiral Majoranas attached to each edge of the device. For intermediate values of $m_0$, between the two phase boundaries, there is a single-Majorana phase of Chern number one (see Fig.\ \ref{F1}). The phase-transition boundaries may differ slightly from these values due to the transversal confinement, in a similar manner as in non-chiral Majorana nanowires \cite{Osca2015b}. Of course, the effect of the transversal confinement becomes negligible in wide enough wires. The presence of the Majorana modes is signaled by a pair of topological bands at wavenumber $k=0$ for the translationally invariant (infinite) wire. In Fig.\ \ref{F1} this can be seen in a plot of the energy $E(k=0)$ as a function of $m_0$. The presence of zero-energy modes indicates the Majorana phases, in good agreement with the expected critical values. The bulk-edge correspondence principle ensures that the critical value $m_0^{(c)}$ also indicates when chiral Majoranas will appear in a semi-infinite nanowire or in the superconducting region of the N-(QAH+S) junction studied in this work. \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Fig1}} \caption{$E(k=0)$ as a function of the material parameter $m_0$ for a QAH slab of $L_y=5 L_U$ proximity coupled with a superconductor of coupling strength $\Delta=2 E_U$. A sketch of the infinite system used for this band structure calculation is given in the inset.
Notice the phase transitions at $m_0\approx\pm\Delta$, as indicated by the presence of zero modes. } \label{F1} \end{figure} \section{Method} \label{sec:2} We want to calculate the distribution of currents for a junction between a normal QAH material and a material of the same kind proximity coupled with a superconductor (see Fig.\ \ref{F2} for a graphical representation of the device). The numerical method was already used by us to calculate local currents and conductance in N-S junctions for non-chiral Majoranas in Refs.\ \cite{Osca2017,Osca2017b}, with some technical differences as briefly explained below. \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Fig2}} \caption{Graphical description of the nanowire junction considered in this work, an infinite QAH slab with half of the slab proximity coupled with a superconductor. The junction interface separates normal and superconducting regions. On the left side there is a normal QAH region while on the right side there is a hybrid QAH+superconducting region with a non-zero $\Delta$.} \label{F2} \end{figure} The overall idea is that of a matching method for two different sets of asymptotic solutions, for a given energy $E$, one for each side of the junction, each characterized by a wave number $k$, $\Psi_{k}(x,y,\eta_\sigma,\eta_\tau) =\phi_{k}(y,\eta_\sigma,\eta_\tau) e^{i k x}$. These asymptotic solutions for the left and right contacts are assumed to be known for a large-enough set of wave numbers, with $k$ being either real (propagating) or complex (evanescent) \cite{Serra13}. The full solution for the left and right sides of the junction ($c=L,R$) is given by a superposition of the corresponding set of modes, \begin{equation} \label{eq2} \Psi^{(c)} (x,y,\eta_\sigma,\eta_\tau)=\sum_{k} d_{k}^{(c)} \, {e}^{i k x}\, \phi_{k}(y,\eta_\sigma,\eta_\tau)\,. \end{equation} The wavenumbers and the transverse eigenstates can be obtained numerically as solutions of the BdG Hamiltonian for each contact, where $ \sum_{\eta_\sigma \eta_\tau} \int{ dy\, |\phi_{k}|^2}=1$. The coefficients $d_{k}^{(c)}$ that determine the strength of each channel are obtained from the matching algorithm \cite{Osca2017,Osca2017b}. The distribution of currents is calculated from the wave functions given by Eq.\ (\ref{eq2}). We consider three different kinds of densities $\rho_a(x,y)$ and currents $\vec{j}_a(x,y)$, where the subindex $a$ may be $a=qp, c, s$ for quasiparticle, charge, and spin, respectively. Quasi-particle distributions are given by \begin{eqnarray} \label{E2a} \rho_{qp}(x,y) &=& \Psi^*(x,y)\Psi(x,y) \; ,\\ \vec{j}_{qp}(x,y) &=& \Re\left[\, \Psi^*(x,y)\, \hat{\vec{v}}_{qp}\, \Psi(x,y)\, \right]\;, \label{E2} \end{eqnarray} where the velocities are given by $\hat{v}_{qp,x}=\partial \mathcal{H}/\partial p_x$ and $\hat{v}_{qp,y}=\partial \mathcal{H}/\partial p_y$. Quasiparticle density and current fulfill the continuity equation $\partial \rho_{qp}(x,y) / \partial t + \nabla\cdot\vec{j}_{qp}(x,y) = 0$ because the model has no sources or sinks of quasiparticles. With the Hamiltonian of Eq.\ (\ref{E1}) these are \begin{eqnarray} \hat{v}_{qp,x} &=& -i \hbar 2 m_1 \partial_x \sigma_z - \frac{\alpha}{\hbar} \sigma_y \tau_z\;, \label{E3} \\ \hat{v}_{qp,y} &=& -i \hbar 2 m_1 \partial_y \sigma_z + \frac{\alpha}{\hbar} \sigma_x \tau_z\;.
\label{E4} \end{eqnarray} Substitution of Eqs.\ (\ref{E3}) and (\ref{E4}) in Eq.\ (\ref{E2}) leads to the more familiar expressions \begin{equation} \vec{j}_{qp}(x,y)= 2\hbar m_1 \Im\left[\, \Psi^*(x,y)\, \nabla \sigma_z\, \Psi(x,y)\, \right] +\vec{j}_{so}(x,y)\;, \end{equation} where \begin{equation} \vec{j}_{so}(x,y)=-\frac{\alpha}{\hbar}\, \Re\left[\,\Psi^*(x,y)\,(\sigma_y \hat{x}-\sigma_x \hat{y})\tau_z\,\Psi(x,y)\,\right]\;. \end{equation} The charge and spin densities are obtained by adding $-e\tau_z$ and $\sigma_z$ operators, respectively, in Eq.\ (\ref{E2a}), \begin{eqnarray} \rho_{c}(x,y) &=& -e\, \Psi^*(x,y)\tau_z\Psi(x,y) \; ,\\ \rho_{s}(x,y) &=& \Psi^*(x,y)\sigma_z\Psi(x,y) \; . \end{eqnarray} Analogous substitutions in Eq.\ (\ref{E2}) yield the definitions of $\vec{j}_c(x,y)$ and $\vec{j}_s(x,y)$, the charge and spin currents. The conductance of the junction is evaluated on the normal side as \begin{equation} g(E)=\frac{e^2}{h}\left[\, N(E) - P_{ee}(E) + P_{eh}(E) \,\right]\; , \end{equation} where \begin{eqnarray} \label{eq12} P_{ee}(E) &=& \sum_{k,\eta_\sigma} d_{k}^{(L)}(E) \int dy \left|\phi_{k}^{(L)}(y,\eta_\sigma,\Uparrow) \right|^2\; ,\\ \label{eq13} P_{eh}(E) &=& \sum_{k,\eta_\sigma} d_{k}^{(L)}(E) \int dy \left| \phi_{k}^{(L)}(y,\eta_\sigma,\Downarrow) \right|^2\; , \end{eqnarray} are, respectively, the electron-electron ($ee$) and electron-hole ($eh$ or Andreev) reflection probabilities. As is well known, normal $ee$ reflection reduces the conductance while Andreev $eh$ reflection increases it. Notice also that in the $k$-sums of Eqs.\ (\ref{eq12}) and (\ref{eq13}) only propagating output modes have to be included. The coefficients $d_k^{(c)}$ for both evanescent and propagating modes are obtained from the numerical algorithm, with the exception of the input channels, which are set to one for normalization purposes. We consider as input channels the electron propagating solutions in the normal lead with a quasi-particle flow into the junction. As a peculiarity of this problem, we found that for $E=0$ and $k=0$ some instabilities appear in the flow calculation. They are easily resolved, however, by using a nonzero (small) value of $E$. \section{Current distributions} \label{sec:3} In Fig.\ \ref{F3} we display the quasi-particle current distribution (arrows) overprinted on the corresponding quasiparticle densities (color or gray-shaded) for two different scenarios. Figures \ref{F3}a and \ref{F3}c are for the case when the right side of the junction has Chern number one, i.e., with a pair of topological bands crossing zero energy. Therefore, for energies below the gap there is a propagating Majorana mode attached to a system edge. On the other hand, Figs.\ \ref{F3}b and \ref{F3}d correspond to the case of Chern number two, with an additional pair of bands crossing zero energy. In this latter case we have simultaneously two propagating Majorana modes attached to the same edge. The first thing we notice is that only the lower edge shows an attached Majorana channel on the right side of the junction. The reason behind this difference between upper and lower edges is that in an infinite NS junction there are no counterpropagating modes. That is, the Majorana channel in the lower border is an outgoing channel. An ingoing Majorana channel would appear on the upper edge in case we considered a second junction with a normal lead on the right of the superconductor region, with its corresponding incident modes.
As seen in Figs.\ \ref{F3}a and \ref{F3}c, with only one pair of topological bands in the superconductor region (${\cal C}=1$) an incident electron channel from the normal region will be transmitted to a Majorana channel in the superconducting region. Note that the Majorana channel is associated with a zero charge density and zero charge current. The transmission probability is $P_T=0.5$ and, nevertheless, the conductance $g(E)$ is still one quantum, $g(E)=e^2/h$. The reason behind this apparent paradox is the distribution of probability between the reflected $ee$ and $eh$ channels. The electronic incident channel is partially reflected back in equal measure as an electron and as a hole through Andreev reflection, $P_{ee}=0.25$ and $P_{eh}=0.25$. This is not in contradiction with current literature finding a conductance of $g(E)=0.5 e^2/h$ due to the Majorana mode because, as explained above, we are considering a NS junction with a single normal lead and therefore neglecting the effect on the junction of counterpropagating Majorana states originating in a second lead. In this sense, the reflected channels have several peculiarities. First, their charge currents and densities add up to zero, and the same happens with their spin currents and densities (see Fig.\ \ref{F4}). The incident electron channel is responsible for an ingoing spin current into the Majorana mode, signaling the topological state of the superconductor. \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Fig3}} \caption{Quasi-particle current overprinted on its corresponding probability density for a NS junction with a) one chiral Majorana mode in the superconducting side of the junction (that is, the ${\cal C}=1$ topological phase); b) two simultaneous chiral Majorana modes in the superconducting region (${\cal C}=2$ topological phase). Panels c) and d) show the charge currents and densities corresponding to the cases in a) and b), respectively. The material parameter for a) is $m_0=-1 E_U $ while for b) it is $m_0=-3 E_U$. The rest of the parameters are $\Delta=2.0 E_U$ and $E=0.1 E_U$. Note that we take $\alpha=1 E_U L_U$ and $m_1=0.5 E_U L_U^2/\hbar^2$.} \label{F3} \end{figure} On the other hand, in Figs.\ \ref{F3}b and \ref{F3}d we can see the case with two pairs of topological bands active on the right side of the junction. In this case the incident electronic channel just goes through the junction without reflection. That is not surprising because two chiral Majorana channels add up to a single electron channel. In fact, the available Majorana channels degrade with increasing energy of the incident channel (i.e., the quality of the Majorana is worse as we deviate more and more from zero energy and approach the gap energy). Indeed, we can see in Fig.\ \ref{F3}d how the charge neutrality of the chiral Majoranas on the right side has been slightly lost already for $E=0.1 E_U$, probably with a certain degree of hybridization between the two Majoranas and the presence of a slight charge current in the lower superconducting border. \begin{figure} \center \resizebox{0.5\columnwidth}{!}{% \includegraphics{Fig5}} \caption{Spin current overprinted on the spin density for the case when the superconductor holds a single chiral Majorana mode. The Hamiltonian parameters are $m_0=-1 E_U $, $\Delta=2 E_U$ and $E=0.1 E_U$. } \label{F4} \end{figure} In Fig.\ \ref{F5} we can see the case when the superconductor is in a trivial state.
In previous figures we considered a homogeneous infinite semiconductor slab with a junction separating the proximity-coupled superconducting region from the non-superconducting one. However, here for pedagogical reasons we consider that the junction separates two semiconductors having different material parameter $m_0$. The reason is that no open incident channels are available in the normal region for the range of values where the superconducting region is in a trivial phase. Therefore, we maintain the left side of the junction at a value of $m_0$ that allows for an electronic incident channel. The result is a perfect electron-electron reflection of the quasi-particle current. Therefore the overall charge and spin currents in the contact remain zero. \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Fig4}} \caption{a) Quasi-particle current overprinted on the probability density for the case when the superconducting side of the junction is in a trivial phase. b) The same as a) for the charge current and density. In order to have open channels available to probe the superconductor, the material parameter $m_0$ takes different values on the left and right sides; it is $m_{0}=-1 E_U$ on the left and $m_{0}=2 E_U$ on the right. The rest of the parameters are the same of preceding figures.} \label{F5} \end{figure} \section{Orbital effects} \label{sec:4} Until now we have considered the behavior of the junction mainly regarding variations of the material parameter $m_0$. In the underlying physical model, this parameter relates to the magnetization of the material. In this section we want to explore how the inclusion of orbital effects due to an external magnetic field may affect the results of our model. The strength of magnetic orbital effects is set by the magnetic length $l_z$, defined as $l_z^2=\hbar c/e B$. We consider a magnetic field fully perpendicular to the sample, using a Landau gauge centered on $y=0$ through the magnetic substitution $p_x \rightarrow p_x - \hbar y/ l_z^2$. We also add the required Pauli matrix $\tau_z$ to properly consider the electron-hole symmetry of the problem \cite{Osca2015b}. The effects of electronic orbital motion on the QAH slab are twofold. First, if the external magnetic field is too large the edge channels disappear. This is not surprising because many chiral Majorana devices are quantum Hall devices with the addition of superconductivity. This way, different strengths of the field may enable or disable the edge propagating channels. In a certain way we are including here a competition between the QH and QAH effects. We can see in Fig.\ \ref{F6}a the conductance and the different probabilities of transmission and reflection for a QAH normal-superconductor junction as a function of the magnetic length. At a certain value of the magnetic length ($l_z^{-2}\approx1.3 L_{U}^{-2}$) the QAH propagating channels are closed on the normal side of the junction and only evanescent modes remain. On the other hand, the second effect of the orbital motion is to effectively change the width of the nanowire due to magnetic confinement when $l_z<L_y$ (with $L_y$ the transverse width). This way, the distance of the QAH and chiral Majorana states from the device edges increases, as can be seen comparing Fig.\ \ref{F6}b with Fig.\ \ref{F3}a. However, the most interesting feature is the separation of the propagating states from their respective edges and how this changes differently on each side of the junction for increasing external field.
This separation affects how the incident electronic channel couples to the outgoing chiral Majorana mode on the superconducting side. Therefore, the transmission and reflection probabilities (and thus the conductance) are modified by the relative position of the channels caused by the presence of the orbital motion. \begin{figure} \center \resizebox{1.0\columnwidth}{!}{% \includegraphics{Figure6_2r}} \caption{a) Probabilities of reflection $P_{ee}$, Andreev reflection $P_{eh}$, transmission $P_{T}$ and conductance $g(E)$ of an incident electronic channel in a QAH slab with a normal-superconductor junction. The probabilities and conductance are shown as a function of the inverse squared magnetic length $l_z^{-2}$, which is directly proportional to the field. At zero field the device holds a chiral Majorana mode in the superconducting side of the junction. The Hamiltonian parameters are $m_0=-1.0 E_U $, $\Delta=2.0 E_U$ and $E=0.1 E_U$. b) Quasi-particle current and probability density for $l_z^{-2}= 1.2 L_U^{-2}$. Note that, in comparison with Fig. \ref{F3}a, the position of the edge states with respect to the confinement wall has changed. There are also differences in the relative positions of the left and right edge states along the $y$ direction.} \label{F6} \end{figure} The oscillations in the reflection and transmission probabilities, and thus in the conductance, are due to changes in the transverse positions of the topological states. However, these changes are abruptly cut off by the disappearance of the propagating channels in the normal lead with increasing magnetic field. In the rest of the paper we will not consider orbital effects in the normal lead of the junction, assuming that we have shielded or dampened the magnetic field in that region. This way we always have a propagating channel open in the normal contact to probe the behavior of the chiral modes under the effects of the orbital motion. In Fig.\ \ref{F7} we consider a QAH slab with orbital effects active only on the superconducting side. The superconducting region is tuned to hold a single Majorana channel at zero external field. We can see in Fig.\ \ref{F7}a (to the left of the vertical dashed line) how the transmission probability slightly decreases while the normal reflection increases with increasing magnetic strength. The reason is the change in spatial alignment between the incident and the Majorana channels, as shown in Fig.\ \ref{F7}b. This behavior persists up to the strength value marked by the black vertical dashed line. From that point onwards the effective magnetic confinement is too narrow for the nanowire to accommodate the transverse extent of the Majorana mode. Therefore the propagating chiral Majorana mode disappears and only evanescent modes remain in the superconducting region. This is signaled by a zero transmission probability and the dominance of the Andreev effect as the main reflection mechanism. The electron-hole reflection probability rises to one and the conductance reaches its maximum value of two (in units of $e^2/h$). \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Figure7_2r}} \caption{a) Same as in Fig.\ \ref{F6}a, but with the external magnetic field applied only to the right side of the junction. This way we avoid the channel closing on the normal side and we can probe the junction behavior at higher magnetic fields. At zero field the device holds a chiral Majorana mode in the superconducting side of the junction, and the vertical dotted line signals the strength for which this Majorana mode disappears.
The material parameter $m_0=-1 E_U$ is constant all along the slab, while the rest of the Hamiltonian parameters are the same as above. b) and c) Quasi-particle current and probability density at strengths of the external field corresponding to $l_z^{-2}= 1.2 L_U^{-2}$ and $l_z^{-2}= 2.4 L_U^{-2}$, respectively. Note that only evanescent modes remain on the right side in panel c).} \label{F7} \end{figure} Finally, in Fig.\ \ref{F8} we consider the same slab but with the superconducting region tuned to hold two Majorana channels at zero external field. In Fig.\ \ref{F8}a the first vertical dashed line signals the transition from a state with two Majorana edge states to a single Majorana state, while the second one signals the loss of both Majorana channels. The first transition is accompanied by a change in the transmission probability from $P_T\approx 1$ to $P_T\approx 0.5$, as expected from the loss of one of the two Majorana channels. Accordingly, the electron and hole reflection probabilities rise from zero to $P_{ee}\approx P_{eh}\approx0.25$. Note, however, that here the change of the probabilities with the magnetic strength is not abrupt (probably because of large transverse finite-size effects). The change is also smooth at the transition from one to zero active Majorana channels. This causes the conductance to oscillate smoothly while the system evolves between the different conductance plateaus. \begin{figure} \center \resizebox{0.75\columnwidth}{!}{% \includegraphics{Figure8_2r}} \caption{a) Same as in Fig.\ \ref{F7}a but with a material parameter $m_0=-3 E_U$. The rest of the Hamiltonian parameters are the same as above. This way, at zero field the device holds two chiral Majorana modes in the superconducting side of the junction. Each vertical dotted line in a) signals the strength at which one Majorana mode is lost. b) and c) Quasi-particle current and probability density for strengths of the external field on the right side of the junction corresponding to $l_z^{-2}=1.6 L_U^{-2}$ and $l_z^{-2}=2.4 L_U^{-2}$, respectively.} \label{F8} \end{figure} \section{Conclusion} \label{sec:5} We have studied how the conductance of a normal-superconductor junction with chiral Majorana modes is related to the spatial distribution of currents, using a simplified model. In particular, we have shown how the spatial coupling of the propagating modes on the two sides of the junction is relevant to explain the observed results. Furthermore, we have introduced the effect of the orbital motion in the model to investigate how this coupling is affected by a magnetic field. It is the objective of future work to apply this type of analysis to a more realistic physical model, like that of Ref.\ \cite{He294}, where we expect to observe similar behaviors plus some additional ones. The reason is that many of these models may be rewritten in terms of one or several coupled copies of the present one. \section*{Acknowledgments} This work was funded by MINEICO-Spain, grant MAT2017-82639. \bibliographystyle{epjc}
{ "timestamp": "2018-04-17T02:15:26", "yymm": "1804", "arxiv_id": "1804.05593", "language": "en", "url": "https://arxiv.org/abs/1804.05593" }
\section{Introduction} The classification of observed galaxies into distinct types has a long history \citep{Sandage:1961}. It is much more than an abstract taxonomic exercise, with many implications for studies of poorly understood physical phenomena. Two notable examples are: the relation between supermassive black holes (SMBHs) and their host galaxies, and the determination of the dark matter (DM) halo profiles. Currently, the general consensus is that SMBH masses correlate tightly with the stellar velocity dispersion of their host bulges \citep{Ferrarese:2000,Gebhardt:2000,Haring:2004}, implying that the two most probably co--evolve \citep[but see also][]{Jahnke:2011}. The kinematics of galaxy discs also have a long history of being used to infer the underlying dark matter distributions \citep{Rubin:1980}. Hence, an accurate theory of the formation of galactic stellar structures like discs and bulges would invariably lead to tighter constraints on these puzzling phenomena, apart from filling in the gaps in galaxy formation models in general. In observations these structures are defined by various luminosity profiles fitted to galaxy images \citep[e.g.][]{vanderWel:2012}, but to date there is no clear link between them and the intrinsic, kinematically defined stellar structures. A way to bridge this gap is provided by high resolution galaxy simulations, which have full information on the stellar phase space (positions and velocities), thus allowing for a proper definition of the intrinsic kinematic structures. Simulations can also be post--processed with radiative transfer codes to create mock images that can subsequently be analyzed as is done for galaxy observations \citep[e.g.][]{Obreja:2014,Guidi:2016,Buck:2017,Bottrell:2017}. In this manner, it is in principle possible to search for a quantitative relation between the photometric morphology of a galaxy and its stellar kinematic structures. Therefore, it is necessary to have the means of robustly defining galactic stellar kinematic structures in simulations, which is precisely the aim of this work. In \citet[][hereafter Paper I]{Obreja:2016} we showed how dynamic stellar discs can be separated from the spheroids in galaxy simulations by using a clustering algorithm in a multidimensional kinematic space. However, galaxies are known to sometimes host a much larger variety of stellar structures. The stars of the Milky Way, in particular, are thought to form several components: a thin and a thick disc \citep{Gilmore:1983}, a boxy/peanut bulge \citep{Okuda:1977}, a nuclear star cluster \citep{Becklin:1968}, a bar \citep{Hammersley:2000} and a stellar halo \citep{Searle:1978}. The evidence for a classical bulge in the Milky Way is currently still debated \citep[e.g.][]{Bland-Hawthorn:2016}. In extragalactic studies, many of the nearby spirals seen close to edge-on are better described by a two-component disc rather than by a single one \citep{Dalcanton:2002,Yoachim:2006,Comeron:2011,Comeron:2014,Elmegreen:2017}. The inner regions of observed galaxies also appear to sometimes host multiple bulges and/or bars \citep[e.g.][]{Athanassoula:2005,Gadotti:2009,Aguerri:2009,Nowak:2010,Kormendy:2010,Mendez-Abreu:2014}. All these observational data encouraged us to improve the method described in Paper I to be able to disentangle more than two kinematic components in simulations. In the current paper we present the result of applying this method to a simulated Milky Way (hereafter MW) mass galaxy from the NIHAO sample \citep{Wang:2015}.
We have done the same analysis for a total of 25 NIHAO galaxies, ranging from dwarfs to galaxies a few times more massive than the Milky Way, and found various combinations of stellar structures. For this paper, however, we chose one galaxy which resembles the Milky Way in a few important aspects, to exemplify how our pipeline works. The results for the complete 25 galaxy sample are the subject of an accompanying work \citep{Obreja:2018}. The particular galaxy we use as test case turns out to have five stellar kinematic components: a thin and a thick disc, a classical and a pseudo bulge, and a stellar halo, with properties within the expected observational ranges for shapes, velocities, rotational support and specific angular momenta. We also study the evolution of these properties for the five components separately to learn more about their formation patterns. In recent years cosmological simulations have started to achieve enough resolution to make it possible to study galactic stellar structures \citep[e.g.][]{Scannapieco:2010,Brook:2012, Aumer:2013, Stinson:2013a, Christensen:2014, Hopkins:2014, Marinacci:2014, Schaye:2015, Wang:2015, Grand:2017}. In this light, we make our analysis code freely available with the hope that it will provide the means for a self-consistent study of the formation and evolution of such structures across different simulation codes with different numerical schemes and feedback implementations. This work is structured as follows. Section~\ref{methods} presents the method to search for stellar structures. The simulated galaxy we are using as a test is described in Section~\ref{sim_section}. The results of our method applied to this galaxy are analyzed and discussed in Section~\ref{g8.26e11}. Section~\ref{z0prop_mw} gives the properties of the stellar kinematic structures at redshift $z=0$ in comparison to the Milky Way, while Section~\ref{z0prop} presents the results of analysing the galaxy from an extragalactic point of view. The evolution of the kinematic structures is given in Section~\ref{evolution}. Finally, we summarize our results and highlight some concluding remarks in Section~\ref{conclusions}. \begin{figure} \includegraphics[width=0.47\textwidth]{paperI_fig1.eps} \caption{The surface mass density maps for the stars (left) and the cold gas (right) in face-on (top) and edge-on (bottom) projections for the galaxy g8.26e11. The white horizontal lines represent the physical scale of 10 kpc.} \label{fig:sunrise} \end{figure} \section{Gaussian Mixture Models applied to galaxy dynamics} \label{methods} The most widely employed method to kinematically split simulated galaxies was introduced by \citet{Abadi:2003}, and is based on the distribution of stellar circularities \citep[e.g.][]{Brooks:2008,Scannapieco:2010,Scannapieco:2011,Brook:2012,Martig:2012,Kannan:2015,Grand:2017}. The circularity parameter $\epsilon$ is the ratio between the azimuthal angular momentum of a particle, $J_z$, and the angular momentum of a circular orbit having the same binding energy, $J_c(E)$, where the $z$-direction is along the symmetry axis of the galaxy. \citet{Domenech:2012} proposed another method which uses not only the circularity parameter, but also the binding energy of the particles $E$, and the angular momentum component $J_p$, defined as $\overrightarrow{J}_p=\overrightarrow{J}-\overrightarrow{J}_z$ and also normalized to the angular momentum of the circular orbit. Here $\overrightarrow{J}$ is the total angular momentum of a stellar particle.
These authors use the \textit{k-means} cluster finding algorithm \citep{Scholkopf:1998} with a given number of groups to disentangle the stellar galaxy components in this 3D space, ($J_{\rm z}/J_{\rm c}(E)$, $J_{\rm p}/J_{\rm c}(E)$, $E$). The \textit{k-means} algorithm minimizes the within-cluster sum of squared distances to the cluster centers, also known as the intra-cluster distance or cluster ``inertia''. This method needs to assume a certain metric, and even though it will always converge given enough iterations, the convergence might be to a local minimum. The main limitations of \textit{k-means} are the assumptions of cluster convexity and isotropy. Therefore, \textit{k-means} is best suited for regularly shaped manifolds and spherical clusters that are approximately equally populated. In Paper I we generalized the method of \citet{Domenech:2012} by using \textit{Gaussian Mixture Models} (hereafter GMM) instead of \textit{k-means}, in a similar 3D dynamical space. GMM is a probabilistic method that results in a so-called soft assignment of particles to clusters, each particle having a normalized probability to belong to a certain group. Like \textit{k-means}, it is an iterative method which employs an expectation--maximisation algorithm to find the parameters of the Gaussians. This method relaxes the assumption of cluster symmetry, allowing for a fully free covariance matrix. Also, since it uses the Mahalanobis distance to the cluster centers (the means of the Gaussians) as the minimization criterion, it does not bias the results towards equally weighted clusters. On the particular problem of separating the dynamical components of a galaxy's stars, it naturally results in analogues of observed substructures like thin and/or thick discs, classical and/or pseudo bulges, and/or stellar haloes, with mass weights that are not constrained to be roughly equal. Throughout this study, the 3D dynamical space refers to ($j_z/j_c$, $j_p/j_c$, $e/|e|_{\rm max}$), lower case letters denoting \emph{specific} angular momentum and binding energy. The specific binding energies are scaled to the absolute value of the energy of the most bound stellar particle in the halo, $|e|_{\rm max}$. Therefore $-1<e/|e|_{\rm max}<0$, and as such the galaxy/dark matter halo mass dependence is factored out, as well as the dimensionality of the energy. A crucial difference between the previous and the present study is that the halo potential is now recomputed assuming isolation. In this manner pathological distributions of binding energy can be circumvented. The complete analysis package, which we call {\tt galactic structures finder} or {\tt gsf}, can be downloaded via \url{https://github.com/aobr/gsf}. It is a Python-Fortran90 package based on {\tt pynbody} \citep[\texttt{http://pynbody.github.io},][]{Pontzen:2013} to load, orient and convert to physical units a simulation snapshot, and the {\tt scikit-learn} Python package for Machine Learning \citep{Pedregosa:2011} to run the clustering algorithm. An OpenMP Fortran90 module was added to compute the direct N-body gravitational force using all the particles in the given halo. The {\tt gsf} analysis package assumes that the simulation snapshot has been pre-processed with a halo finder. For this study we have used the {\tt Amiga Halo Finder} \citep[{\tt AHF},][]{Knollmann:2009}. The analysis package has been designed to work out of the box with simulations that can be loaded with {\tt pynbody}.
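The clustering step itself reduces to a few calls to the {\tt scikit-learn} estimator; the sketch below is a minimal illustration of how the feature matrix described above can be fed to a GMM (variable names and options are illustrative, not necessarily the exact {\tt gsf} settings):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# columns: jz/jc, jp/jc, e/|e|_max, precomputed for all stellar particles
X = np.column_stack([jz_over_jc, jp_over_jc, e_over_emax])

gmm = GaussianMixture(n_components=5,          # nk, the number of structures
                      covariance_type='full')  # fully free covariance matrix
gmm.fit(X)

P = gmm.predict_proba(X)   # soft tagging: P[i, k], each row sums to one
labels = gmm.predict(X)    # hard tagging: argmax_k P[i, k]
\end{verbatim}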
The work flow of {\tt gsf} is as follows: \begin{itemize} \item The simulated halo is loaded with {\tt pynbody} and converted to physical units. \item The halo is oriented with the $z$-axis parallel to the galaxy's total stellar angular momentum. \item The gravitational potential at the position of each stellar particle due to all (dark matter and baryon) particles in the halo is computed by direct summation. \item The gravitational potential in the equatorial plane at fixed radial positions is computed by direct summation over all particles in the halo. \item At the same radial positions in the equatorial plane, the code computes the specific angular momentum of particles on circular orbits, and constructs the $e-j_c$ mapping. \item The specific angular momenta of the stellar particles are decomposed as $\vec{j}=\vec{j_z}+\vec{j_p}$, and their corresponding $j_c$ are computed by interpolating their previously calculated binding energies on the $e-j_c$ mapping. \item The input feature matrix $(j_z/j_c, j_p/j_c, e/|e|_{\rm max})$, with as many entries as stellar particles, is passed to the clustering algorithm together with the number of groups $nk$ to look for. \item The clustering algorithm returns a matrix of probabilities $P_{ik}$, where $i$ is the stellar particle index and $k$ runs from $0$ to $nk-1$. For each $i$ the probabilities are normalized: $\sum_{k=0}^{nk-1}P_{ik}=1$. \item The code creates two types of figures. The first one contains the stellar mass distributions in the input parameters $j_z/j_c$, $j_p/j_c$ and $e/|e|_{\rm max}$ for each of the $nk$ structures found by the clustering algorithm. The other type of figure is made for each of the $nk$ structures separately, and contains the face-on and edge-on stellar surface mass densities, and the edge-on line-of-sight velocity maps. \end{itemize} All the relevant information of a run is saved into various files. This includes: the stellar indices $i$ and the matrix of probabilities $P_{ik}$, the re-computed gravitational potential of all stellar particles, the $e-j_c$ mapping, and the rotation matrix needed to transform the raw simulation to the equatorial plane of the galaxy. \begin{figure*} \includegraphics[width=0.98\textwidth]{paperI_fig2.eps} \caption{The results of {\tt gsf} applied to the galaxy g8.26e11, shown as the stellar mass in each component as a function of the dynamical features given as input: $j_{\rm z}/j_{\rm c}$ (top row), $j_{\rm p}/j_{\rm c}$ (central row) and $e/|e|_{\rm max}$ (bottom row), when varying the number of components from $nk=2$ (far left) to $nk=5$ (far right). The solid and dashed colored lines stand for the hard and soft clustering tagging (see text for more details), while the solid grey lines give the total stellar mass. The colored labels in each panel are the components' nicknames. For all panels the bin width is fixed to 0.01.} \label{figure3} \end{figure*} \begin{figure*} \includegraphics[width=0.98\textwidth]{paperI_fig3.eps} \caption{The edge-on surface mass density of the stellar components of g8.26e11 for $nk=2$ (second row from the bottom) to $nk=5$ (top row). The complete edge-on surface mass density is shown in the bottom left corner. The white bars give the 10 kpc physical scale, and the white labels provide the correspondence with the same components shown in Figure~\ref{figure3}.
The range of mass surface densities is the same for all panels.} \label{figure4} \end{figure*} \begin{figure*} \includegraphics[width=0.98\textwidth]{paperI_fig4.eps} \caption{The edge-on line-of-sight velocities of the stellar components of g8.26e11. The white bars give the 10 kpc physical scale, and the white labels provide the correspondence with the same components shown in Figures~\ref{figure3} and \ref{figure4}. All panels have the same velocity range.} \label{figure4.1} \end{figure*} \section{The simulations} \label{sim_section} The NIHAO suite \citep{Wang:2015} is a series of baryonic cosmological zoom-in simulations run with the improved version \citep{Wadsley:2017} of the N-body SPH code {\tt GASOLINE} \citep{Wadsley:2004}, assuming a standard Planck cosmology (Planck Collaboration 2014). The version of the code used for the runs includes fixes to deal with the artificial cold blobs \citep{Ritchie:2001} by employing the artificial viscosity implementation of \citet{Price:2008}. The SPH kernel is that of \citet{Dehnen:2012}, assuming 50 neighbors. In order to better resolve the shocks induced by feedback, the code uses the time step limiter of \citet{Saitoh:2009}. Metals are diffused as discussed in \citet{Wadsley:2008}. The sources of gas heating are photoionization and photoheating from a redshift-dependent UV background \citep{Haardt:2012}, while the cooling channels are metal lines and Compton scattering \citep{Shen:2010}. Gas particles with temperatures lower than 15000 K and densities higher than 10.3 cm$^{\rm -3}$ can form stars following a Kennicutt-Schmidt relation. Stellar feedback takes into account two processes: the SNe II blast-waves \citep{Stinson:2006} and the pre-heating of the gas, in the region where such an event will take place, by the massive star which is the SN II progenitor \citep{Stinson:2013a}. The implementation of the latter process is also known as ``early stellar feedback''. The code assumes a Chabrier Initial Mass Function \citep[IMF;][]{Chabrier:2003}. The heavy element enrichment of the gas is based on the SNe Ia yields of \citet{Thielemann:1986} and the SNe II yields of \citet{Woosley:1995}. The NIHAO simulations cover three orders of magnitude in dark matter halo mass, from dwarfs to galaxies at the peak of the baryon conversion efficiency. In this mass regime the SN feedback is expected to be the dominant factor limiting star formation, while AGN feedback (not included in this version of {\tt GASOLINE}) should have only a marginal effect. All these galaxies indeed follow the redshift-dependent abundance matching constraints \citep{Moster:2013,Behroozi:2013}, and thus can also be used to study galaxy evolution. For the particular purpose of studying the evolution of galactic stellar structures, we have chosen a subsample of 25 NIHAO galaxies, which mainly comprises the massive end of the complete sample. The reason we excluded the simulated dwarfs is that such galaxies are generally not expected to host a large variety of stellar dynamical subcomponents. Also, given that our method is intended to work on virialized systems, we have excluded the massive galaxies which at $z=0$ have disturbed stellar mass distributions. The analysis presented in this work has thus been done on a sample of 25 simulated galaxies, from which we chose one galaxy (g8.26e11) to show how {\tt gsf} is capable of disentangling stellar kinematic structures with clear observational counterparts.
This particular galaxy has a total mass, stellar mass and morphology very close to those of the Milky Way. The galaxy g8.26e11 has a mass resolution of 3.2$\rm\times$10$^{\rm 5}$M$_{\rm\odot}$ and 1.7$\rm\times$10$^{\rm 6}$M$_{\rm\odot}$ for the gas and dark matter particles, respectively. Its gravitational softenings are 400~pc and 931~pc, respectively. At $z=0$, this galaxy has a dark matter halo mass of 9.0$\rm\times$10$^{\rm 11}$M$_{\rm\odot}$, a stellar mass of 4.7$\rm\times$10$^{\rm 10}$M$_{\rm\odot}$, and a virial radius of 213~kpc. The mass of cold gas (T$\rm<$15000~K) is 4.2$\rm\times$10$^{\rm 10}$M$_{\rm\odot}$, while the virial fraction of cold gas is 0.57. This galaxy, like the complete NIHAO sample, follows the Tully-Fisher \citep{Tully:1977} relations for both stars and baryons \citep{Dutton:2017}. Figure~\ref{fig:sunrise} shows the face-on (top) and edge-on (bottom) projections of the stellar (left) and cold gas (right) surface mass densities of g8.26e11. This galaxy resembles a grand design spiral from the nearby Universe, as can be appreciated from the face-on gas projection. \section{The stellar kinematic structures of a Milky Way mass galaxy} \label{g8.26e11} We start our study by showing how, as the number of components $nk$ requested from {\tt gsf} increases, the search algorithm naturally leads to dynamical stellar structures that can be associated with the various components thought to be part of observed galaxies, and particularly of the MW. Figure~\ref{figure3} shows the results of running {\tt gsf} for the galaxy g8.26e11 when $nk$ is increased from $2$ to $5$ (left to right), by plotting the mass in each component as a function of the input dynamical features, $j_z/j_c$, $j_p/j_c$, $e/|e|_{\rm max}$ (top to bottom). The different colors stand for the various components, each being given a nickname, which is also shown in the figure. The solid/dashed lines represent the hard/soft clustering assignments. The soft tagging means that each particle $i$ has a certain combination of probabilities $\{P_{k}^{(i)}\}$ to belong to the GMM groups $\{k\}$, with $k$ running from $0$ to $nk-1$, such that $\sum_k P_k^{(i)}=1$. The hard tagging associates each particle $i$ with the one group $k$ for which $P_k^{(i)}$ is maximum. Therefore, to construct the solid curves each stellar particle contributes all its mass to the one group to which it most likely belongs, while for the dashed curves each particle distributes its mass proportionally among the $\{k\}$ groups according to its probabilities $\{P_k^{(i)}\}$ (see the short sketch below). The fact that the two ways of tagging are so similar can be invoked as a good reason for using the hard assignment, which we adopt for the rest of this study. The results in Figure~\ref{figure3} are transformed to `observables', namely edge-on mass surface densities and velocity maps, in Figures~\ref{figure4} and ~\ref{figure4.1}, respectively, where $nk$ decreases from top down. The nicknames of the various components have been chosen based on the maps in Figures~\ref{figure4} and ~\ref{figure4.1}. This manner of choosing the name of the components is only feasible for small samples of galaxies. We are currently exploring various possibilities to perform this step automatically. The circularity histogram for this galaxy has a strong peak close to $j_{\rm z}/j_{\rm c}=1$ and no other important feature (grey curves in the top panels of Figure~\ref{figure3}).
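In terms of the probability matrix, the two tagging schemes amount to the following operations (a minimal numpy sketch; array names are illustrative):
\begin{verbatim}
import numpy as np

# P: (n_stars, nk) GMM probabilities, rows summing to one
# m: (n_stars,) stellar particle masses
hard = P.argmax(axis=1)                 # hard tag: most likely group
M_hard = np.array([m[hard == k].sum() for k in range(P.shape[1])])
M_soft = (m[:, None] * P).sum(axis=0)   # soft tag: mass split by P[i, k]
\end{verbatim}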
For the run with $nk=2$, {\tt gsf} distinguishes the mass under the sharp circularity peak (dark blue) from the broad, more symmetric distribution centered close to $j_z/j_c=0$ (dark red), as can be appreciated from the top left panel. Looking at the same two components in the other two dynamical features (center and bottom left panels), the first component (dark blue) is obviously more biased towards orbits in the equatorial plane, with a peak at $j_p/j_c\sim0.1$, than the second one (dark red), which has an almost flat distribution between $j_p/j_c\sim0.1$ and $\sim0.8$ and is less gravitationally bound (bottom left panel). The corresponding edge-on mass surface densities and velocity maps in the fourth rows of Figures~\ref{figure4} and ~\ref{figure4.1}, for the two $nk=2$ components given by the dark blue and dark red distributions in the left column of Figure~\ref{figure3}, show the expected characteristics of \textit{discs} and \textit{spheroids}: axial symmetry and a spider velocity diagram for the former, versus spherical symmetry and only a small amount of coherent rotation for the latter. Increasing $nk$ to $3$, the material of the disc component from $nk=2$ is redistributed into two new components (center left column of Figure~\ref{figure3}), one containing only material from the $nk=2$ disc, while the other also encloses some of the $nk=2$ spheroid. The new component shown in light blue gathers most of the least rotationally supported material of the $nk=2$ disc, $j_z/j_c$ from $0$ to $\sim0.6$ (far left and centre left top panels), and the least gravitationally bound mass of the $nk=2$ spheroid (far left bottom panel). The new light blue component of the $nk=3$ run has a large $j_p/j_c$ range, $[0,0.9]$, while the corresponding component in dark blue is now more confined to the equatorial plane ($j_p/j_c<0.45$) than the disc of $nk=2$. As can be expected from these distributions, the new (light blue) component displays the characteristics of a \textit{thick disc} in the corresponding maps of Figures~\ref{figure4} and ~\ref{figure4.1} (third row), while the new `disc' in dark blue looks like a \textit{thin disc}. The least rotationally supported component in this case (dark red) is more compact than the $nk=2$ spheroid, as can be appreciated from the third rows of Figures~\ref{figure4} and ~\ref{figure4.1}, and as a consequence it is named \textit{bulge}. In the $nk=4$ case (third column of Figure~\ref{figure3} and second rows of Figures~\ref{figure4} and ~\ref{figure4.1}), parts of the least rotationally supported material of the $nk=3$ thick disc and of the $nk=3$ bulge are redistributed into a more extended, velocity dispersion supported component, namely a \textit{spheroid} (orange curves in Figure~\ref{figure3}). This spheroid has basically no net rotation and covers most of the binding energy range in a uniform manner. The largest value of $nk$ used throughout this study is $5$. For the test galaxy g8.26e11, $nk=5$ leads to an important redistribution of all the material in the components with large velocity dispersion support, including the thick disc. The one stable component is the \textit{thin disc}, whose definition changes the least from $nk=3$ to $nk=4$, and finally to $nk=5$. The very interesting aspect of this case is that now {\tt gsf} is able to separate the \textit{stellar halo} (magenta curves in the right column panels of Figure~\ref{figure3}) from the thick disc and the spheroid of $nk=4$.
In the binding energy distribution, the stellar halo encompasses all the mass in the discrete low binding energy peak and some material with $e/|e|_{\rm max}\geq-0.6$. These two different features in the binding energy distribution of the stellar halo correspond to the outer and inner components suggested by Milky Way observations \citep[e.g.][]{Carollo:2007}. From the circularity distributions, all three components at low and negative $j_z/j_c$ have some degree of coherent rotation; see also the edge-on velocity maps in the top rows of Figures~\ref{figure4} and ~\ref{figure4.1}. Another interesting fact from $nk=5$ is the separation of the least from the most plane-confined dispersion-supported material, as is evident from the dark red vs the red $j_p/j_c$ distributions in the center right panel of Figure~\ref{figure3}. Based on their appearance in the corresponding `observables' of Figures~\ref{figure4} and ~\ref{figure4.1}, the former is called \textit{classical bulge} and the latter \textit{pseudo bulge} \citep{Kormendy:2004}. To sum up, the {\tt gsf} run with $nk=5$ for the galaxy g8.26e11 results in a \textit{thin} and a \textit{thick disc}, a \textit{classical} and a \textit{pseudo bulge}, and a \textit{stellar halo}. In the next sections we show that these kinematic structures agree with what theoretical models suggest, as well as display properties seen in real data. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig5a.eps} \includegraphics[width=0.45\textwidth]{paperI_fig5b.eps} \caption{\textbf{Left:} the contributions to g8.26e11's total circular velocity (thick black curve) of the various galaxy components (solid thin and dashed curves): dark matter (black), stellar halo (magenta), classical bulge (dark red), pseudo bulge (red), thick stellar disc (light blue), thin stellar disc (dark blue), and gas (dashed orange). The data points are observations of the Galaxy's stars by \citet{Reid:2014} (blue), \citet{LopezCorredoira:2014} (red) and \citet{Kafle:2012} (green), rescaled by \citet{Bland-Hawthorn:2016} to $238\pm15$~km~s$^{\rm -1}$ at $R_{\rm 0}=8.2$~kpc. At radii $r>5$~kpc, the total circular velocity of g8.26e11 is in very good agreement with the MW observations. \textbf{Right:} the radial profiles of the rotational velocities $V_{\rm\phi}$ for the thin (dark blue) and thick (light blue) stellar discs, the stellar halo (magenta) and the cold gas (dashed orange), and the total circular velocity of g8.26e11 (thick black curve). The coloured star symbols at $R=R_{\rm 0}$ give the MW observed rotational velocities in the solar neighbourhood of: the stellar halo ($\sim40$~km~s$^{\rm -1}$, magenta, \citealt{Bond:2010}), and of the thin and old thick stellar discs ($220$~km~s$^{\rm -1}$ and $170\pm16$~km~s$^{\rm -1}$ in dark and light blue respectively, \citealt{Haywood:2013}). The black star symbol represents the circular velocity at the Sun's position ($238\pm15$~km~s$^{\rm -1}$, \citealt{Bland-Hawthorn:2016}, and references therein).} \label{figure5} \end{center} \end{figure*} \section{The solar neighbourhood perspective} \label{z0prop_mw} To have a quantitative assessment of how similar g8.26e11 and the Galaxy are, Figure~\ref{figure5} shows the total circular velocity profile $V_c(r)=\sqrt{GM(<r)/r}$ and the contributions to it of the various stellar structures, gas and dark matter (left panel), and the profiles of the disc and stellar halo rotational velocities (right panel).
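Since $V_c$ depends only on the enclosed mass, it can be computed directly from the particle data; a minimal sketch (function names and binning are illustrative assumptions):
\begin{verbatim}
import numpy as np

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

def vc_profile(r_part, m_part, r_bins):
    # V_c(r) = sqrt(G M(<r) / r), using all particles (DM, stars, gas)
    order = np.argsort(r_part)
    M_enc = np.cumsum(m_part[order])
    M_of_r = np.interp(r_bins, r_part[order], M_enc)
    return np.sqrt(G * M_of_r / r_bins)
\end{verbatim}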
We use small $r$ to refer to the 3D radius, and capital $R$ for the projected one. Over-plotted in the left panel are the MW total circular velocities derived from observations of: masers associated with massive young stars of \citet{Reid:2014} (blue points), red clump giant stars of \citet{LopezCorredoira:2014} (red points), and blue horizontal branch stars of \citet{Kafle:2012}. Given that g8.26e11 was not purposely simulated to resemble the MW, the agreement between its total circular velocity curve (thick black curve) and these observations is quite remarkable at radii $r>5$~kpc. At smaller radii, the MW has the Galactic bar, which is responsible for the dip in $V_c$ at $r\simeq3$~kpc, as probed by the dynamics of the HI gas \citep[e.g.][and references therein]{Sofue:2009}. On the other hand, the simulated galaxy has no bar, but a classical and a pseudo bulge, resulting in a steadily rising $V_c$ at $r<2.5$~kpc. In the right panel of Figure~\ref{figure5}, the total circular velocity curve $V_c$ of the simulated galaxy (thick black) is plotted together with the rotational velocities $V_{\phi}$ of the thin and thick stellar discs, the stellar halo and the cold gas. The rotational velocity $V_{\phi}$ profiles are computed as mass-weighted averages of $V$, where $V$ is the component of a particle's velocity along the direction of local rotation in the cylindrical coordinate reference frame of the galaxy's center. We recall that the simulation is oriented with the $z$-axis in the direction of the total stellar angular momentum. The other two components of a particle's velocity in this reference frame are the radial $U$ and vertical $W$ velocities. One important feature obvious in this panel is that the total circular velocity is best traced by the cold gas rotation (dashed orange curve), the thin stellar disc (dark blue) having a $\sim20$~km~s$^{\rm -1}$ lower $V_{\phi}$. The dark blue and light blue stars on the plot represent measurements for the MW as published by \citet{Haywood:2013} for the solar neighbourhood thin and thick discs, of $220$~km~s$^{\rm -1}$ and $\sim170$~km~s$^{\rm -1}$ respectively, which are very close to the corresponding values of $V_{\phi}$ at the solar radius $R_0\simeq8.2$~kpc in the simulation ($218$ and $\sim166$~km~s$^{\rm -1}$, respectively). It is important to note that \citet{Haywood:2013} distinguish the two MW discs based on the stellar ages and positions in the [Fe/H]-[$\rm\alpha$/Fe] plane. The magenta star gives the approximate stellar halo rotation at $R_0$ \citep{Bond:2010}, while the black star is the solar value of $V_c(R_0)\simeq238$~km~s$^{\rm -1}$ \citep[][and references therein]{Bland-Hawthorn:2016}. One way to quantify a disc's thickness is through the vertical velocity dispersion. To compare the two discs of the simulated galaxy with the MW results, we selected a solar neighbourhood defined by $|R-R_0|<2$~kpc and $|z|<2$~kpc. The vertical velocity dispersion $\sigma_W$ of g8.26e11's thick disc at $R_0$ is $73$~km~s$^{\rm -1}$, and that of the thin disc is $29$~km~s$^{\rm -1}$. The various studies of the MW have not yet converged on a single value for each of the two discs' $\sigma_W$, given the differences in survey selection functions, sky coverage, dynamical modeling, and thin/thick disc definitions. For the MW thick disc, \citet{Robin:2017} found values as low as $27$~km~s$^{\rm -1}$, while \citet{Binney:2012} obtained $\sigma_W$ in the range [31,65]~km~s$^{\rm -1}$.
\citet{Robin:2017} also found the lowest values for the MW thin disc, between $6$ and $20$~km~s$^{\rm -1}$, while \citet{Binney:2012} found values in the range [20,27]~km~s$^{\rm -1}$. We can therefore conclude that g8.26e11 is a realistic MW analogue from the point of view of the solar neighbourhood stellar population dynamics. Globally, the stellar mass of g8.26e11 is distributed as follows: $21\%$ in the thin disc, $33\%$ in the thick disc, $25\%$ in the classical bulge, $14.5\%$ in the pseudo bulge, and the remaining $6.5\%$ in the stellar halo. Therefore, from a dynamical point of view, g8.26e11 has a bulge-to-total ($B/T$) mass ratio of 0.46, summing up the contributions of the two bulges and the stellar halo. For comparison, the Milky Way is estimated to have $B/T\rm\simeq0.30$ \citep[see review by][]{Bland-Hawthorn:2016}. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig6a.eps} \includegraphics[width=0.45\textwidth]{paperI_fig6b.eps} \caption{The radial profiles of line-of-sight velocities $v_{\rm los}$ assuming an edge-on perspective (left) and of vertical velocity dispersions $\sigma_{\rm z}$ (right) for the components of galaxy g8.26e11. The stellar components are shown with the same colors as in the right column of Figure~\ref{figure3}, while the cold gas is given in orange.} \label{figure6} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig7.eps} \caption{The radial profiles of surface mass densities $\Sigma$ for the components of galaxy g8.26e11. The stellar components are shown with the same colors as in the right column of Figure~\ref{figure3}. The solid and dashed lines give the edge-on and face-on perspectives, respectively.} \label{figure6.1} \end{center} \end{figure} \section{The simulated galaxy seen as an extragalactic object} \label{z0prop} In external galaxies, however, the profiles of $V_{\phi}$, as shown in the right panel of Figure~\ref{figure5}, cannot be directly measured. For extragalactic objects the kinematic information comes instead in the form of line-of-sight velocity $v_{\rm los}$ fields, like the one shown in the bottom row of Figure~\ref{figure4.1} for the total stellar population of g8.26e11. In order to compare the simulation with observed external galaxies, radial profiles of the line-of-sight velocity $v_{\rm los}$ were extracted along the major axis (horizontal in Figure~\ref{figure4.1}), using a slit of 1.6~kpc width (see the sketch below). The left panel of Figure~\ref{figure6} shows the radial profiles of $v_{\rm los}$ for the thin (dark blue) and thick (light blue) discs, the stellar halo (magenta), all stars (grey) and the cold gas (orange) of the simulated galaxy g8.26e11. One of the first things to notice in this panel is that the $v_{\rm los}$ profiles of both the thin disc (dark blue) and all the stars (grey) are flat beyond $R\gtrsim5$~kpc, the former saturating at $174\pm5$~km~s$^{\rm -1}$ and the latter at $149\pm4$~km~s$^{\rm -1}$. A similar trend can be observed for the stellar halo, which saturates at $v_{\rm los}=44\pm15$~km~s$^{\rm -1}$. On the contrary, the thick disc has a line-of-sight velocity profile declining with radius, on average $\sim50$~km~s$^{\rm -1}$ lower than that of the thin disc. In observed external galaxies the $v_{\rm los}$ of the stellar haloes cannot be measured at such small radii ($R<25$~kpc), given that the convolved velocity is heavily dominated by the disc(s), and that stellar haloes are much fainter than the central components.
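The slit extraction used for the left panel reduces to a mass-weighted average in radial bins; a minimal sketch (the geometry conventions and names are illustrative assumptions):
\begin{verbatim}
import numpy as np

def vlos_profile(x, z, v_los, m, R_edges, slit=1.6):
    # Edge-on view: x runs along the major axis, z is the vertical
    # direction in the image, v_los is the velocity along the line of
    # sight; a slit of total width `slit` (kpc) is centered on the axis.
    in_slit = np.abs(z) < slit / 2.0
    prof = np.full(len(R_edges) - 1, np.nan)
    for i in range(len(prof)):
        sel = in_slit & (x >= R_edges[i]) & (x < R_edges[i + 1])
        if sel.any():
            prof[i] = np.average(v_los[sel], weights=m[sel])
    return prof
\end{verbatim}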
We also note that the $v_{\rm los}$ profiles of the thin and thick discs are significantly below their corresponding $V_{\phi}$ at all radii $R$. The same is true for the cold gas. These differences in rotational velocities can have important consequences for what is inferred from observational studies about the intrinsic stellar distribution, and consequently for the inner dark matter halo mass estimates. What is typically accessible in observations of extragalactic stellar velocity fields are absorption lines produced mainly in the atmospheres of young stars \citep[e.g.][]{Yoachim:2008b,Martinsson:2013}. However, if the rotational velocity profiles derived in this manner are assumed to be characteristic of the full stellar distribution, the galaxies will be taken to be more dynamically cold, i.e. discy, than they truly are. Therefore, we think that the circular velocities of external galaxies derived from gas and/or stellar kinematics might also be significantly underestimated. Quantifying this bias is, however, beyond the scope of this paper. The right panel of Figure~\ref{figure6} shows another galaxy observable, the vertical velocity dispersion $\sigma_z$ profile. The thin disc, thick disc, all-star and cold gas vertical velocity dispersions are given by the dark blue, light blue, grey and orange curves, respectively. As expected of thin stellar discs, the vertical velocity dispersion $\sigma_z$ profile decreases slowly with radius, and can be approximated as a constant of $\sim25$~km~s$^{\rm -1}$. The thin disc $\sigma_z(R)$ looks very different from that of the whole galaxy, which is much better approximated by the thick disc at $R\geq5$~kpc. The thick disc $\sigma_z(R)$ decreases approximately linearly with the projected radius, from $\sim75$~km~s$^{\rm -1}$ at $R\sim5$~kpc to $\sim20$~km~s$^{\rm -1}$ at $R\sim20$~kpc. The central peak of the whole galaxy's $\sigma_z(R)$ is produced by the bulge components and reaches $\sim120$~km~s$^{\rm -1}$ in the very center. The cold gas' $\sigma_z$ decreases approximately linearly from $\sim50$~km~s$^{\rm -1}$ in the center to $\sim10$~km~s$^{\rm -1}$ at $R=5$~kpc, after which it stays constant. Overall, from both the normalization and the shape of the edge-on line-of-sight velocity and vertical velocity dispersion profiles, g8.26e11 resembles the observed galaxy UGC 00448, after correcting the latter for inclination effects \citep[Appendix D of][]{Martinsson:2013}. UGC 00448 is a less massive galaxy than the simulated one, with a stellar mass of $1.9\pm1.0\times10^{\rm 10}$M$_{\rm\odot}$ derived using the HI line width of \citet{Staveley-Smith:1988} and the Tully-Fisher relation of \citet{Dutton:2017}. Figure~\ref{figure6.1} gives the total stellar surface mass density $\Sigma$ profile (grey curves) in both edge-on (solid grey) and face-on (dashed grey) perspectives. To recover this type of profile from the photometry of observed galaxies, one has to assume mass-to-light ($M/L$) ratios. The same panel also shows the corresponding profiles in both perspectives for all five stellar kinematic components. The total stellar surface mass density is peaked in the centre ($R<5$~kpc) and well fitted by an exponential at larger radii ($5<R<25$~kpc), with scalelengths $R_{\rm d}$ of $3.5\pm0.1$ and $4.1\pm0.1$~kpc in the edge-on and face-on perspectives, respectively. For comparison, the scalelength of UGC 00448 in the K-band face-on corrected profile is $3.9\pm0.2$~kpc \citep{Martinsson:2013}.
UGC 00448 shows a radial light distribution very similar to the stellar mass one of g8.26e11, with a central peak and a purely exponential profile for $R>4.3$~kpc. This observed galaxy is classified as SABc. Looking at the contributions of the five components to $\Sigma(R)$ in the edge-on perspective, the two discs extend all the way to $R=0$, but do not have purely exponential profiles. The disc peak in the centre is a behavior expected from the evolution of thin discs that conserve their angular momentum while evolving towards their minimum energy state \citep{Lynden-Bell:1972}. In the face-on perspective the two disc types have a dip in the centre. These dips are a direct consequence of the GMM algorithm, which assigns very small disc membership probabilities to particles in the innermost region. Fitting the profiles of the two discs with an exponential in the range $5<R<25$~kpc, we obtained scalelengths $R_{\rm d}$ of $3.5\pm0.1$ and $3.4\pm0.1$~kpc for the thin and thick discs in the edge-on perspective, and of $4.3\pm0.1$ and $3.9\pm0.1$~kpc in the face-on perspective, respectively. In the case of the Galaxy, the two discs have $R_{\rm d}$ in a similar range, $R_{\rm d}=2.7-3.7$~kpc \citep[e.g.][]{Piffl:2014,Sanders:2015,Binney:2015}. Interestingly, the three components supported by random motions, namely the classical and pseudo bulges and the stellar halo, show purely exponential profiles in both the face-on and edge-on perspectives. Though this is expected of structures like low mass spheroidal galaxies \citep{Graham:2003,Koda:2015,vanDokkum:2015} or pseudo bulges \citep{Fisher:2008,Gadotti:2009}, it is not generally expected of classical bulges, which are thought to have S\'{e}rsic indices $n_{\rm S}\geq2$. However, other authors, like for example \citet{Andredakis:1994} or \citet{Andredakis:1995}, have argued that bulges cover a continuous range of profiles, from purely exponential to highly centrally concentrated ones. If one were to fit the $\Sigma(R)$ of all stars, a combination of an exponential and a S\'{e}rsic profile, or even a S\'{e}rsic profile alone, would suffice. The observational bias towards large $n_{\rm S}$ for bulges can be partially explained by the expectation that the disc is purely exponential and reaches the center. This requirement for the disc forces the central component to a concave fitting function, i.e. a large $n_{\rm S}$. However, the parameters resulting from such fits can severely bias the conclusions drawn on galaxy formation \citep[e.g.][]{Mosenkov:2014,Bernardi:2014}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig8.eps} \caption{The assembly of baryons and dark matter along the main branch of the merger tree. The thick black curve and the coloured solid ones show the evolution of the dark matter halo mass normalized to its value at $z=0$ and of the normalized progenitor baryonic masses inside the virial radius along the main branch of the merger tree for the $z=0$ components of g8.26e11. The thick dashed black curve represents the evolution of the dark matter halo specific angular momentum $j_{\rm h}$ normalized to its final value.} \label{figure8.1} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig9a.eps} \includegraphics[width=0.45\textwidth]{paperI_fig9b.eps} \caption{The evolution of the normalized stellar masses inside the virial radius along the main branch of the merger tree (left) and the star formation rate histories (right) for the components of g8.26e11.
The normalization of each component (left) is done with respect to its corresponding stellar mass at $z=0$. The thick cyan curves are the corresponding parameterizations of \citet{vanDokkum:2013} for a MW mass galaxy, while the thick dashed cyan ones are their extrapolations to higher redshifts. The colored numbers after each name in the right panel give the half stellar mass formation redshifts $z_{\rm 1/2}$.} \label{figure8} \end{center} \end{figure*} \section{Formation of stellar structures} \label{evolution} In order to provide a link with the dark matter halo evolution, Figure~\ref{figure8.1} shows the dark matter mass (thick solid black) and specific angular momentum (thick dashed black) build-up along the main branch of the merger tree. Both quantities are normalized to their respective values at $z=0$. The figure also gives the normalized progenitor baryonic masses inside the virial radius along the main branch of the merger tree for the $z=0$ components of g8.26e11 (thin coloured curves). The baryonic mass assemblies look like step functions, while the dark matter halo mass growth is very smooth, with only one small jump at $z\sim1.6$, which marks the transition from a large growth rate to an ever decreasing small one. This small step represents the last important merger at the dark matter halo scale. The major merger at $z\sim1.6$ brings in almost half of the final baryonic mass of the thin disc, but less than $10\%$ of the classical bulge mass. Also, from the thick dashed black curve, this merger is responsible for a large fraction of the dark matter halo spin, $\geq60\%$. Figure~\ref{figure8.1} supports the idea that the kinematic structures themselves also get assembled in the same temporal sequence in which their mass gets transformed into stars. Basically, the classical bulge forms first, followed by the pseudo bulge, then by the thick disc and later by the thin disc. In this perspective, the stellar halo does not fit into the sequence, showing a much more syncopated assembly pattern. Counting the large jumps in the baryonic mass assembly of the stellar halo (magenta curve), one can say that almost $80\%$ of its progenitor material came in with the minor mergers at $z\sim3.5$ and $0.8$ and with the major merger at $z\sim1.6$. This behaviour, together with the significant jumps in its corresponding normalized stellar mass (magenta curve), is a clear indication that a large fraction of this $\sim80\%$ stellar halo progenitor mass came inside $r_{\rm vir}$ as already formed stars. Figure~\ref{figure8} shows the star formation rate, hereafter SFR, (right) and the normalized stellar masses inside the virial radius along the main branch of the merger tree (left) for the $z=0$ components of g8.26e11. The SFR of the whole galaxy (in grey) shows a prominent peak between $z\sim3$ and $z\sim2$, a fast decrease in intensity between $z\sim2.5$ and $z\sim0.9$, and a much slower decrease afterwards, down to a $z=0$ value of $\sim2M_{\rm\odot}yr^{\rm -1}$. Overplotted in cyan is the reconstructed SFR history for a MW mass galaxy of \citet{vanDokkum:2013}. This observationally derived SFR history matches well both the shape and the normalization of the g8.26e11 one (grey curve), apart from a slight shift of the simulation's SFR towards earlier times. Looking at the five components separately, the three dispersion-supported ones show clear high redshift, $z>2$, SFR peaks, and very little (the two bulges) to no SFR (the stellar halo) after $z\sim1$.
The thin disc grows its stellar mass at an approximately constant rate of $1.5M_{\rm\odot}yr^{\rm -1}$ only after the peaks of the bulges and of the stellar halo. The thick disc, on the other hand, shows more similarities with the dispersion dominated components than with the thin disc, although it reaches its maximum SFR later on, and the decrease at lower redshifts is more gradual. The fact that the thick disc forms most of its stars at early times is in agreement with the scenario proposed by \citet{Brook:2004}, who found that the disc stars with higher vertical velocities tend to be preferentially born during the high redshift epoch of frequent, chaotic mergers. The formation times of the stars in the five kinematic structures of g8.26e11 show a clear trend. The colored numbers in the upper right corner of the SFR panel of Figure~\ref{figure8} give the components' half stellar mass formation redshifts $z_{\rm 1/2}$. These values tell the story of a stellar mass formation sequence, with the stars of the halo forming first ($z_{\rm 1/2}=2.15$), followed by those of the classical bulge ($z_{\rm 1/2}=2.13$), the pseudo bulge ($z_{\rm 1/2}=1.79$), the thick disc ($z_{\rm 1/2}=1.35$) and finally those of the thin disc ($z_{\rm 1/2}=0.57$). The $z_{\rm 1/2}=1.46$ of the whole galaxy is in between the pseudo bulge and thick disc values. This sequence, however, does not necessarily imply that the kinematic structures themselves formed, in the sense of `got assembled', in this order. The left panel of Figure~\ref{figure8} represents the stellar mass assembly of the various components of g8.26e11. The thin disc and the two bulges show smoothly increasing curves, suggesting that the \textit{SFR occurred in-situ} for these three stellar structures. Interestingly, this formation pattern for classical bulges disfavours the merger scenario \citep[e.g.][]{Aguerri:2001} in the particular case of this simulated galaxy. The thick disc, however, shows a relatively large jump at the same redshift $z\sim1.6$ as the largest jump visible for the stellar halo. These features are a clear indication of a merger, which results in the stars of the infalling object being incorporated later on into either the thick disc or the stellar halo of the main galaxy. Globally, the stellar halo has the largest fraction of stars \textit{born ex-situ} ($45\%$), followed by the thick disc with $8\%$, and finally the thin disc with only $2\%$. Both the classical and the pseudo bulge of this galaxy formed all their stars in-situ. As for the SFR, we overplotted in the left panel of Figure~\ref{figure8} the observationally constrained stellar mass assembly for a MW mass galaxy of \citet{vanDokkum:2013} (cyan curve). This observational curve is relatively close to the grey curve representing the global evolution of the simulated galaxy. At closer inspection, the component most similar to the observations is the thick disc. Therefore, g8.26e11 not only resembles the MW at redshift $z=0$, but also has an assembly/SFR history very similar to what the observational study of \citet{vanDokkum:2013} suggests. This plot clearly shows a stellar mass assembly sequence very similar to the SFR history one, excluding the stellar halo. The stellar halo of g8.26e11 is to a great extent a product of mergers. \subsection{Evolution of spins, sizes, shapes and rotational support} \label{prop_ev} The properties of the kinematic structures of g8.26e11 described before refer to the galaxy at redshift $z=0$.
For the study of the evolution of the various properties we trace back the Lagrangian mass defined at $z=0$ as belonging either to one of the kinematic components, or to the galaxy (stars) as a whole. The aim is to quantify how the five kinematic components of g8.26e11 evolve in the properties that discriminate among them, namely: sizes, shapes, angular momenta and rotational support. All these quantities are computed in physical units. For each component $k$ at a given time $t$, we first calculate its center of mass position $\vec{\rm r}_{\rm k}{\rm(t)}$ and velocity $\vec{\rm v}_{\rm k}{\rm(t)}$, using all the baryon particles $\{i\}$ that are progenitors of the stellar particles of component $k$ at $z=0$, $\{i\}(t)\rm\Leftrightarrow k(z=0)$, and update the positions $\vec{\rm r}_{\rm i}{\rm(t)}$ and velocities $\vec{\rm v}_{\rm i}{\rm(t)}$ with respect to the center of mass reference frame. The masses $m_{\rm i}(t)$ used for the particles $\{i\}$ at any time are their corresponding stellar masses from $z=0$, $m_{\rm i}(t) = m_{\rm i(*)}(z=0)$. In order to have a more straightforward interpretation of the evolution of the dynamical quantities, the orientation of the simulation box at each time step is kept fixed, such that the $z$-axis is parallel to the total stellar angular momentum of the galaxy at redshift $z=0$. We use the term \textit{size} to refer to the 3D half mass radius $r_{\rm 50}(k;t)$, and the term \textit{shape} for the ellipticity defined as: \begin{equation} \varepsilon(k;t)=1-\frac{c(k;t)}{a(k;t)}, \end{equation} where the semiaxes $a$ and $c$ are computed from the eigenvalues $E_1\leq E_2\leq E_3$ of the inertia tensor $I_{\rm jl}(k;t)$ of the particles $\{i\}(t)$, following \citet{GonzalezGarcia:2005}. The inertia tensor is defined as: \begin{equation} I_{jl}^{(k)} = \sum_{i\in(k)} m_i(\delta_{jl}r_i^2-x_jx_l), \end{equation} with $j$ and $l$ looping over the Cartesian coordinates. The semiaxes $a>b>c$ are computed from: \begin{equation} \label{eq7} \begin{aligned} a^2+b^2+c^2 &= 5(E_1+E_2+E_3)/2\\ a^2/b^2 &= (E_3+E_2-E_1)/(E_1+E_3-E_2)\\ a^2/c^2 &= (E_3+E_2-E_1)/(E_1+E_2-E_3) \end{aligned} \end{equation} The specific angular momentum $j(k;t)$ will be referred to as the \textit{spin} of the component $k$, given that it is computed with respect to the reference frame of the particle group $\{i\}(t)$: \begin{equation} j(k;t) = \frac{| \sum_{i\in(k)} m_i\vec{r}_i(t)\times \vec{v}_i(t)|}{ \sum_{i\in(k)} m_i}. \end{equation} To estimate the amount of rotational support we use the velocity dispersion fraction $f_{\sigma}(k;t)$, defined as: \begin{equation} f_{\sigma}(k;t)=1-3\frac{\sigma_z(k;t)^2}{\sigma(k;t)^2}, \end{equation} where $\sigma_z$ is the vertical velocity dispersion and $\sigma$ the total one. For a group of particles with isotropic velocities in their own reference frame, we expect this fraction to be zero because $\sigma_z\simeq\sigma/\sqrt{3}$. If the velocities of the particles are instead confined to the $xy$-plane, $\sigma_z\simeq0$, and the fraction should be one. We use $f_{\sigma}$ to estimate the rotational support as opposed to the more conventional $v/\sigma$ \citep{Davies:1983} because the latter cannot be employed at high redshift, where the Lagrangian masses of the components extend well outside the virial radius of the progenitor halo. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig10.eps} \caption{Evolutions of the normalized spins (top), normalized sizes (centre top), shapes (centre bottom) and vertical velocity dispersion fractions (bottom) for the various components of g8.26e11.
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{paperI_fig10.eps} \caption{Evolutions of the normalized spins (top), normalized sizes (centre top), shapes (centre bottom) and vertical velocity dispersion fractions (bottom) for the various components of g8.26e11. The quantities plotted have been calculated by tracing back the Lagrangian mass of each kinematic group separately. The black lines in all panels show the corresponding evolution of the same property for the $z=0$ dark matter halo Lagrangian mass.} \label{figure7} \end{center} \end{figure} The colored curves of Figure~\ref{figure7} show the evolutions of the spins (top), sizes (centre top), shapes (centre bottom) and velocity dispersion fractions (bottom) for the Lagrangian masses of each of the five kinematic components of g8.26e11. The grey curves give the corresponding evolutions of all the stellar particles of the galaxy at $z=0$, while the black ones represent the Lagrangian mass of the $z=0$ dark matter halo. Both spins and sizes have been normalized to their respective maximum values to ease the comparison between the various galaxy components. At early epochs the spins of all components grow approximately linearly with time until they reach their maximum values, at redshifts around $3$. This early behavior reproduces well the predictions of the tidal torque theory \citep{Hoyle:1951,Peebles:1969,Doroshkevich:1970,White:1984}, which links the angular momentum acquisition in protogalaxies with the torques induced upon each other by neighbouring collapsing regions of the universe. In this framework, a collapsing region is expected to attain its maximum angular momentum when it reaches its maximum extent and its evolution decouples from the universal expansion. In the spherical collapse model, this time is called turn-around. Therefore, we can identify the beginning of the g8.26e11 halo collapse with this turn-around redshift, $z_{\rm turn}\sim3$. After this time, all components lose part of their angular momenta, with the dark matter losing only $\sim30\%$, while the two bulges lose more than $95\%$. Among the five kinematic components, the thin disc loses the least, $\sim60\%$. A qualitatively similar specific angular momentum evolution has been shown by \citet{Dominguez:2015} for the dynamical disc and spheroid components of two simulated galaxies. As we already showed in Section~\ref{z0prop_mw}, the total circular velocity $V_{\rm c}$ of the simulated galaxy is very similar to that of the MW. Also, its rotational velocities $V_{\phi}$ for the various stellar components are in very good agreement with MW observations in the solar neighbourhood, while the disc(s) scalelengths are in agreement with both MW and external galaxy measurements. In the light of these results, the loss of angular momentum we found is a genuine property of the baryonic collapse and assembly, and not an effect of the so-called `angular momentum problem' \citep{Navarro:2000}, which affected earlier generations of simulations. This problem has largely been solved by improving the numerical schemes \citep[e.g.][]{Serna:2003}, increasing the resolution \citep[e.g.][]{Governato:2004} and implementing feedback processes \citep[e.g.][]{Okamoto:2005}. The merger at $z\sim1.6$ is easy to identify in both the size evolution plot (centre top of Figure~\ref{figure7}) and the shape one (centre bottom). This epoch marks the halo virialization, $z_{\rm vir}\sim1.3$, as exemplified by the dark matter $r_{\rm 50}$ reaching its equilibrium value, and by the sharp dips in the evolution of the thin disc's and classical and pseudo bulges' shapes, $\varepsilon$. The collapse of the various kinematic components follows the same sequence as their SFR histories, with the classical bulge being first and the thin disc last.
Same as before, the stellar halo does not follow the evolution of the other kinematic components, instead resembling more closely the dark matter, i.e. the wiggles in the size evolutions of the stellar halo and the dark matter halo are correlated. All five kinematic components of g8.26e11 lose angular momentum faster between $z_{\rm turn}\sim3$ and $z_{\rm vir}\sim1.3$ than later on. With the exception of the thin disc, the other four also form a large fraction of their stars during the same epoch. These stars are thus formed in a highly turbulent environment, i.e. the collapsing dark matter halo(es), where the gravitational potential varies on short timescales. This suggests that the dominant physical process responsible for the stellar loss of angular momentum during this time is violent relaxation \citep{LyndenBell:1967}. Asymmetries of the gravitational potential generated by mergers also provide an efficient mechanism to transfer the gas angular momentum. Gas can also lose angular momentum through dynamical friction \citep{Chandrasekhar:1943,Leeuwin:1997}, a physical process whose effects in simulations depend both on the resolution and on the numerical algorithms employed \citep{Semelin:2002}. One possible explanation for the small loss of angular momentum of the thin disc component is that its progenitor gas was part of the hot halo prior to its arrival on the equatorial plane \citep{Athanassoula:2016}. \citet{Peschken:2017} find in prepared merger simulations that this seems to be the case, the angular momentum of the discs increasing with time at the expense of the angular momentum of the gaseous halo \citep{Eggen:1962}. While the progenitor gas of the thin kinematic disc of g8.26e11 might have passed through the hot halo phase, our results suggest that the main cause for its angular momentum conservation is simply the fact that a large part of this material accretes onto the galaxy at times when the dark matter halo is already virialized, and as such there is no physical mechanism able to alter it considerably. In the bottom two panels of Figure~\ref{figure7} we estimate how disc-like the five components, the whole galaxy and the dark matter halo are, both in shape ($\varepsilon$ close to $1$) and in rotational support ($f_{\sigma}$ close to $1$). At high redshifts all components have high values of $\varepsilon$ because their material is part of the filamentary large scale structure. As the dark matter gets assembled, its shape evolves towards spherical symmetry ($\varepsilon\sim0$) and its rotational support settles to zero, as expected. For all baryonic components shown, from $z_{\rm turn}\sim3$ to $z_{\rm vir}\sim1.3$ the shapes evolve towards spherical symmetry, while the rotational support increases. After $z_{\rm vir}$, different behaviors emerge. The material of the classical bulge loses all its rotational support, ending up as a velocity dispersion dominated system with a small ellipticity $\varepsilon\sim0.2$. The pseudo bulge and the thick disc show almost no evolution between $z_{\rm vir}\sim1.3$ and $z=0$, the final ellipticity of the former being $\sim0.45$ and of the latter $\sim0.65$. The thin disc on the other hand increases its $\varepsilon$ up to $\sim0.85$ and its $f_{\sigma}$ up to one. From these two physical properties at $z=0$, it clearly qualifies for the nickname of `thin disc'.
\begin{figure*} \includegraphics[width=0.98\textwidth]{paperI_fig11.eps} \caption{The evolution of the spatial distribution of the progenitors (gas + stars) of the five stellar kinematic components of g8.26e11. The dashed circles represent the virial radii at each redshift shown in the top left. The numbers $f$ shown in the bottom right corners of each panel give the corresponding fraction of the total progenitor mass already in stellar form at that particular redshift. All panels are centered on the center of mass of the progenitor dark matter halo at the corresponding redshift. The projection is the same across all redshifts, set as the $yz$-plane of $z=0$. The physical scale is $462$~kpc/side.} \label{g826_xy_ev} \end{figure*} \subsection{A visual perspective on the assembly of stellar structures} \label{ev_vis} In order to have a visual impression of the different evolutionary paths of the g8.26e11 kinematic components, Figure~\ref{g826_xy_ev} shows a redshift sequence (left to right) of the spatial distribution of the baryonic particles comprising each of them (top to bottom). The numbers $f$ in the bottom right corner of each panel give the total stellar mass fractions of the particular component shown at each corresponding redshift. The normalization of $f$ is done with respect to the total baryonic mass of each component separately. For example, at $z=3$ only $f=1\%$ of the thin disc progenitor material had already been converted to stars. At $z=3$ (left column) the classical bulge material (second row) occupies the densest regions of the large scale filamentary structure feeding the dark matter halo, while the thin disc material (bottom row) is the most diffuse one. As the evolution proceeds, the thin disc is the last to collapse, leaving aside the stellar halo, which takes a long time to assemble its stars from the tidally disrupted small satellites. Considering these projections together with the evolution of the stellar and baryon fractions in Figure~\ref{figure8}, it appears that both the thin and the thick discs form partially from material that comes in as gas, as part of the large scale baryon filamentary structure which contracts and collapses. On the other hand, the two bulges and the halo seem to accrete most of their mass through mergers, understood as through coalescence of already formed structures. However, from Section~\ref{evolution} we know that the star formation occurs mostly at high redshift and completely in-situ for the two bulges. This means that their progenitor material from the infalling smaller galaxy at $z=2$ sinks to the centre of the main galaxy in gaseous form and is subsequently transformed into stars. The stars of this infalling galaxy end up as part of the two stellar discs at redshift $z=0$, especially the thick one, and of the stellar halo. \section{Summary and conclusions} \label{conclusions} We use one simulated Milky Way analogue (g8.26e11) from the NIHAO suite of galaxies \citep{Wang:2015} to show how Gaussian Mixture Models in the stellar kinematic space of normalized specific angular momentum -- binding energy ($j_{\rm z}/j_{\rm c}$, $j_{\rm p}/j_{\rm c}$, $e/|e|_{\rm max}$) can disentangle a large variety of galactic stellar structures.
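As a minimal illustration of this clustering step (a sketch under assumed inputs, not the {\tt gsf} implementation itself), the decomposition can be reproduced with a standard Gaussian Mixture Model library once the three kinematic quantities have been computed per stellar particle: \begin{verbatim}
# Sketch of the clustering step: GMM in the 3D kinematic space
# (j_z/j_c, j_p/j_c, e/|e|_max). Input arrays are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def kinematic_decomposition(jz_jc, jp_jc, e_norm, nk=5, seed=0):
    X = np.column_stack([jz_jc, jp_jc, e_norm])  # one row per star particle
    gmm = GaussianMixture(n_components=nk, covariance_type="full",
                          random_state=seed).fit(X)
    return gmm.predict(X)  # cluster label of each star particle
\end{verbatim} The labels returned this way still have to be mapped onto physical structures (thin/thick disc, bulges, stellar halo), e.g. by inspecting the mean $j_{\rm z}/j_{\rm c}$ and binding energy of each Gaussian.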
\subsection{Galactic structure finder} The analysis pipeline {\tt galactic structure finder} ({\tt gsf}) can be applied to any simulated galaxy in an equilibrium state to disentangle the fine structure of its stellar distribution (thin/thick discs, classical/pseudo bulges, stellar haloes, spheroids, inner discs/bars). The code calculates the N-body gravitational potential for a halo in isolation in order to correctly compute the stellar circularities. These circularities $j_z/j_c$, together with the normalized stellar angular momenta in the equatorial plane of the galaxy $j_p/j_c$ and with the normalized stellar binding energies $e/|e|_{\rm max}$, are used as input space for the Gaussian Mixture Models clustering method. The only input parameter needed to run {\tt gsf} is the number of Gaussians $nk$. The $nk$ parameter depends on the problem one wants to study and on the resolution of the simulation. We used {\tt gsf} on a sample of 25 high resolution galaxies ($\sim10^{6}$ particles per halo) ranging from dwarfs to objects a few times more massive than the Milky Way \citep{Wang:2015} to study the properties of stellar structures like thin/thick discs, stellar haloes, etc., as well as their formation history. In this simulated galaxy sample, the low mass objects have only two dynamically distinct components: a disc and a spheroid. Driven by the quest to disentangle stellar haloes, we found the more massive galaxies to host up to five distinct components. In the present study we exemplify how {\tt gsf} works on a simulated Milky Way analogue. For this study, the so-called optimal number of stellar components has been chosen by visual inspection of the surface mass density and line-of-sight velocity maps. Automating this part is a work in progress, with the aim of applying {\tt gsf} to large samples of simulated galaxies. \subsection{The multiple components of a MW analogue} At $z=0$, the example galaxy g8.26e11 has two distinct discs (thin and thick), two distinct bulges (classical and pseudo), and a stellar halo. The stellar mass is approximately distributed as follows: $21\%$ in the thin disc, $33\%$ in the thick one, $25\%$ in the classical bulge, $14.5\%$ in the pseudo one, and $6.5\%$ in the stellar halo. Therefore, this galaxy has a dynamical disc-to-total ratio of 0.54. By comparison, the Milky Way is thought to have a disc-to-total ratio of 0.70 \citep[e.g.][]{Bland-Hawthorn:2016}. Adopting as vantage point a position similar to the Sun's in the MW, g8.26e11 is remarkably similar to the Galaxy. The total circular velocity for $R>5$~kpc of this simulated galaxy passes through the observationally derived data points for the MW of \citet{Kafle:2012}, \citet{Reid:2014} and \citet{LopezCorredoira:2014} (Figure~\ref{figure5}). The thin and thick discs in the simulations have rotational velocities $V_{\phi}$ at the Sun's position ($R_0=8.2$~kpc) of $218$ and $166$~km~s$^{\rm -1}$, respectively, values which are in very good agreement with MW observations \citep[e.g.][]{Haywood:2013}. The $V_{\phi}$ of the local stellar halo \citep{Bond:2010} is also well recovered ($V_{\phi}\simeq48$~km~s$^{\rm -1}$). At $R_0$, the vertical velocity dispersions of the thin and thick discs are $29$ and $73$~km~s$^{\rm -1}$, respectively, values close to the upper limits found for the MW \citep{Binney:2012,Robin:2017}.
Seen as an extragalactic object, the kinematic thin disc has a flat rotation profile $v_{\rm los}\simeq174\pm5$~km~s$^{\rm -1}$ and a small vertical velocity dispersion, approximately constant with radius, of $\sigma_z\simeq 27\pm6$~km~s$^{\rm -1}$ (Figure~\ref{figure6}). Its flattening measured from the eigenvalues of the inertia tensor is $\sim0.85$, where one corresponds to a razor thin disc and zero to a perfect spheroid. The thick disc, on the other hand, has a declining rotation curve, lagging $\sim50$~km~s$^{\rm -1}$ behind the thin disc at all radii, and a significantly larger velocity dispersion ($\sim80$~km~s$^{\rm -1}$ at an edge-on projected radius of $\sim2.5$~kpc) that declines $\sim$linearly with radius. The flattening of the thick disc is $\sim0.65$. The galaxy as a whole has a large central vertical velocity dispersion ($\sigma_z\simeq 120$~km~s$^{\rm -1}$) due to the presence of the classical and pseudo bulges. The simulated galaxy also nicely exemplifies the differences in the various velocities used to judge simulated and observed galaxies. The simulated galaxy in this study has $V_c>V_{\phi}>v_{\rm los}$ at all radii. The stellar component whose $V_{\phi}$ and $v_{\rm los}$ are the closest to the total circular velocity is the thin disc. Though the cold gas $V_{\phi}$ of this galaxy traces $V_c$ very well outside of the bulge region, its $v_{\rm los}$ is significantly lower. These findings strongly suggest that circular velocities of external galaxies, constructed from observed velocities corrected for inclination effects, can be significantly underestimated. Contrary to expectations, both types of discs show stellar surface mass density profiles more centrally concentrated than pure exponentials (Figure~\ref{figure6.1}). On the other hand, the surface mass density profiles of the classical and pseudo bulges, and of the stellar halo, are exponential. The flattenings of the classical and pseudo bulges are closer to spherical symmetry, $\sim0.20$ and $\sim0.45$, respectively. Basically, the kinematic stellar structure we call \textit{classical bulge} has all the expected properties, except the mass density profile. It is compact, almost spherically symmetric, has no net rotation and is made of old stars. This finding raises interesting questions regarding the nature of classical bulges as derived from galaxy photometry, which predicts high S\'{e}rsic indices ($n_{\rm S}>2$) for this type of stellar structure. The star formation history and the stellar mass assembly history of this galaxy (Figure~\ref{figure8}) are similar to the observationally derived ones for a Milky Way mass galaxy \citep{vanDokkum:2013}. Breaking down the total SFR into the contributions from the five components, we find that the dispersion dominated structures (the two bulges and the stellar halo) formed most of their stars at high redshift ($z>1$), the peaks in their SFRs occurring at $z\sim3$. The thick disc has a very extended SFR with a quite flat peak ($\sim5$~M$_{\rm\odot}$yr$^{\rm -1}$) between $z\sim2.5$ and $z\sim1.5$, while the thin disc forms its stars at a constant rate of $\simeq1.5$~M$_{\rm\odot}$yr$^{\rm -1}$ between $z\simeq2.5$ and $z=0$. Globally, this galaxy formed half of its stars by $z_{\rm 1/2}=1.46$. The half stellar mass formation redshifts, $z_{\rm 1/2}$, for the five structures form a sequence, with $z_{\rm 1/2}=2.15$, 2.13, 1.79, 1.35, and 0.57 for the stellar halo, classical bulge, pseudo bulge, thick disc and thin disc, respectively.
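The half stellar mass formation redshift used throughout can be computed directly from the stellar particle data; the following is a minimal sketch with illustrative array names, not the analysis code itself: \begin{verbatim}
# Sketch: half stellar mass formation redshift z_1/2 of a component,
# i.e. the redshift by which half of its z=0 stellar mass had formed.
# z_form, m_star: per-particle formation redshifts and stellar masses.
import numpy as np

def z_half(z_form, m_star):
    order = np.argsort(z_form)[::-1]       # earliest (highest z) first
    cum = np.cumsum(m_star[order])
    k = np.searchsorted(cum, 0.5 * cum[-1])
    return z_form[order][k]
\end{verbatim}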
One of the major benefits of our method is that it allows us to study the formation histories of these various structures by tracing back in time the Lagrangian mass of each one of them separately. Actually, this should be a relatively straightforward analysis in any particle-based simulation code. In this manner, for example, we can quantify precisely the loss of angular momentum between the epoch of dark matter halo turn-around and $z=0$ for each stellar kinematic structure. For this particular galaxy, the thin disc material loses the smallest fraction of its maximum angular momentum ($\sim60$ per cent), while the classical bulge loses the most ($\sim95$ per cent). Similarly, the $z=0$ dark matter halo only loses $\sim30$ per cent of its maximum angular momentum. By plotting the evolution of the various parameters, such as half mass radius, shape, rotational support and/or specific angular momentum, for each stellar kinematic component as well as for the dark matter halo, it is possible to identify the important epochs in the formation of the galaxy (Figure~\ref{figure7}). In this way we found the turn-around redshift for this galaxy to be $z_{\rm turn}\sim3$, while the virialization of its dark matter halo ends by $z_{\rm vir}\sim1.3$. For all stellar components as well as for the dark matter halo the biggest loss of angular momentum occurs between these two epochs. At high redshifts all five stellar kinematic components display a filamentary spatial structure, which vanishes first for the dispersion dominated structures, and last for the rotation dominated ones (Figure~\ref{g826_xy_ev}). The two bulges of this galaxy formed all their stars in-situ, while the thick disc accretes $8\%$ of its stars and forms the rest in-situ. The thin disc also has a small fraction of accreted stars, $2\%$. The thick stellar disc of this simulated galaxy \textit{forms thick} \citep{Brook:2004}. The stellar halo has a different assembly history than the other four components, with almost half of its stellar mass formed in small satellites that subsequently get incorporated into the progenitor galaxy and are tidally destroyed in this process. A significant fraction ($55\%$) of this galaxy's stellar halo is, however, formed in-situ \citep[e.g.][]{Cooper:2015}. \subsection{Ongoing and future {\tt gsf} applications} Finally, we would like to anticipate that in an accompanying paper \citep{Obreja:2018} we extend the analysis presented here to a larger set of 25 NIHAO galaxies, in a first attempt to constrain the formation patterns of stellar substructures like thin/thick discs, classical/pseudo bulges, stellar haloes, inner stellar discs and stellar spheroids. Recent years have seen big advances in the field of high resolution galaxy simulations, resulting in ever more realistic galaxies. However, the various groups active in this field use simulation codes which employ different implementations of the sub-grid physics. Therefore, it is very important to understand what the detailed differences between these codes are in terms of the fine structure of galaxies, so that by comparing with observational data the models of galaxy formation can be improved. In this perspective, the new generation of zoom-in cosmological simulations with very high resolutions \citep[e.g.][]{Grand:2017,Hopkins:2017,Buck:2018a} are an ideal laboratory to study the emergence of the galactic stellar structures.
For these reasons, we think that our analysis pipeline would open the path to a better understanding of stellar structure formation if applied in a consistent way to the wealth of current and future high-resolution zoom-in simulations. To foster such studies, we thus make {\tt gsf} publicly available at \url{https://github.com/aobr/gsf}. \section*{Acknowledgments} We would like to thank the anonymous referee for a constructive report, which helped improve the quality of this manuscript. We would also like to thank Glenn van de Ven, Fabrizio Arrigoni Battaia, Rosa Dom\'{\i}nguez Tenreiro and Chris Brook for useful conversations. All figures in this work have been made with {\tt matplotlib} \citep{Hunter:2007}. The {\tt gsf} code also uses the Python libraries {\tt numpy} \citep{Walt:2011} and {\tt scipy} \citep{Jones:2001}. {\tt F2PY} \citep{Peterson:2009} has been used to compile the Fortran module for Python. This research was carried out on the High Performance Computing resources at New York University Abu Dhabi; on the \textsc{theo} cluster of the Max-Planck-Institut f\"{u}r Astronomie and on the \textsc{hydra} clusters at the Rechenzentrum in Garching. We greatly appreciate the contributions of these computing allocations. AO and BM have been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- MO 2979/1-1. TB acknowledges support from the Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject A1) of the DFG. \bibliographystyle{mnras}
\section{Introduction} The reproducing kernel Hilbert space ${\mathcal F}_1$ of entire functions with reproducing kernel $e^{z\overline{w}}$ is associated with the names of Bargmann, Segal and Fock, and will be called in this paper the Fock space (more precisely, it is the symmetric Fock space associated with $\mathbb C$, see \cite{MR0157250}). It plays an important role in stochastic processes, mathematical physics and quantum mechanics; for recent work on the topic see e.g. \cite{DHK,Hall1}. The space ${\mathcal F}_1$ is isometrically included in the Lebesgue space of the plane with weight $dA(z):=\frac{1}{\pi}e^{-|z|^2}dxdy$, and a key feature of ${\mathcal F}_1$ is that the adjoint of the operator of multiplication by the complex variable is the operator of differentiation. It is of interest to look at various generalizations of ${\mathcal F}_1$. One approach consists in slightly modifying the weight function, see e.g. the works \cite{MR728694,rosenblum_hermite,siso}, and another line is to change the kernel (that is, the norms of the monomials) in an appropriate way, for instance replacing the exponential by the Mittag-Leffler function in the case of the grey noise theory, see e.g. \cite{MR1124240}. Then too the weight is changed, but not always in an explicit way. Here we consider the family $({\mathcal F}_m)_{m=1}^\infty$ of reproducing kernel Hilbert spaces with reproducing kernel \begin{align} k_m(z,\omega)= \sum_{n=0}^\infty \frac{z^n\overline{\omega}^n}{(n!)^m},\quad m=1,2,\ldots \label{kmmm} \end{align} The space ${\mathcal F}_m$ can then be easily described as the space of all Taylor series of the form $f(z)=\sum_{n=0}^\infty f_nz^n$ for which $$\sum_{n=0}^\infty |f_n|^2(n!)^m<\infty.$$ For $m=1$, the space is equal to the classical Fock space, and the case $m=2$ was defined and studied in \cite{daf1}.\\\\ The main results are as follows: The first is a geometric characterization of the spaces ${\mathcal F}_m$ in terms of a weight; the second main result is the characterization of ${\mathcal F}_m$ in terms of the adjoint of the operator of multiplication by $z$, associated to the Stirling numbers of the second kind, see (\ref{eq:23May18a}). This generalizes the well-known case $m=1$, and opens the way for future applications such as interpolation and sampling theorems in the setting of ${\mathcal F}_m$; see for instance the papers \cite{MR2672228,MR3558232} for the case of $\mathcal F_1$. The third main result is a topological algebra structure on the dual of the space $\cap_{m=1}^\infty\mathcal F_m$, realized as an inductive limit. This allows us to work locally in a Hilbert space rather than in the non-metrizable space $\cup_{m\in\mathbb N}\mathcal F_{2-m}$.\\ The outline of the paper is as follows. In Section \ref{2} we review some facts on the Mellin transform. In Section \ref{3}, using the Mellin transform, we give a geometric characterization of the spaces $\mathcal F_m$ for $m\in\mathbb N$. A characterization of these spaces in terms of the adjoint of the operator of multiplication by $z$, using the Stirling numbers of the second kind, is given in Section \ref{4}. A related Bargmann transform is defined in Section \ref{5}. In Section \ref{6} we define a Gelfand triple in which we embed the Fock space. We observe that the intersection $\bigcap_{m=1}^\infty{\mathcal F}_m$ is a nuclear space and its dual is an algebra of the type introduced in \cite{MR3404695}.
\section{Preliminaries} \setcounter{equation}{0} \label{2} Let $(a,b)$ be an open interval of the real line, and let $f$ and $g$ be such that both $f(x)x^{c-1}$ and $g(x)x^{c-1}$ are summable on $[0,\infty)$ for $c\in(a,b)$. The Mellin transform of $f$, denoted by ${\mathcal M}(f)$, is given by \[ {\mathcal M}(f)(c):=\int_0^\infty x^{c-1}f(x)dx, \quad c\in(a,b). \] In particular, the Mellin transform of the function $f_1(x)=e^{-x}$ is the Gamma function: $${\mathcal M}(f_1)(c)= \int_0^\infty x^{c-1}e^{-x}dx=\Gamma(c), \quad c>0.$$ The Mellin convolution of $f$ and $g$ is defined by $$(f*g)(x):=\int_0^\infty f\left(\frac{x}{t}\right)g(t)\frac{dt}{t} =\int_0^\infty f(t) g\left(\frac{x}{t}\right)\frac{dt}{t}, \quad x>0.$$ An important relation between the Mellin transform and the Mellin convolution, see e.g. \cite[Theorem 3]{MR1468369}, is given by \begin{align*} {\mathcal M}(f*g)(c)=({\mathcal M}(f)(c))({\mathcal M}(g)(c)), \quad c\in(a,b). \end{align*} \section{Geometric description of ${\mathcal F}_m$} \setcounter{equation}{0} \label{3} Recall that the Fock space ${\mathcal F}_1$ consists of those entire functions $f$ for which $$\iint_{{\mathbb C}}|f(z)|^2 e^{-|z|^2}dA(z)<\infty,$$ and is the reproducing kernel Hilbert space with reproducing kernel $e^{z\overline{w}}$. In this section we give for $m=2,3,\ldots$ a geometric characterization of the space $${\mathcal F}_m=\left\{ f(z)=\sum_{n=0}^\infty a_n z^n \text{ is entire with } \sum_{n=0}^\infty |a_n|^2 (n!)^m<\infty\right\}$$ which is the reproducing kernel Hilbert space with reproducing kernel \eqref{kmmm}, when equipped with the inner product \begin{align*} \langle f,g\rangle_{{\mathcal F}_m}:= \sum_{n=0}^\infty f_n\overline{g_n}(n!)^m, \,\text{where } f(z)=\sum_{n=0}^\infty f_nz^n, \,\, g(z)=\sum_{n=0}^\infty g_nz^n, \end{align*} for every $f,g\in{\mathcal F}_m$. First, we use the properties of the Mellin transform to build the kernels $K_m(x)$, which are generalizations of the modified Bessel function of the second kind, also called the Macdonald function. Let $K_1(x)=e^{-x}$ and for every integer $m>1$ define the function \begin{align} \label{eq:22May18a} K_m(x):=(K_1*\cdots*K_1)(x), \quad x\in{\mathbb R}_+, \end{align} that is, the Mellin convolution of $m$ copies of $K_1$. \begin{lemma} Let $m$ be a positive integer. The following properties hold: \begin{itemize} \item[(1)] For $m>1$, the kernel $K_m$ has the integral representations \begin{align} \label{eq:7Apr18a} K_m(x)=\int_0^\infty\cdots\int_0^\infty \frac{e^{-\sum_{i=1}^{m-1} x_i-\frac{x}{\prod_{i=1}^{m-1} x_i}}} {\prod_{i=1}^{m-1}x_i} dx_1\cdots dx_{m-1} \end{align} and \begin{align} \label{eq:7Apr18b} K_m(x)=\int_{{\mathbb R}}\cdots\int_{{\mathbb R}} e^{-\sqrt[m]{x}(\sum_{i=1}^{m-1}e^{t_i} +e^{-\sum_{i=1}^{m-1}t_i})}dt_1\cdots dt_{m-1}. \end{align} \item[(2)] The function $K_m$ is monotone decreasing on $(0,\infty)$. \item[(3)] The Mellin transform of $K_m$ is given by \[ {\mathcal M}(K_m)(x)=\Gamma(x)^m,\quad x>0, \] and so \begin{align} \label{eq:9Apr18a} \int_0^\infty x^nK_m(x)dx =(n!)^m,\quad n\in{\mathbb N}. \end{align} \end{itemize} \end{lemma} \textbf{Proof.} Part 1 is proved by induction on $m$: if $m=2$, we get $$K_2(x)=\int_0^\infty e^{-x/t}e^{-t} \frac{dt}{t}=\int_0^\infty \frac{e^{-x_1- \frac{x}{x_1}}}{x_1}dx_1.$$ Suppose formula (\ref{eq:7Apr18a}) holds for $m$.
Then \begin{align*} K_{m+1}(x)&=(K_m*K_1)(x) =\int_0^\infty K_m\left(\frac{x}{x_m} \right)e^{-x_m} \frac{dx_m}{x_m} \\&=\int_0^\infty\cdots\int_0^\infty \frac{e^{-\sum_{i=1}^{m-1} x_i-\frac{\frac{x}{x_m}} {\prod_{i=1}^{m-1} x_i}}} {\prod_{i=1}^{m-1}x_i} \frac{e^{-x_m}}{x_m} dx_1\cdots dx_m \\&=\int_0^\infty\cdots\int_0^\infty \frac{e^{-\sum_{i=1}^m x_i- \frac{x}{\prod_{i=1}^m x_i}}} {\prod_{i=1}^m x_i} dx_1\cdots dx_m, \end{align*} i.e., (\ref{eq:7Apr18a}) holds for $m+1$ and hence for every $m>1$. Next, we use (\ref{eq:7Apr18a}) and the change of variables $s_i=\ln(x_i)$, $1\le i\le m-1,$ to obtain \begin{align*} K_m(x)=\int_{{\mathbb R}}\cdots\int_{{\mathbb R}} e^{-\sum_{i=1}^{m-1}e^{s_i}- \frac{x}{e^{\sum_{i=1}^{m-1}s_i}}} ds_1\cdots ds_{m-1} \end{align*} and by another change of variables $t_i=s_i-\ln(\sqrt[m]{x})$, $1\le i\le m-1$, we get $$K_m(x)=\int_{{\mathbb R}}\cdots \int_{{\mathbb R}} e^{-\sqrt[m]{x} (\sum_{i=1}^{m-1}e^{t_i}+ e^{-\sum_{i=1}^{m-1}t_i})} dt_1\cdots dt_{m-1}.$$ From the representation (\ref{eq:7Apr18a}) it is easily seen that $K_m(x)$ is a monotone decreasing function. Finally, the Mellin transform of $K_m$ is given by \begin{align*} {\mathcal M}(K_m)(c)= {\mathcal M}(f_1)(c)\cdots{\mathcal M}(f_1)(c) =(\Gamma(c))^m,\quad c>0, \end{align*} therefore $$\int_0^\infty x^{c-1}K_m(x)dx =(\Gamma(c))^m,\quad c>0.$$ For $c=n+1$, we have $$\int_0^\infty x^nK_m(x)dx =(\Gamma(n+1))^m=(n!)^m. \quad\blacksquare$$ In the special case $m=2$, we get that \[ K_2(x)=\int_{\mathbb R} e^{-2\sqrt{x}\cosh(t)}dt,\quad x\in{\mathbb R}_+ \] is, up to a change of variables, the modified Bessel function of the second kind, see \cite{daf1}. For an arbitrary $m>2$, the kernel $K_m(x)$ can be expressed in terms of the Meijer $G$-functions; see \cite[Chapter 5]{MR0058756} for the latter. We now show how the generalized Fock spaces ${\mathcal F}_m$ are obtained from the kernels $K_m(x)$ in a natural way. \begin{theorem} For any integer $m\ge1$, the space ${\mathcal F}_m$ is equal to the space of all entire functions $f:{\mathbb C}\rightarrow{\mathbb C}$ satisfying the condition \begin{align} \label{eq:7Apr18c} \iint_{{\mathbb C}}|f(z)|^2 K_m(|z|^2)dA(z)<\infty. \end{align} Moreover, the inner product of ${\mathcal F}_m$ is given by \begin{align*} \frac{1}{\pi}\iint_{{\mathbb C}} f(z)\overline{g(z)}K_m(|z|^2)dA(z) =\sum_{n=0}^\infty f_n\overline{g_n}(n!)^m,\quad f,g\in{\mathcal F}_m, \end{align*} and ${\mathcal F}_m$ has the orthonormal basis $\left\{\frac{z^n}{(n!)^{m/2}} \right\}_{n=0}^\infty$. \end{theorem} \textbf{Proof.} A straightforward computation shows that \begin{align*} \iint_{{\mathbb C}} z^n\overline{z}^kK_m(|z|^2)dA(z) &=\int_0^\infty \int_0^{2\pi}r^ne^{in\theta} r^ke^{-ik\theta} K_m(r^2)rd\theta dr \\&=\int_0^{2\pi} e^{i(n-k)\theta}d\theta \int_0^\infty r^{n+k+1}K_m(r^2)dr \\&=2\pi\delta_{n,k} \int_0^\infty r^{2n+1}K_m(r^2)dr \\&=2\pi\delta_{n,k} \int_0^\infty u^nK_m(u) \frac{du}{2}\\ &=\pi (n!)^m\delta_{n,k}. \end{align*} Let $f=\sum_{n=0}^\infty f_nz^n$ and $g=\sum_{n=0}^\infty g_nz^n$ be entire functions. Then \begin{align*} \frac{1}{\pi} \iint_{{\mathbb C}}f(z)\overline{g(z)} K_m(|z|^2)dA(z)&= \frac{1}{\pi}\sum_{n,k=0}^\infty f_n\overline{g_k}\iint_{{\mathbb C}}z^n \overline{z}^kK_m(|z|^2)dA(z) \\&=\sum_{n,k=0}^\infty f_n\overline{g_k}\delta_{n,k}(n!)^m =\sum_{n=0}^\infty f_n\overline{g_n}(n!)^m, \end{align*} which implies that $f\in{\mathcal F}_m$ if and only if condition (\ref{eq:7Apr18c}) holds, i.e., \[ \sum_{n=0}^\infty|f_n|^2(n!)^m =\frac{1}{\pi}\iint_{{\mathbb C}} |f(z)|^2K_m(|z|^2)dA(z)<\infty, \] as wanted.
Furthermore, the inner product in ${\mathcal F}_m$ is then given by $$\langle f,g\rangle_{{\mathcal F}_m}= \sum_{n=0}^\infty f_n\overline{g_n}(n!)^m =\frac{1}{\pi}\iint_{{\mathbb C}}f(z) \overline{g(z)}K_m(|z|^2)dA(z). \quad\blacksquare$$ In the case $m=2$, similar yet different spaces, related to other families of orthogonal polynomials, appear in \cite[Lemma 4]{Karp1} and \cite{Karp2}. \begin{remark} Let $0<\epsilon<1$. Then $\frac{\epsilon}{n!}<1$ for every $n\ge0$ and hence \begin{align*} \sum_{m=1}^\infty \epsilon^mk_m(z,\omega) &=\sum_{m=1}^\infty \epsilon^m \left(\sum_{n=0}^\infty \frac{z^n\overline{\omega}^n}{(n!)^m}\right) =\sum_{n=0}^\infty \left( \sum_{m=1}^\infty\left( \frac{\epsilon}{n!} \right)^m\right)z^n\overline{\omega}^n \\&=\sum_{n=0}^\infty \frac{\epsilon}{n!} \left( \frac{1}{1-\frac{\epsilon}{n!}} \right) z^n\overline{\omega}^n =\epsilon\cdot\sum_{n=0}^\infty \frac{z^n\overline{\omega}^n}{n!-\epsilon} \end{align*} and \begin{align*} \sum_{m=1}^\infty \frac{\epsilon^m}{m!} k_m(z,\omega)=\sum_{n=0}^\infty \left( e^{\frac{\epsilon}{n!}} -1\right)z^n\overline{\omega}^n. \end{align*} \end{remark} \section{Operator theoretic description of ${\mathcal F}_m$} \setcounter{equation}{0} \label{4} Denote by ${\mathfrak a}$ the operator of multiplication by $z$ and by ${\mathfrak b}$ the operator of differentiation with respect to $z$, i.e., ${\mathfrak a}=M_z$ and ${\mathfrak b}=\frac{\partial}{\partial z}$. Both ${\mathfrak a}$ and ${\mathfrak b}$ are defined on polynomials and more generally on entire functions. They satisfy the familiar commutation relation $$[{\mathfrak b},{\mathfrak a}]={\mathfrak b}{\mathfrak a}-{\mathfrak a}{\mathfrak b}=I.$$ In the Fock space ${\mathcal F}_1$, ${\mathfrak a}$ and ${\mathfrak b}$ are unbounded operators, and satisfy $${\mathfrak a}^*={\mathfrak b} \quad\text{ and }\quad {\mathfrak b}^*={\mathfrak a}.$$ This relation is very important, as the Fock space is the only space of entire functions for which ${\mathfrak a}$ and ${\mathfrak b}$ are adjoint to each other, see \cite{MR0157250}. We generalize this result by presenting a relation between the operators ${\mathfrak a}$ and ${\mathfrak b}$ in the space ${\mathcal F}_m$. That gives us another characterization of the space ${\mathcal F}_m$. We first introduce the Stirling numbers of the second kind $S(k,n)$, which appear naturally in the theory of boson normal ordering. \begin{definition}[Stirling numbers of the second kind] For $k\in{\mathbb N}_0$ and $n\in{\mathbb N}_0$, the numbers $S(k,n)$ are defined by the recurrence formula $$S(k,n)= nS(k-1,n)+S(k-1,n-1),\quad k,n\ge1,$$ with the initial values $S(k,0)=\delta_{k,0}$ and $S(k,n)=0$ if $k<n$. \end{definition} It is well known, see \cite{MR2862989,MR2479303}, that $$({\mathfrak a}{\mathfrak b})^k=\sum_{n=1}^{k} S(k,n){\mathfrak a}^{n}{\mathfrak b}^{n}, \quad k\ge 1,$$ and this operator is called the Mellin derivative operator of order $k$ (with $c=0$), see \cite[Lemma 9]{MR1468369}.
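As a quick illustration (a self-contained sketch; the function name is ours), the recurrence above is straightforward to implement, and the normal ordering identity can be checked on monomials, where $({\mathfrak a}{\mathfrak b})^k z^j=j^kz^j$ while ${\mathfrak a}^n{\mathfrak b}^n z^j=j(j-1)\cdots(j-n+1)\,z^j$: \begin{verbatim}
# Stirling numbers of the second kind via the recurrence above.
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def S(k, n):
    if n == 0:
        return 1 if k == 0 else 0   # S(k,0) = delta_{k,0}
    if k < n:
        return 0
    return n * S(k - 1, n) + S(k - 1, n - 1)

assert [S(4, n) for n in range(1, 5)] == [1, 7, 6, 1]

# check of (ab)^k z^j = sum_n S(k,n) a^n b^n z^j applied to a monomial:
# j^k == sum_n S(k,n) * j*(j-1)*...*(j-n+1)
j, k = 7, 4
assert j**k == sum(S(k, n) * prod(range(j - n + 1, j + 1))
                   for n in range(1, k + 1))
\end{verbatim}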
\begin{theorem} \label{thm:11Apr18a} Let $m\ge1$ be an integer. The operators ${\mathfrak a}$ and $({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$ are closed densely defined operators on the space ${\mathcal F}_m$ and their domains coincide, $$Dom({\mathfrak a})=Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})=D,$$ where \begin{align} \label{eq:10Apr18a} D=\left\{f(z)=\sum_{n=0}^\infty f_nz^n: \sum_{n=0}^\infty|f_n|^2(n!)^mn^m<\infty\right\} \subseteq{\mathcal F}_m. \end{align} Moreover, the adjoint operator of ${\mathfrak a}$ in ${\mathcal F}_m$ is given by $${\mathfrak a}^*=({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b},\quad\text{with}\quad Dom({\mathfrak a}^*)=Dom\left(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}\right)=D.$$ Furthermore, let ${\mathcal H}$ be a Hilbert space of entire functions in which the polynomials are dense, and let $m\in{\mathbb N}$. If the adjoint operator of ${\mathfrak a}$ in ${\mathcal H}$ is equal to the operator $({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$, i.e., if \begin{align} \label{eq:23May18a} (M_z)^*= \frac{\partial}{\partial z} \left[\sum_{n=1}^{m-1}S(m-1,n)z^n \frac{\partial^n}{\partial z^n}\right], \end{align} then ${\mathcal H}={\mathcal F}_m$ and there exists $c>0$ for which $$\langle f,g\rangle_{{\mathcal H}}= c\cdot\langle f,g\rangle_{{\mathcal F}_m}, \quad \forall f,g\in{\mathcal H}.$$ \end{theorem} \textbf{Proof.} It is easy to see that ${\mathfrak a}$ and $({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$ are closed, densely defined operators on ${\mathcal F}_m$. If $f(z)=\sum_{n=0}^\infty f_nz^n\in{\mathcal F}_m$, then \begin{align*} f\in Dom({\mathfrak a})&\iff {\mathfrak a} f=\sum_{n=0}^\infty f_nz^{n+1}\in{\mathcal F}_m \iff \sum_{n=0}^\infty |f_n|^2((n+1)!)^m<\infty \end{align*} and \begin{align*} f\in Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})&\iff ({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b} f=\sum_{n=1}^\infty f_nn^mz^{n-1}\in{\mathcal F}_m \\&\iff \sum_{n=1}^\infty |f_n|^2n^{2m}((n-1)!)^m= \sum_{n=1}^\infty |f_n|^2(n!)^mn^m<\infty. \end{align*} Therefore, $Dom({\mathfrak a})=Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})=D$ as in (\ref{eq:10Apr18a}). Next, if $$g(z)=\sum_{n=0}^\infty g_nz^n\in Dom({\mathfrak a}^*),$$ there exists $$h(z)=\sum_{n=0}^\infty h_nz^n\in{\mathcal F}_m$$ such that $\langle {\mathfrak a} f,g\rangle_{{\mathcal F}_m} =\langle f,h\rangle_{{\mathcal F}_m}$ for every $f\in Dom({\mathfrak a})$. In particular, for $f(z)=z^n\, (n\ge0)$, we get \begin{align*} \overline{g_{n+1}}((n+1)!)^m= \langle z^{n+1},g\rangle_{{\mathcal F}_m} =\langle z^n,h\rangle_{{\mathcal F}_m}=\overline{h_n}(n!)^m \end{align*} and hence $h_n=g_{n+1}(n+1)^m$ for every $n\ge0$. Thus, \begin{align*} h\in{\mathcal F}_m&\Longrightarrow \sum_{n=0}^\infty |h_n|^2(n!)^m=\sum_{n=0}^\infty |g_{n+1}|^2(n+1)^{2m}(n!)^m<\infty \\&\Longrightarrow \sum_{n=1}^\infty |g_n|^2 (n!)^mn^m<\infty \Longrightarrow g\in D, \end{align*} hence $Dom({\mathfrak a}^*)\subseteq D$. Finally, if $g\in D=Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})$, then \begin{align*} \langle f,({\mathfrak b}{\mathfrak a})^{m-1} {\mathfrak b} g\rangle&= \left\langle \sum_{n=0}^\infty f_nz^n, \sum_{n=0}^\infty (n+1)^mg_{n+1}z^n\right\rangle =\sum_{n=0}^\infty f_n(n+1)^m\overline{g_{n+1}}(n!)^m \\&=\sum_{n=0}^\infty f_n\overline{g_{n+1}}((n+1)!)^m= \left\langle \sum_{n=0}^\infty f_nz^{n+1},\sum_{n=0}^\infty g_nz^n\right\rangle=\langle {\mathfrak a} f,g\rangle, \end{align*} for every $f\in D=Dom({\mathfrak a})$, which proves that $g\in Dom({\mathfrak a}^*)$. Therefore, $D\subseteq Dom({\mathfrak a}^*)$ and hence $Dom({\mathfrak a}^*)=D$. By the previous calculation we also know that ${\mathfrak a}^*=({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$. Now suppose that ${\mathcal H}$ is a Hilbert space which contains all polynomials, and such that $${\mathfrak a}^*=({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$$ in ${\mathcal H}$.
Then for every $f\in Dom({\mathfrak a})\cap{\mathcal H}$ and $g\in Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})\cap{\mathcal H}$, \begin{align} \label{eq:10Apr18b} \langle {\mathfrak a} f,g \rangle_{{\mathcal H}}=\langle f,({\mathfrak b}{\mathfrak a})^{m-1} {\mathfrak b} g\rangle_{{\mathcal H}} \end{align} and as both $Dom({\mathfrak a})$ and $Dom(({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b})$ contain all polynomials, we apply (\ref{eq:10Apr18b}) for the choice $f(z)=z^l,g(z)=z^k\, (k,l\ge0)$, thus \begin{align*} \langle z^{l+1},z^k\rangle_{{\mathcal H}}&= \langle {\mathfrak a} f,g\rangle_{{\mathcal H}}=\langle f,({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b} g \rangle_{{\mathcal H}} \\&=\langle z^l,k^mz^{k-1}\rangle_{{\mathcal H}}= k^m\langle z^l,z^{k-1} \rangle_{{\mathcal H}},\quad k,l\ge0. \end{align*} We now prove by induction that for every $k\ge0$ and $l\ge k$, $$\langle z^{l+1},z^k\rangle_{{\mathcal H}}=0:$$ \begin{itemize} \item If $k=0$, we know that $\langle z^{l+1},1\rangle_{{\mathcal H}}=0$ for every $l\ge0$. \item Assume that for some $k\ge0$ we have $\langle z^{l+1},z^k\rangle_{{\mathcal H}}=0$ for every $l\ge k$. Therefore, $\langle z^{l+2},z^{k+1}\rangle_{{\mathcal H}}=(k+1)^m \langle z^{l+1},z^k\rangle_{{\mathcal H}}=0$ for every $l\ge k$, which means that $$\langle z^{l+1},z^{k+1}\rangle_{{\mathcal H}}=0$$ for every $l\ge k+1$, as wanted. \end{itemize} Thus the family $\{z^k\}_{k=0}^\infty$ is orthogonal in ${\mathcal H}$ and one can easily see that \[ \langle z^k,z^k\rangle_{{\mathcal H}} =k^m\langle z^{k-1},z^{k-1}\rangle_{{\mathcal H}},\quad \forall k\ge 1, \] which implies that $$\langle z^k,z^k\rangle_{{\mathcal H}} =(k!)^m\langle 1,1\rangle_{{\mathcal H}}.$$ To conclude, if $f(z)=\sum_{k=0}^\infty f_kz^k$ and $g(z)=\sum_{k=0}^\infty g_kz^k\in{\mathcal H}$, then $$\langle f,g\rangle_{{\mathcal H}}= \sum_{k,l=0}^\infty f_k\overline{g_l} \langle z^k,z^l\rangle_{{\mathcal H}} =\sum_{k=0}^\infty f_k\overline{g_k} (k!)^m\langle1,1\rangle_{{\mathcal H}},$$ i.e., the inner product in ${\mathcal H}$ is equal to the one in ${\mathcal F}_m$, up to a positive multiplicative constant $c=\langle1,1\rangle_{{\mathcal H}}$. As ${\mathcal H}$ is a Hilbert space which contains all the polynomials, it follows that $${\mathcal H}=\left\{ f=\sum_{n=0}^\infty f_nz^n:\langle f,f\rangle_{{\mathcal H}}= c\sum_{n=0}^\infty |f_n|^2(n!)^m<\infty\right\}={\mathcal F}_m. \quad\blacksquare$$ In the previous theorem we proved that ${\mathcal F}_m$ is the only Hilbert space which contains all polynomials and in which the adjoint operator of ${\mathfrak a}=M_z$ is equal to the operator $$({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}=\frac{\partial}{\partial z} \left[\sum_{n=1}^{m-1}S(m-1,n)z^n \frac{\partial^n}{\partial z^n}\right].$$ One can see that we have the relations $${\mathfrak b}^n{\mathfrak a}={\mathfrak a}{\mathfrak b}^n+n{\mathfrak b}^{n-1} \quad\text{and}\quad {\mathfrak b}{\mathfrak a}^n={\mathfrak a}^n{\mathfrak b}+n{\mathfrak a}^{n-1}$$ for every $n\in{\mathbb N}$, and in particular the operators ${\mathfrak a}$ and ${\mathfrak a}^*$ do not satisfy the commutation relation. However we have the following result. \begin{proposition} The commutator of ${\mathfrak a}$ and ${\mathfrak a}^*=({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}$ is equal to \begin{align} \label{eq:11Apr18b} [{\mathfrak a}^*,{\mathfrak a}]= I+\sum_{n=1}^{m-1}(n+1)S(m,n+1){\mathfrak a}^n{\mathfrak b}^n. 
\end{align} \end{proposition} \textbf{Proof.} As $${\mathfrak a}^*=({\mathfrak b}{\mathfrak a})^{m-1}{\mathfrak b}= {\mathfrak b}\sum_{n=1}^{m-1} S(m-1,n){\mathfrak a}^{n}{\mathfrak b}^{n},$$ we have \begin{align*} [{\mathfrak a}^*,{\mathfrak a}]&= {\mathfrak b} \sum_{n=1}^{m-1} S(m-1,n){\mathfrak a}^n{\mathfrak b}^n{\mathfrak a}- {\mathfrak a}{\mathfrak b}\sum_{n=1}^{m-1} S(m-1,n){\mathfrak a}^n{\mathfrak b}^n \\&={\mathfrak b} \sum_{n=1}^{m-1}S(m-1,n) {\mathfrak a}^n({\mathfrak a}{\mathfrak b}^n+n{\mathfrak b}^{n-1}) -{\mathfrak a}{\mathfrak b}\sum_{n=1}^{m-1} S(m-1,n){\mathfrak a}^n{\mathfrak b}^n \\&=({\mathfrak b}{\mathfrak a}-{\mathfrak a}{\mathfrak b}) \sum_{n=1}^{m-1}S(m-1,n) {\mathfrak a}^n{\mathfrak b}^n+{\mathfrak b}\sum_{n=1}^{m-1} nS(m-1,n){\mathfrak a}^n{\mathfrak b}^{n-1} \\&=\sum_{n=1}^{m-1}S(m-1,n) {\mathfrak a}^n{\mathfrak b}^n+\sum_{n=1}^{m-1}nS(m-1,n) ({\mathfrak a}^n{\mathfrak b}+n{\mathfrak a}^{n-1}){\mathfrak b}^{n-1} \\&=\sum_{n=1}^{m-1}(n+1) S(m-1,n){\mathfrak a}^n{\mathfrak b}^n+ \sum_{n=1}^{m-1}n^2S(m-1,n) {\mathfrak a}^{n-1}{\mathfrak b}^{n-1} \end{align*} and as $S(m-1,1)=S(m-1,m-1)=S(m,m)=1$, we have \begin{align*} [{\mathfrak a}^*,{\mathfrak a}]&=I+m{\mathfrak a}^{m-1}{\mathfrak b}^{m-1} +\sum_{n=1}^{m-2}(n+1) [S(m-1,n)+(n+1)S(m-1,n+1)]{\mathfrak a}^n{\mathfrak b}^n \\&=I+m{\mathfrak a}^{m-1}{\mathfrak b}^{m-1}+ \sum_{n=1}^{m-2}(n+1)S(m,n+1){\mathfrak a}^n{\mathfrak b}^n \\&=I+\sum_{n=1}^{m-1}(n+1) S(m,n+1){\mathfrak a}^n{\mathfrak b}^n.\quad\blacksquare \end{align*} Consequently, a straightforward calculation shows that \begin{align*} \|{\mathfrak a} f\|^2_{{\mathcal F}_m} =\|{\mathfrak a}^* f\|^2_{{\mathcal F}_m} +\|f\|^2_{{\mathcal F}_m}+\sum_{k=1}^{m-1}\binom{m}{k} \left[\sum_{n=0}^\infty|f_n|^2(n!)^mn^k\right] \end{align*} for every $f\in D$, which guarantees that all the terms in the identity are finite. It is tempting to write the last identity (with some abuse of notation) as \begin{align*} \|{\mathfrak a} f\|^2_{{\mathcal F}_m} =\|{\mathfrak a}^* f\|^2_{{\mathcal F}_m} +\|f\|^2_{{\mathcal F}_m}+ \sum_{k=1}^{m-1}\binom{m}{k} \langle f,({\mathfrak a}{\mathfrak b})^kf\rangle_{{\mathcal F}_m}, \end{align*} however $f\in D$ does not necessarily imply that $f\in Dom(({\mathfrak a}{\mathfrak b})^k)$. Finally, we have the following relations between the operators ${\mathfrak a},{\mathfrak b}$ and the family of spaces $({\mathcal F}_m)_{m\in{\mathbb Z}}$. For every $n\ge1$, \begin{itemize} \item the Fock space ${\mathcal F}_1$ satisfies \begin{align*} {\mathfrak a}^n({\mathcal F}_1)\subseteq{\mathcal F}_0 \quad\,\,\,\,\,\,\text{and}\quad \,\,\,{\mathfrak b}^n({\mathcal F}_1)\subseteq{\mathcal F}_0, \end{align*} \item if $m>1$, then \begin{align*} {\mathfrak a}^n({\mathcal F}_m)\subseteq{\mathcal F}_{m-1} \quad\text{and}\quad \,\,\,{\mathfrak b}^n({\mathcal F}_m)\subseteq{\mathcal F}_m, \end{align*} \item if $m<1$, then \begin{align*} {\mathfrak a}^n({\mathcal F}_m)\subseteq{\mathcal F}_m \quad\quad\,\,\,\,\,\,\text{and}\quad \,\,\,{\mathfrak b}^n({\mathcal F}_m)\subseteq{\mathcal F}_{m-1}. \end{align*} \end{itemize} \begin{remark} Unlike the situation in the Fock space, where the adjoint of ${\mathfrak b}$ is equal to ${\mathfrak a}$, in the space ${\mathcal F}_m$ the adjoint operator of ${\mathfrak b}$ is equal to $${\mathfrak b}^*\left(\sum_{k=0}^\infty f_kz^k\right) =\sum_{k=0}^\infty \frac{f_k}{(k+1)^{m-1}}z^{k+1},$$ thus ${\mathfrak b}^*\ne{\mathfrak a}$ if $m>1$.
\end{remark} \section{Generalized Bargmann Transform} \setcounter{equation}{0} \label{5} Recall that the normalized Hermite functions are defined by $$\eta_n(t)=\frac{1}{\pi^{1/4}2^{n/2} \sqrt{n!}} e^{\frac{t^2}{2}}\left( e^{-t^2} \right)^{(n)}, \quad n\in{\mathbb N}_0.$$ The family $\{\eta_n\}_{n=0}^\infty$ is an orthonormal basis of the Lebesgue space ${\mathbf L}_2({\mathbb R},dt).$ Furthermore, see \cite[p. 436]{MR1502747}, the $\eta_n$ are uniformly bounded by some constant, i.e., \begin{align*} \exists C>0 \text{ such that $|\eta_n(t)|\le C$, for every $n\in{\mathbb N}$ and $t\in{\mathbb R}$.} \end{align*} Similarly to the case of the symmetric Fock space ${\mathcal F}_1$ associated with ${\mathbb C}$, see e.g. \cite{MR0157250}, there is a fourth characterization of the space ${\mathcal F}_m$, given by a mapping from ${\mathbf L}_2({\mathbb R},dt)$ into ${\mathcal F}_m$, presented in the following proposition. \begin{proposition} Let $m\ge2$. For every $t\in{\mathbb R}$ and $z\in{\mathbb C}$ define the function \begin{align} \label{eq:2Dec15a} h_m(z,t):=\sum_{n=0}^\infty \frac{z^n}{(n!)^{m/2}} \eta_n(t). \end{align} Then, \begin{itemize} \item[1.] for every $t\in{\mathbb R}$, the function $h_m(\cdot,t)$ is entire. \item[2.] $f\in {\mathcal F}_m$ if and only if there exists $g\in {\mathbf L}_2({\mathbb R},dt)$, such that \begin{align} f(z)=\int_{{\mathbb R}} h_m(z,t) g(t)dt=\langle g, \overline{h_m(z,\cdot)}\rangle_{{\mathbf L}_2({\mathbb R},dt)}. \end{align} \end{itemize} \end{proposition} \textbf{Proof.} Since the functions $\eta_n(t)$ are all bounded by $C$, the sum in (\ref{eq:2Dec15a}) converges and so $h_m(\cdot,t)$ is entire. Next, let $f(z)=\langle g, \overline{h_m(z,\cdot)}\rangle_{{\mathbf L}_2({\mathbb R},dt)}$ for some $g\in {\mathbf L}_2({\mathbb R},dt)$. Then, \begin{align*} f(z)=\int_{{\mathbb R}} \left(\sum_{n=0}^\infty \frac{z^n}{(n!)^{m/2}} \eta_n(t)g(t)\right)dt= \sum_{n=0}^\infty \frac{z^n}{(n!)^{m/2}} \int_{{\mathbb R}}\eta_n(t)g(t)dt. \end{align*} As the system $\{\eta_n\}_{n=0}^\infty$ forms an orthonormal basis of ${\mathbf L}_2({\mathbb R},dt)$, we have Parseval's equality $$\sum_{n=0}^\infty \left| \int_{{\mathbb R}}\eta_n(t) g(t)dt\right|^2= \int_{\mathbb R}|g(t)|^2dt$$ and hence $f\in {\mathcal F}_m$, since $$\sum_{n=0}^\infty \left| \frac{1}{(n!)^{m/2}} \int_{{\mathbb R}}\eta_n(t)g(t)dt \right|^2(n!)^m= \|g\|_{{\mathbf L}_2({\mathbb R},dt)}^2<\infty.$$ Finally, let $f\in {\mathcal F}_m$.
It can be written as $f(z)=\sum_{n=0}^\infty a_nz^n$ with $\sum_{n=0}^\infty |a_n|^2(n!)^m<\infty$. Setting $$g(t)=\sum_{n=0}^\infty (n!)^{m/2}a_n\eta_n(t),$$ we observe that $$\|g\|_{{\mathbf L}_2({\mathbb R},dt)}^2= \sum_{n=0}^\infty |a_n|^2(n!)^m<\infty$$ and finally that $$\langle g,\overline{h_m(z,\cdot)}\rangle_{{\mathbf L}_2({\mathbb R},dt)} =\sum_{n=0}^\infty \frac{z^n}{(n!)^{m/2}}(n!)^{m/2}a_n=f(z). \quad\blacksquare$$ This characterization of ${\mathcal F}_m$ motivates us to consider an associated Bargmann transform. For any $g\in {\mathbf L}_2({\mathbb R},dt)$ we define the Bargmann transform of $g$ to be \begin{align*} \mathfrak{B_m}(g):= \sum_{n=0}^\infty \frac{z^n}{(n!)^{m/2}} \int_{{\mathbb R}}\eta_n(t)g(t)dt= \langle g, \overline{h_m(z,\cdot)}\rangle_{{\mathbf L}_2({\mathbb R},dt)}. \end{align*} The mapping $\mathfrak{B_m}:{\mathbf L}_2({\mathbb R},dt)\rightarrow{\mathcal F}_m$ is unitary; it satisfies $$\mathfrak{B_m}(\eta_n)(z)= \frac{z^n}{(n!)^{m/2}} \quad\text{and}\quad\|g\|_{{\mathbf L}_2({\mathbb R},dt)} =\|\mathfrak{B_m}(g)\|_{{\mathcal F}_m}$$ for every $g\in {\mathbf L}_2({\mathbb R},dt)$. \begin{remark} In the case $m=1$, $\mathfrak{B_1}$ is the well known Bargmann transform and the function $h_1(z,t)$ can be written in closed form as $$h_1(z,t)=e^{2tz-t^2-z^2/2}.$$ When $m>1$, finding an explicit closed formula for the function $h_m(z,t)$ might involve new generalizations of the exponential function. \end{remark}
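Numerically, the Taylor coefficients of $\mathfrak{B_m}(g)$ can be approximated directly from the definition. The following sketch is purely illustrative; it uses the standard sign convention $\eta_n(t)=H_n(t)e^{-t^2/2}/(\pi^{1/4}2^{n/2}\sqrt{n!})$, which differs from the Rodrigues form above only by a factor $(-1)^n$, and checks that $\mathfrak{B_m}(\eta_3)(z)=z^3/(3!)^{m/2}$: \begin{verbatim}
# Sketch: Taylor coefficients of B_m(g)(z) = sum_n <g,eta_n>/(n!)^{m/2} z^n.
import numpy as np
from scipy.special import eval_hermite, gammaln
from scipy.integrate import quad

def eta(n, t):
    # orthonormal Hermite function, with logs to avoid overflow
    log_c = -0.25 * np.log(np.pi) - 0.5 * n * np.log(2.0) - 0.5 * gammaln(n + 1)
    return np.exp(log_c - t**2 / 2.0) * eval_hermite(n, t)

def bargmann_coeffs(g, m, N=8):
    return np.array([quad(lambda t: eta(n, t) * g(t), -np.inf, np.inf)[0]
                     * np.exp(-0.5 * m * gammaln(n + 1))   # 1/(n!)^{m/2}
                     for n in range(N)])

c = bargmann_coeffs(lambda t: eta(3, t), m=2)
assert abs(c[3] - 1.0 / 6.0) < 1e-6           # <eta_3,eta_3>/(3!)^1 = 1/6
assert max(abs(c[n]) for n in range(8) if n != 3) < 1e-6   # orthogonality
\end{verbatim}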
\section{A Gelfand Triple associated to the family $({\mathcal F}_m)_{m\in\mathbb Z}$} \setcounter{equation}{0} \label{6} The reproducing kernel Hilbert spaces $\{{\mathcal F}_m\}_{m=1}^\infty$, starting from the Fock space ${\mathcal F}_1$, form a decreasing sequence, i.e., $${\mathcal F}_1\supset {\mathcal F}_2\supset... \supset {\mathcal F}_{m}\supset {\mathcal F}_{m+1}\supset....$$ so it makes sense, in the spirit of the theory of Gelfand triples (as developed for instance in the books \cite{MR0435834,GS2_english}), to consider the intersection space \[ \begin{split} {\mathcal F}&=\bigcap_{m=1}^\infty {\mathcal F}_m\\ &= \left\{ f=\sum_{n=0}^\infty a_n z^n \text{ such that } \|f\|_m^2=\sum_{n=0}^\infty |a_n|^2(n!)^m<\infty, \ \forall m\in{\mathbb N}\right\}, \end{split} \] which consists of entire functions, and its dual. We consider the dual space of each ${\mathcal F}_m$, with respect to the Fock space ${\mathcal F}_1$. \begin{lemma} \label{lem:5Mar16a} For every $m\ge1$, the dual space of ${\mathcal F}_m$ with respect to ${\mathcal F}_1$ is $${\mathcal F}_{2-m}:=({\mathcal F}_m)^\prime =\left\{ b=(b_n)_{n\in{\mathbb N}_0}: \|b\|_{2-m}^2 :=\sum_{n=0}^\infty |b_n|^2 (n!)^{2-m}<\infty\right\}.$$ \end{lemma} Therefore, we have the Gelfand triple \begin{align} \label{eq:6Mar16b} \bigcap_{m=1}^\infty {\mathcal F}_m\subset {\mathcal F}_1\subset \bigcup_{m=1}^\infty {\mathcal F}_{2-m}. \end{align} The inclusion map from ${\mathcal F}_{m+1}$ into ${\mathcal F}_{m}$ is nuclear, and it follows that $\cap_{m=1}^\infty{\mathcal F}_{m}$ is a Fr\'echet nuclear space, and in particular a perfect space in the terminology of Gelfand and Shilov; see \cite{GS2_english}. The dual space $\cup_{m=1}^\infty {\mathcal F}_{2-m}$ has two different sets of properties, topological and algebraic; the first follows from the theory of perfect spaces, and the algebra structure comes from the form of the weights. The fact that the product is jointly continuous comes from the theory of reflexive Fr\'echet spaces; see \cite[IV.26, Theorem 2]{MR83k:46003}.\\ We begin with the topological properties. Although not metrizable, the space $\cup_{m=1}^\infty\mathcal F_{2-m}$ behaves well with respect to sequences and compactness: \begin{enumerate} \item A sequence converges in the strong (or weak) topology of the dual if and only if its elements are in one of the spaces ${\mathcal F}_{2-m}$ and it converges in the topology of the latter; see \cite[p. 56]{GS2_english}. \item A subset of $\cup_{m=1}^\infty{\mathcal F}_{2-m}$ is compact in the strong topology of the dual if and only if it is included in one of the spaces ${\mathcal F}_{2-m}$ and compact in the topology of the latter; see \cite[p. 58]{GS2_english}. \end{enumerate} These properties allow us to reduce the study of continuous functions from a compact metric space into $\cup_{m=1}^\infty{\mathcal F}_{2-m}$ to the Hilbert space setting and to sequences. \\ The algebra structure is given by the convolution product (or Cauchy product) defined as follows: \begin{equation} \label{convol} a*b:=\left(\sum_{k=0}^n a_kb_{n-k}\right)_{n\in{\mathbb N}_0}, \end{equation} where $a=(a_n)_{n\in\mathbb N_0}$ and $b=(b_n)_{n\in\mathbb N_0}$ belong to the dual. \begin{proposition} The space \begin{align*} {\mathcal F}^\prime:= \bigcup_{m=1}^\infty {\mathcal F}_{2-m} =\left\{ b=(b_n)_{n\in{\mathbb N}_0}: \exists m\ge1,\ \|b\|_{2-m}^2= \sum_{n=0}^\infty \frac{|b_n|^2}{(n!)^{m-2}}<\infty \right\} \end{align*} is a topological algebra; the convolution product is jointly continuous with respect to the two variables, and satisfies \begin{equation} \label{vage1234} \| a*b \|_{2-p}\le A(q-p) \|a\|_{2-q}\|b\|_{2-p}, \end{equation} for every $a\in {\mathcal F}_{2-q}$ and $b\in {\mathcal F}_{2-p}$, where $p,q\in{\mathbb N}$ are such that $q\ge p+1$. \end{proposition} The weights $\alpha_n=\sqrt{n!}$ satisfy \[ \alpha_{m+n}=\sqrt{(m+n)!} \ge\sqrt{m!n!}=\alpha_m\alpha_n \] for every $m,n\in{\mathbb N}_0$, and $\sum_{n=0}^\infty (\alpha_n)^{-2}=\sum_{n=0}^\infty \frac{1}{n!}=e<\infty$. Using these properties of the weights, the statements in the proposition then follow from \cite{MR3404695} or, in a maybe more explicit way, from \cite[Exercise 5.4.8 p. 260-261]{CAPB_2}, with \[ A(q-p) =\left(\sum_{n=0}^\infty \alpha_n^{2(p-q)}\right)^{1/2} =\left(\sum_{n=0}^\infty \left(\frac{1}{n!} \right)^{q-p}\right)^{1/2}<\infty \] for $q-p\ge 1$.\\ We note that \eqref{vage1234} is called the V\"age inequality, and originates with the work of V\"age; see \cite{MR2387368,vage96}.\\ Consider now an ${\mathcal F}_1$-valued function, say $f$, defined on a compact set (for instance $[0,1]$). When viewing $f$ as $\cup_{m=1}^\infty{\mathcal F}_{2-m}$-valued, one can define differentiability and compute explicitly the derivative, which will take values in one of the spaces $\mathcal F_{2-m}$ rather than in the Fock space itself. Using the V\"age inequality one can also define, as Riemann integrals, stochastic-type integrals of the form \[ \int_0^1 f(t)*g(t)dt, \] where $f$ and $g$ are continuous from $[0,1]$ into $\mathcal F^\prime$. The image of $[0,1]$ under the function $f* g$ is then compact and the integral is computed in one of the spaces $\mathcal F_{2-m}$. See \cite{aal2,aal3,MR3231624, MR3615375} for similar arguments and applications, the latter in the setting of quaternionic stochastic processes.
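For a concrete feel of the V\"age inequality, the following sketch (with randomly chosen finitely supported sequences; the norm convention is the one of Lemma~\ref{lem:5Mar16a}) evaluates both sides of \eqref{vage1234} numerically: \begin{verbatim}
# Sketch: convolution (Cauchy) product and the Vage inequality, q >= p+1.
import numpy as np
from math import factorial

def norm_2m(b, m):
    # ||b||_{2-m}^2 = sum_n |b_n|^2 (n!)^{2-m}
    return np.sqrt(sum(abs(x)**2 * float(factorial(n))**(2 - m)
                       for n, x in enumerate(b)))

rng = np.random.default_rng(0)
a, b = rng.normal(size=20), rng.normal(size=20)
p, q = 3, 5
A = np.sqrt(sum((1.0 / factorial(n))**(q - p) for n in range(40)))
assert norm_2m(np.convolve(a, b), p) <= A * norm_2m(a, q) * norm_2m(b, p)
\end{verbatim}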
Finally, we refer to \cite{ACSS} for the study of the quaternionic Fock space and to \cite{diki2} for some of its generalizations in the quaternionic setting. \\\\ \textbf{Acknowledgment.} Daniel Alpay thanks the Foster G. and Mary McGaw Professorship in Mathematical Sciences, which supported this research. The authors would like to thank Prof. Karol Penson for his helpful remarks related to the kernels (\ref{eq:22May18a}) and the Meijer functions, and Prof. Dmitrii Karp for pointing out the references \cite{Karp2,Karp1}. \bibliographystyle{plain}
{ "timestamp": "2018-05-31T02:04:17", "yymm": "1804", "arxiv_id": "1804.05254", "language": "en", "url": "https://arxiv.org/abs/1804.05254" }
\subsection{Algorithms for Computing and Updating Scores} In this section we give the details of our algorithms for computing initial scores (Algorithm~\ref{algCS}) and for updating these score values once a blocker vertex $c$ is selected and added to the blocker set $Q$ (Algorithms~\ref{algAnc}-\ref{algAU}). \begin{algorithm}[H] \caption{Compute initial scores for a node $v$ in $T_x$} \begin{algorithmic}[1] \State {\bf Initialization [Local Step]:} {\bf if} $h_x (v) = h$ {\bf then} $score_x (v) \leftarrow 1$ {\bf else} $score_x (v) \leftarrow 0$ \label{algCS:init} \State {\bf In round $r > 0$:} \State {\bf send:} {\bf if} $r = h - h_x (v) + 1$ {\bf then} send $\langle score_x (v) \rangle$ to $parent_x (v)$ \label{algCS:send} \vspace{0.05in} \State {\bf receive [lines~\ref{algCS:newreceiveStart}-\ref{algCS:receiveEnd}]:} \If{$r = h - h_x(v)$} \label{algCS:newreceiveStart} \State let $\mathcal{I}$ be the set of incoming messages to $v$ \For{{\bf each} $M \in \mathcal{I}$} \label{algCS:receiveStart} \State let $M = \langle score^- \rangle$ and let the sender be $w$ \State {\bf if} $w$ is a child of $v$ in $T_x$ {\bf then} $score_x (v) \leftarrow score_x (v) + score^-$ \EndFor \label{algCS:receiveEnd} \EndIf \end{algorithmic} \label{algCS} \end{algorithm} \vspace{-0.1in} Algorithm~\ref{algCS} gives the procedure for computing the initial scores for a node $v$ in a tree $T_x$. In Step~\ref{algCS:init} each leaf node at depth $h$ initializes its score for $T_x$ to $1$ and all other nodes set their initial score to $0$. In a general round $r > 0$, nodes with $h_x (v) = h+1-r$ send out their scores to their parents, and nodes with $h_x(v) = h-r$ receive all the scores from their children in $T_x$ and set their score equal to the sum of these received scores (Steps~\ref{algCS:newreceiveStart}-\ref{algCS:receiveEnd}). \begin{lemma} \label{lemma:alg2Runtime} Algorithm~\ref{algCS} computes the initial scores for every node $v$ in $T_x$ in $O(h)$ rounds. \end{lemma} \begin{proof} The leaves at depth $h$ correctly initialize their score to $1$ locally in Step~\ref{algCS:init}. Since we only consider paths of length $h$ from the root $x$ to a leaf, it is readily seen that a node $v$ that is $h_x (v)$ hops away from $x$ in $T_x$ will receive the scores of its children in round $h - h_x(v)$ and thus will have the correct $score_x (v)$ value to send in Step~\ref{algCS:send}. \end{proof} For every $x \in V$, every node $v \in T_x$ runs this algorithm to compute its score in $T_x$. Since every run of Algorithm~\ref{algCS} for a given $x$ takes $O(h)$ rounds, all the initial scores can be computed in $O(n\cdot h)$ rounds by running the $n$ instances in sequence.
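To make the bottom-up aggregation concrete, the following is a sequential sketch (our illustration; the data layout and names are not part of the distributed protocol) of what Algorithm~\ref{algCS} computes for a single tree $T_x$:
\begin{verbatim}
from collections import defaultdict

def initial_scores(parent, depth, h):
    # parent: child -> parent in T_x; depth: node -> h_x(node).
    # Returns score_x(v): number of depth-h leaves in v's subtree.
    score = defaultdict(int)
    for v, d in depth.items():
        if d == h:
            score[v] = 1              # leaves at depth h start at 1
    for d in range(h, 0, -1):         # mirrors rounds r = 1, ..., h
        for v in (u for u in parent if depth[u] == d):
            score[parent[v]] += score[v]   # v sends its score upward
    return dict(score)

# Tiny example: x -> {a, b}, a -> {l1, l2}, with h = 2.
parent = {"a": "x", "b": "x", "l1": "a", "l2": "a"}
depth = {"x": 0, "a": 1, "b": 1, "l1": 2, "l2": 2}
print(initial_scores(parent, depth, 2))   # score_x(a) = score_x(x) = 2
\end{verbatim}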
\vspace{-0.05in} \begin{algorithm}[H] \caption{{\sc Ancestors $(v,x)$:} Algorithm for computing ancestors of node $v$ in $T_x$ at round $r$ } \begin{algorithmic}[1] \State {\bf Initialization [Local Step]:} $Anc_x (v) \leftarrow \emptyset$ \State {\bf In round $r > 0$:} \State {\bf send [lines~\ref{algAnc:sendStart}-\ref{algAnc:sendEnd}]:} \If{$r = 1$} \label{algAnc:sendStart} \State send $\langle v \rangle$ to $v$'s children in $T_x$ \label{algAnc:sendRound1} \Else \State let $\langle y \rangle$ be the message $v$ received in round $r-1$ \label{algAnc:sendy1} \State send $\langle y \rangle$ to $v$'s children in $T_x$ \label{algAnc:sendy2} \EndIf \label{algAnc:sendEnd} \State {\bf receive [lines~\ref{algAnc:receiveStart}-\ref{algAnc:receiveEnd}]:} \State let $\langle y \rangle$ be the message $v$ received in this round \label{algAnc:receiveStart} \State add $y$ to $Anc_x (v)$ \label{algAnc:receiveEnd} \end{algorithmic} \label{algAnc} \end{algorithm} Algorithm~\ref{algAnc} describes our algorithm for precomputing the ancestors of each node $v$ in a tree $T_x$ of height $h$. In round 1, every node $v$ sends its ID to its children in $T_x$ as described in Step~\ref{algAnc:sendRound1}. In a general round $r$, $v$ sends the ID of the ancestor that it received in round $r-1$ (Steps~\ref{algAnc:sendy1}-\ref{algAnc:sendy2}). If a node $v$ receives the ID of an ancestor $y$, it immediately adds it to its ancestor set $Anc_x (v)$ (Steps~\ref{algAnc:receiveStart}-\ref{algAnc:receiveEnd}). \begin{lemma} \label{lemma:computeAnc} For a tree $T_x$ of height $h$ rooted at vertex $x$, Algorithm~\ref{algAnc} correctly computes the set of ancestors for all nodes $v$ in $T_x$ in $O(h)$ rounds. \end{lemma} \begin{proof} We show by induction on the round number $r$ that every node $v$ correctly collects all its ancestors in $T_x$ in the set $Anc_x (v)$; specifically, by round $r$ every node $v$ has added all its ancestors that are at most $r$ hops away from $v$. If $r = 1$, then $v$'s parent in $T_x$ (say $y$) sent out its ID to $v$ in Step~\ref{algAnc:sendRound1} and $v$ added it to $Anc_x (v)$ in Step~\ref{algAnc:receiveEnd}. Assume that every node has already added all its ancestors that are at most $r-1$ hops away from it. Let $u$ be the ancestor of $v$ in $T_x$ that is exactly $r$ hops away from $v$, and let $y$ be $v$'s parent in $T_x$. By induction, $u \in Anc_x (y)$ since $u$ is exactly $r-1$ hops away from $y$, and thus $y$ must have sent $u$'s ID to $v$ in round $r$ in Step~\ref{algAnc:sendy2}; hence $v$ added $u$ to its set $Anc_x (v)$ in round $r$ in Step~\ref{algAnc:receiveEnd}. \end{proof} \begin{algorithm}[H] \caption{Algorithm for updating scores at $v$ when $v$ is a descendant of new blocker node $c$} Input: blocker vertex $c$ added to $Q$.\\ There is no communication in this algorithm; it is entirely a {\bf local computation} at $v$. \begin{algorithmic}[1] \If{$score (v) \neq 0$} \For{{\bf each} $x \in V$} \If{$c \in Anc_x (v)$} \label{algDU:check} \State $score (v) \leftarrow score (v) - score_x (v)$ \label{algDU:update1} \State $score_x (v) \leftarrow 0$ \label{algDU:update2} \EndIf \EndFor \EndIf \end{algorithmic} \label{algDU} \end{algorithm} Once we have pre-computed the $Anc_x(v)$ sets for all vertices $v$ and all trees $T_x$ using Algorithm~\ref{algAnc}, updating the scores at each node for all trees in which it is a descendant of the newly chosen blocker node $c$ becomes a purely local computation.
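For reference, a sequential rendering of the computation in Algorithm~\ref{algAnc} (our sketch, with illustrative structures) processes nodes top-down, so each node inherits its parent's ancestor set:
\begin{verbatim}
def ancestors(parent, depth, root):
    # parent: child -> parent in T_x; depth: node -> hop distance from root.
    # Returns Anc_x(v) for every node v of T_x.
    anc = {root: set()}
    for v in sorted(parent, key=depth.get):       # parents come first
        anc[v] = anc[parent[v]] | {parent[v]}     # ancestors of v
    return anc
\end{verbatim}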
Algorithm~\ref{algDU} describes the algorithm at node $v$ that updates its scores after a vertex $c$ is added as a blocker node to $Q$. For each tree $T_x$, node $v$ checks whether $c \in Anc_x (v)$ and, if so, updates its score values in Steps~\ref{algDU:update1}-\ref{algDU:update2}. \begin{lemma} \label{lemma:algDescendantUpdate} Given a blocker vertex $c$, Algorithm~\ref{algDU} correctly updates the scores of all nodes $v$ such that $v$ is a descendant of $c$ in some tree $T_x$. \end{lemma} \begin{proof} Fix a vertex $v$ and a tree $T_x$ such that $v$ is a descendant of $c$ in $T_x$. By Lemma~\ref{lemma:computeAnc}, $c \in Anc_x (v)$, and thus $v$ correctly updates its score values in Steps~\ref{algDU:update1}-\ref{algDU:update2}. \end{proof} We now move to the last remaining part of the blocker set algorithm: our method to correctly update scores at ancestors of the newly chosen blocker node $c$ in each $T_x$. Recall that if $v$ is an ancestor of $c$ in $T_x$ we need to subtract $score_x(c)$ from $score_x(v)$. Here, in contrast to Algorithms~\ref{algAnc} and \ref{algDU} for nodes that are descendants of $c$ in a tree, we do not precompute anything. Instead we give an $O(n)$-round method in Algorithm~\ref{algAU} to correctly update scores for each vertex for all trees in which that vertex is an ancestor of $c$. Before we describe Algorithm~\ref{algAU} we establish the following lemma, which is key to our $O(n)$-round method. \begin{lemma} \label{lemma:intree} Fix a vertex $c$. For each root vertex $x \in V -\{c\}$, let $\pi_{x,c}$ be the path from $x$ to $c$ in the $h$-hop SSSP tree $T_x$. Let $T = \cup_{x \in V-\{c\}} \{e ~|~e$ lies on $\pi_{x,c}\}$, i.e., $T$ is the set of edges that lie on some $\pi_{x,c}$. Then $T$ is an in-tree rooted at $c$. \end{lemma} \begin{proof} If not, there exist some $x, y \in V-\{c\}$ such that $\pi_{x,c}$ and $\pi_{y,c}$ coincide first at some vertex $z$ and the subpaths of $\pi_{x,c}$ and $\pi_{y,c}$ from $z$ to $c$ are different. Let these paths coincide again at some vertex $z'$ after diverging from $z$ (such a vertex exists since both paths end at $c$). Let the subpath from $z$ to $z'$ in $\pi_{x,c}$ be $\pi^1_{z,z'}$ and the corresponding subpath in $\pi_{y,c}$ be $\pi^2_{z,z'}$. Similarly let $\pi_{x,z}$ be the subpath of $\pi_{x,c}$ from $x$ to $z$ and let $\pi_{y,z}$ be the subpath of $\pi_{y,c}$ from $y$ to $z$. Clearly $\pi^1_{z,z'}$ and $\pi^2_{z,z'}$ have equal weight (otherwise one of $\pi_{x,c}$ or $\pi_{y,c}$ cannot be a shortest path). Thus the path $\pi_{x,z}\circ \pi^2_{z,z'}$ is also a shortest path. Let $(a,z')$ be the last edge on the path $\pi^{1}_{z,z'}$ and $(b,z')$ be the last edge on the path $\pi^{2}_{z,z'}$. Since the path $\pi_{x,z'}$ has $(a,z')$ as its last edge and we break ties using the IDs of the vertices, we have $ID(a) < ID(b)$. But then the shortest path $\pi_{y,z'}$ must also have chosen $(a, z')$ as its last edge, and hence $\pi_{y,z}\circ \pi^{1}_{z,z'}$ must be a subpath of $\pi_{y,c}$, resulting in a contradiction.
\end{proof} \begin{algorithm}[H] \caption{Pipelined Algorithm for updating scores at $v$ for all trees $T_x$ in which $v$ is an ancestor of newly chosen blocker node $c$} Input: current blocker set $Q$, newly chosen blocker node $c$ \begin{algorithmic}[1] \State {\bf Send [lines~\ref{algAU:initStart}-\ref{algAU:initEnd}]: (only for $c$)} \State {\bf Local Step at $c$:} create a list $list_c$ and {\bf for each} $x \in V$ {\bf do} add an entry $Z = \langle x, score_x (c) \rangle$ to $list_c$ if $score_x (c) \neq 0$; then set $score_x (c)$ to $0$ for each $x \in V$ and set $score (c)$ to $0$ \label{algAU:initStart} \State {\bf Round $i$:} let $Z = \langle x, score_x (c) \rangle$ be the $i$-th entry in $list_c$; send $\langle Z \rangle$ to $c$'s parent in $T_x$ \label{algAU:initEnd} \State {\bf In round $r > 0$: (for vertices $v \in V - Q -\{c\}$)} \vspace{0.05in} \State {\bf send [lines~\ref{algAU:sendStart}-\ref{algAU:sendEnd}]:} \If{$v$ received a message in round $r-1$} \label{algAU:sendStart} \State let that message be $\langle Z \rangle = \langle x, score_x (c) \rangle$. \State { \bf if} $v \neq x$ {\bf then} send $\langle Z\rangle$ to $v$'s parent in $T_x$ \label{algAU:sendEnd} \vspace{0.05in} \EndIf \State {\bf receive [lines~\ref{algAU:receiveStart}-\ref{algAU:receiveEnd}]:} \If{$v$ receives a message $M$ of the form $\langle x, score_x (c) \rangle$} \label{algAU:receiveStart} \State $score_x (v) \leftarrow score_x (v) - score_x (c)$; $score (v) \leftarrow score (v) - score_x (c)$ \label{algAU:update} \EndIf \label{algAU:receiveEnd} \end{algorithmic} \label{algAU} \end{algorithm} Lemma~\ref{lemma:intree} allows us to re-cast the task for ancestor nodes as follows (where we use the notation in the statement of Lemma~\ref{lemma:intree}): the new blocker node $c$ needs to send $score_x(c)$ to all nodes on $\pi_{x,c}$ for each tree $T_x$. Recall that in the {\sc Congest}{} model the communication channels are bidirectional even when the graph edges are directed. Hence this task can be accomplished by having $c$ send out $score_x(c)$ for each tree $T_x$ (other than $T_c$) in $n-1$ rounds, one score per round (in no particular order), along the parent edge for $T_x$. Each message $\langle x, score_x(c)\rangle$ moves along the edges of $\pi_{x,c}$ in reverse order, following parent edges in $T_x$ from $c$ to $x$. Consider any node $v$. In general it will be an ancestor of $c$ in some subset of the $n-1$ trees $T_x$. But the characterization in Lemma~\ref{lemma:intree} establishes that the incoming edge to $v$ in all of these trees is the same edge $(u,v)$, and this is the unique edge on the path from $c$ to $v$ in tree $T$ ($T$ is defined in the statement of Lemma~\ref{lemma:intree}). In fact, the messages for all of the trees in which $v$ is an ancestor of $c$ will traverse exactly the same path from $c$ to $v$. Hence, for the messages sent out by $c$ for the different trees in $n-1$ different rounds (one for each tree other than $T_c$), if each vertex simply forwards any message $\langle x, score_x(c)\rangle$ it receives to its parent in tree $T_x$, all messages will be pipelined to all ancestors in $n-1 +h$ rounds. This is what is done in Algorithm~\ref{algAU}, whose steps we describe below for completeness. Step~\ref{algAU:initStart} of Algorithm~\ref{algAU} is local computation at the new blocker vertex $c$ where, for each $T_x$ to which $c$ belongs, $c$ adds an entry $\langle x, score_x (c) \rangle$ to a local list $list_c$.
In round $i$, $c$ sends the $i$-th entry in its list, say $\langle y, score_y(c) \rangle$, to its parent in $T_y$. For a node $v$ other than $c$, in a general round $r > 0$, if $v$ receives a message for some $x \in V$ it updates its score value for $x$ (Steps~\ref{algAU:receiveStart}-\ref{algAU:update}) and then forwards this message to its parent in $T_x$ in round $r+1$ (Steps~\ref{algAU:sendStart}-\ref{algAU:sendEnd}). \begin{lemma} \label{lemma:algAncestorUpdate} Given a new blocker vertex $c$, Algorithm~\ref{algAU} correctly updates the scores of all nodes $v$ in every tree $T_x$ in which $v$ is an ancestor of $c$ in $O(n+h)$ rounds. \end{lemma} \begin{proof} Correctness of Algorithm~\ref{algAU} was argued above. For the number of rounds, $c$ sends out its last message in round $n-1$, and if $\pi_{v,c}$ has length $k$ then $v$ receives all messages sent to it by round $n-1+k$. Since we only have $h$-hop trees, $k \leq h$ for all nodes, and the lemma follows. \end{proof} \section{Computing a Blocker Set Deterministically}\label{sec:blocker} The simplest method to find a blocker set is to choose the vertices randomly. An early use of this method for path problems in graphs was in Ullman and Yannakakis~\cite{UY91}, where a random set of $O(\sqrt n \cdot \log n)$ distinguished nodes was picked. It is readily seen that, with high probability, some vertex in this set lies on any given path with $\sqrt n$ vertices in the graph (and so this set would serve as a blocker set of size $O((n \log n)/h)$ for our algorithm if $h= \sqrt n$). Using this observation an improved randomized parallel algorithm (in the PRAM model) was given in~\cite{UY91} to compute the transitive closure. Since then this method of using random sampling to choose a suitable blocker set has been used extensively in parallel and dynamic computation of transitive closure and shortest paths, and more recently, in distributed computation of APSP~\cite{HNS17}. It is not clear if the above simple randomized strategy can be derandomized in its full generality. However, for our purposes a blocker set only needs to intersect all paths in the set of $h$-hop trees we construct in Step~\ref{algMain:h-hop-sssp-tree} of Algorithm~\ref{algMain}. For this, a deterministic sequential algorithm for computing a blocker set was given in King~\cite{King99} in order to compute fully dynamic APSP. This algorithm computes a blocker set of size $O((n/h) \ln p)$ for a collection $F$ of $h$-hop trees with a total of $p$ leaves across all trees (and hence $p$ root-to-leaf paths) in an $n$-node graph. In our setting $p\leq n^2$ since we have $n$ trees and each tree could have up to $n$ leaves. King's sequential blocker set algorithm uses the following simple observation: given a collection of $p$ paths each with exactly $h$ nodes from an underlying set $V$ of $n$ nodes, there must exist a vertex that is contained in at least $ph/n$ paths. The algorithm adds one such vertex $v$ to the blocker set, removes all paths that are covered by this vertex, and repeats this process until no path remains in the collection. The number of paths is reduced from $p$ to at most $(1-h/n) \cdot p$ when the blocker vertex $v$ is removed, hence after $O((n/h) \ln p)$ removals of vertices, all paths are removed. Since $p$ is at most $n^2$, the size of the blocker set is $O((n \log n)/h)$. King's sequential algorithm for finding a blocker set runs in $O(n^2 \log n)$ deterministic time.
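A sequential sketch of this greedy procedure (our illustration; the distributed implementation below replaces the global counting by maintained scores) is:
\begin{verbatim}
from collections import Counter

def greedy_blocker(paths):
    # paths: list of root-to-leaf paths (lists of vertex IDs) of length h.
    Q, remaining = [], list(paths)
    while remaining:
        count = Counter(v for p in remaining for v in set(p))
        # max score; ties broken in favor of the smallest vertex ID
        c = max(count, key=lambda v: (count[v], -v))
        Q.append(c)
        remaining = [p for p in remaining if c not in p]
    return Q
\end{verbatim}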
We now describe our distributed algorithm to compute a blocker set. As in King~\cite{King99}, for each vertex $v$ in a tree $T_x$ in the collection of trees $H$ we define: \begin{itemize} \item $score_x(v)$ is the number of leaves at depth $h$ in $T_x$ that are in the subtree rooted at $v$ in $T_x$; \item $score(v) = \sum_x score_x(v)$. \end{itemize} Thus, $score(v)$ is the number of root-to-leaf paths of length $h$ in the collection of trees $H$ that contain vertex $v$. Initially, our distributed algorithm computes $score_x(v)$ and $score(v)$ for all vertices $v\in V$ and all $h$-hop trees $T_x$ in $O(n\cdot h)$ rounds using Algorithm~\ref{algCS}. Then, through an all-to-all broadcast of $score(v)$ for all $v$, all nodes identify the vertex $c$ with maximum score as the next blocker vertex to be removed from the trees and added to the blocker set $Q$. (In case there are multiple vertices with the maximum score the algorithm chooses the vertex of minimum ID having this maximum score. This ensures that all vertices will locally choose the same vertex as the next blocker vertex once they have received the scores of all vertices.) We repeat this process until all scores are zeroed out. By the discussion above (and as observed in~\cite{King99}) we will identify all the vertices in $Q$ in $O((n \cdot \log n)/h)$ repetitions of this process. What remains is to obtain an $O(n)$-round procedure to update the $score$ and $score_x$ values at all nodes each time a vertex $c$ is removed, so that we have the correct values at each node for each tree when the paths covered by $c$ are removed from the tree. If a vertex $v$ is a descendant of the removed vertex $c$ in $T_x$ then all paths in $T_x$ that pass through $v$ are removed when $c$ is removed, and hence $score_x(v)$ needs to go down to zero for each such tree $T_x$ where $v$ is a descendant of the chosen blocker node $c$. In order to facilitate an $O(n)$-round computation of these updated $score_x$ values in each tree at all nodes that are descendants of $c$, we initially precompute at every node $v$ a list $Anc_x(v)$ of all of its ancestors in each tree $T_x$. This is computed in $O(n \cdot h)$ rounds using Algorithm~\ref{algAnc}. Thereafter, each time a new blocker vertex $c$ is selected to be removed from the trees and added to $Q$, it is a local computation at each node $v$ to determine which of the $Anc_x(v)$ sets at $v$ contain $c$ and to zero out $score_x(v)$ for each such $x$. The other type of vertices whose scores change after a vertex $c$ is removed are the ancestors of $c$ in each tree. If $v$ is an ancestor of $c$ in $T_x$ then after $c$ is removed $score_x(v)$ needs to be reduced by $score_x(c)$ (i.e., $c$'s score before it was removed and added to $Q$) since these paths no longer need to be covered by $v$. For these ancestor updates we give an $O(n)$-round algorithm that runs after the addition of each new blocker node to $Q$ and correctly updates the scores for these ancestors in every tree (Algorithm~\ref{algAU}); a combined sequential sketch of both update rules appears below. These algorithms together give the overall deterministic algorithm (Algorithm~\ref{algCB}) for the computation of the blocker set $Q$ in $O(n \cdot h + (n^2 \log n)/h)$ rounds. We now give the details of our algorithms. We assume that for a tree $T_s$ rooted at $s$, each node $v$ in the tree knows $\delta(s,v)$, its shortest path distance from $s$, its hop length $h_{s}(v)$ (the number of edges on the tree path from $s$ to $v$), its parent node, and all its children in $T_s$.
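Here is the combined sequential sketch of one blocker-removal step mentioned above (our illustration; the dictionaries stand in for the node-local state): the ancestor rule subtracts $score_x(c)$ along $\pi_{x,c}$ and the descendant rule zeroes $score_x(v)$ whenever $c \in Anc_x(v)$.
\begin{verbatim}
def apply_blocker(c, V, score, score_x, anc, anc_of_c):
    # score[v], score_x[v][x]: current scores; anc[v][x] = Anc_x(v);
    # anc_of_c[x]: the ancestors of c in T_x (the nodes on pi_{x,c}).
    for x, s_c in list(score_x[c].items()):   # ancestor updates
        if s_c == 0:
            continue
        for v in anc_of_c[x]:
            score_x[v][x] -= s_c
            score[v] -= s_c
        score_x[c][x] = 0
    score[c] = 0
    for v in V:                               # descendant updates
        if v != c and score[v] != 0:
            for x in list(score_x[v]):
                if c in anc[v].get(x, ()) and score_x[v][x] != 0:
                    score[v] -= score_x[v][x]
                    score_x[v][x] = 0
\end{verbatim}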
\subsection{The Blocker Set Algorithm} Algorithm~\ref{algCB} gives our distributed deterministic method to compute a blocker set. It uses a collection of helper algorithms that are described in the next section. This blocker set algorithm is at the heart of our main algorithm (Algorithm~\ref{algMain}, Step~\ref{algMain:centers}) for computing exact weighted APSP. \begin{algorithm}[H] \caption{{\sc Compute-Blocker}} Input: set $H$ of all $h$-hop trees $T_x$; \hspace{0.1in} Output: set $Q$ \begin{algorithmic}[1] \State {\bf Initialization [lines~\ref{algCB:runAlgCS}-\ref{algCB:broadcast}]:} Run Algorithm~\ref{algCS} to compute scores for all $v \in V$ \label{algCB:runAlgCS} \State For each $T_x$ compute the ancestors of each vertex $v$ in $T_x$ in $Anc_x(v)$ using Algorithm~\ref{algAnc} \label{algCB:anc} \For{each $v\in V$}\label{algCB:forloop} \State {\bf Local Step:} $score(v) \leftarrow \sum_{x\in V} score_x (v)$ \label{algCB:local} \State broadcast $score (v)$ to all nodes in $V$ (using Lemma~\ref{lem:all-to-all-bc}) \label{algCB:broadcast} \EndFor \vspace{0.05in} \State {\bf Add blocker vertices to blocker set $Q$ [lines~\ref{algCB:startWhile}-\ref{algCB:endWhile}]:} \State {\bf while} there is a node $c$ with $score (c) > 0$ \label{algCB:startWhile} \State \hspace{.5in} {\bf Local Step:} select the node $c$ with max score as next vertex in $Q$ \label{algCB:select} \State \hspace{.5in} Run Algorithms~\ref{algDU} and \ref{algAU} to update $score_x (v)$ for each $x \in V$ and $score (v)$ \label{algCB:updateScores} \State \hspace{.5in} broadcast $score (v)$ to all nodes in $V$ and receive $score(x)$ from all other nodes $x$ \label{algCB:endWhile} \end{algorithmic} \label{algCB} \end{algorithm} Step~\ref{algCB:runAlgCS} of Algorithm~\ref{algCB} executes Algorithm~\ref{algCS} to compute all the initial scores at all nodes $v$. Step~\ref{algCB:anc} involves running Algorithm~\ref{algAnc} for pre-computing ancestors of each node in every $T_x$. Step~\ref{algCB:local} is a local computation (no communication) where all nodes $v$ compute their total score by summing up the scores for all trees $T_x$ to which they belong. And in Step~\ref{algCB:broadcast}, each node $v$ broadcasts its score value to all other nodes. The while loop in Steps~\ref{algCB:startWhile}-\ref{algCB:endWhile} of Algorithm~\ref{algCB} runs as long as there is a node with positive score. In Step~\ref{algCB:select}, the node with maximum score is selected as the vertex $c$ to be added to $Q$ (and if there are multiple nodes with the maximum score, then among them the node with the minimum ID is selected, so that the same node is selected locally at every vertex). In Step~\ref{algCB:updateScores}, after blocker vertex $c$ is selected, each node $v$ checks whether it is a descendant of $c$ in each $T_x$ and, if so, updates its score for that tree using Algorithm~\ref{algDU}. This is followed by an execution of Algorithm~\ref{algAU}, which updates the scores at each node $v$ for each tree $T_x$ in which $v$ is an ancestor of $c$. Then in Step~\ref{algCB:endWhile}, all the nodes broadcast their score to all other nodes so that they can all select the next vertex to be added to $Q$. The correctness and round complexity of Algorithm~\ref{algCB} follow from the results shown in the next section. \section{Deterministic Weighted APSP in $\tilde{O}(n^{3/2})$ or Less}\label{sec:outline2} We describe a simple deterministic weighted APSP algorithm in the {\sc Congest} model that runs in $\tilde{O}(n^{3/2})$ rounds.
The algorithm is also a template for computing APSP deterministically in an even smaller number of rounds if $h$-hop APSP can be computed in fewer rounds. Very recently such an algorithm was designed in~\cite{AR18b}, and we discuss the impact of that algorithm at the end of the paper. Let the edge-weighted input graph be $G=(V,E)$ with weight function $w$ and with $|V|=n$ and $|E|=m$. The {\sc Congest}{} model assumes that every message is of $O(\log n)$-bit size, which restricts $w(e)$ to be an $O(\log n)$-size integer value. However, outside of this restriction imposed by the {\sc Congest} model, our algorithm works for arbitrary edge-weights (even negative edge-weights, as long as there is no negative-weight cycle). Our overall APSP algorithm is given in Algorithm~\ref{alg6}. We first observe that an $h$-hop SSSP tree rooted at a vertex $s$ can be computed in $O(h)$ rounds by running the Bellman-Ford algorithm for $h$ rounds. Using Bellman-Ford, our algorithm first computes an $h$-hop SSSP tree for each vertex $v$ in $O(n \cdot h)$ rounds. It then finds a collection of $b=O(n/h)$ nodes called {\it centers} (also called {\it distinguished nodes} in~\cite{UY91}; the term {\it centers} is used in~\cite{HNS17}) such that there is a center node in any subpath of length $h$ in every $h$-hop shortest path tree we have computed. Once we have our center set we can compute APSP by first computing SSSP for each center (using the Bellman-Ford algorithm) and then broadcasting each node's shortest path distances from each of the $b$ centers to all other nodes; both of these steps are readily performed in $O(n\cdot b)$ rounds. When $h = \sqrt n$ the number of centers is also $O(\sqrt n)$, hence the overall algorithm runs in $O(n^{3/2})$ rounds, provided we can find the desired centers efficiently and deterministically. \begin{algorithm}[H] \caption{ Main Algorithm} \begin{algorithmic}[1] \State Compute an $h$-hop SSSP tree for each $v\in V$. \label{alg6:h-hop-sssp-tree} \State Compute $\tilde{O}(n/h)$ centers and store in $\mathcal{Q}$ using Algorithm~\ref{alg5}. \State {\bf for each} $c\in \mathcal{Q}$ in sequence: send $\delta(v,c)$ value to each $v$ whose $h$-hop tree contains $c$. \label{alg6:center-to-h-hop-root} \State {\bf for each} $c \in \mathcal{Q}$ in sequence: run Bellman-Ford from $c$ to compute $\delta (c,v)$ at each $v \in V$. \label{alg6:bf} \State {\bf for each} $c \in \mathcal{Q}$ in sequence: broadcast from each $v\in V$ its $\delta(c,v)$ value to all other nodes. \label{alg6:broadcast} \State {\bf Local step at each node:} At each $v$ compute $\delta(u,v)$ values for each $u \in V$ by using the $h$-hop $\delta(v,c)$ values at $v$ received in Step~\ref{alg6:center-to-h-hop-root} and the $\delta (c,u)$ values for all $c\in \mathcal{Q}$ received in Step~\ref{alg6:broadcast}. \end{algorithmic} \label{alg6} \end{algorithm} \subsection{Finding Centers for a Collection of $h$-hop Trees}\label{sec:blockers-outline} The simplest method to find a set of centers is to choose them randomly. This method was first used by Ullman and Yannakakis~\cite{UY91}, who picked a random set of $O(\sqrt n \cdot \log n)$ centers (or distinguished nodes); with high probability such a set contains a vertex on any given path with $\sqrt n$ vertices. Using this set they computed the transitive closure (and some single-source problems) in parallel (in the PRAM model) in $O(n^{\epsilon})$ time, more efficiently than the matrix-multiplication based method.
This method of choosing suitable centers has been used extensively in parallel and dynamic computation of transitive closure and shortest paths, and more recently, has been used in distributed computation of APSP~\cite{HNS17}. In order to compute fully dynamic APSP, King~\cite{King99} gave a deterministic sequential algorithm to compute a blocker set. We now briefly describe the results in~\cite{King99} that we will use in our distributed algorithm for computing a set of $\tilde{O}(n/h)$ centers. \begin{definition}[\cite{King99}] Let $F$ be a collection of $h$-hop SSSP trees for the graph $G=(V,E)$. A set $B \subset V$ is a \emph{blocker} for $F$ if every path from the root of a tree to a descendant leaf contains a vertex in $B$. \end{definition} A deterministic algorithm that computes a blocker set of size $O((n/h) \log n)$ in $O(n^2 + nh \log n)$ time is given in~\cite{King99}. It uses the observation that, given the collection of trees $F$ with a total of $p$ leaves across all trees (and hence $p$ root-to-leaf paths), there must exist a vertex that is contained in at least $ph/n$ of these paths. The algorithm adds one such vertex to the blocker set, removes all paths that are covered by this vertex, and repeats this process until no path remains. The number of paths is reduced from $p$ to at most $(1-h/n) \cdot p$ when the blocker vertex is removed, hence after $O((n/h) \ln p)$ removals of vertices, all paths are removed. The result follows since $p$ is at most $n^2$. We now give a deterministic distributed algorithm to compute a set of $b= O((n/h) \log n)$ centers in $O(n \cdot b)$ rounds given the collection $F$ of $h$-hop SSSP trees for every source $s\in V$. We assume that for a tree $T_s$ rooted at $s$, each node $v$ in the tree knows $\delta(s,v)$, its shortest path distance from $s$, its hop length $h_{s}(v)$ (the number of edges on the tree path from $s$ to $v$), and the identities of its parent node and its children in $T_s$. Initially our algorithm will compute, for each node $v$, the values $score_s(v)$ for each root $s$, which is the number of leaves in the subtree rooted at $v$ in the tree with root $s$, and $score(v)$, which is the sum of the $n$ $score_s(v)$ values. These values are computed in Algorithm~\ref{alg3} in $O(h)$ rounds per tree by running the Bellman-Ford algorithm `in reverse' (this is a simple method developed in~\cite{PR18} to compute betweenness centrality values after the computation of APSP). The method works as follows. Let us assume that the $h$-hop SSSP trees for the $n$ sources were computed by an Algorithm $h$-Hop, where each vertex $v$ can identify a round number $r_x(v)$ in which it sent out its finalized SSSP distance from source $x$. Clearly these round numbers will be different for different sources.
In Algorithm~\ref{alg3}, the $score_x(v)$ values are propagated from the leaves of each tree towards the root by running the rounds of Algorithm $h$-Hop in reverse. Hence, if $v$ sent out its SSSP distance for source $x$ to its children in round $r$ of Algorithm $h$-Hop and if Algorithm $h$-Hop took $R$ rounds in total, then $v$ will send out its $score_x(v)$ value to its parent in round $R-r$ of Algorithm~\ref{alg3}. It is readily verified by induction that $v$'s children would have sent their scores to $v$ in an earlier round than $R-r$, since none of $v$'s children could have sent out their finalized value in round $r$ or earlier. \begin{algorithm}[H] \caption{Initialize scores at each node for all $h$-hop trees } \begin{algorithmic}[1] \State Run an $h$-hop APSP algorithm and store at $v$ the round $r_{x}(v)$ in which $\delta(x,v)$ was sent out by $v$ to its children. Let $R$ denote the total number of rounds used by Algorithm $h$-hop APSP. \State {\bf for each} node $v$ and each root $x$ {\bf if} $h_x (v) = h$ {\bf then} $score_x (v) \leftarrow 1$ {\bf else} $score_x(v) \leftarrow 0$ \State {\bf In round $r > 0$:} \State {\bf send:} {\bf for each} $v \in V$ {\bf if} $r = R - r_{x}(v)$ for some $x$ {\bf then} send $\langle score_x (v), x \rangle$ to $parent_x(v)$ \label{alg3:sendStart} \State {\bf receive [lines~\ref{alg3:newreceiveStart}-\ref{alg3:receiveEnd}]:} for each $v\in V$ let $\mathcal{I}_v$ be the set of incoming messages to $v$ \label{alg3:newreceiveStart} \State $~~$ {\bf for each} $M \in \mathcal{I}_v$ \label{alg3:receiveStart} \State $~~$ let $M = \langle score^-, x \rangle$ and let the sender be $w$ \State $~~$ {\bf if} $w$ is a child of $v$ in $T_x$ {\bf then} $score_x (v) \leftarrow score_x (v) + score^-$ \label{alg3:receiveEnd} \end{algorithmic} \label{alg3} \end{algorithm} \begin{algorithm}[H] \caption{ Update scores at node $v$ after center $c$ is selected} Input: center $c$, collection of already selected centers $\mathcal{C}$ \begin{algorithmic}[1] \State {\bf Initialization [lines~\ref{alg4:initStart}-\ref{alg4:initEnd}]:} \If{$c = v$} \label{alg4:initStart} \For{each $x \in S$} \If{$score_x (v) \neq 0$} \State add an entry $Z = \langle x, score_x (v) \rangle$ to $list_v^{anc}$; $sent(Z) \leftarrow false$ \State add an entry $Z = \langle x \rangle$ to $list_v^{des}$; $sent(Z) \leftarrow false$ \State $score_x (v) \leftarrow 0$ \EndIf \EndFor \State $score (v) \leftarrow 0$ \EndIf \label{alg4:initEnd} \State {\bf In round $r > 0$: (only for vertices $v \in V - (\mathcal{C} \cup \{c\})$)} \vspace{0.05in} \State {\bf send [lines~\ref{alg4:sendStart}-\ref{alg4:sendEnd}]:} \State Let $Z_1 = \langle x \rangle$ be the unsent entry with the smallest ID in $list_v^{des}$ \label{alg4:sendStart} \State send $\langle Z_1\rangle$ to all descendants of $v$ in $x$'s tree; $sent(Z_1) \leftarrow true$ \State Let $Z_2 = \langle x, score_x (v) \rangle$ be the corresponding entry in $list_v^{anc}$ \If{$v \neq x$} \State send $\langle Z_2\rangle$ to $v$'s parent in $x$'s tree; $sent(Z_2) \leftarrow true$ \vspace{0.05in} \EndIf \label{alg4:sendEnd} \State {\bf receive [lines~\ref{alg4:receiveStart}-\ref{alg4:receiveEnd}]:} \State Let $\mathcal{I}$ be the set of incoming messages \label{alg4:receiveStart} \For{{\bf each} $M \in \mathcal{I}$} \If{$M$ is of the form $\langle x \rangle$} \State $score (v) \leftarrow score (v) - score_x (v)$; $score_x (v) \leftarrow 0$; add $M$ to $list_v^{des}$ \Else ($M$ is of the form $\langle x, score^{-} \rangle$) \State $score_x (v) \leftarrow score_x (v) - score^{-}$;
$score (v) \leftarrow score (v) - score^{-}$; add $M$ to $list_v^{anc}$ \EndIf \EndFor \label{alg4:receiveEnd} \end{algorithmic} \label{alg4} \end{algorithm} \begin{algorithm}[H] \caption{Algorithm for computing centers at node $v \in V$ } \begin{algorithmic}[1] \State Run Algorithm~\ref{alg3} to compute scores \State $score(v) \leftarrow 0$ \State {\bf for each} $x \in S$ {\bf do} $score(v) \leftarrow score(v) + score_x (v)$ \State broadcast $score (v)$ to all nodes in $V$ \State {\bf while} there is a node $c$ with $score (c) > 0$ \State \hspace{.5in} let $c$ be the node with max score \State \hspace{.5in} Run Algorithm~\ref{alg4} to update $score_x (v)$ for each $x \in S$ and $score (v)$ \State \hspace{.5in} broadcast $score (v)$ to all nodes in $V$ \end{algorithmic} \label{alg5} \end{algorithm} \begin{lemma} Let $c$ be the newly chosen center. Then Algorithm~\ref{alg4} correctly updates the value of $score_x (v)$ for each source $x \in S$ and node $v \notin \mathcal{C}$. \end{lemma} \begin{lemma} For each node $v \in V$, Algorithm~\ref{alg6} correctly computes the shortest path distance values from each $s \in V$ to $v$ in $\tilde{O} (R(h) + n\cdot \frac{n}{h})$ rounds, where $R(h)$ denotes the number of rounds required to compute $h$-hop APSP. \end{lemma} \begin{proof} Steps 1 and 2 of Algorithm~\ref{alg6} correctly compute $h$-hop APSP and the center set $\mathcal{Q}$ in $\tilde{O} (R(h) + n\cdot \frac{n}{h})$ rounds. Since there are $\tilde{O}(\frac{n}{h})$ centers, all the Bellman-Ford executions in Step~\ref{alg6:bf} can be finished in $\tilde{O}(n\cdot \frac{n}{h})$ rounds. In Step~\ref{alg6:broadcast}, each node $v$ needs to broadcast $\tilde{O}(\frac{n}{h})$ values, thus the congestion at any node is bounded by $\tilde{O}(n\cdot \frac{n}{h})$, and hence all the nodes receive all the broadcast values within $\tilde{O}(n\cdot \frac{n}{h})$ rounds. \end{proof} \section{Conclusion}\label{sec:concl} We have presented a new distributed algorithm for the exact computation of weighted all pairs shortest paths in both directed and undirected graphs. This algorithm runs in $O(n^{3/2} \cdot \sqrt{\log n})$ rounds and is the first $o(n^2)$-round deterministic algorithm for this problem in the {\sc Congest} model. At the heart of our algorithm is a deterministic algorithm for computing a blocker set. Our blocker set construction may have applications in other distributed algorithms that need to identify a relatively small set of vertices that intersect all paths in a set of paths with the same (relatively long) length. The main open question left by our work is to improve the round-bound for deterministic weighted APSP. In the unweighted case APSP can be computed in $O(n)$ rounds~\cite{HW12,PRT12,LP13,PR18}, and weighted APSP can be computed in $\tilde{O}(n^{5/4})$ rounds~\cite{HNS17} w.h.p. with randomization. Considering the simplicity of our algorithm and the dramatic improvement over the previous (trivial) bound, we believe that further improvements could be achievable. Also of independent interest is to explore better distributed algorithms to find a blocker set. Very recently in~\cite{AR18}, a new pipelined algorithm for Step~\ref{algMain:h-hop-sssp-tree} of our Algorithm~\ref{algMain} was presented that runs in $O(n\sqrt{h})$ rounds (improved from the $O(n\cdot h)$ bound we use here). This, together with some changes to the blocker set algorithm in Step~\ref{algMain:centers}, gives an $\tilde{O}(n^{4/3})$-round deterministic algorithm for APSP in the {\sc Congest} model.
\section{Introduction}\label{sec:intro} The design of distributed algorithms for various network (or graph) problems such as shortest paths~\cite{LP13,Nanongkai14,Elkin17,HNS17} and minimum spanning tree~\cite{GHS83,PR99,GKP98,KP98} is a well-studied area of research. The most widely considered model for studying distributed algorithms is the {\sc Congest}{} model~\cite{Peleg00} (also see~\cite{Elkin17,HNS17,HW12,LP13,Nanongkai14,Ghaffari15}), described in more detail below. In this paper we consider the problem of computing all pairs shortest paths (APSP) in a weighted directed (or undirected) graph in this model. Computing APSP in distributed networks is a fundamental problem, and there has been a considerable line of work on it in the {\sc Congest}{} model, as described in Section~\ref{sec:prior}. However, for a weighted graph no deterministic algorithm was known in this model other than a trivial method that runs in $n^2$ rounds. In this paper we present the first algorithm for this problem in the {\sc Congest}{} model that computes weighted APSP deterministically in less than $n^2$ rounds. Our algorithm computes APSP deterministically in $O(n^{3/2} \cdot \sqrt{\log n})$ rounds in this model in both directed and undirected graphs. Our distributed APSP algorithm is quite simple and we give an overview in Section~\ref{sec:outline}. It uses the notion of a blocker set introduced by King~\cite{King99} in the context of sequential fully dynamic APSP computation. Our deterministic distributed algorithm for computing a blocker set is the most nontrivial component of our algorithm, and is described in Section~\ref{sec:blocker}. In very recent work~\cite{AR18}, these results have been incorporated in a deterministic APSP algorithm that runs in $\tilde{O}(n^{4/3})$ rounds. The key to this improvement is a novel pipelined method that improves Step~\ref{algMain:h-hop-sssp-tree} in our Algorithm~\ref{algMain}. \subsection{The {\sc Congest} Model} \label{sec:congest} In the {\sc Congest} model~\cite{Peleg00}, there are $n$ independent processors interconnected in a network. We refer to these processors as nodes. These nodes are connected in the network by bounded-bandwidth links which we refer to as edges. The network is modeled by a graph $G = (V,E)$ where $V$ is the set of processors and $E$ is the set of edges or links between these processors. Here $|V| = n$ and $|E| = m$. Each node is assigned a unique ID from $1$ to $n$ and has infinite computational power. Each node has limited topological knowledge and only knows about its incident edges. For the weighted APSP problem we consider, each edge has a positive integer weight and the edge weights are bounded by $\mathrm{poly}(n)$. Also, if the edges are directed, the corresponding communication channels are bidirectional and hence the communication network can be represented by the underlying undirected graph $U_G$ of $G$ (this is also considered in~\cite{HNS17,PR18,GL17}). The computation proceeds in rounds. In each round each processor can send a message of size $O(\log n)$ along each edge incident to it, and it receives the messages sent to it in the previous round. The model allows a node to send different messages along different edges, though we do not need this feature in our algorithm. The performance of an algorithm in the {\sc Congest}{} model is measured by its round complexity, which is the worst-case number of rounds of distributed communication.
Hence the goal is to minimize the round complexity of an algorithm. \subsection{Prior Work} \label{sec:prior} {\bf Unweighted APSP.} For APSP in unweighted undirected graphs, $O(n)$-round algorithms were given independently in~\cite{HW12,PRT12}. An improved $n +O(D)$-round algorithm was then given in~\cite{LP13}, where $D$ is the diameter of the undirected graph. Although this latter result was claimed only for undirected graphs, the algorithm in~\cite{LP13} is also a correct $O(n)$-round APSP algorithm for directed unweighted graphs. The message complexity of directed unweighted APSP was reduced to $mn +O(m)$ in a recent algorithm~\cite{PR18} that runs in $\min\{2n, n+O(D)\}$ rounds (where $D$ is now the directed diameter of the graph). A lower bound of $\Omega (n/\log n)$ on the number of rounds needed to compute the diameter of the graph in the {\sc Congest}{} model is given in~\cite{FHW12}. {\bf Weighted APSP.} While unweighted APSP is well-understood in the {\sc Congest}{} model, much remains to be done in the weighted case. For deterministic algorithms, weighted SSSP can be computed in $n$ rounds using the classic Bellman-Ford algorithm~\cite{Bellman58,Ford56}, and this leads to a simple deterministic weighted APSP algorithm that runs in $O(n^2)$ rounds. Nothing better was known for the number of rounds for deterministic weighted APSP until our current results. {\bf Exact Randomized APSP Algorithms.} Even with randomization, nothing better than $n^2$ rounds was known for exact weighted APSP until recently, when Elkin~\cite{Elkin17} gave a randomized weighted APSP algorithm that runs in $\tilde{O}(n^{5/3})$ rounds; this was further improved to $\tilde{O}(n^{5/4})$ rounds in Huang et al.~\cite{HNS17}. Both of these are w.h.p. results. {\bf Deterministic Approximation Algorithms for APSP.} There are deterministic algorithms for approximating the weighted all pairs shortest paths problem, and these run in $\tilde{O}(n)$ rounds for both directed~\cite{LP15} and undirected graphs~\cite{LP15,HKN16,EN16}. \section{Overview of the APSP Algorithm}\label{sec:outline} Let $G=(V,E)$ be an edge-weighted graph (directed or undirected) with weight function $w$ and with $|V|=n$ and $|E|=m$. The {\sc Congest}{} model assumes that every message is of $O(\log n)$-bit size, which restricts $w(e)$ to be an $O(\log n)$-size integer value. However, outside of this restriction imposed by the {\sc Congest}{} model, our algorithm works for arbitrary edge-weights (even negative edge-weights, as long as there is no negative-weight cycle). Given a path $p$ we will use {\it weight} or {\it distance} to denote the sum of the weights of the edges on the path and {\it length} (or sometimes {\it hops}) to denote the number of edges on the path. We denote the shortest path distance from a vertex $x$ to a vertex $y$ in $G$ by $\delta(x,y)$. In the following we will assume that $G$ is directed, but the same algorithm works for undirected graphs as well. An $h$-hop SSSP tree for $G$ rooted at a vertex $r$ is a tree of height $h$ where the weight of the path from $r$ to a vertex $v$ in the tree is the shortest path distance in $G$ among all paths that have at most $h$ edges. In the case of multiple paths with the same distance from $r$ to $v$ we assume that $v$ chooses the candidate parent with minimum ID as its parent in the $h$-hop SSSP tree. We will use $h = \sqrt {n \cdot \log n}$ in our algorithm. Our overall APSP algorithm is given in Algorithm~\ref{algMain}.
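Before walking through its steps, here is a sketch (our addition) of the $h$-hop restricted Bellman-Ford relaxation that underlies the tree computations in Algorithm~\ref{algMain}; this is a sequential simulation in which each iteration corresponds to one communication round.
\begin{verbatim}
import math

def h_hop_sssp(n, edges, src, h):
    # edges: list of (u, v, w) weighted directed edges; nodes are 0..n-1.
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(h):            # h synchronous relaxation rounds
        new = dist[:]
        for u, v, w in edges:
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
        dist = new
    return dist                   # dist[v] = delta_h(src, v)
\end{verbatim}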
In Step~\ref{algMain:h-hop-sssp-tree} an $h$-hop SSSP tree and the associated SSSP distances, $\delta_h (r,v)$, are computed at vertex $v$ for each root $r\in V$. Step 2 computes a {\it blocker set} $Q$ of $q=O((n \log n)/h)$ nodes for the collection of $h$-hop SSSP trees constructed in Step 1. This step is described in detail in the next section, where we describe a distributed implementation of King's sequential method~\cite{King99}. Our method computes the blocker set $Q$ in $O(nh + (n^2 \log n)/h)$ rounds. We now give the definition of a blocker set for a collection of rooted $h$-hop trees. \begin{algorithm}[H] \caption{ Main Algorithm} \begin{algorithmic}[1] \State {\bf for each} $v\in V$ in sequence: compute an $h$-hop SSSP tree rooted at $v$ by computing at each $x\in V$ the $h$-hop shortest path distance $\delta_h(v,x)$, the parent pointer in the SSSP tree, and the hop-length $h_v (x)$ in $T_v$, i.e., the number of edges on the path from $v$ to $x$.\label{algMain:h-hop-sssp-tree} \State Compute a blocker set $Q$ of size $\Theta ((n \log n)/h)$ for the $h$-hop SSSP trees computed in Step~\ref{algMain:h-hop-sssp-tree} using Algorithm~\ref{algCB}. \label{algMain:centers} \State {\bf for each} $c\in Q$ in sequence: compute SSSP with root $c$ and $\delta (c,v)$ at each $v\in V$ \label{algMain:bf} \State {\bf for each} $c \in Q$ in sequence: broadcast to all other nodes $c$'s ID and the $\delta_h(v,c)$ values it computed for each $v$ in Step~\ref{algMain:h-hop-sssp-tree}. \label{algMain:broadcast} \State {\bf Local step at each node:} At each $v$ compute $\delta(u,v)$ values for each $u \in V$ by using Equation~\ref{eq:SD} with $\delta_h (u,v)$ from Step~\ref{algMain:h-hop-sssp-tree}, the $\delta(c,v)$ values from Step~\ref{algMain:bf}, and the $\delta_h (u,c)$ values received in Step~\ref{algMain:broadcast}. \label{algMain:local-compute} \end{algorithmic} \label{algMain} \end{algorithm} \begin{definition}[{\bf Blocker Set}~\cite{King99}] Let $H$ be a collection of rooted $h$-hop trees in a graph $G=(V,E)$. A set $Q\subseteq V$ is a \emph{blocker set} for $H$ if every root to leaf path of length $h$ in every tree in $H$ contains a vertex in $Q$. Each vertex in $Q$ is called a \emph{blocker vertex} for $H$. \end{definition} In Step 3 of Algorithm~\ref{algMain} we compute $\delta (c,v)$ for each $c\in Q$ and for all $v\in V$. In Step 4 each blocker vertex $c$ broadcasts all of the $\delta_h(v,c)$ values it computed in Step~\ref{algMain:h-hop-sssp-tree}. Finally, in Step~\ref{algMain:local-compute} each node $v$ computes $\delta(u,v)$ for each $u\in V$ using the values it computed or received in the earlier steps. More specifically, $v$ computes $\delta(u,v)$ as: \begin{equation}\label{eq:SD} \delta (u,v) = \min \left\{\delta_h(u,v), ~\min_{c\in Q} \left(\delta_h(u,c) + \delta (c,v)\right)\right\} \end{equation} \begin{lemma} The $\delta(u,v)$ values computed at each $v$ in Step~\ref{algMain:local-compute} of Algorithm~\ref{algMain} are the correct shortest path distances. \end{lemma} \begin{proof} Fix vertices $u,v$ and consider a shortest path $p$ from $u$ to $v$. If $p$ has at most $h$ edges then $w(p) = \delta_h(u,v)$ and this value is directly computed at $v$ in Step~\ref{algMain:h-hop-sssp-tree}. Otherwise, by the property of the blocker set $Q$ we know that there is a vertex $c\in Q$ which lies along $p$ within the $h$-hop SSSP tree rooted at $u$ that is constructed in Step~\ref{algMain:h-hop-sssp-tree}.
Let $p_1$ be the portion of $p$ from $u$ to $c$ and let $p_2$ be the portion from $c$ to $v$. So $w(p_1) = \delta_h(u,c)$, $w(p_2)= \delta (c,v)$ and $w(p) = w(p_1) + w(p_2)$. The value $\delta_h(u,c)$ is received by $v$ in the broadcast step for blocker vertex $c$ in Step~\ref{algMain:broadcast}. The value $\delta(c,v)$ is computed at $v$ when SSSP with root $c$ is computed in Step~\ref{algMain:bf}. Hence $v$ has the information needed to compute $\delta(u,v)$ in Step~\ref{algMain:local-compute} for each $u$ using Equation~\ref{eq:SD}. \end{proof} We now bound the number of rounds needed for each step in Algorithm~\ref{algMain} (other than Step~\ref{algMain:centers}). For this we first state bounds for some simple primitives that will be used to execute these steps. \begin{lemma}\label{lem:bf} Given a source $s\in V$, using the Bellman-Ford algorithm:\\ (a) the shortest path distance $\delta (s,v)$ can be computed at each $v\in V$ in $n$ rounds.\\ (b) the $h$-hop shortest path distance $\delta_h(s,v)$, the hop length $h_s(v)$, and the parent pointer in the $h$-hop SSSP tree rooted at $s$ can be computed at each $v\in V$ in $h$ rounds. \end{lemma} \begin{lemma}\label{lem:k-bc} A node $v$ can broadcast $k$ local values to all other nodes reachable from it deterministically in $O(n+k)$ rounds. \end{lemma} \begin{proof} We construct a BFS tree rooted at $v$ in at most $n$ rounds and then we pipeline the broadcast of the $k$ values. The root $v$ sends the $i$-th value to all its children in round $i$ for $1\leq i \leq k$. In a general round, each node that received a value in the previous round sends that value to all its children. It is readily seen that the $i$-th value reaches all nodes at hop-length $d$ from $v$ in the BFS tree in round $i+d-1$, and this is the only value such a node receives in that round. \end{proof} \begin{lemma}\label{lem:all-to-all-bc} All $v\in V$ can broadcast a local value to every other node they can reach in $O(n)$ rounds deterministically. \end{lemma} \begin{proof} This broadcast can be done in $O(n)$ rounds in many ways, for example by piggy-backing on an $O(n)$-round unweighted APSP algorithm~\cite{LP13,PR18} (and also \cite{HW12,PRT12} for undirected graphs), where now each message contains the value sent by source $s$ in addition to the current shortest path distance estimate for source $s$. \end{proof} \begin{lemma} Algorithm~\ref{algMain} runs in $O(n\cdot h + (n^2/h) \cdot \log n)$ rounds assuming Step 2 can be implemented to run within this bound. \end{lemma} \begin{proof} Let the size of the blocker set be $q = \frac{n}{h}\cdot \log n$. Using part (b) of Lemma~\ref{lem:bf}, Step 1 can be computed in $O(n \cdot h)$ rounds by computing the $n$ $h$-hop SSSP trees in sequence. Step 3 can be computed in $O(n \cdot q) = O((n^2/h) \cdot \log n)$ rounds by part (a) of Lemma~\ref{lem:bf}. Step 4 can be computed in $O(n \cdot q) = O((n^2/h) \cdot \log n)$ rounds by Lemma~\ref{lem:k-bc} (using $k=n$). Finally, Step 5 involves only local computation and no communication. This establishes the lemma. \end{proof} In the next section we give a deterministic algorithm to compute Step~\ref{algMain:centers} in $O(n\cdot h + (n^2/h) \cdot \log n)$ rounds, which leads to our main theorem (by using $h = \sqrt{n \cdot \log n}$). \begin{theorem} Algorithm~\ref{algMain} is a deterministic distributed algorithm for weighted APSP in directed or undirected graphs that runs in $O(n^{3/2} \cdot \sqrt{\log n})$ rounds in the {\sc Congest}{} model. \end{theorem}
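The local combination step of Equation~\ref{eq:SD} is simple enough to state as code; the sketch below (our illustration, with an ad hoc dictionary layout for the values node $v$ has accumulated) returns $\delta(u,v)$:
\begin{verbatim}
import math

def combine(u, v, Q, delta_h, delta_exact):
    # delta_h[(a, b)]: h-hop distance estimate, math.inf if no <=h-hop path
    # delta_exact[c]:  exact delta(c, v), computed by the SSSP run from c
    best = delta_h.get((u, v), math.inf)
    for c in Q:
        best = min(best, delta_h.get((u, c), math.inf) + delta_exact[c])
    return best                   # = delta(u, v) by the lemma above
\end{verbatim}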
{ "timestamp": "2018-04-17T02:12:10", "yymm": "1804", "arxiv_id": "1804.05441", "language": "en", "url": "https://arxiv.org/abs/1804.05441" }
\section{Introduction} This article studies how the sum of scalar independent random variables can deviate from its expected value. It extends previous bounds from Hoeffding \cite{Hoe63}, which exploited information about the range of each random variable. It also outperforms previous bounds from Bennett \cite{Ben62}, which incorporate information about the variance of each random variable as well as a single fixed range for all random variables. The proposed bound simultaneously leverages information about the heterogeneous means, variances and ranges of the random variables to obtain better convergence rates in a wider range of useful settings. These rates give tighter guarantees on the probability that a given portfolio will underperform. This permits improved portfolio optimization in settings where second-order information is available while parametric information remains unavailable (as we shall show in Section~\ref{sct:applications}). Recall Hoeffding's celebrated inequality \cite{Hoe63}. \begin{thm}[Hoeffding, 1963] \label{thm:hoeffding} Given $X_1,\ldots,X_n$ scalar independent random variables bounded\footnote{Throughout this article, statements of the form $L_i \leq X_i \leq M_i$ will be assumed to be equivalent to $\Pr(X_i \in [L_i,M_i])=1.$ Furthermore, we will denote the probability of deviating by the shorthand $\Pr$.} as $L_i \leq X_i \leq M_i$ for $i=1,\ldots,n$, then \[ \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i ] \geq t \right ) \leq \exp \left ( -2 \frac{n^2 t^2}{\sum_{i=1}^n (M_i-L_i)^2 } \right ). \] \end{thm} In some scenarios, we may be given the range of a random variable {\em as well as} information about its variance. In those settings, Bennett's inequality may be more appropriate \cite{Ben62}. Classically, this inequality requires that all the random variables have the same ceiling $M$ and the same mean $\mu$, yet it can easily be generalized\footnote{Two proofs are provided; the first proof also requires a bottom range on the random variables (i.e. $-M \leq X_i \leq M$) but the second proof in pp. 42-3 of \cite{Ben62} does not and only requires that $X_i \leq M$.} as follows. \begin{thm}[Bennett, 1962] \label{thm:bennett} For a collection $X_1,\ldots,X_n$ of independent random variables satisfying $X_i \leq M_i$, $\mathrm{E}[X_i]=\mu_i$ and $\mathrm{E}[(X_i-\mu_i)^2]=\sigma_i^2$ for $i=1,\ldots,n$ and for any $t \geq 0$, the following holds \[ \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i ] \geq t \right ) \leq \exp \left ( - n \frac{v}{s^2} h \left ( \frac{ts}{v} \right ) \right ) \] where $h(x) = (1+x) \ln(1+x) - x$, $s=\max_i \,(M_i-\mu_i)$ and $v=\frac{1}{n} \sum_{i=1}^n \sigma_i^2$. \end{thm} The above statement is slightly more general than the classic formula in \cite{Ben62} as well as popular variations of the original theorem in the literature. Specifically, we allow the variables to have heterogeneous means $\mu_i$ and heterogeneous ceilings $M_i$. In Appendix~\ref{sct:bennettproof}, we provide a detailed derivation of this flavor of Bennett's inequality. Bernstein's inequality \cite{Ber24} is obtained by replacing the function $h(x)$ with the function $g(x)=\frac{3x^2}{2x+6}$. Since $h(x) \geq g(x)$ for all $x \geq 0$, it is known that Bennett's inequality is strictly sharper than Bernstein's inequality. Moreover, Bernstein's inequality is in turn sharper than Prohorov's \cite{Pro59} and Chebyshev's \cite{Tch1874} inequalities.
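To illustrate how much the variance information matters (a numerical aside we add here, with arbitrary parameters: $n=100$ zero-mean variables on $[-1,1]$ with variance $0.05$), the two bounds can be evaluated directly:
\begin{verbatim}
import math

def hoeffding(t, n, ranges):
    # ranges: list of (L_i, M_i)
    return math.exp(-2.0 * (n * t) ** 2
                    / sum((M - L) ** 2 for L, M in ranges))

def bennett(t, n, variances, s):
    # s = max_i (M_i - mu_i)
    v = sum(variances) / n
    h = lambda x: (1.0 + x) * math.log(1.0 + x) - x
    return math.exp(-n * (v / s ** 2) * h(t * s / v))

n, t = 100, 0.1
print(hoeffding(t, n, [(-1.0, 1.0)] * n))  # ~0.61: range alone is weak
print(bennett(t, n, [0.05] * n, 1.0))      # ~1.5e-3: variance helps a lot
\end{verbatim}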
Therefore, it is natural to focus herein on Bennett's inequality. Other variations of Bennett's inequality were provided by Hoeffding in his third theorem \cite{Hoe63}. However, these variations further required all variables to have $\mathrm{E}[X_i]=0$, a constant variance $\sigma^2$ and a constant ceiling $M$. These variations are therefore less useful when the random variables are heterogeneous. Hoeffding's inequalities were subsequently improved and extended, notably by Talagrand \cite{Tal95,Tal95b} who, under mild conditions, gave tighter bounds by identifying some missing factors. This article proposes a novel variant of Bennett's inequality. In the homogeneous case, this variant yields a strictly tighter bound since it avoids an unnecessary loosening. The new inequality also allows each individual random variable to have its own distinct and heterogeneous range, mean and variance. These heterogeneous properties will be exploited more subtly to obtain a bound that remains tight even if many random variables are drawn from very different distributions. The new bound achieves sharper rates by leveraging Lambert's ${\cal W}$ function (specifically, the so-called principal value of Lambert's function). As further explicated in Appendix~\ref{sct:lambert}, this function is straightforward to compute numerically and enjoys a number of useful analytic properties \cite{Roy2010}. This article is organized as follows. In Section~\ref{sct:betterbennett}, we derive a refinement of Bennett's inequality. In Section~\ref{sct:homogeneous}, we numerically explore the sharpness of this bound relative to other classical inequalities in the case of homogeneous random variables which have identical means, variances and ranges. In Section~\ref{sct:heterogeneous}, we numerically explore the sharpness of the bound in the heterogeneous case. In Section~\ref{sct:applications} we discuss applications of the bound in portfolio assessment. The article then concludes with a brief discussion. \section{Refining Bennett's inequality} \label{sct:betterbennett} Given a sequence of independent random scalar variables $X_i$, the following theorem bounds the probability that $\frac{1}{n} \sum_{i=1}^n X_i$ will deviate above its expected value by more than $t$. We consider first the case where all the random variables have homogeneous mean, ceiling and variance. \begin{thm} \label{thm:betterhomo} Let $X_1,\ldots,X_n$ be independent real-valued random variables such that $\mathrm{E}[X_i]=\mu$, $\mathrm{E}[(X_i-\mu)^2]=\sigma^2$ and $\Pr(X_i\in(-\infty,M])=1$. Then, for any $t\in (0,M-\mu)$, the following inequality holds \[ \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i] \geq t \right ) \leq e^{-\lambda nt} \left ( \frac{\sigma^2}{s^2} \left ( e^{\lambda s} - 1 - \lambda s \right ) + 1 \right )^n \] where $s=M-\mu$ and \[ \lambda = \frac{1}{t}+\frac{s}{\sigma^2}-\frac{1}{s}-\frac{1}{s} {\cal W} \left ( \exp \left ( \frac{s}{t} + \frac{s^2}{\sigma^2} -1 + \ln\left ( \frac{s-t}{t} \right) \right ) \right). \] \end{thm} The above is an immediate consequence of our main theorem, which generalizes to the setting of heterogeneous random variables. \begin{thm} \label{thm:betterhetero} Let $X_1,\ldots,X_n$ be independent real-valued random variables such that $\mathrm{E}[X_i]=\mu_i$, $\mathrm{E}[(X_i-\mu_i)^2]=\sigma_i^2$ and $\Pr(X_i\in(-\infty,M_i])=1$.
Then, for any $t\in (0,s)$, the following inequality holds \[ \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i] \geq t \right ) & \leq & e^{-\lambda nt} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ) \] where \[ \lambda &=& \frac{1}{{ \sum_{i=1}^n \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}}}} \sum_{i=1}^n \frac{s_i^2 \lambda_i}{1-e^{-s_i^2/\sigma_i^2}} \\ \lambda_i &=& \frac{s}{t s_i}+\frac{s_i}{\sigma_i^2}-\frac{1}{s_i}-\frac{1}{s_i} {\cal W} \left ( \exp \left ( \frac{s}{t} + \frac{s_i^2}{\sigma_i^2} -1 + \ln\left ( \frac{s-t}{t} \right) \right ) \right) \\ s &=& \frac{1}{n} \sum_{i=1}^n s_i, \:\: s_i \:=\: M_i-\mu_i. \] \end{thm} \begin{proof} Consider the probability of interest \[ \Pr &=& \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i] \geq t \right ). \] Introduce the translated random variables $Y_i = X_i - \mu_i$. We now have $\mathrm{E}[Y_i]=0$, $\mathrm{E}[Y_i^2]=\sigma_i^2$ and $\Pr(Y_i \in (-\infty,M_i-\mu_i])=1$. This change of variables does not alter the probability of interest \[ \Pr &=& \Pr\left ( \frac{1}{n} \sum_{i=1}^n Y_i \geq t \right ) \\ &=& \Pr\left ( e^{ \lambda \sum_{i=1}^n Y_i} \geq e^{\lambda n t} \right ) \] where the second line holds for any $\lambda\geq 0$ since the transformation is monotonic. We then apply Markov's inequality as follows: \[ \Pr &\leq& \inf_{\lambda \geq 0} e^{-\lambda nt} \mathrm{E} \left [ e^{\lambda \sum_{i=1}^n Y_i} \right ] \\ &=& \inf_{\lambda \geq 0} e^{-\lambda nt} \prod_{i=1}^n \mathrm{E} [ e^{\lambda Y_i} ]. \] Above, the second line follows from the independence of the random variables. Consider bounding a single term in the product, in other words, $\mathrm{E}[e^{\lambda Y_i}]$. Begin with the conjecture (which will be subsequently proved) that the following upper bound holds for appropriate choices of the three parameters $(\alpha_i,\beta_i,\gamma_i)$ for all $\lambda \geq 0$: \[ \mathrm{E}[e^{\lambda Y_i}] & \leq & \gamma_i \exp(\lambda \alpha_i) + 1 -\gamma_i - \beta_i \lambda. \] Clearly, when $\lambda=0$, both sides of the conjectured bound are unity and equality is achieved. When $\lambda=0$, the derivative of the left hand side is \[ \left . \frac{\partial \mathrm{E}[e^{\lambda Y_i}]}{\partial \lambda} \right |_{\lambda=0} = \mathrm{E}[Y_i] =0. \] Since the two sides are equal at $\lambda=0$, we choose the parameters so that they also have equal derivatives there; this tangential contact, combined with the curvature argument below, ensures the bound will not cross the original function as $\lambda$ varies. This forces the following choice for the second parameter \[ \left . \frac{\partial \left ( \gamma_i \exp(\lambda \alpha_i) + 1 -\gamma_i - \beta_i \lambda \right )}{\partial \lambda} \right |_{\lambda=0} &=& 0 \\ \beta_i &=& \gamma_i \alpha_i. \] Choose $\alpha_i=M_i-\mu_i$. To satisfy the above tangential contact constraint, it is necessary that $\beta_i = \gamma_i (M_i-\mu_i)$. The conjectured bound is now \[ \mathrm{E}[e^{\lambda Y_i}] & \leq & \gamma_i \exp(\lambda (M_i-\mu_i)) + 1 -\gamma_i - \gamma_i (M_i-\mu_i)\lambda. \] Since the above inequality makes tangential contact, taking second derivatives of both sides with respect to $\lambda$ gives a conservative curvature test to ensure that the bound holds: if the right hand side has higher curvature everywhere and makes tangential contact at $\lambda=0$, then it upper-bounds the left hand side.
Taking second derivatives of both sides with respect to $\lambda$ produces the curvature constraint \[ \frac{\partial^2 \mathrm{E}[e^{\lambda Y_i}]}{\partial \lambda ^2} & \leq & \frac{\partial^2 \left ( \gamma_i \exp(\lambda (M_i-\mu_i)) + 1 -\gamma_i - \gamma_i (M_i-\mu_i) \lambda \right ) }{\partial \lambda ^2} \\ \mathrm{E}[e^{\lambda Y_i}Y_i^2] &\leq & \gamma_i \exp(\lambda (M_i-\mu_i)) (M_i-\mu_i)^2. \] Divide both sides by $\exp(\lambda (M_i-\mu_i))$ to obtain \[ \mathrm{E}[e^{\lambda (Y_i-(M_i-\mu_i))}Y_i^2] &\leq & \gamma_i (M_i-\mu_i)^2. \] Note that $\exp(\lambda(Y_i-(M_i-\mu_i))) \leq 1$ inside the expectation since $\lambda \geq 0$ and $Y_i \leq M_i-\mu_i$. Replacing $\exp(\lambda(Y_i-(M_i-\mu_i)))$ in the expectation with $1$ gives the following stricter condition on $\gamma_i$ to guarantee a bound: \[ \mathrm{E}[e^{\lambda (Y_i-(M_i-\mu_i))}Y_i^2] \: \leq \: \mathrm{E}[Y_i^2] \: \leq \: \gamma_i (M_i-\mu_i)^2. \] Since $\mathrm{E}[Y_i^2]=\sigma_i^2$, the following setting for $\gamma_i$ guarantees that the curvature of the upper bound is larger than that of the original function: \[ \gamma_i &=& \frac{\sigma_i^2}{(M_i-\mu_i)^2}. \] Thus, the conjectured bound holds for any choice of $\lambda \geq 0$ and is tight at $\lambda=0$. Next define $s_i = M_i - \mu_i$ and rewrite the above expression as \[ \mathrm{E}[e^{\lambda Y_i}] & \leq & \frac{\sigma_i^2}{s_i^2} \left ( \exp(\lambda s_i) -1 - \lambda s_i \right ) + 1. \] Apply this upper bound to each individual term in the product \[ \Pr & \leq & \inf_{\lambda \geq 0} e^{-\lambda nt} \prod_{i=1}^n \mathrm{E}[e^{\lambda Y_i}] \\ & \leq & \inf_{\lambda \geq 0} e^{-\lambda nt} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ). \] This gives the bound in the theorem, $\Pr \leq B(\lambda)$, where \[ B(\lambda) &=&e^{-\lambda nt} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ). \] What remains is to specify the choice of $\lambda$ to insert into the formula. We next consider ways of finding a good choice for $\lambda$. However, we emphasize that {\em any} choice of $\lambda \geq 0$ yields a valid upper bound on the probability $\Pr$. We start by finding a looser upper bound on $B(\lambda)$, which we will then minimize in order to recover a value of $\lambda$, denoted $\lambda^*$. Consider $n$ arbitrary non-negative scalar variables $t_1,\ldots,t_n$ that sum to $nt$, i.e. $\sum_{i=1}^n t_i = nt$. We choose to set these $t_i$ as follows \[ t_i &=& t \frac{s_i}{\frac{1}{n} \sum_{j=1}^n s_j} \: = \: \frac{t s_i}{s}, \] where we have taken $s=\frac{1}{n} \sum_{i=1}^n s_i$. Rewrite the current bound as follows \[ B(\lambda) &=& e^{-\lambda \sum_{i=1}^n t_i} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ) \\ &=& \exp \sum_{i=1}^n \left ( \log \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right )-\lambda t_i \right ) \\ &=& \exp \sum_{i=1}^n b_i(\lambda) \] where we have defined the terms in the summation as follows \[ b_i(\lambda) &=& \log \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ) -\lambda t_i.
\] The minimizer of $b_i(\lambda)$ is obtained in closed-form via the Lambert ${\cal W}$ function as follows \[ \lambda_i^* &=& \arg \min_\lambda b_i(\lambda) \\ &=& \frac{1}{t_i}+\frac{s_i}{\sigma_i^2}-\frac{1}{s_i}-\frac{1}{s_i} {\cal W} \left ( \exp \left ( \frac{s_i}{t_i} + \frac{s_i^2}{\sigma_i^2} -1 + \ln\left ( \frac{s_i-t_i}{t_i} \right) \right ) \right) \\ &=& \frac{s}{ts_i}+\frac{s_i}{\sigma_i^2}-\frac{1}{s_i}-\frac{1}{s_i} {\cal W} \left ( \exp \left ( \frac{s}{t} + \frac{s_i^2}{\sigma_i^2} -1 + \ln\left ( \frac{s-t}{t} \right) \right ) \right) \] where in the last line we have inserted the choice for $t_i$. Note that Theorem~\ref{thm:nonnegative} in the Appendix ensures that the minimizers are non-negative, in other words, $\lambda_i^*\geq 0.$ Next, derive the curvature of $b_i(\lambda)$ and upper-bound it as follows \[ \frac{\partial^2 b_i(\lambda)}{\partial \lambda^2} &=& \frac{\sigma_i^2 e^{\lambda s_i}}{\frac{\sigma_i^2}{s_i^2} (e^{\lambda s_i} - \lambda s_i - 1) + 1} - \left ( \frac{ \frac{\sigma_i^2}{s_i} (e^{\lambda s_i}-1)}{\frac{\sigma_i^2}{s_i^2} (e^{\lambda s_i} - \lambda s_i - 1) + 1} \right )^2 \\ & \leq & \frac{\sigma_i^2 e^{\lambda s_i}}{\frac{\sigma_i^2}{s_i^2} (e^{\lambda s_i} - \lambda s_i - 1) + 1}. \] Above, we bounded the curvature simply by dropping the last negative term. Setting the derivative of the right hand side to zero shows that its maximum is attained at $\lambda=s_i/\sigma_i^2$. Inserting this value of $\lambda$ into the bound gives a supremum on the curvature \[ \frac{\partial^2 b_i(\lambda)}{\partial \lambda^2} &\leq & \frac{\sigma_i^2 \exp(s_i^2/\sigma_i^2)}{\frac{\sigma_i^2}{s_i^2} (e^{s_i^2/\sigma_i^2} - s_i^2/\sigma_i^2 - 1) + 1} \\ &=& \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}}. \] We now form a quadratic upper bound for each $b_i(\lambda)$ term as follows, \[ b_i(\lambda) & \leq & \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}} \frac{(\lambda-\lambda_i^*)^2}{2} + b_i(\lambda_i^*). \] The above holds since both left hand side and right hand side are equal when $\lambda=\lambda_i^*$. Furthermore, the gradients of both the left hand side and the right hand side are zero when $\lambda=\lambda_i^*$. Finally, the curvature of the left hand side is always less than the curvature of the right hand side. Therefore, the quadratic on the right hand side must be an upper bound on $b_i(\lambda)$. Replacing each $b_i(\lambda)$ term with its corresponding quadratic bound gives an overall upper bound on $B(\lambda)$ as follows \[ B(\lambda) &=& \exp\left ( \sum_{i=1}^n b_i(\lambda) \right ) \\ & \leq & \exp \left ( \sum_{i=1}^n \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}} \frac{(\lambda-\lambda_i^*)^2}{2} + b_i(\lambda_i^*) \right ). \] It is easy to minimize the right hand side analytically over $\lambda$ to obtain \[ \lambda^* &=& \frac{1}{{ \sum_{i=1}^n \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}}}} \sum_{i=1}^n \frac{s_i^2 \lambda_i^*}{1-e^{-s_i^2/\sigma_i^2}} \] which yields the theorem. \end{proof} An interesting property of Theorem~\ref{thm:betterhetero} is that it carefully incorporates heterogeneous information about the different random variables. Rather than simply averaging variances (as in Bennett's inequality), we compute more complicated interactions between the variances $\sigma_i^2$ and the spreads $s_i=M_i-\mu_i$. This subtle combination of information about the heterogeneous random variables will yield significant improvements over Bennett's inequality.
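For readers who wish to experiment, the following sketch (ours; the function name is our own) evaluates the bound of Theorem~\ref{thm:betterhetero} exactly as derived in the proof, using the principal branch of ${\cal W}$ as provided by {\tt scipy.special.lambertw}. Note that the argument of ${\cal W}$ is an exponential, so extremely small variances can overflow double precision, in which case higher-precision arithmetic would be needed.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def better_bennett_hetero(M, mu, sigma2, t):
    """Upper bound of Theorem 4 on Pr(mean(X) - mean(E[X]) >= t).

    Any lambda >= 0 gives a valid bound; this evaluates the choice
    derived above (Lambert-W minimizers plus quadratic averaging).
    """
    M, mu, sigma2 = (np.asarray(a, float) for a in (M, mu, sigma2))
    n = len(M)
    s_i = M - mu                      # centered ceilings s_i = M_i - mu_i
    s = s_i.mean()
    assert 0.0 < t < s, "the theorem requires t in (0, s)"
    # per-term minimizers lambda_i^*, via the principal branch of W
    w_arg = np.exp(s / t + s_i**2 / sigma2 - 1.0) * (s - t) / t
    lam_i = (s / (t * s_i) + s_i / sigma2 - 1.0 / s_i
             - lambertw(w_arg).real / s_i)
    # curvature weights and the averaged lambda^*
    w = s_i**2 / (1.0 - np.exp(-s_i**2 / sigma2))
    lam = np.sum(w * lam_i) / np.sum(w)
    # evaluate B(lambda)
    terms = sigma2 / s_i**2 * (np.exp(lam * s_i) - 1.0 - lam * s_i) + 1.0
    return np.exp(-lam * n * t) * np.prod(terms)

# homogeneous sanity check: M=1, mu=0, sigma^2=1/16, t=0.3
print(better_bennett_hetero([1.0]*5, [0.0]*5, [1.0/16]*5, 0.3))
\end{verbatim}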
Furthermore, in the homogeneous setting (which emerges if all $\sigma_i=\sigma$ and all $s_i=s$), it is clear that the new bound in Theorem~\ref{thm:betterhomo} circumvents the loosening steps in Bennett's inequality. Therefore, our proposed bounds are strictly tighter than Bennett's. We can also consider a counterpart of Theorem~\ref{thm:betterhetero} which uses information about the bottom range of $X_i$, namely $L_i \leq X_i$, instead of the ceiling. Note that it is also straightforward to derive counterparts of Bennett's and Bernstein's bounds in such a setting as well. \begin{thm} \label{thm:betterreversed} Let $X_1,\ldots,X_n$ be independent real-valued random variables such that $\mathrm{E}[X_i]=\mu_i$, $\mathrm{E}[(X_i-\mu_i)^2]=\sigma_i^2$ and $\Pr(X_i\in [L_i,\infty))=1$. Then, for any $t\in (0,s)$, the following inequality holds \[ \Pr\left ( \frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i] \leq -t \right ) & \leq & e^{-\lambda nt} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{s_i^2} \left ( e^{\lambda s_i} - 1 - \lambda s_i \right ) + 1 \right ) \] where \[ \lambda &=& \frac{1}{{ \sum_{i=1}^n \frac{s_i^2}{1-e^{-s_i^2/\sigma_i^2}}}} \sum_{i=1}^n \frac{s_i^2 \lambda_i}{1-e^{-s_i^2/\sigma_i^2}} \\ \lambda_i &=& \frac{s}{t s_i}+\frac{s_i}{\sigma_i^2}-\frac{1}{s_i}-\frac{1}{s_i} {\cal W} \left ( \exp \left ( \frac{s}{t} + \frac{s_i^2}{\sigma_i^2} -1 + \ln\left ( \frac{s-t}{t} \right) \right ) \right) \\ s &=& \frac{1}{n} \sum_{i=1}^n s_i, \:\: s_i \:=\: \mu_i-L_i. \] \end{thm} \begin{proof} The proof is entirely analogous to the one for Theorem~\ref{thm:betterhetero}. \end{proof} \section{Experiments with homogeneous random variables} \label{sct:homogeneous} This section numerically compares the new bound with Hoeffding's, Bennett's and Bernstein's bounds, which are well-known classical concentration inequalities. This section will only consider the homogeneous setting, $M_i=M$, $\mu_i=\mu$ and $\sigma_i=\sigma$ for all $i=1,\ldots,n$ random variables. Specifically, we will compare our bound in Theorem~\ref{thm:betterhomo} against Bennett's inequality in Theorem~\ref{thm:bennett}, which uses $h(x)=(1+x)\log(1+x)-x$. Similarly, we consider Bernstein's inequality \cite{BouBouLug04}, which is identical to Bennett's yet uses $g(x)=\frac{3x^2}{2x+6}$ in place of $h(x)$. Finally, we apply Hoeffding's inequality as in Theorem~\ref{thm:hoeffding}, whose rate has no dependence on the variance of the random variables. To directly compare these bounds, we will explore various values of $M$, $\mu$ and $\sigma$ to see how they compare against each other. For practical visualization purposes, we first note that all bounds scale with $n$ in the same manner. Therefore, without loss of generality, we set $n=1$ throughout our experiments. Furthermore, since we can scale $(M,\mu,\sigma)$ by an arbitrary factor without changing the bounds, we simply lock $M=1$ to remove this source of redundancy in our experimental exploration. Figure~\ref{fig:bounds1} and Figure~\ref{fig:bounds2} depict the convergence rates for the four different concentration inequalities under various choices of $\mu \in \{-\frac{1}{2},0,\frac{1}{2}\}$ and $\sigma \in \{1,1/2,1/4,1/8\}$. Since Hoeffding's bound also requires a value for the bottom range of the random variable, $L$, we make a simple arbitrary choice and set it to $L=-M$. In fact, many applications of Bennett and Bernstein typically assume that $|X_i|\leq M$ (though it is not strictly necessary to do so).
Also, note that setting $L=-M$ does not violate the elementary inequality $\sigma_i \leq (M_i-L_i)/2$ in any of our experiments. By observing the log-probability for each bound as $t$ varies, it is possible to see which inequalities are tighter (i.e. yielding an exponentially smaller deviation probability). Bounds with lower (negative) rates indicate faster convergence of the average to its expected value. Clearly, the new bound is strictly sharper than Bernstein's and Bennett's inequalities in the homogeneous setting. In fact, we know that this must be true since Bennett's derivation makes an unnecessary loosening step and Bernstein's follows it with yet another. Hoeffding's bound performs poorly unless the variance $\sigma^2$ is large, i.e. close to the maximum value it can have while still respecting the elementary inequality $\sigma_i \leq (M_i-L_i)/2$. At that setting, the variance provides very little useful information, and so the variance-based inequalities (such as Bennett's, Bernstein's and the proposed bound) lose their advantage. Otherwise, the new bound clearly dominates the classical inequalities in our experiments. It is important to note that these rate quantities will be multiplied by the number of observations $n$ and then exponentiated to obtain bounds on the probability. Therefore, the advantages of the bounds relative to each other will be drastically magnified as $n$ grows. \section{Experiments with heterogeneous random variables} \label{sct:heterogeneous} In Section~\ref{sct:homogeneous} we compared the new bound to classical concentration inequalities when all the random variables are homogeneous. We here consider bounding the probability $\Pr(\frac{1}{n} \sum_{i=1}^n X_i - \frac{1}{n} \sum_{i=1}^n \mathrm{E}[X_i ] \geq t)$ when we deal with independent random variables $X_i$ that are {\em not} identically distributed but rather have their own distinct heterogeneous values of $M_i$, $\mu_i$ and $\sigma_i$. In these experiments, the advantages of the bound can sometimes be dramatic. We will consider various random choices of $M_i$, $\mu_i$, $\sigma_i$ and $t$. These synthetic experiments allow us to compare our new inequality relative to the Bernstein, Bennett and Hoeffding inequalities in the heterogeneous setting. To generate synthetic problems, we set $M_i$ to be the absolute value of random draws from a white Gaussian (i.e. with zero mean and unit variance). We set $L_i$ to be the negated absolute value of such draws. We set $\mu_i$ equal to a value drawn uniformly from the interval $[L_i,M_i]$. We then choose $\sigma_i$ uniformly from $[0,\frac{1}{2z}(M_i-L_i)]$ for various choices of $z\geq 1$. This way, we explore different levels of variance without ever violating the elementary inequality $\sigma_i \leq (M_i-L_i)/2$. Finally, we set $t$ by sampling a scalar from the uniform distribution and multiplying it by the value $s=\frac{1}{n} \sum_{i=1}^n (M_i-\mu_i)$, which was introduced in our bound. We compute the bound using Theorem~\ref{thm:betterhetero} in the heterogeneous setting. To compute Bennett's inequality in the heterogeneous setting, we use the formula in Theorem~\ref{thm:bennett}. Similarly, by replacing the $h(x)$ function with the $g(x)$ function, we compute Bernstein's inequality. All three of these approaches ignore information in the $L_i$ values. To compute a bound using Hoeffding's inequality, we apply Theorem~\ref{thm:hoeffding}, which ignores information about the $\sigma_i$ and $\mu_i$ values.
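The following sketch (ours; it assumes the function {\tt better\_bennett\_hetero} from the listing in Section~\ref{sct:betterbennett}) generates one such synthetic problem and prints the four log-bounds being compared. The lower ends of the $\sigma_i$ and $t$ ranges are bounded away from zero, a guard of ours against floating-point overflow in the argument of ${\cal W}$.
\begin{verbatim}
import numpy as np
# assumes better_bennett_hetero from the listing in Section 2

rng = np.random.default_rng(0)

def synthetic_problem(n, z, rng):
    """One random heterogeneous problem, essentially as described above."""
    M = np.abs(rng.standard_normal(n))
    L = -np.abs(rng.standard_normal(n))
    mu = rng.uniform(L, M)
    # sigma bounded away from 0 (our guard against overflow inside W)
    sigma = rng.uniform((M - L) / (4.0 * z), (M - L) / (2.0 * z))
    # t kept strictly inside (0, s) for the same reason
    t = (0.2 + 0.6 * rng.uniform()) * np.mean(M - mu)
    return L, M, mu, sigma, t

L, M, mu, sigma, t = synthetic_problem(n=10, z=2.0, rng=rng)
n, s_vec = len(M), M - mu
new = better_bennett_hetero(M, mu, sigma**2, t)
# heterogeneous Bennett / Bernstein (Theorem 2 and its g(x) variant)
s, v = np.max(s_vec), np.mean(sigma**2)
h = lambda x: (1.0 + x) * np.log1p(x) - x
g = lambda x: 3.0 * x**2 / (2.0 * x + 6.0)
bennett = np.exp(-n * (v / s**2) * h(t * s / v))
bernstein = np.exp(-n * (v / s**2) * g(t * s / v))
hoeffding = np.exp(-2.0 * n**2 * t**2 / np.sum((M - L) ** 2))
print(np.log(new), np.log(bennett), np.log(bernstein), np.log(hoeffding))
\end{verbatim}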
In Figure~\ref{fig:scatter1} and Figure~\ref{fig:scatter2}, we see the log-probabilities for the new bound on the x-axis and the log-probabilities for the classical inequalities on the y-axis. Several experiments are shown for $n=1,10,100$ and for $z=1,2,10,100$. Whenever a coordinate marker is above the diagonal line, the new bound is performing better for that particular random experiment. Clearly, the new bound is outperforming Bennett's, Bernstein's and Hoeffding's bounds. When $n=1$ in the top of the figures, we are back to the homogeneous case where the bound must strictly outperform Bennett's and Bernstein's inequalities. It also seems to frequently outperform Hoeffding's. As we increase $n$, the advantages of the bound become even more dramatic in the heterogeneous case. When $z$ is small, the variances $\sigma_i^2$ are large and potentially close to their maximum allowable values (i.e. close to saturating the elementary inequality $\sigma_i \leq (M_i-L_i)/2$). Therefore, variance does not provide much information about the distribution of the random variables. Meanwhile, when $z$ is large, the variance values are smaller and Hoeffding's bound becomes extremely loose since it ignores variance information. The new bound seems to frequently outperform the classical inequalities. \section{An application in portfolio optimization} \label{sct:applications} There are many natural applications of Theorems~\ref{thm:betterhomo}, \ref{thm:betterhetero} and~\ref{thm:betterreversed}. As a motivating example, consider a financial portfolio with several independent investments. Each investment $i$ will provide a payoff $X_i$ from an unknown distribution. We may know a priori the minimum payoff $L_i$, the expected payoff $\mu_i$ and the variance of the payoff $\sigma_i^2$ for investment $i$. We are interested in the probability that the sum total of our investments will under-perform its expected value by $t$. For example, we may want to compute the probability that the portfolio will {\em under-perform} and produce a total payoff that is smaller than some risk-free payoff $\tau$. The quantity of interest is $\Pr(\sum_{i=1}^n X_i \leq \tau)$. Using Theorem~\ref{thm:betterreversed}, it is straightforward to upper-bound the probability that a portfolio under-performs. We are given information about each $X_i$ such as its $L_i$, $\mu_i$ and $\sigma_i$ and we are given a threshold $\tau$ for the portfolio to be worthwhile. If the total payoff from the investment falls below $\tau$, it has {\em under-performed}. Our upper bound holds without making any further parametric assumptions about the distribution of the payoffs $X_1,\ldots,X_n$. In contrast, many practitioners make Gaussian or other parametric assumptions about portfolios and payoff distributions \cite{Mar52}. A non-parametric approach remains agnostic and may be better matched to real-world settings. Consider the following toy example. We have $n=2$ investments. Investment 1 has an expected payoff of $\mu_1 = \$30$ with a standard deviation of $\sigma_1=\$25$. Investment 2 has an expected payoff of $\mu_2 = \$100$ with a standard deviation of $\sigma_2=\$20$. Investment 1 has a floor on its payoff of $L_1=\$25$. Meanwhile, Investment 2 can potentially yield as little as $L_2=\$5$ in terms of payout. For the portfolio to be worthwhile, we are told that the total payoff of both investments must be at least $\$74$ (or the average payoff across both investments must be at least $\$37$). Otherwise, the portfolio is under-performing.
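The figures quoted in the next paragraph can be checked in a few lines. Below is a sketch under our reading of Theorem~\ref{thm:betterreversed}; it assumes {\tt better\_bennett\_hetero} from Section~\ref{sct:betterbennett}, applied to the negated payoffs so that floors become ceilings.
\begin{verbatim}
import numpy as np
# assumes better_bennett_hetero from the listing in Section 2

# toy portfolio: two independent investments (all figures in dollars)
mu    = np.array([ 30.0, 100.0])    # expected payoffs
sigma = np.array([ 25.0,  20.0])    # payoff standard deviations
L     = np.array([ 25.0,   5.0])    # payoff floors
tau   = 74.0                        # worthwhile threshold on the total

n = len(mu)
t = (mu.sum() - tau) / n            # per-variable shortfall from the mean
# Theorem 5 via Theorem 4 applied to -X_i (floors become ceilings):
p_under = better_bennett_hetero(-L, -mu, sigma**2, t)
print(p_under)                      # roughly 0.39 with these figures
\end{verbatim}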
According to our new bound in Theorem~\ref{thm:betterreversed}, the probability of under-performing is less than 39.1\%. We next apply Bennett's inequality and Bernstein's inequality to this portfolio problem. Consider Bennett's inequality in Theorem~\ref{thm:nicerbennett}. Just as we were able to reverse the bound in Theorem~\ref{thm:betterhetero} to obtain Theorem~\ref{thm:betterreversed}, it is possible to obtain reversed versions of Bennett's and Bernstein's inequalities. Bennett's inequality says the probability is at most 50.1\% and Bernstein's says it is at most 57.2\%. To apply Hoeffding's inequality, we also need an upper bound on the investments' payouts (e.g. $M_1$ and $M_2$). Recall the elementary formula $\sigma_i \leq (M_i-L_i)/2$. This gives us the {\em most-optimistic} value ${\hat M}_i=\max(L_i+2\sigma_i, \mu_i)$. This setting helps tighten the Hoeffding bound as much as possible. Technically, however, this {\em most-optimistic} setting can be quite erroneous and misleading: once we impute such additional assumptions, Hoeffding's formula is no longer computing a valid bound, since the assumptions may not hold for the original problem. Nevertheless, we use the heterogeneous $L_i$ and imputed ${\hat M}_i$ in Theorem~\ref{thm:hoeffding}. Hoeffding's inequality then says the probability of under-performing is less than 58.1\%. Surprisingly, this is still worse than the (valid) estimate using our novel bound. The new bound gives the best estimate and shows that the payoff on our investments is more likely to meet the target total payoff of $\$74$ (or an {\em average} payoff of $\$37$ across both investments). The bound on the probability of 39.1\% shows that the investment portfolio is worth the risk. Rather than simply {\em bound} the probability that a given portfolio under-performs, we may wish to {\em find an optimal portfolio}. Consider $n$ possible investments where we aim to optimally allocate funds by computing the proportion $\alpha_i\geq 0$ of our budget that is allocated to investment $i$ for $i=1,\ldots,n$. Clearly, budget proportions sum to unity and therefore $\sum_{i=1}^n \alpha_i=1$. Let $X_i$ represent a random variable which equals the return of the $i$'th investment. Assume we know a priori that $X_i$ has mean $\mu_i$, standard deviation $\sigma_i$ and floor $L_i$. We wish to find $\alpha_1,\ldots,\alpha_n$ that minimize the probability $\Phi$ that the portfolio generates less than some targeted return $\tau$. We can compute $\Phi$ as follows \[ \Phi &=& \Pr\left ( \sum_{i=1}^n \alpha_i X_i \leq \tau \right ) \:\: = \:\: \Pr \left ( \frac{1}{n} \sum_{i=1}^n {\tilde X}_i - \frac{1}{n} \sum_{i=1}^n {\tilde \mu}_i \leq -\frac{1}{n} t \right ). \] On the right, we have merely rewritten the probability after the change of variables ${\tilde X}_i=\alpha_i X_i$ and $t = \sum_{i=1}^n {\tilde \mu}_i - \tau$. The new random variables ${\tilde X}_i$ have mean ${\tilde \mu}_i = \alpha_i \mu_i$, variance ${\tilde \sigma_i}^2=\alpha_i^2 \sigma_i^2$ and floor ${\tilde L}_i=\alpha_i L_i$.
Apply Theorem~\ref{thm:betterreversed}, which holds for any $\lambda \geq 0$, to obtain \[ \Phi &\leq& e^{-\lambda t} \prod_{i=1}^n \left ( \frac{\alpha_i^2 \sigma_i^2}{\alpha_i^2 (\mu_i - L_i)^2} \left ( e^{\lambda \alpha_i (\mu_i - L_i)} - 1 - \lambda \alpha_i(\mu_i - L_i) \right ) + 1 \right ) \\ &=& e^{-\lambda t} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{(\mu_i - L_i)^2} \left ( e^{\lambda_i (\mu_i - L_i)} - 1 - \lambda_i(\mu_i - L_i) \right ) + 1 \right ) \] where in the second line we have simply defined $\lambda_i=\alpha_i \lambda$. Insert the definition of $t$ into the above \[ \Phi & \leq & e^{-\lambda (\sum_{i=1}^n \alpha_i (\mu_i - \tau))} \prod_{i=1}^n \left ( \frac{\sigma_i^2}{(\mu_i - L_i)^2} \left ( e^{\lambda_i (\mu_i - L_i)} - 1 - \lambda_i(\mu_i - L_i) \right ) + 1 \right ). \] Recall that $\lambda_i = \lambda \alpha_i$ and therefore \[ \Phi & \leq & \prod_{i=1}^n e^{- \lambda_i (\mu_i - \tau)} \left ( \frac{\sigma_i^2}{(\mu_i - L_i)^2} \left ( e^{\lambda_i (\mu_i - L_i)} - 1 - \lambda_i(\mu_i - L_i) \right ) + 1 \right ). \] We wish to minimize the bound on the right hand side over $\lambda \geq 0$ as well as to minimize it over $\alpha_i \geq 0$ for $i=1,\ldots,n$ subject to $\sum_{i=1}^n \alpha_i=1$. An equivalent problem is to minimize the right hand side of the inequality over $\lambda_i \geq 0$ for $i=1,\ldots,n$. The solution can be found by independently minimizing each term in the product $\prod_{i=1}^n$ above over the $\lambda_i$ that appears in it. The solution for $\lambda_i$ is a straightforward reapplication of the derivations in the proof of Theorem~\ref{thm:betterhetero}, \[ \lambda_i&=& \frac{1}{\mu_i-\tau}+\frac{\mu_i-L_i}{\sigma_i^2}-\frac{1}{\mu_i-L_i}\left ( 1 + {\cal W} \left ( \left ( \frac{\tau-L_i}{\mu_i-\tau} \right) e^{ \frac{\mu_i-L_i}{\mu_i-\tau} + \frac{(\mu_i-L_i)^2}{\sigma_i^2} -1 } \right) \right ) \] and shows that we must require $\tau \in (L_i,\mu_i)$ for every $i$ to obtain valid numerical solutions. To recover the optimal proportions for our budget, we merely compute $\alpha_i=\lambda_i / \sum_{j=1}^n \lambda_j$. Given a $\tau$, to recover the optimized upper bound on $\Phi$, insert the suggested values of $\lambda_i$ into the final bound above. Alternatively, rather than specifying a $\tau$ value, a user may prefer to specify how much deviation from the expected return they are willing to tolerate by selecting $t$. In that case, to avoid numerical problems, one must take $t \in (0,\min_i (\mu_i-L_i))$. Figure~\ref{fig:portfolio2} depicts the upper bound on the probability that we will obtain a return less than $\tau$ for 3 investments having a floor of 0 and $\mu_1=0.3030, \mu_2=0.2400, \mu_3=0.6178$ with $\sigma_1=0.2601,\sigma_2=0.5248,\sigma_3=0.7645$ (top panel). In the bottom panel, the optimal portfolio distribution is depicted across $\tau$ values. Figure~\ref{fig:portfolio1} depicts the upper bound on the probability that we will obtain a return less than $\tau$ for 4 investments having a floor of 0 and $\mu_1=0.1474, \mu_2=0.6088, \mu_3=0.1785, \mu_4=0.7585$ with $\sigma_1=0.0593,\sigma_2=0.6218,\sigma_3=0.2183,\sigma_4=0.4597$ (top panel). In the bottom panel, the optimal portfolio distribution is depicted across $\tau$ values.
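The closed-form allocation is short to implement. The sketch below (ours; the function name is our own) evaluates it for the three-investment example of Figure~\ref{fig:portfolio2} at a single value of $\tau$; as before, the exponential inside ${\cal W}$ can overflow double precision when some $(\mu_i-L_i)^2/\sigma_i^2$ is very large.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def portfolio_weights(mu, sigma2, L, tau):
    """Closed-form lambda_i, allocations alpha_i and the bound on Phi."""
    mu, sigma2, L = (np.asarray(a, float) for a in (mu, sigma2, L))
    assert np.all(L < tau) and tau < mu.min(), "need tau in (L_i, mu_i)"
    d = mu - L                                # spreads mu_i - L_i
    w_arg = (tau - L) / (mu - tau) * np.exp(d / (mu - tau)
                                            + d**2 / sigma2 - 1.0)
    lam = (1.0 / (mu - tau) + d / sigma2
           - (1.0 + lambertw(w_arg).real) / d)
    alpha = lam / lam.sum()                   # budget proportions
    # optimized upper bound on the under-performance probability
    terms = np.exp(-lam * (mu - tau)) * (
        sigma2 / d**2 * (np.exp(lam * d) - 1.0 - lam * d) + 1.0)
    return alpha, np.prod(terms)

# the three investments of the first portfolio figure, at tau = 0.2
mu    = [0.3030, 0.2400, 0.6178]
sigma = np.array([0.2601, 0.5248, 0.7645])
alpha, bound = portfolio_weights(mu, sigma**2, [0.0, 0.0, 0.0], tau=0.2)
print(alpha, bound)
\end{verbatim}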
\section{Conclusions} A new bound was proposed that characterizes the convergence of the average of independent bounded random variables towards its expected value. In the homogeneous case, the new bound is strictly sharper than Bennett's and Bernstein's inequalities and very often outperforms Hoeffding's inequality. The bound also readily applies in settings where the random variables are not identically distributed and may have heterogeneous values for their ranges, expected values and variances. In the heterogeneous case, the new bound sometimes dramatically outperforms the classical inequalities as well. The bound appears useful in portfolio optimization as well as potentially in other application areas. Deriving the bound involved the use of a peculiar transcendental function known as Lambert's ${\cal W}$ function. While Lambert's ${\cal W}$-function has been known since a 1779 paper by Leonhard Euler, it was only popularized in the 1980s. It may be helpful in the development of other concentration inequalities.
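In support of the computability claim made earlier, we note that the principal branch of ${\cal W}$ is available in standard scientific libraries and is also easy to evaluate directly. A minimal sketch of ours, via Newton's method on $we^w=z$ for $z>0$:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def lambert_w0(z, iters=50):
    """Principal branch of W via Newton iteration on w*exp(w) = z, z > 0."""
    w = np.log1p(z)            # starts above the root for z > 0
    for _ in range(iters):
        e = np.exp(w)
        w -= (w * e - z) / (e * (1.0 + w))   # Newton step
    return w

z = 1.794
print(lambert_w0(z), lambertw(z).real)       # the two agree closely
\end{verbatim}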
{ "timestamp": "2018-04-17T02:12:25", "yymm": "1804", "arxiv_id": "1804.05454", "language": "en", "url": "https://arxiv.org/abs/1804.05454" }
\section{Introduction} High energy physicists have been looking for physics beyond the standard model (SM) for several decades. This research has recently received a new impulse from the Higgs discovery\cite{cms,atl} and from the data of the semi-leptonic decays $B \to D^{(*)} \tau \nu_{\tau}$[3-8] and $B\to K^*\ell^+\ell^-$\cite{lhcb1,lhcb2}\footnote{$\ell$ denotes a light lepton, unless otherwise stated.}, which have exhibited strong tensions with the SM predictions[11-13]. Indeed, the SM entails lepton flavor universality (LFU), which seems to be contradicted by the measurements of the observables \begin{equation} R_{D^{(*)}} = \frac{{\cal B}(B\to D^{(*)} \tau \bar{\nu}_{\tau})}{{\cal B}(B\to D^{(*)} \ell \bar{\nu}_{\ell})} \ ~~~ \mathrm{and} \ ~~~ R_{K^*} = \frac{{\cal B}(B\to K^*\mu^+\mu^-)}{{\cal B}(B\to K^*e^+e^-)}. \label{ratb} \end{equation} It is important to notice that these quantities attenuate the biases related to the experimental efficiency, to the values of the CKM matrix elements $V_{cb}$ and $V_{sb}$ and to the theoretical uncertainties of the form factors (FF); therefore they appear especially suitable for singling out new physics (NP) effects. In the present article, we are mainly concerned with the experimental results of the $B \to D^{(*)} \tau \nu_{\tau}$ decays, about which some authors have performed model independent analyses[14-23], while others have interpreted them in terms of specific NP models, like the two-Higgs-doublet[24-27] (2HD), leptoquark[14,28-34] (LQ), left-right symmetric\cite{hv1,alt} (LR) or extra-dimension\cite{bsw} models. The anomaly has been connected to the leptonic $B$ and $B_c$ decays to $\tau \bar{\nu}_{\tau}$[24-27,35] and a new light has been cast on the muon anomalous magnetic moment\cite{chn,cai} (see also ref. 38). All this spurs further searches for confirmations of NP. In this sense, the $\Lambda_b$ decays to $\Lambda\ell^+\ell^-$\cite{dm,gu} and to $\Lambda_c\tau \nu_{\tau}$[41-47], as well as the decays $B_c\to J/\psi (\eta_c) \tau \nu_{\tau}$\cite{dub}, could give definitive confirmations of NP, in particular of LFU violation (LFUV); indeed, these presumably share the same basic processes as the two above mentioned $B$ decays. In the present paper, we consider the baryonic decay \begin{equation} \Lambda_b \to \Lambda_c \tau^- \bar{\nu}_{\tau}, \label{slh} \end{equation} to which a previous letter\cite{dsa} was dedicated. Here we give a more in-depth, model independent analysis of this decay, which we compare to the $\Lambda_b \to \Lambda_c \ell \nu_{\ell}$ one. More precisely, we limit ourselves to spin-independent observables and analyze the NP dependence of suitable dimensionless ratios, among which, analogously to (\ref{ratb}), \begin{equation} R_{\Lambda_c} = \frac{{\cal B}(\Lambda_b \to \Lambda_c \tau \bar{\nu}_{\tau})}{{\cal B}(\Lambda_b\to \Lambda_c \ell \bar{\nu}_{\ell})}. \label{ratl} \end{equation} To this end, we propose for the NP interaction five different dimension 6 operators, chosen according to the most frequently used models - typically, the above mentioned 2HD, LQ or LR - and similarly to other model independent analyses\cite{swd,dut,dt2}. The main differences from the previous studies consist of imposing more stringent constraints on the NP effects, also by taking into account some analyses of the semi-leptonic $B$ decays\cite{faj2,iv,tw}, and of introducing a particular criterion for discriminating among the different NP interactions.
Moreover, in order to probe the FF dependence of our predictions, we consider five different alternatives. We find that, while the partial decay width depends rather strongly on the FF, the above mentioned ratios depend much more mildly on them. On the contrary, our prediction for $R_{\Lambda_c}$ is quite different from those of other authors[49-56]. However, as we shall see, there are reasons for assuming that the rest of the analysis is independent of this discrepancy. As a last subject, we single out a differential observable which allows us, in principle, to distinguish between two of the most likely NP interactions. Sect. 2 summarizes our assumptions, including the above mentioned criterion. In sect. 3, we deduce, in the covariant formalism, the general formulae for the matrix elements; we also introduce the various FF, for which we give a short review of previous contributions. In sect. 4, we sketch the expressions of the differential and partial widths of the decays of interest. In sect. 5, we show predictions of the partial decay widths, both according to the SM and to our assumptions about NP. Sect. 6 is devoted to illustrating the constraints on the various NP couplings. Sect. 7 is dedicated to a discussion of our results, in light of our criterion, and to a review of previous analyses. In sect. 8, we exhibit the predictions of the differential decay widths according to two different NP interactions, suggesting a new observable, sensitive to the differences between them. Lastly, some conclusions are presented in sect. 9. \section{Assumptions} We list here our assumptions, five in all. The first four are shared by other authors, whereas the fifth one is the above mentioned criterion. 1) The NP process entails LFUV; therefore it does not act on $\tau$ in the same way as on the light leptons. As a simplifying assumption, the NP does not affect the electron and the muon at all. 2) The basic process that gives rise to the NP in the semi-leptonic decays (\ref{slh}) and $B \to D^{(*)} \tau \nu_{\tau}$ consists uniquely of $b \to c \tau \nu_{\tau}$ and does not involve any spectator partons. This is a consequence of the short range of the would-be NP interaction, whose intermediate boson is estimated to have a mass of the order of 1 $TeV$\cite{bln,frt,bdn,choh}. As we shall see, this has important consequences. 3) Only one type of interaction $-$ scalar, vector, {\it etc.} $-$ is present in the effective lagrangian. 4) The double ratio \begin{equation} R_{\Lambda_c}^{ratio} = R_{\Lambda_c}/R_{\Lambda_c}^{SM} \end{equation} depends only mildly on the FF. This assumption is supported by the analyses of the semi-leptonic $\Lambda_b$\cite{swd,lyz} and $B$[3-8,28,29] decays. In particular, according to refs. 28 and 58, one finds \begin{equation} R_D^{ratio} = R_D^{exp}/R_D^{SM} = 1.30\pm0.17, \ ~~~~~ \ \ ~~~~~ R_{D^*}^{ratio} = R_{D^*}^{exp}/R_{D^*}^{SM} = 1.25\pm0.08, \label{rat-exp} \end{equation} quite compatible with each other. Further arguments will be given below. 5) Lastly, given the reliability of the SM at present energies, the NP term is only a perturbation of the known amplitude for the decay considered. Therefore, we favor the interactions whose effective couplings are much smaller than the Fermi constant, $G = 1.166379 \cdot 10^{-5} GeV^{-2}$.
Taking into account the more restrictive of the results (\ref{rat-exp}), the first four assumptions imply immediately that \begin{equation} R_{\Lambda_c} = \xi \frac{\Gamma_{\tau}^{SM}}{\Gamma_{\ell}^{SM}}, \ ~~~~~ \ \ ~~~~~ \ \xi = 1.25 \pm 0.08. \label{rat-prd} \end{equation} Here $\Gamma_{\tau(\ell)}$ is the partial width of the decay $\Lambda_b \to \Lambda_c \tau^- \bar{\nu}_{\tau} (\ell^- \bar{\nu}_{\ell})$; according to our prediction, $\Gamma_{\tau}$ turns out to be \begin{equation} \Gamma_{\tau} = \xi \Gamma_{\tau}^{SM}. \label{gam-tau} \end{equation} \section{Matrix Element of the Decay} \subsection{SM and NP Amplitudes} We consider the matrix element for the decay $\Lambda_b \to \Lambda_c \ell{\bar \nu}_{\ell}$\footnote{In this section and in the next one, $\ell$ denotes either $\tau$ or a light lepton.}. To this end, we set, in quite a general way, \begin{equation} {\cal M} = V_{cb} \frac{G}{\sqrt{2}} (J^L_{\mu} j^{\mu}+ g_r {\cal I}). \label{matlm} \end{equation} Here ${\cal I}$ is the NP interaction and \begin{equation} g_r = x e^{i\varphi} \end{equation} the corresponding relative coupling\cite{swd}, with $x$ and $\varphi$ real, $x$ $>$ 0. We consider five types of effective dimension 6 operators, according to the most frequently used models: \begin{equation} {\cal I} = J^L_{\mu} j^{\mu}, ~~ J^R_{\mu} j^{\mu}, ~~ J^S j, ~~ J^P j, ~~ J^H j. \end{equation} Here \begin{eqnarray} j_{\mu} &=& {\bar u}_{\ell}\gamma_{\mu}(1-\gamma_5)v, \ ~~~~~ \ \ ~~~~~ \ \ ~~~~~ \ \ ~~ \ j = {\bar u}_{\ell}(1-\gamma_5)v, \\ J^{L(R)}_{\mu} &=& \langle\Lambda_c|{\bar c}\gamma_{\mu}(1\mp\gamma_5)b |\Lambda_b\rangle,\ ~~~~~ \ \ ~~~~~ \ J^S = \langle\Lambda_c|{\bar c} b |\Lambda_b\rangle, \\ J^P &=& \langle\Lambda_c|{\bar c} \gamma_5 b |\Lambda_b\rangle, \ ~~~~~ \ \ ~~~~~ \ \ ~~~~~ \ ~~~ \ J^H = J^S - J^P \end{eqnarray} and $u_{\ell}$ and $v$ are the four-spinors of the charged lepton and of the anti-neutrino respectively; lastly, $L$, $R$, $S$, $P$ and $H$ denote, respectively, left-handed vector, right-handed vector, scalar, pseudo-scalar and $S-P$-interaction. \subsection{Form Factors} The most general expressions of the vector and axial hadronic currents are \begin{eqnarray} \langle\Lambda_c|{\bar c}\gamma_{\mu}b |\Lambda_b\rangle &=& \bar{u_f} V_{\mu}u_i = \bar{u_f} (f_1 \gamma_{\mu} + f_2 i\sigma_{\mu\nu} q^{\nu} + f_3 q_{\mu})u_i, \label{ffv} \\ \langle\Lambda_c|{\bar c}\gamma_{\mu}\gamma_5 b |\Lambda_b\rangle &=& \bar{u_f} A_{\mu}\gamma_5 u_i = \bar{u_f} (g_1 \gamma_{\mu} + g_2 i\sigma_{\mu\nu} q^{\nu} + g_3 q_{\mu}) \gamma_5 u_i. \label{ffa} \end{eqnarray} Here the $f_i$ and the $g_i$ ($i$ = 1,2,3) are functions of $q^2$, $u_{i(f)}$ the four-spinor of the initial (final) baryon, \begin{equation} q = p_i-p_f = p_{\ell}+p \end{equation} and $p_{i(f)}$, $p_{\ell}$ and $p$ are, respectively, the four-momenta of the baryons, of the charged lepton and of the anti-neutrino. Using the equations of motion (eom), the operators $V_{\mu}$ and $A_{\mu}$, which appear in Eqs. (\ref{ffv}) and (\ref{ffa}), can be re-written as (see Appendix A) \begin{equation} V_{\mu} = X_0 \gamma_{\mu} + f_2 P_{\mu} + f_3 q_{\mu}, \ ~~~~~ \ A_{\mu} = Y_0 \gamma_{\mu} + g_2 P_{\mu} + g_3 q_{\mu}, \end{equation} where \begin{equation} X_0 = f_1-(m_i+m_f)f_2, ~~~~~ Y_0 = g_1+(m_i-m_f)g_2, ~~~~~ P = p_i+p_f \end{equation} and $m_{i(f)}$ is the mass of the initial (final) baryon: $m_i$ = 5.619 $GeV$, $m_f$ = 2.286 $GeV$.
Moreover, as regards the (pseudo-)scalar currents, the eom imply\cite{dsa} \begin{equation} J^S = \frac{q^{\mu}}{\delta m_Q}{\bar u}_f V_{\mu} u_i, ~~~~~ J^P = -\rho\frac{q^{\mu}}{\delta m_Q}{\bar u}_f A_{\mu} u_i, \end{equation} with \begin{equation} \delta m_Q = m_b-m_c, ~~~~~ \rho = \frac{m_b-m_c}{m_b+m_c} \sim 0.53, \label{2hdm} \end{equation} $m_b$ = 4.18 $GeV$ and $m_c$ = 1.28 $GeV$ being the masses of the $b$- and $c$-quark respectively. \subsubsection{A Short Review} Different techniques have been adopted for determining the FF of the decay (\ref{slh}): - lattice calculation\cite{dm1,dt2}, approximated by an analytical expression\cite{dut}; - quark models: constituent\cite{pv}, covariant\cite{guf}, diquark\cite{fgk} and heavy quark Isgur-Wise\cite{klw,lhcb4} (IW) model; - sum rules (SR), both in pole approximation\cite{dec,swd,lyz} and in full QCD\cite{azs}. \subsubsection{Present Analysis} The five different FF we use here are based on some approximations, generally accepted for the heavy quark transition $b\to c$\cite{dec}: \begin{equation} f_1 = g_1, \ ~~~~~ \ f_2 = g_2 = A, \ ~~~~~ \ f_3 = g_3 = 0. \label{hff} \end{equation} In particular, the first FF is of the IW type\cite{klw} and the remaining four are based on the SR\cite{dec,swd,lyz}. The IW FF reads as \begin{eqnarray} f_1 (q^2) &=& \zeta_0 [\omega(q^2)] = 1 -1.47[\omega(q^2)-1]+0.95[\omega(q^2)-1]^2, \label{iwff0} \\ \omega(q^2) &=& \frac{m_i^2+m_f^2-q^2}{2m_im_f}; \ ~~~~ ~~~~ \ f_2 (q^2) = 0. \label{iwff} \end{eqnarray} Incidentally, it is worth noticing that this is quite compatible with the bounds determined by the recent analysis of $\Lambda_b \to \Lambda_c \mu \nu_{\mu}$ data\cite{lhcb4}. The parametrizations of the SR FF are reported in Table 1. \begin{table*} \begin{center} \caption{The four different FF inferred from sum rules: $f_1$ is dimensionless, $f_2$ is expressed in $GeV^{-1}$ and $q^2$ in $GeV^2$.} \begin{tabular}{|c|c|c|c|c|} \hline\hline $~~~~~~~~~~$&$~~~~SR1~~~~$&$~~~~SR2~~~~$&$~~~~SR3~~~~$&$~~~~SR4~~~~$ \\ \hline\hline $f_1(q^2)$ & 6.66/(20.27 - $q^2$) & 8.13/(22.50 - $q^2$) & 13.74/(26.68 - $q^2$) & 16.17/(29.12 - $q^2$) \\ $f_2(q^2)$ & -0.21/(15.15 - $q^2$) & -0.22/(13.63 - $q^2$) & -0.41/(18.65 - $q^2$) & -0.45/(19.04 - $q^2$) \\ \end{tabular} \label{tab:one} \end{center} \end{table*} \section{Decay Width} \subsection{Derivation of Basic Formulae} The observables that we study in this paper are derived from \begin{equation} d\Gamma = \frac{1}{2m_i} \sum|{\cal M}|^2 d\Phi. \label{ddw} \end{equation} Here $d\Phi$ is the phase space and the symbol $\sum$ denotes the average over the polarization of the initial baryon and the sum over the polarizations of the final particles. We have \begin{equation} \sum |{\cal M}|^2 = |V_{cb}|^2 \frac{G^2}{2} [T_{SM} + 2x \Re(T_I e^{-i\varphi}) + x^2 T_N]. \label{modsq} \end{equation} Here $T_{SM}$ is the SM contribution, \begin{equation} T_{SM} = \sum H_{\mu\nu} \ell^{\mu\nu}, \ ~~~~~ \ H_{\mu\nu} = J^L_{\mu} J^{L*}_{\nu}, \ ~~~~~ \ \ell_{\mu\nu} = j_{\mu} j^*_{\nu}. \end{equation} As to the terms $T_I$ and $T_N$, they correspond, respectively, to the interference between the SM and the NP amplitude and to the modulus square of the NP amplitude.
Specifically, we have \begin{eqnarray} T_I^L &=& T_N^L = T_{SM}, \ ~~~ \ T_I^R = \sum J^L_{\mu} J^{R*}_{\nu}\ell^{\mu\nu}, \ ~~~ \ T_N^R = \sum J^R_{\mu} J^{R*}_{\nu}\ell^{\mu\nu}, \\ T_I^{S(P)} &=& \sum J^L_{\mu} J^{S(P)*} j^{\mu}j^*, \ ~~~ \ \ ~~~ \ T_N^{S(P)} = \sum J^{S(P)} J^{S(P)*} jj^*, \\ T_I^H &=& \sum J^L_{\mu} J^{H*} j^{\mu}j^*, \ ~~~ \ \ ~~~ \ \ ~~~ \ T_N^H = \sum J^H J^{H*} jj^*, \end{eqnarray} the upper indices denoting the various NP interactions. In the present paper we are not concerned with spin; therefore we consider an unpolarized initial baryon. A standard calculation in the covariant formalism leads to \begin{align} T_{SM} &= 2^5 \{(X_0+Y_0)^2h_1 + (X_0-Y_0)^2h_2 + (Y_0^2-X_0^2)h_3 \ ~~~~~ ~~~~~ \ \nonumber \\ \ ~~~~~ \ &+ A[m_f(X_0+Y_0){\cal L}_i+ m_i(X_0-Y_0){\cal L}_f]+A^2p_f\cdot p_i ~ {\cal L}_P\}, \label{smc} \end{align} where $A$ is defined by the second Eq. (\ref{hff}) and \begin{eqnarray} h_1 &=& p_f\cdot p_{\ell} ~ p_i \cdot p, \ ~~~~~ \ h_2 = p_f \cdot p ~ p_i\cdot p_{\ell}, \ ~~~~~ \ h_3 = m_i m_f ~ p \cdot p_{\ell}, \\ {\cal L}_{i(f)} &=& p_{i(f)}\cdot p_{\ell} ~ P\cdot p + p_{i(f)}\cdot p ~ P\cdot p_{\ell}-p_{i(f)}\cdot P ~ p\cdot p_{\ell}, \\ {\cal L}_P &=& 2p_{\ell}\cdot P ~ p\cdot P - P^2 ~ p \cdot p_{\ell}. \end{eqnarray} As regards the remaining terms, one has \begin{eqnarray} T_I^R &=& 2^6 \{(X_0^2-Y_0^2)(k_1+k_2) - (X_0^2+Y_0^2)k_3 \nonumber \\ &+& A[m_f(X_0+Y_0){\cal L}_i+ m_i(X_0-Y_0){\cal L}_f] + A^2 m_f m_i{\cal L}_P\}, \\ T_N^R &=& 2^5 \{(X_0-Y_0)^2k_1 +(X_0+Y_0)^2k_2 + (Y_0^2-X_0^2)k_3 \nonumber \\ &+& A[m_f(X_0+Y_0){\cal L}_i+ m_i(X_0-Y_0){\cal L}_f] \nonumber + A^2p_f\cdot p_i{\cal L}_P\}, \\ T_I^S &=& 2^5 \frac{m_{\ell}}{\delta m_Q} [X_0^2(k_1+k_2)+A X_0(k_3+k_4)+A^2 p\cdot P ~ q\cdot P ~ k_+], \\ T_N^S &=& 2^4 \frac{p_{\ell}\cdot p}{(\delta m_Q)^2}[X_0^2(k_5+k_6)+A X_0(k_7+k_8)+A^2 (q\cdot P)^2 ~ k_+], \\ T_I^P &=& 2^5\frac{m_{\ell}}{\delta m_Q} [Y_0^2(-k_1+k_2)+A Y_0(k_3-k_4)-A^2 p\cdot P ~ q\cdot P ~ k_-], \\ T_N^P &=& 2^4\frac{p_{\ell}\cdot p}{(\delta m_Q)^2}[Y_0^2(k_5-k_6)+ A Y_0(-k_7+k_8) + A^2 (q\cdot P)^2 ~ k_-], \\ T_I^H &=& T_I^S + \rho T_I^P, \ ~~~~~ \ T_N^H = T_N^S + \rho^2 T_N^P. \end{eqnarray} Here \begin{eqnarray} k_1 &=& p\cdot p_f ~ q\cdot p_i+ p\cdot p_i ~ q\cdot p_f-p\cdot q ~ p_f\cdot p_i, \ ~~~ \ ~~~ \ k_2 = m_i m_f ~ p\cdot q, \\ k_3 &=& m_i(p\cdot p_f ~ q\cdot P + p\cdot P ~ q\cdot p_f), \ ~~~ \ k_4 = m_f(p\cdot p_i ~ q\cdot P + q\cdot p_i ~ p\cdot P), \\ k_5 &=& 2q^2 ~ p_f\cdot q ~ p_i\cdot q ~ p_i\cdot p_f, \ ~~~ \ \ ~~~ \ \ ~~~~ \ k_6 = m_i m_f ~ q^2, \\ k_7 &=& m_i ~ p_f\cdot q ~ P\cdot q, \ ~~~ \ \ ~~~~~ \ \ ~~~ \ \ ~~~~~ \ k_8 = m_f ~ p_i\cdot q ~ P\cdot q, \\ k_+ &=& p_i\cdot p_f + m_i m_f, \ ~~~ \ \ ~~~ \ \ ~~~ \ \ ~~~ \ \ ~ \ k_- = p_i\cdot p_f - m_i m_f. \end{eqnarray} \subsection{Differential and Partial Decay Width} The integration over the phase space is suitably performed by fixing a reference frame at rest with respect to $\Lambda_b$; to this end, it is also worth recalling the relation of $q^2$ to the energy $E_f$ of the final baryon in that frame: \begin{equation} q^2 = m_i^2+m_f^2-2 m_iE_f. \label{q2} \end{equation} After integrating Eq. (\ref{ddw}) over the angular variables, the differential decay width reads as\cite{dsa} \begin{equation} \frac{d\Gamma_{\ell}}{dq^2} = \frac{1}{2^7 \pi^3 m_i^2} \int_{E_{\ell}^-}^{E_{\ell}^+} dE_{\ell}\sum|{\cal M}|^2.
\label{ddw1} \end{equation} Here $E_{\ell}$ is the energy of the charged lepton in the above mentioned frame and \begin{eqnarray} E_{\ell}^{\pm}&=& \frac{b\pm\sqrt{\Delta}}{2q^2}, \ ~~~~~ \ \ ~~~~~ \ \Delta = b^2+4q^2c, \label{zrs} \\ b &=& 2m_iE_f^2 - (2m_i^2+M^2)E_f+M^2m_i, \ ~~~~~ \ \ ~~~~~ \ \\ c &=& -(m_i^2+m^2_{\ell})E_f^2+m_iM^2E_f+m_f^2 m^2_{\ell}-\frac{1}{4}M^4, \label{coef2} \ ~~~~~ \ \\ M^2 &=& m_i^2+m_f^2+m_{\ell}^2; \ ~~~~~ \ \ ~~~~~ \ \ ~~~~~ \ \ ~~~~~ \ \label{coef3} \end{eqnarray} moreover, $m_{\ell}$ = 0.106 $GeV$ for $\ell$ = $\mu$ and 1.777 $GeV$ for $\ell$ = $\tau$. The partial decay width is obtained by integrating Eq. (\ref{ddw1}) over $q^2$: \begin{equation} \Gamma_{\ell} = \int_{q^2_-}^{q^2_+}dq^2\frac{d\Gamma}{dq^2}. \label{pdw1} \end{equation} Here the limits $q^2_{\pm}$ are related, through Eq. (\ref{q2}), respectively to $E_f$ = $m_f$ and $E_f$ = $E_f^m$, with \begin{equation} E_f^m = \sqrt{m_f^2+ p_m^2}, \ ~~~~~ \ \ ~~~~~ \ p_m = \frac{1}{2}(m_i-m_{\ell}-\frac{m_f^2}{m_i-m_{\ell}}).\label{lmf} \end{equation} For later convenience, we re-write the partial decay width, Eq. (\ref{pdw1}), as \begin{equation} \Gamma_{\ell} = \Gamma_{\ell}^{SM}+2 x \cos\varphi \Gamma_{\ell}^I + x^2 \Gamma_{\ell}^N. \label{meq} \end{equation} Here, taking account of Eqs. (\ref{ddw1}) and (\ref{modsq}), we have \begin{equation} \Gamma_{\ell}^{SM} = \frac{|V_{cb}|^2}{2^7 \pi^3 m_i^2} \frac{G^2}{2} \int_{q^2_-}^{q^2_+}dq^2 \int_{E_{\ell}^-}^{E_{\ell}^+} dE_{\ell} T_{SM}, \label{meq1} \end{equation} similar expressions holding for $\Gamma_{\ell}^I$ and $\Gamma_{\ell}^N$, with $T_I$ and $T_N$ in place of $T_{SM}$. A check of the formulae used is given in Appendix B, where, in particular, the expression of $\Gamma_{\ell}^{SM}$ is compared with the well-known formula of the muon decay. \section{Predictions of Partial Decay Widths} \begin{table*} \begin{center} \caption{$\Gamma_{\mu}^{SM}$, $\Gamma_{\tau}^{SM}$ (in $\mu eV$) and the ratio $R_{\Lambda_c}^{SM}$ = $\Gamma_{\tau}^{SM}/\Gamma_{\mu}^{SM}$, for the five different FF considered.} \begin{tabular}{|c|c|c|c|} \hline\hline $~~~~~FF~~~$&$~~~~\Gamma_{\mu}^{SM}~~~~$&$~~~~\Gamma_{\tau}^{SM}~~~~$&$~~~~R_{\Lambda_c}^{SM}~~~~$ \\ \hline\hline IW & 31.6 & 5.63 & 0.18 \\ SR1 & 10.8 & 1.95 & 0.18 \\ SR2 & 11.5 & 1.80 & 0.16 \\ SR3 & 22.1 & 3.40 & 0.15 \\ SR4 & 24.5 & 3.61 & 0.15 \\ \end{tabular} \label{tab:two} \end{center} \end{table*} Table 2 shows the values of $\Gamma_{\mu}^{SM}$ and $\Gamma_{\tau}^{SM}$, calculated by means of Eq. (\ref{meq1}), and the ratio $R_{\Lambda_c}^{SM}$ = $\Gamma_{\tau}^{SM}/\Gamma_{\mu}^{SM}$, for the five different FF considered in the article. It can be seen that the SM results for the partial widths depend strongly on the FF. In particular, as regards $\Gamma_{\mu}^{SM}$, the IW FF gives the best approximation of the experimental value, {\it i. e.}\cite{pdg}, \begin{equation} \Gamma_{\ell}^{exp} = (29.5_{-11.4}^{+14.5}) \mu eV. \label{exp} \end{equation} Our result agrees also with the numerical value given in ref. 53. Instead, two of the SR FF differ from this value by more than one standard deviation and they probably need an overall normalization factor. However, we consider in the present article mainly ratios of dimensional quantities, which appear to depend only mildly on the FF. A first example is offered by the ratio $R^{SM}_{\Lambda_c}$, listed in the last column of Table 2. This table and Eq. (\ref{rat-prd}) entail a prediction for $R_{\Lambda_c}$.
Indeed, averaging over the five values yields \begin{equation} {\bar R}_{\Lambda_c}^{SM} = 0.164\pm 0.006, \ ~~~~~ \ {\bar R}_{\Lambda_c} = 0.205 \pm 0.013\pm 0.008.\label{rat-prd1} \end{equation} Here the former ratio is affected only by the systematic error caused by the FF uncertainty, while for the latter also the statistical one (0.013) has to be accounted for. The smallness of the theoretical error confirms assumption 4). The IW FF deserves particular attention. First of all, it allows an immediate check of our formula (\ref{meq1}) against the expression of the well-known muon decay width, as shown in Appendix B. Secondly, it yields, for $R_{\Lambda_c}^{SM}$ and for the other dimensionless quantities considered in our article, results that are similar to those obtained with the SR FF, although structurally different. On the contrary, our result for $R_{\Lambda_c}^{SM}$ is considerably smaller than those given by other authors. Indeed, such values span from 0.26\cite{lyz} to 0.38\cite{pv}, being concentrated, in recent years, between 0.31 and 0.34\cite{gu1,dm1,dut,fgk,swd,dt2,azs}. Refs. 52 and 56 give more complete reviews of these results. In any case, the analysis presented in the following sections is presumably independent of such a discrepancy, as it is based on the ratios $\chi$ and $r_{\pm}$ (Eqs. (\ref{cc1}) and (\ref{rr1}) respectively), which depend exclusively on the decay (\ref{slh}). \section{Couplings of the Various NP Interactions} \subsection{Argand Diagrams for the NP Couplings} \begin{table*} \begin{center} \caption{Values of $\Gamma_{\tau}^I$ and $\Gamma_{\tau}^N$ (in $\mu eV$) for $S$, $P$ and $R$-interactions and for the five different FF} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\hline $~~~~~FF~~~$&$~~~~\Gamma_{\tau}^{I,S}~~~~$&$~~~~\Gamma_{\tau}^{I,P}~~~~$&$~~~~\Gamma_{\tau}^{I,R}~~~~$&$~~~~\Gamma_{\tau}^{N,S}~~~~$&$~~~~\Gamma_{\tau}^{N,P}~~~~$&$~~~~\Gamma_{\tau}^{N,R}~~~~$ \\ \hline\hline IW~ & 1.28 & 0.26 & -3.32 & 2.26 & 0.44 & 5.63 \\ SR1 & 0.58 & 0.12 & -0.67 & 1.03 & 0.19 & 1.95 \\ SR2 & 0.60 & 0.12 & -0.39 & 1.06 & 0.20 & 1.80 \\ SR3 & 0.99 & 0.20 & -1.21 & 1.75 & 0.34 & 3.40 \\ SR4 & 1.05 & 0.22 & -1.25 & 1.86 & 0.36 & 3.61 \\ \end{tabular} \label{tab:three} \end{center} \end{table*} Table 3 provides the values of $\Gamma_{\tau}^I$ and $\Gamma_{\tau}^N$ for the $S$-, $P$- and $R$-interaction, calculated by Eq. (\ref{meq}) together with the equations analogous to (\ref{meq1}). The parameters corresponding to the $H$-interaction can be deduced from the following linear combinations: \begin{equation} \Gamma_{\tau}^{I,H} = \Gamma_{\tau}^{I,S} - \rho \Gamma_{\tau}^{I,P}, ~~~~~ \ ~~~~~ \Gamma_{\tau}^{N,H} = \Gamma_{\tau}^{N,S} + \rho^2 \Gamma_{\tau}^{N,P}. \label{twhg} \end{equation} As regards the $L$-interaction, we have the re-scaling \begin{equation} |1+x_L e^{i\varphi}|^2 = \xi, \label{lhd} \end{equation} independent of the FF. Eq. (\ref{meq}) yields, together with Eq. (\ref{gam-tau}), a relation between $x$ and $\varphi$.
Taking account of the statistical and systematic errors, the allowed region consists of a circular crown (annulus) in the Argand plane of the coupling $g_r$, centered at \begin{equation} g_c \equiv (\chi, 0), \ ~~~~ \ ~~~~ \ \chi = -\frac{\Gamma_{\tau}^I}{\Gamma_{\tau}^N} \label{cc1} \end{equation} and with radii \begin{equation} r_{\pm} = \frac{\sqrt{\Delta_{r\pm}}}{\Gamma_{\tau}^N}, ~~~~ \ ~~~~ \ \Delta_{r\pm} = (\Gamma_{\tau}^I)^2 + (\Gamma_{\tau\pm} -\Gamma_{\tau}^{SM})\Gamma_{\tau}^N; \label{rr1} \end{equation} here $\Gamma_{\tau\pm}$ takes into account the statistical error of $\xi_{\pm}$, Eq. (\ref{rat-prd}), and the systematic one, related to the FF. Exceptionally, the latter is absent for the $L$-interaction, as Eq. (\ref{lhd}) entails, independent of the FF, \begin{equation} g_c \equiv (-1, 0), ~~~~ r_{\pm} = \sqrt{\xi_{\pm}}. \label{crr} \end{equation} The mean values and the statistical and systematic errors of the radii and the coordinates of the centers of the Argand diagrams are listed in Table 4. Again, we note the small theoretical errors of the parameters, which reflect the mild FF dependence. \subsection{Remarks} Two remarks are in order for the case of $\varphi$ = $\pm\pi/2$, where the interference between the SM amplitude and the NP one vanishes. - Firstly, note that comparing the results of Table 2 and of Table 3 yields \begin{equation} \Gamma_{\tau}^{N,R} = \Gamma_{\tau}^{SM}; \label{rel23} \end{equation} this is a consequence of the integration of Eq. (\ref{ddw}) over the phase space, which washes out the interference term between the vector and the axial current. Therefore we have, again independent of the FF, \begin{equation} x_R(\pm\pi/2) = x_L(\pm\pi/2) = 0.50\pm 0.04. \label{rel33} \end{equation} - Secondly, if one considers the possibility of decays $\Lambda_b \to \Lambda_c \tau^- \bar{\nu}_{\ell}$, with $\ell$ = $e$, $\mu$, $\tau$\cite{tw}, the coupling strength for $\ell$ = $\mu$ and $e$ can be inferred just for $\varphi$ = $\pm\pi/2$. \begin{table*} \begin{center} \caption{The mean values of the radii and of the centers of the Argand diagrams for the relative couplings $g_r$. $\bar{r}$ is affected by both a statistical and a systematic error, quoted in this order.} \begin{tabular}{|c|c|c|c|c|c|} \hline\hline $~~~~~~~~$&$~~~~S~~~~$&$~~~~P~~~~$&$~~~~H~~~~$&$~~~~L~~~~$&$~~~~R~~~~$ \\ \hline\hline $\bar{r}$ & 0.90$\pm$0.04$\pm$0.02 & 3.21$\pm$0.20$\pm$0.08 & 0.83$\pm$0.04$\pm$0.02 & 1.12$\pm$0.02 & 0.65$\pm$0.03$\pm$0.04 \\ $g_c$ & (-0.56, 0) & (-1.12, 0) & (-0.48, 0) & (-1.0, 0) & (0.37$\pm$0.10, 0) \\ \end{tabular} \label{tab:four} \end{center} \end{table*} \subsection{Relative Strengths of the NP Interactions} In order to compare the strengths of the various NP interactions, we may, for example, calculate their minimal values. These occur at $\varphi$ = 0, except for the $R$-interaction, for which one has to set $\varphi$ = $\pi$. This singular behavior is due to the negative value of $\Gamma_{\tau}^{I,R}$ (see Table 3), which induces, through Eq. (\ref{cc1}), a real positive value of $\chi$, and to the positivity of $x_{min} = r-|\chi|$, which follows from Eqs. (\ref{cc1}) and (\ref{rr1}). As we shall see in a moment, this anomaly is connected to a strong limitation on the phase $\varphi$. The values of $x_{min}$ $-$ once more only mildly FF dependent $-$ are listed in Table 5.
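As a cross-check, the entries of Table 4 follow directly from Tables 2 and 3 through Eqs. (\ref{cc1}) and (\ref{rr1}). The short script below (ours, written for the $S$-interaction only; the arrays are transcribed from the tables and the central value of $\xi$ is taken from Eq. (\ref{rat-prd})) reproduces the center and the mean radius of the corresponding Argand diagram.
\begin{verbatim}
import numpy as np

# Gamma_SM, Gamma_I, Gamma_N for the S-interaction (in mu eV); rows are
# the IW, SR1, SR2, SR3, SR4 form factors, transcribed from Tables 2-3.
G_SM = np.array([5.63, 1.95, 1.80, 3.40, 3.61])
G_I  = np.array([1.28, 0.58, 0.60, 0.99, 1.05])
G_N  = np.array([2.26, 1.03, 1.06, 1.75, 1.86])
xi = 1.25                                    # Eq. (rat-prd), central value

chi = -G_I / G_N                             # centers, Eq. (cc1)
r = np.sqrt(G_I**2 + (xi - 1.0) * G_SM * G_N) / G_N   # radii, Eq. (rr1)

print("chi:", chi, "mean:", chi.mean())      # about -0.56, as in Table 4
print("r:  ", r,   "mean:", r.mean())        # about 0.90, as in Table 4
print("x_min = r - |chi|:", r - np.abs(chi))
\end{verbatim}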
\subsection{Constraints from $B\to D^{(*)} \tau {\bar \nu}_{\tau}$ Decays} Now we exhibit the phase limitations implied by our analysis, when combined with the analogous ones performed on the semi-leptonic $B$ decays\cite{faj2,dt1,tw,iv,ddt}, especially the most recent ones\cite{iv}. - As regards the $L$-interaction, the agreement with all of the previous papers is trivial, because the NP term just re-scales the SM interaction. - As shown before, the minimum value of $x$ for the $R$-interaction occurs at $\varphi$ = $\pi$. This property is shared by the $B \to D^* \tau \nu_{\tau}$ decay, whereas the $B \to D \tau \nu_{\tau}$ decay indicates that the minimum occurs at $\varphi$ = 0\cite{tw}. Therefore the allowed region for the coupling amounts to the intersection between two circular crowns whose centers are considerably far from each other, which strongly restricts the range of values of the phase; precisely, the previous analyses select a narrow neighborhood of $\varphi$ = $\pm\pi/2$\cite{faj2,tw,iv}. But, as shown above, in this case one has $x$ = 0.50 $\pm$ 0.04. - Also the $H$-interaction exhibits strong limitations on its phase. Indeed, we have to take into account the results of Table 4, together with those of refs. 19, that is, $g_c$ = (-0.76, 0.0), $r$ = 1.03 $\pm$ 0.25\footnote{Ivanov {\it et al.}\cite{iv}, private communication.}, together with a bound on the phase\cite{tw,iv}. Then, the allowed region of the Argand plane amounts to two very small intervals around \begin{equation} \varphi = \pm 2.18 ~ rad, \label{rel47} \end{equation} as illustrated in Fig. 1. - Lastly, the $S+P$-interaction is excluded by recent analyses\cite{iv}; similarly, the tensor interaction does not find appreciable room\cite{tw,iv}. \begin{figure} \centering \includegraphics[width=0.70\textwidth] {corone4.jpg} \caption{$H$-interaction: the Argand diagram for the relative coupling $g_r$. The thinner of the two circular crowns is inferred from our analysis, the other one from the second of refs. 19, where bounds on the phase have also been established. The dark regions correspond to the range of the allowed values of $g_r$.} \end{figure} \section{Discussion} First of all, we draw some consequences of our assumption 5) from the bounds just discussed. Secondly, we review and comment on some of the previous analyses. \subsection{Analyzing the Results} - The $P$-interaction demands quite a large coupling ($x$ $>$ 2), in order to compensate for the smallness of the matrix element of the corresponding operator between the initial and final state. This appears unrealistic, also in view of the considerations by Datta {\it et al.}\cite{dt2}, who discard this interaction on comparison with the data of the decay $B_c \to \tau \bar{\nu}_{\tau}$. - As a consequence, for $x$ $\leq$ 1, the $H$-interaction ($H$ = $S-P$) behaves quite similarly to the $S$ one, as can be seen from Tables 4 and 5. Moreover, as shown before, when combined with the previous ones, our analysis imposes strong limits on the phase, which entails a relative strength that is considerably greater than the minimal value. Indeed, in order to determine $x$, we set, on the left-hand side of Eq. (\ref{meq}), $\Gamma_{\ell}$ = $\Gamma_{\tau}$ = $\xi \Gamma_{\tau}^{SM}$ and fix $\varphi$ according to Eq. (\ref{rel47}).
The smaller root of this equation yields \begin{eqnarray} x_0 &=& 1.18\pm 0.12, ~~~~ x_1 = 1.09\pm 0.10, ~~~~ x_2 = 1.05\pm 0.09, \nonumber \\ x_3 &=& 1.09\pm 0.10, ~~~~ x_4 = 1.09\pm 0.10, ~~~~ \ ~~~~ \ ~~~~ \label{xh} \end{eqnarray} where $x_0$ corresponds to the IW FF, the remaining $x_i$ to the SR FF. Apart from the poor agreement with our assumption 5), we observe that the 2HD model, included in the $H$-interaction, has difficulties in explaining the anomaly [24-27], despite the fact that its coupling depends on the flavor, as required by LFUV. - Similarly, the $R$-interaction $-$ implemented by a specific model\cite{bb} $-$ is affected, as seen, by strict limitations on the phase. - On the contrary, as regards the $L$-interaction, any value of $\varphi$ is admitted by the analyses. This entails the possibility of a small ($\sim$ 0.12) value of the relative strength. This is in qualitative agreement with a possible solution to the anomaly observed in the $B\to K^*\ell^+\ell^-$ decay\cite{gls,choh}, for which a very small relative strength is required. Moreover, this interaction is favored from the standpoint of MFV\cite{frt}, since it does not imply a CP-violating phase outside the CKM scheme, unlike the $H$- and $R$-interactions. Incidentally, if the mass of the NP intermediate boson is estimated to be about 10 times that of the usual intermediate vector boson $W$, the NP coupling in the $H$-, $R$- and $L$-interactions is, respectively, $\sim$ 10, 7 and 3.5 times greater than the electroweak coupling constant. To conclude our analysis, we remark that the $L$- and $H$-interactions recur in the most common models used to explain NP effects of the semi-leptonic decay and might be compatible with the anomaly seen in the $B\to K^*\ell^+\ell^-$ decay\cite{dln,crv,cai,choh,bht2}. According to assumption 5), the former interaction appears favored. However, as we shall see in the following subsection, alternative analyses lead to different conclusions. Therefore, measurements able to discriminate among the different NP interactions are desirable, as we shall illustrate in the next section. \subsection{Previous Analyses} \subsubsection{$\Lambda_b \to \Lambda_c \tau{\bar \nu}_{\tau}$} Two analyses\cite{swd,dt2} are quite similar to the present one; they also show Argand diagrams for the NP couplings. Shivashankara {\it et al.}\cite{swd} take into account the constraints that derive from $R_{D^{(*)}}$ and remark that the effects produced by the $P$-interaction are larger than those caused by the $S$-one. Datta {\it et al.}\cite{dt2} fix the NP couplings so that $R_{\Lambda_c}^{ratio}$ = $R_{D^{(*)}}^{ratio}$ within 3 standard deviations. Their condition is similar to ours but less restrictive; therefore they obtain less severe bounds on the phases and strengths of the couplings. Dutta\cite{dut} assumes that $R_{\Lambda_c}$ = $R_{D^{(*)}}$ within 3 standard deviations and considers two possible scenarios, either a mixing of $L$- and $R$-, or of $H$- and $S+P$-interactions. In the former case, he finds that only the $L$- or the purely vector interaction is possible. On the contrary, either the $H$- or the $S$-interaction survives the latter scenario, with more restrictions on the parameter space. This is not in contradiction with our results. Li {\it et al.}\cite{lyz} analyze the decay in the framework of the leptoquark model, taking into account the $B\to \tau\nu$ decay. They examine either the vector or the scalar case, finding more restrictions for the latter alternative. 
\subsubsection{$B \to D^{(*)}\tau{\bar \nu}_{\tau}$} We highlight here two analyses of the $B$ decay, alternative to those considered above, which lead to different conclusions about the NP interaction. S. Bhattacharya {\it et al.}\cite{bht} fit the FF to the data of the $B \to D^{(*)}\tau{\bar \nu}_{\tau}$ decay using only the SM term; then they compare such FF with those available from $B \to D^{(*)}\ell{\bar \nu}_{\ell}$ data, finding a disagreement only as regards the axial current. To choose among the various NP operators, they use information-theoretic approaches and goodness-of-fit tests for cross validation, indicating the $R$-interaction as the best one. Celis {\it et al.}\cite{clj} consider it difficult to explain LFUV with $L$- and $R$-interactions and perform a comprehensive analysis of the scalar contributions in $b\to c \tau \nu_{\tau}$ transitions. The authors examine various observables, like $R_{D^{(*)}}$, the $q^2$ differential distributions of $B \to D^{(*)}\tau{\bar \nu}_{\tau}$, the $\tau$ polarization in $B \to D^*\tau{\bar \nu}_{\tau}$, and the $B_c$ lifetime. They find that, in the framework of scalar NP, the discrepancy with the SM can be explained by a mixing of $H$- and $S+P$-interactions, with a slight tension for $R_{D^*}$. \section{Alternative Observables for New Physics} \subsection{Previous Proposals} In order to discriminate among the possible NP interactions, various observables have been proposed for the semi-leptonic $\Lambda_b$ decays. We recall especially the $\tau$ or $\Lambda_c$ polarization\cite{gu1}, the forward-backward asymmetry on the lepton side\cite{gu1,dt2} and the differential observable\cite{swd,dt2} \begin{equation} B_{\Lambda_c}(q^2) = \frac{d\Gamma_{\tau}}{dq^2}/\frac{d\Gamma_{\ell}}{dq^2}, \label{drb} \end{equation} where $d\Gamma_{\tau(\ell)}/dq^2$ is the differential width of the semi-leptonic $\Lambda_b$ decay, with the $\tau$- $(\ell)$-lepton in the final state. As regards the $B$ semi-leptonic decays, some asymmetries\cite{cg,ddt} and the polarization of one of the final products\cite{faj,lee2,tw,iv,chn,kum}, especially its T-odd component\cite{iv}, have been suggested. In this connection, we remark that a $T$-odd observable could help to reveal a non-trivial phase $\varphi$, which seems to occur in the cases of $H$- and $R$-interactions. \subsection{A New Suggestion} As an alternative to the observables just discussed, we propose the following one: \begin{equation} \Delta r(q^2) = \frac{B_{\Lambda_c}(q^2)}{B^{SM}_{\Lambda_c}(q^2)}-1 = \frac{d\Gamma_{\tau}}{dq^2}/(\frac{d\Gamma_{\tau}}{dq^2})_{SM}-1. \label{drq} \end{equation} Fig. 2 shows the behavior of this quantity in the case of the $H$-interaction, assuming, as found before, $\varphi$ = $\pm2.18$ $rad$ and the strengths (\ref{xh}) for the different FF. Once more, the dependence on the FF is mild. As regards the $L$-interaction, one has, independent of the FF, \begin{equation} \Delta r(q^2) = 0.25 \pm 0.04 \label{lrc} \end{equation} for any $\varphi$. This coincides with the value of the distribution (\ref{drq}) for the $R$-interaction at $\varphi$ = $\pm\pi/2$. 
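To make the evaluation of Eq. (\ref{drq}) concrete, the following sketch builds $\Delta r(q^2)$ from tabulated differential widths and forms the FF envelope shown as a band in Fig. 2. The spectra below are invented placeholders; in a real evaluation each row would be the prediction for one FF parametrization (IW, SR1--SR4).
\begin{verbatim}
import numpy as np

# Sketch of the observable Delta r(q^2) of Eq. (drq):
#   Delta r = (dGamma_tau/dq^2) / (dGamma_tau/dq^2)_SM - 1.
# The spectra below are FAKE placeholder arrays; in practice each row
# would come from the NP and SM predictions for a given FF choice.

q2 = np.linspace(3.2, 11.0, 40)                    # q^2 grid (GeV^2)
dGamma_SM = np.exp(-0.5 * ((q2 - 7.0) / 2.0)**2)   # toy SM spectrum
# toy NP spectra for several FF choices, mocked by small tilts
dGamma_NP = [dGamma_SM * (1.25 + eps * (q2 - 7.0) / 7.0)
             for eps in (-0.03, 0.0, 0.03)]

delta_r = np.array([dnp / dGamma_SM - 1.0 for dnp in dGamma_NP])
band_lo = delta_r.min(axis=0)   # FF envelope, cf. the band of Fig. 2
band_hi = delta_r.max(axis=0)

print(f"Delta r near q^2 = 7 GeV^2: "
      f"{band_lo[20]:.3f} .. {band_hi[20]:.3f}")
\end{verbatim}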
\begin{table*} \begin{center} \caption{Minimal values of the relative strength $x$ for the various interactions and for the different FF.} \begin{tabular}{|c|c|c|c|c|c|} \hline\hline $~~~~~FF~~~$&$~~~~x_S~~$&$~~~x_P~~~~$&$~~~~x_H~~~~$&$~~~~x_L~~~~$&$~~~~x_R~~~~$ \\ \hline\hline IW~ & 0.41$\pm$0.10 & 2.44$\pm$0.51 & 0.43$\pm$0.10 & 0.12$\pm$0.04 & 0.18$\pm$0.05 \\ SR1 & 0.33$\pm$0.09 & 2.08$\pm$0.45 & 0.35$\pm$0.09 & 0.12$\pm$0.04 & 0.26$\pm$0.07 \\ SR2 & 0.30$\pm$0.08 & 1.91$\pm$0.42 & 0.32$\pm$0.08 & 0.12$\pm$0.04 & 0.33$\pm$0.07 \\ SR3 & 0.33$\pm$0.09 & 2.08$\pm$0.45 & 0.35$\pm$0.09 & 0.12$\pm$0.04 & 0.26$\pm$0.07 \\ SR4 & 0.33$\pm$0.09 & 2.06$\pm$0.45 & 0.35$\pm$0.09 & 0.12$\pm$0.04 & 0.26$\pm$0.07 \\ \end{tabular} \label{tab:five} \end{center} \end{table*} \section{Conclusions} Let us stress the most relevant points of our paper. A) As already observed in sect. 5, the results concerning the partial widths depend rather strongly on the FF. On the contrary, the dimensionless parameters $r$, $\chi$ and $x$, as well as the observables $R_{\Lambda_c}$, $R^{ratio}_{\Lambda_c}$ and $\Delta r(q^2)$, exhibit, similarly to ref. 41, a mild FF dependence, contained within $\sim$ $2-3\%$. Actually, such uncertainties vanish altogether if the $L$-interaction is assumed. Yet, our prediction for $R_{\Lambda_c}$ differs considerably from those of other authors. B) We have made some assumptions, generally shared by the other authors; furthermore, we have adopted a particular criterion for choosing the type of NP interaction. We have also taken into account the analyses of the $B$ semi-leptonic decays and the most commonly used models. On this basis, our calculations indicate that the most likely NP interactions are the $L$- and the $H$-one; the former appears simpler and more natural. C) Our conclusions about the NP term are in contrast with those of other authors. At this point the measurement of alternative observables, like polarizations or other asymmetries, is decisive. In particular, we have proposed a differential observable which could make it possible to discriminate between the two NP interactions mentioned at point B). \begin{figure} \centering \includegraphics[width=0.70\textwidth] {bandeperarticolo_fi218.jpg} \caption{The observable $\Delta r$, Eq. (\ref{drq}), as a function of $q^2$, with $\varphi$ = $\pm$ 2.18 $rad$; see Eqs. (\ref{xh}) for the corresponding values of $x$. The upper and lower curves delimit the allowed band.} \end{figure} \vskip 0.50cm \centerline{\bf Acknowledgments} The authors are grateful to their colleagues Fajfer {\it et al.}\cite{faj,faj2} and Ivanov {\it et al.}\cite{iv} for helpful communications and suggestions.
{ "timestamp": "2018-10-05T02:13:25", "yymm": "1804", "arxiv_id": "1804.05592", "language": "en", "url": "https://arxiv.org/abs/1804.05592" }
\subsection{Kruskal Coordinates} We will derive the formula for the butterfly velocity in the following anisotropic background: \begin{eqnarray} ds^2&=&-a(r)f(r)dt^2+{dr^2\over b(r)f(r)}+\sum_{S=1}^n h^{(S)}(r)\,\bar g^{(S)}_{ij}(x)\,dx_{(S)}^idx_{(S)}^j \end{eqnarray} The horizon is located at $r=r_H$, where $f(r_H)=0$ while $a(r_H)\ne0$ and $b(r_H)\ne0$. The associated temperature of the black hole or black brane is \begin{eqnarray} T={f'(r_H)\sqrt {a(r_H)b(r_H)}\over 4\pi} \end{eqnarray} Defining the tortoise coordinate $r_*$, the time and radial parts of the line element can be expressed as \begin{eqnarray} d\bar s^2&=&-a(r)f(r)dt^2+{dr^2\over b(r)f(r)}=-a(r)f(r)\Big[dt^2-dr_*^2\Big]\\ dr_*&=&{dr\over f(r)\sqrt{a(r)b(r)}} \end{eqnarray} The metric can be written in Kruskal coordinates as \begin{eqnarray} \label{metric0} ds^2&=&2A(UV)dUdV+\sum_Sh^{(S)}(UV)\,\bar g^{(S)}_{ij}(x)dx_{(S)}^idx_{(S)}^j \end{eqnarray} where \begin{eqnarray} A(UV)&=&-{2a(r)f(r)\over f'(r_H)^2a(r_H)b(r_H)}e^{-\sqrt{a(r_H)b(r_H)}~f'(r_H)~r_*}\\ U&=&e^{\sqrt{a(r_H)b(r_H)}{f'(r_H)\over 2} (-t+r_*)}\\ V&=&e^{\sqrt{a(r_H)b(r_H)}{f'(r_H)\over 2} (t+r_*)}\\ r_*&=&{1\over \sqrt{a(r_H)b(r_H)}f'(r_H)}\ln (UV) \end{eqnarray} In the tortoise coordinate $r_*(r_H)=-\infty$, and thus on the horizon $U_H=0$. The above definitions imply the following useful relations \begin{eqnarray} \label{A1} A(U_H)&=&{2r_H\over f'(r_H)b(r_H)}\\ \label{h1} h'(U_H)&=&{dh(UV)\over dr}{dr\over d(UV)}\mid_{r=r_H}=r_H~h'(r_H)\\ A'(U_H)&=&\left({dA(UV)\over dr}{dr\over d(UV)} \right)_ {r=r_H}= {r_H^2\over b(r_H)f'(r_H)}~\Big( {3a'(r_H) \over a(r_H) }+{b'(r_H) \over b(r_H) }+{2f''(r_H)\over f'(r_H)}\Big)~~ \end{eqnarray} The higher derivative terms $h''(U_H)$ and $A''(U_H)$ can be evaluated in a similar way. \subsection {Shock Wave Equation} In Kruskal coordinates the generalized gravitational equation can be expressed as \begin{eqnarray} {\sf G}=T_{matter}&=&2T_{UV}(U,V,x)dUdV+T_{UU}(U,V,x)dUdU+T_{VV}(U,V,x)dVdV \nonumber\\ &&+\sum_ST^{(S)}_{ij}(U,V,x)dx_{(S)}^i dx_{(S)}^j \end{eqnarray} Following the arguments of Dray and 't Hooft \cite{t'Hooft1985}, after adding a small null perturbation of asymptotic energy $E$ \begin{eqnarray} T_{(shock)\hat U\hat U}=E\,e^{2\pi t/\beta}~a(\hat x)~\delta(\hat U) \end{eqnarray} the spacetime is still described by \eqref{metric0} but $V$ is shifted by \begin{eqnarray} V \rightarrow V + \alpha(x) \end{eqnarray} One can show that, in terms of the new coordinates \cite{Sfetsos9408} \begin{eqnarray} \hat U=U,~~~\hat V=V+\Theta(U) \alpha(x) \end{eqnarray} where $\Theta=\Theta(U)$ is a step function, the metric can be expressed as \begin{eqnarray} \label{metric1} ds^2&=&2\hat A(\hat U,\hat V)d\hat Ud\hat V+\sum_S\hat g^{(S)}_{ij}(\hat U,\hat V,\hat x) d\hat x_{(S)}^id\hat x_{(S)}^j - 2\hat A~\hat \alpha(\hat x)\hat \delta(\hat U) d\hat U^2 \end{eqnarray} and the generalized gravitational equation, after dropping the hat notation, becomes \begin{eqnarray} \label{sweq} {\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U)=E\,e^{2\pi t/\beta}~a(x)~\delta(U) \end{eqnarray} The terms ${\sf G}^{(1)}_{UU}$ and ${\sf G}^{(0)}_{UV}$ are, respectively, the first-order correction to and the zeroth-order generalized Einstein tensor in the metric \eqref{metric1}. This is the shock wave equation. Using the above formulation we will present a systematic procedure to find the differential equation for $\alpha(x)$ in quadratic gravity in the anisotropic spacetime \eqref{metric1} and then obtain the associated formula for the butterfly velocity. 
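As a quick consistency check of the temperature formula above, the following sketch (Python/SymPy) evaluates $T=f'(r_H)\sqrt{a(r_H)b(r_H)}/4\pi$ for a sample metric; the test case (the planar Schwarzschild-AdS solution, used again in Section 5) is the only assumption.
\begin{verbatim}
import sympy as sp

# Symbolic check of T = f'(r_H) sqrt(a(r_H) b(r_H)) / (4 pi) for the
# planar Schwarzschild-AdS test case a(r) = b(r) = 1,
# f(r) = r^2 (1 - r_H^3/r^3), for which one expects T = 3 r_H/(4 pi).

r, rH = sp.symbols('r r_H', positive=True)
a, b = sp.Integer(1), sp.Integer(1)
f = r**2 * (1 - rH**3 / r**3)

T = sp.diff(f, r).subs(r, rH) * sp.sqrt(a * b) / (4 * sp.pi)
print(sp.simplify(T))                         # -> 3*r_H/(4*pi)

# Near the horizon U*V ~ exp(sqrt(ab) f'(r_H) r_*), so the surface
# gravity kappa = f'(r_H) sqrt(ab)/2 controls the Kruskal exponentials:
kappa = sp.diff(f, r).subs(r, rH) * sp.sqrt(a * b) / 2
print(sp.simplify(T - kappa / (2 * sp.pi)))   # -> 0, i.e. T = kappa/(2 pi)
\end{verbatim}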
\subsection{Shock Wave Equation in Einstein Gravity} For Einstein gravity the tensor calculation in our previous paper \cite{Huang1710} gives \begin{eqnarray} \label{Einstein} {\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U) &=&{A\over 2}\sum_S \Big({2\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x)- {d^{(S)}\,h'^{(S)}\over A h^{(S)}}\alpha(x)\Big)\,\delta(U) \end{eqnarray} and, in the case of a local source $a(x)=\delta(x_i^{(Q)})$, the shock wave equation becomes \begin{eqnarray} \label{mainresult1} \Big[\bar\Delta^{(Q)}-h^{(Q)}(U_H)\sum_S {d^{(S)}\,h'^{(S)}(U_H)\over 2A(U_H)~ h^{(S)}(U_H)}\Big] ~\alpha(t,x_i^{(Q)}) = E\,e^{2\pi t/\beta}~{h^{(Q)}(U_H)\over A(U_H)}~\delta(x_i^{(Q)}) \end{eqnarray} The butterfly velocity along the direction $x_i^{(Q)}$ is \begin{eqnarray} \label{mainvelocity} v_B^{(Q)}&=&{2\pi kT\over M_{(Q)}},~~ M^2_{(Q)}=h^{(Q)}(r_H)\sum_S {d^{(S)}}~{b(r_H) f'(r_H)h'^{(S)}(r_H)\over 4 h^{(S)}(r_H)} \end{eqnarray} where we have used the relations \eqref{A1} and \eqref{h1}. Formulas \eqref{Einstein} and \eqref{mainresult1} were derived by us in eqs. (2.28) and (2.30) of \cite{Huang1710}, respectively. \section{Shock Wave Equation in Quadratic Gravity} To investigate quadratic gravity we first collect the following primary relations, which can be proved with the help of Appendices A, B and C. Note that we denote generic coordinate indices by $a,b,c,d$ and those different from $U,V$ by $i,j,k,m,n$. \subsection{Five Relations} Relation 1 : On the horizon the non-zero values of $R^a_{~bcd}$ are \begin{eqnarray} R^U_{~UUV}&=&-R^U_{~UVU}\dot=-{A'(0)\over A(0)},~~~ R^{U(S)}_{~iUj}=-R^{U(S)}_{~ijU}\dot=-{\bar g^{(S)}_{ij}(x)h'^{(S)}(0)\over 2A(0)}\\ R^V_{~VUV}&=&-R^V_{~VVU}\dot={A'(0)\over A(0)},~~~R^{V(S)}_{~iVj}=-R^{V(S)}_{~ijV}\dot=-{\bar g^{(S)}_{ij}(x)h'^{(S)}(0)\over 2A(0)}\\ R^{i(S)}_{~UVj}&=&-R^{i(S)}_{~UjV}\,=\, R^{i(S)}_{~VUj}=-R^{i(S)}_{~VjU} \,\dot=\,{h'^{(S)}(0)\over 2h^{(S)}(0)},~~ R^{i(S)}_{~jkm}=\bar R^{i(S)}_{~jkm}\,\dot\ne\,0 \end{eqnarray} Relation 2 : On the horizon the non-zero values of $R_{ab}$ and $R$ are \begin{eqnarray} R_{UV}&=&R_{VU}\dot=-{A'(0)\over A(0)}-\sum_S{{d^{(S)}}\,h'^{(S)}(0)\over 2h^{(S)}(0)}\\ R^{(S)}_{ij}&\dot=&\bar R^{(S)}_{ij}-{\bar g^{(S)}_{ij}(x)h'^{(S)}(0)\over A(0)}\\ R&\dot=&-{2A'(0)\over A(0)^2}+\sum_S \Big({\bar R^{(S)}\over h^{(S)}(0)}-{2d^{(S)}\,h'^{(S)}(0)\over A(0)\,h^{(S)}(0)}\Big) \end{eqnarray} where the superscript $(S)$ is used to specify the coordinates $dx^i_{(S)}$ in the metric \eqref{metric0}. The notation $\dot=$ is used to emphasize that the value is calculated on the horizon. Notice that the index $(S)$ in $R^{i(S)}_{~jkm}$ means that the indices $i,j,k,m$ have to belong to the same sector $(S)$; otherwise the tensor vanishes. We use $\bar R^{i(S)}_{~jkm}$, $\bar R^{(S)}_{ij}$ and $\bar R^{(S)}$ to denote the curvatures evaluated in the metric $ds^2=\bar g^{(S)}_{ij}(x) dx^idx^j$. Note that $d^{(S)}= {\bar g^{ij(S)}} \bar g^{(S)}_{ij}$ is the dimension of the space spanned by the $dx_{(S)}^i$ in \eqref{metric0}. The relation between the bulk dimension $D$ and the dimensions $d^{(S)}$ is \begin{eqnarray} D=2+\sum_Sd^{(S)} \end{eqnarray} In isotropic space the above relation reduces to $D=2+d$, while other literature uses $D=1+d$. Our notation is convenient when the space is anisotropic. 
Relation 3 : On the horizon the non-zero values of $\delta R^a_{~bcd}$ are \begin{eqnarray} \delta R^{V}_{~UUV}&=&-\delta R^V_{~UVU}=-{2\alpha(x)A'\over A}\,\delta (U)\\ \delta R^{V}_{~iUj}&=&-\delta R^{V}_{~ijU}=\delta(U) \bar\nabla_i\bar\nabla_j\alpha(x) -{1\over2A}\bar g_{ij}h'\alpha(x)\delta(U)\\ \delta R^{i}_{~UjU}&=&-\delta R^{i}_{~UUj}={h'\alpha(x)\over 2h}\,\delta (U)\,\delta^i_j+{A\delta (U)\over h}\,\bar\nabla^i\bar\nabla_j\alpha(x) \end{eqnarray} Relation 4 : On the horizon the only non-zero component of $\delta R_{ab}$ is $\delta R_{UU}$, while $\delta R=0$: \begin{eqnarray} \delta R_{UU}&=&\Big( {2A'\over A}+\sum_S {d^{(S)}}{h'^{(S)}\over 2 h^{(S)}}\Big)\,\alpha(x)\delta(U)+\delta(U)\sum_S {A\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x)\\ \delta R&=&0 \end{eqnarray} where the Laplacian is defined by \begin{eqnarray} \bar\Delta^{(S)}~\alpha(x)={1\over \sqrt {\bar g^{(S)}}} ~\partial_i^{(S)} \Big(\sqrt {\bar g^{(S)}} ~ {\bar g^{(S)ij}}~ \partial^{(S)}_j \alpha(x)\Big) \end{eqnarray} and $\bar \nabla_i$ is the covariant derivative in the space with metric $\bar g_{ij}^{(S)}$. With the help of Relations 1--4, we can prove the following relation. \\ Relation 5 : On the horizon \begin{eqnarray} \delta (R_{abcd}R^{abcd})=\delta (R_{ab}R^{ab})=\delta (R^2)=\delta (\Box R)\,\dot=\,0 \end{eqnarray} The last relation plays a central role in obtaining the simplified shock wave equation of quadratic gravity below. \subsection{Simplified Shock Wave Equation} Using these relations we begin to evaluate ${\sf G}^{(1)}_{UU}+ 2\,{\sf G}^{(0)}_{UV} \, \alpha(x) \,\delta(U)$ in quadratic gravity. After explicit expansion we find that \begin{eqnarray} &&{\sf G}^{(1)}_{UU}+2\,{\sf G}^{(0)}_{UV}\, \alpha(x) \,\delta(U)\nonumber\\ \label{totalL1} &=&2\alpha \delta (R_{Ucde}R_U^{~cde}) +2(2\alpha+\beta)\delta (R_{UcUd}R^{cd})-4\alpha \delta (R_{Uc}R_U^{c})\nonumber\\ &&+2\gamma \delta (R_{UU}R)+(4\alpha+\beta)\delta (\Box R_{UU})-(2\alpha+\beta+2\gamma)\delta (\nabla_U\nabla_U R)\nonumber\\ &&-{1\over 2}\delta \Big[g_{UU} \Big(( \alpha R_{abcd}R^{abcd}+\beta R_{ab}R^{ab} +\gamma R^2) -(\beta+4\gamma)\Box R \Big)\Big]\nonumber\\ &&+2\Big\{2\alpha R_{Ucde}R_V^{~cde} +2(2\alpha+\beta) R_{UcVd}R^{cd}-4\alpha R_{Uc}R_V^{c}\nonumber\\ &&+2\gamma (R_{UV}R)+(4\alpha+\beta) (\Box R_{UV})-(2\alpha+\beta+2\gamma) (\nabla_U\nabla_V R)\nonumber\\ &&-{1\over 2} \Big[g_{UV} \Big(( \alpha R_{abcd}R^{abcd}+\beta R_{ab}R^{ab} +\gamma R^2) -(\beta+4\gamma)\Box R \Big)\Big]\Big\} \alpha(x) \delta(U)\\ \label{totalL2} &=&2\alpha \delta (R_{Ucde}R_U^{~cde}) +2(2\alpha+\beta)\delta (R_{UcUd}R^{cd})-4\alpha \delta (R_{Uc}R_U^{c})\nonumber\\ &&+2\gamma \delta (R_{UU}R)+(4\alpha+\beta)\delta (\Box R_{UU}) -(2\alpha+\beta+2\gamma) \delta (\nabla_U\nabla_U R)\nonumber\\ &&+2\Big\{2\alpha R_{Ucde}R_V^{~cde} +2(2\alpha+\beta) R_{UcVd}R^{cd}-4\alpha R_{Uc}R_V^{c}\nonumber\\ &&+2\gamma (R_{UV}R)+(4\alpha+\beta) (\Box R_{UV})-(2\alpha+\beta+2\gamma) (\nabla_U\nabla_V R)\Big\} \alpha(x)\delta(U) \end{eqnarray} To obtain the last relation we have used Relation 5 to conclude that the operator $\delta$ in the first bracket of \eqref{totalL1} only produces $\delta g_{UU}$. 
After substituting the explicit forms of $\delta g_{UU}=2A(UV)\, \alpha(x)\,\delta(U)$ in the first bracket and $g_{UV}=A(UV)$ in the second bracket, we see that they cancel each other, and we obtain the last relation \eqref{totalL2}. \\ Eq. \eqref{totalL2} has six zeroth-order terms and six first-order terms. It is interesting that it can be simplified further to contain only six first-order terms in another form. Using the metric properties in \eqref{metric0} and \eqref{metric1} we can find the following simple relation, which can be applied to any tensor $F_{UU}$: \begin{eqnarray} \label{FUU} \delta F_{UU}&=&\delta(g_{Ua}F^a_{~U})=(\delta g_{Ua}) F^a_{~U}+g_{Ua}\delta(F^a_{~U})=(\delta g_{UU})F^U_{~U} +g_{UV}\delta(F^V_{~U})\nonumber\\ &=&(\delta g_{UU})g^{UV}F_{VU}+g_{UV}\delta(F^V_{~U}) =-2F_{VU}\alpha(x)\delta(U)+g_{UV}\delta(F^V_{~U}) \end{eqnarray} After identifying $F_{UU}$ with $R_{Ucde}R_U^{~cde},~R_{UcUd}R^{cd},\cdots$ and substituting them into Eq. \eqref{totalL2} we find that \begin{eqnarray} \label{totalL3} {\sf G}^{(1)}_{UU}+2\,{\sf G}^{(0)}_{UV}\, \alpha(x) \,\delta(U) &=&g_{UV}\Big(2\alpha \delta (R^V_{~cde}R_U^{~cde}) +2(2\alpha+\beta)\delta (R^V_{~cUd}R^{cd})-4\alpha \delta (R^V_{~c}R_{~U}^{c})\nonumber\\ &&+2\gamma \delta (R^V_{~U}R)+(4\alpha+\beta)\delta (\Box R^V_{~U}) -(2\alpha+\beta+2\gamma) \delta (\nabla^V\nabla_U R)\Big)~~~~~~~ \end{eqnarray} Only six terms now remain. Among them, four are quadratic in the curvature and two involve derivatives of the curvature. With the help of the appendix, and after some calculation, we collect the formulas for these six terms below. \subsection{Six Formulas} Using Relations 1--4 we find the following four formulas: Formula 1 : \begin{eqnarray} \label{4-1} \delta(R^V_{~bcd}R_U^{~bcd})&\dot=&\sum_S\Big[{d^{(S)}\,(h'^{(S)})^2\over A^2\, (h^{(S)})^2}\,\alpha(x) -{2h'^{(S)}\over A\,(h^{(S)})^2}\,\bar\Delta^{(S)} \alpha(x)\Big]\,\delta(U) \end{eqnarray} Formula 2 : \begin{eqnarray} \label{4-2} \delta (R^V_{~cUd}R^{cd})&\dot=&\sum_{S}\Big[{\delta(U)\over (h^{(S)})^2}\Big(\bar R^{(S)}_{ij} \bar\nabla^{(S)i}\bar\nabla^{(S)j}\alpha(x) -{1\over2A}\bar R^{(S)}h'^{(S)} \alpha(x)\Big)\nonumber\\ &&+{\delta(U)\over Ah^{(S)}}\Big({h'^{(S)}\over h^{(S)}}-{A'\over A} \Big)\Big({d^{(S)}h'^{(S)}\over2A}\alpha(x)- \bar\Delta^{(S)}\alpha(x)\Big)\Big]~~ \end{eqnarray} Formula 3 : \begin{eqnarray} \label{4-3} \delta (R^V_{~c}R_U^{c}) &\dot=& {\delta(U)\over A^2}\Big(\sum_S {{d^{(S)}}h'^{(S)}\over h^{(S)}}\,\alpha(x)-\sum_S {2A\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x)\Big)\nonumber\\ &&\times\Big({A'\over A}+\sum_{\tilde S}{{d^{(\tilde S)}}\,h'^{(\tilde S)}\over 2h^{(\tilde S)}}\Big) \end{eqnarray} Formula 4 : \begin{eqnarray} \label{4-4} \delta (R^V_{~U}R) &\,\dot=\,&{\delta(U)\over A}\Big(\sum_S {d^{(S)}h'^{(S)}\over h^{(S)}}\,\alpha(x)-\sum_S {2A\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x) \Big)\nonumber\\ &&\times \Big({A'\over A^2}-\sum_{\tilde S} \Big({\bar R^{(\tilde S)}\over 2h^{(\tilde S)}}-{d^{(\tilde S)}\,h'^{(\tilde S)}\over A\,h^{(\tilde S)}}\Big)\Big) \end{eqnarray} We perform more calculations in the appendix to obtain the following two formulas: Formula 5 : \begin{eqnarray} \label{6-1} \delta (\Box R^V_{~U}) &\dot=&\Big(\sum_{S,\tilde S} {\bar\Delta^{(S)}\bar\Delta^{(\tilde S)}\alpha(x)\over h^{(S)}h^{(\tilde S)}}\Big)\,\delta (U)-{1\over A}\Big({2A'\over A}+\sum_S{d^{(S)}h'^{(S)} \over h^{(S)}}\Big)\Big(\sum_S {\bar\Delta^{(S)} \alpha(x) \over h^{(S)}}\Big)\,\delta (U)\nonumber\\ &&+{1\over A^2}\left[\Big(\sum_S {d^{(S)}h'^{(S)} \over 2
h^{(S)}}\Big)^2+\sum_S d^{(S)}\Big({3A'h'^{(S)}\over A\,h^{(S)}}-{2h''^{(S)} \over h^{(S)}}+{(h'^{(S)})^2\over (h^{(S)})^2} \Big)\right] \alpha(x)\,\delta (U)\nonumber\\ \end{eqnarray} Formula 6 : \begin{eqnarray} \label{6-2} &&\delta (\nabla^V\nabla_U R)\,\dot=\,-\Big[\sum_S{1\over (h^{(S)})^2}\,(\bar\partial^i\alpha(x))(\bar\partial_i\bar R^{(S)})\Big]\,\delta(U)\nonumber\\ &&+{1\over A}\Big[{6(A')^2\over A^3}-{4A''\over A^2}-\sum_S\Big({\bar R^{(S)}h'^{(S)}\over (h^{(S)})^2 }+{4d^{(S)}h''^{(S)} \over Ah^{(S)}}-{2d^{(S)}A'h'^{(S)} \over A^2h^{(S)}} +{d^{(S)}(d^{(S)}-7)(h'^{(S)})^2 \over 2A(h^{(S)})^2}\Big)\Big]\alpha(x)\delta(U)\nonumber\\ \end{eqnarray} \subsection{Butterfly Velocity in Anisotropic Space of Gauss-Bonnet Gravity} We now apply the above formulas to the simplest case of Gauss-Bonnet gravity, in which $\alpha=\gamma_{GB},~\beta=-4\gamma_{GB},~\gamma=\gamma_{GB}$. Equation \eqref{totalL3} now takes a simple form \begin{eqnarray} \label{mainresult2} &&{\sf G}^{(1)}_{UU}+2\,{\sf G}^{(0)}_{UV}\, \alpha(x) \,\delta(U)\nonumber\\ &=&2g_{UV}\,\gamma_{GB}\,\Big( \delta (R^V_{~cde}R_U^{~cde}) -2\delta (R^V_{~cUd}R^{cd})-2 \delta (R^V_{~c}R_{~U}^{c})+\delta (R^V_{~U}R)\Big)\nonumber\\ &=&2\delta(U)\,\gamma_{GB}\,\Big[-2\sum_{S}\Big({A\over (h^{(S)})^2}\,\bar R^{(S)}_{ij} \bar\nabla^{(S)i}\bar\nabla^{(S)j}\alpha(x)-{\bar R^{(S)}h'^{(S)} \over 2 (h^{(S)})^2}\,\alpha(x)\Big)\nonumber \\ &&+\sum_S\Big({A\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x)-{d^{(S)}h'^{(S)}\over 2h^{(S)}}\,\alpha(x)\Big) \Big(\sum_{\tilde S} {\bar R^{(\tilde S)}\over h^{(\tilde S)}}\Big)\Big] \end{eqnarray} Let us make three comments about the result: 1. In the case of isotropic space we can remove the summations over $S$ and $\tilde S$; then formula \eqref{mainresult2} reproduces formula (4.18) of our previous paper \cite{Huang1710}. 2. Due to the appearance of double summations, formula \eqref{mainresult2} is not obtained by simply adding a summation over $S$ to formula (4.18) of \cite{Huang1710}, in which only isotropic space was analyzed. Thus the shock wave equation in anisotropic space is a non-trivial extension of that in isotropic space. 3. Considering the planar, spherical, or hyperbolic black hole metric in \eqref{BH1} and using the relations \eqref{BH2} and \eqref{BH3}, we find that \begin{eqnarray} {A\over (h^{(S)})^2}\bar R^{(S)}_{ij} \bar\nabla^{(S)i}\bar\nabla^{(S)j}\alpha(x)-{\bar R^{(S)}h'^{(S)} \over 2 (h^{(S)})^2}\alpha(x)={k^{(S)}(d^{(S)}-1)\over h^{(S)}} \Big({A\over h^{(S)}}~\bar\Delta^{(S)}\alpha(x)-{d^{(S)}h'^{(S)}\over 2h^{(S)}}\,\alpha(x)\Big)~~ \end{eqnarray} Substituting this relation into \eqref{mainresult2} we see that the shock wave equation of Einstein-Gauss-Bonnet gravity and that of Einstein gravity, i.e. \eqref{Einstein}, obey the same differential equation when the space is isotropic. The double summations in formula \eqref{mainresult2} spoil this property when the space is anisotropic. Thus, we conclude that {\it in the D-dimensional planar, spherical or hyperbolic black hole spacetime the Einstein-Gauss-Bonnet gravity has the same shock wave equation as that in Einstein gravity if and only if the space is isotropic}. 
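The sphere relations of the type \eqref{BH2} and \eqref{BH3} invoked in comment 3 can be verified symbolically. A minimal sketch follows: it checks, for the unit 2-sphere ($k=1$, $d=2$), that $\bar R_{ij}=(d-1)\bar g_{ij}$ and $\bar R=d(d-1)$; the restriction to $d=2$ is only for the speed of the symbolic computation.
\begin{verbatim}
import sympy as sp

# Check, for the unit 2-sphere (k = 1, d = 2, metric diag(1, sin^2 theta1)),
# that bar R_ij = (d-1) bar g_ij and bar R = d(d-1), which underlie
# Eqs. (BH2) and (BH3).

t1, t2 = sp.symbols('theta1 theta2')
x = [t1, t2]
g = sp.diag(1, sp.sin(t1)**2)
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma-Gamma terms
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][e] * Gamma[e][b][c] - Gamma[a][c][e] * Gamma[e][b][a]
              for e in range(n)) for a in range(n)))

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
R_scalar = sp.simplify(sum(ginv[b, c] * Ric[b, c]
                           for b in range(n) for c in range(n)))

print(sp.simplify(Ric - (n - 1) * g))  # -> zero matrix: bar R_ij = (d-1) bar g_ij
print(R_scalar)                        # -> 2 = d(d-1) for d = 2
\end{verbatim}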
\section{Butterfly Velocity in Isotropic Spaces of Quadratic Gravity} \subsection{Formula} In this section we find the simplified form of the shock wave equation for quadratic gravity in the following spacetime\footnote{Through a coordinate transformation the metric can be brought to the general form $ds^2=-a(\tilde r)\tilde f(\tilde r)dt^2 +{d\tilde r^2\over b(\tilde r)\tilde f(\tilde r)} + \tilde h(\tilde r)\bar g_{ij}(x)dx^idx^j$; the properties found in this paper are thus very general.} \begin{eqnarray} \label{metricS} ds^2&=&-N_\sharp^2f(r)dt^2+{dr^2\over f(r)}+ h(r)\sum_{i,j=1}^d\bar g_{ij}(x)dx^idx^j \end{eqnarray} The constant $N_\sharp^2$ is introduced to make the metric asymptotically AdS. We consider $(2+d)$-dimensional planar, spherical or hyperbolic black holes. The general metric is \begin{eqnarray} \label{BH1} \bar g_{ij}(x)dx^idx^j&=&\left\{ \begin{array} {cc} d\theta_1^2+d\theta_2^2+ \cdots+d\theta_d^2,&k=0\nonumber\\ d\theta_1^2+\sin^2\theta_1(d\theta_2^2+\sin^2\theta_2(d\theta_3^2+ \cdots+\sin^2\theta_{d-1}d\theta_d^2)\cdots),&k=1\\ d\theta_1^2+\sinh^2\theta_1(d\theta_2^2+\sin^2\theta_2(d\theta_3^2+ \cdots+\sin^2\theta_{d-1}d\theta_d^2)\cdots),&~~k=-1\nonumber\\ \end{array} \right. \\ \end{eqnarray} which implies \begin{eqnarray} \label{BH2} \bar R^{ij} \bar\nabla_i\bar\nabla_j\alpha(x)&=&k(d-1)\bar\Delta\alpha(x)\\ \label{BH3} \bar R&=&kd(d-1) \end{eqnarray} Note that the shock wave equation has two parts: one from Einstein gravity (EG) and another from quadratic gravity (QG). The results are \begin{eqnarray} {\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV} \alpha(x)\delta(U)&=&\Big[{\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U)\Big]_{EG}+\Big[{\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U)\Big]_{QG}~~~ \end{eqnarray} where \begin{eqnarray} \label{Deltaalpha} \Big[{\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U)\Big]_{EG} &=&{A\over r_H^2}\Big(\bar\Delta\alpha(x)-{d\over 2} \,r_H\, f'(r_H)\alpha(x)\Big)\,\delta(U)\\ \nonumber\\ \Big[{\sf G}^{(1)}_{UU}+2{\sf G}^{(0)}_{UV}~ \alpha(x) ~\delta(U)\Big]_{QG}&=& A\Big(2\alpha \delta (R^V_{~cde}R_U^{~cde}) +2(2\alpha+\beta)\delta (R^V_{~cUd}R^{cd})-4\alpha \delta (R^V_{~c}R_{~U}^{c})\nonumber\\ &&+2\gamma \delta (R^V_{~U}R)+(4\alpha+\beta)\delta (\Box R^V_{~U}) -(2\alpha+\beta+2\gamma) \delta (\nabla^V\nabla_U R)\Big)~~~~~~~~ \end{eqnarray} and \begin{eqnarray} \delta(R^V_{~bcd}R_U^{~bcd}) &=&-{2f'(r_H)\over r_H^3}\Big(\bar\Delta\alpha(x)-{d\over 2}r_H\,f'(r_H)\,\alpha(x) \Big)\\ \delta (R^V_{~cUd}R^{cd}) &=&{r_H^2f''(r_H)-2r_Hf'(r_H)-2(1-d)dk\over 2r_H^4}\Big(\bar\Delta\alpha(x)-{d\over 2}r_H\,f'(r_H)\,\alpha(x) \Big)\\ \delta (R^V_{~c}R_U^{c}) &=&-{r_Hf''(r_H)+df'(r_H)\over r_H^3}\Big(\bar\Delta\alpha(x)-{d\over 2}r_H\,f'(r_H)\,\alpha(x) \Big)\\ \label{GBP} \delta (R^V_{~U}R) &=&-{r_H^2f''(r_H)+2dr_Hf'(r_H)+(1-d)dk\over r_H^4}\Big(\bar\Delta\alpha(x)-{d\over 2}r_H\,f'(r_H)\,\alpha(x) \Big)~~~~~~~ \end{eqnarray} which are calculated from \eqref{4-1}, \eqref{4-2}, \eqref{4-3} and \eqref{4-4} respectively. And \begin{eqnarray} \delta (\Box R^V_{~U}) &=&{\bar\Delta\bar\Delta\alpha(x)\over r_H^4}-{f'(r_H)\over r_H^3}\Big(d+r_Hf''(r_H)\Big) \bar\Delta\alpha(x) +{df'(r_H)\over 4r_H^2}\Big(2r_Hf''(r_H) +df'(r_H)\Big)\alpha(x)\nonumber\\ \\ \nonumber\\ \delta (\nabla^V\nabla_U R)\, &=&-{\alpha(x)\,f'(r_H)\over 2r_H^3}\Big(2(d-1)d\,k +d\,r_H((d-3) f'(r_H) +2r_Hf''(r_H)) +r_H^3f'''(r_H)\Big)~~~~~~~~~~~ \end{eqnarray} which are calculated from \eqref{6-1} and \eqref{6-2} respectively. 
Note that \eqref{4-1}, \eqref{4-2}, \eqref{4-3} and \eqref{4-4} have the common factor $\bar\Delta\alpha(x)-{d\over 2}r_H\,f'(r_H)\,\alpha(x)$. This is precisely the factor that appears in Einstein gravity, Eq. \eqref{Einstein}. We now apply these formulas to the spacetime \eqref{metricS} with\footnote{This is the planar black hole solution.} \begin{eqnarray} \label{metricF} f(r)&=&r^2\left(1-\Big({r_0\over r}\Big)^{d+1}+\delta+\eta\Big({r_0\over r}\Big)^{2(d+1)}\right),~~~~h(r)=r^2\\ \delta&=&{(d - 2)\over d} \Big[(d + 1) \Big((d + 2) \gamma + \beta\Big) + 2 \alpha\Big],~~~~\eta=(d - 1) (d - 2) \alpha\\ N_\sharp^2&=&1+\delta \end{eqnarray} The black hole horizon and temperature\footnote{The form of the temperature in \eqref{T} is given in \cite{Kats0712}; it does not include the factor $N_\sharp$.} are \begin{eqnarray} r_H&=&r_0\,\Big(1-{\delta+\eta\over d+1}\Big)\\ \label{T} T&=&{(d+1)r_H\over 4\pi}\left[1-\gamma(d-2)(d-1) +{(d-2)\Big((d+1)(\alpha (d+2)+\beta) +2\gamma\Big)\over 2d}\right] \end{eqnarray} The above metric was first derived in \cite{Kats0712}. It has been used in \cite{Kats0712, Brigante0712} to show the violation of the viscosity bound and to study the shear sum rule in higher-derivative gravity theories \cite{Chowdhury1711}. We will use this metric to study the effect of quadratic gravity on the butterfly velocity. After the calculation the shock wave equation \eqref{sweq} becomes \begin{eqnarray} C_2\,\bar\Delta\bar\Delta\alpha(x)+C_1\,\bar\Delta\alpha(x)+C_0\,\alpha(x)&=&E\,e^{2\pi t/\beta}~a(x) \end{eqnarray} where \begin{eqnarray} C_2&=&(4 \alpha + \beta){1\over r_H^4}\\ C_1&=& -(1 + d)^2 (4 \alpha + \beta)+{1\over r_H^2}\Big( 1 - 2 (-2 + d + 3 d^2) \alpha - 4 \gamma - 2 d^2 (\beta + \gamma) + 2 d (\beta + 3 \gamma)\Big)\\ C_0&=&(1 + d) \Big[ \Big(-2 d +2 (1 + d) (4 + d^2)\Big) \alpha + (1 + d) (4 + d (2 + d)) \beta + 2 (1 + d) (2 + d)^2 \gamma\Big]~~~~~~~~ \end{eqnarray} The appearance of the term $\bar\Delta\bar\Delta\alpha(x)$, which involves fourth-order derivatives of $\alpha(x)$, is a general property of quadratic gravity. To proceed we follow \cite{Alishahiha1610} to find the two butterfly velocities. In general the solution can be written as \begin{eqnarray} \alpha(x)\sim e^{{2\pi\over \beta}\big(t-t_*-{|x|\over v_B^{(1)}}\big)}-{v_B^{(2)}\over v_B^{(1)}}\,e^{{2\pi\over \beta}\big(t-t_*-{|x|\over v_B^{(2)}}\big)} \end{eqnarray} where the butterfly velocity is defined by \cite{Shenker1306} \begin{eqnarray} v_B^{(i)}=N_\sharp\,{2\pi T\over M^{(i)}} \end{eqnarray} The $M^{(i)}$ are calculated from the following equation \begin{eqnarray} C_2\,\bar\Delta\bar\Delta\alpha(x)+C_1\,\bar\Delta\alpha(x)+C_0\,\alpha(x)&=&C_2\,\Big(\bar\Delta-(M^{(1)})^2\Big)\Big(\bar\Delta-(M^{(2)})^2\Big)\alpha(x) \end{eqnarray} The details are described by Alishahiha {\it et al.} in \cite{Alishahiha1610}. After some calculation, the final formulas for the holographic butterfly velocities in the space \eqref{metricF} become \begin{eqnarray} \label{finalVB1} v_B^{(1)}&=&\sqrt{d+1\over 2 d}\Big[1- 8\pi^2 (\beta+4\alpha) \,T^2 -{1\over 2} (d-2) \Big((d-1) \alpha + ( d+1)(\beta+4\alpha)+ (3d+1)(\gamma-\alpha) \Big)\Big]\nonumber\\ &&~~~~~~~~~~~~+{\cal O}\Big((\alpha,\beta,\gamma)^2\Big)\\ \label{finalVB2} v_B^{(2)}&=&{(d+1)\sqrt{-(\beta+4\alpha)}\over 2 }+{\cal O}\Big((\alpha,\beta,\gamma)^{3/2}\Big) \end{eqnarray} in which $(\alpha,\beta,\gamma)^2$ represents any second-order function of the variables $\alpha,\beta,\gamma$. 
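As a numerical illustration of the factorization step, the sketch below extracts $M^{(1,2)}$ as the roots of $C_2 m^2+C_1 m+C_0=0$ with $m=M^2$, and converts them into velocities through $v_B^{(i)}=N_\sharp\,2\pi T/M^{(i)}$. All numbers are placeholders and do not correspond to a specific gravity theory; whether both roots are positive (and hence both velocities exist) depends on the signs of the couplings.
\begin{verbatim}
import math

# Factorization C2*D^2 + C1*D + C0 = C2*(D - M1^2)*(D - M2^2):
# M1^2, M2^2 solve C2*m^2 + C1*m + C0 = 0 with m = M^2, and
# v_B^(i) = N_sharp * 2*pi*T / M^(i).
# C2, C1, C0, N_sharp and T below are PLACEHOLDER values.

C2, C1, C0 = 0.01, -2.0, 3.0      # toy coefficients
N_sharp, T = 1.0, 0.15

disc = C1**2 - 4.0 * C2 * C0
m1 = (-C1 - math.sqrt(disc)) / (2.0 * C2)
m2 = (-C1 + math.sqrt(disc)) / (2.0 * C2)

for i, m in enumerate((m1, m2), start=1):
    if m > 0:                      # a real mass, hence a propagating mode
        M = math.sqrt(m)
        print(f"M({i}) = {M:.3f},  v_B({i}) = "
              f"{N_sharp * 2 * math.pi * T / M:.4f}")
    else:
        print(f"mode {i}: m = {m:.3f} < 0, no real M")
\end{verbatim}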
We now use the above relations to discuss various properties of the butterfly velocity in quadratic gravity. \subsection{Example: Butterfly Velocity in Quadratic Gravity} 1. The second velocity vanishes if $4\alpha+\beta=0$. This follows from the fact that $C_2=0$ when $4\alpha+\beta=0$, so the shock wave equation becomes a second-order differential equation. It is interesting to see that {\it the second velocity can appear only if $4\alpha+\beta<0$}. 2. In the case of $\beta+4\alpha=0$, which includes $R+\gamma R^2$ gravity, the second velocity is zero and the first velocity becomes \begin{eqnarray} v^{(1)}_B&=&\sqrt{d+1\over 2 d}\Big[1-{1\over 2} (d-2) \Big((d-1) \alpha + (3d+1)(\gamma-\alpha) \Big)\Big] \end{eqnarray} In this case the butterfly velocity of the D=4 black hole, i.e. d=2, receives no correction from the quadratic curvatures. 3. The quantities $(\beta+4\alpha)$ and $(\gamma-\alpha)$ in eq. \eqref{finalVB1} measure the deviations from Gauss-Bonnet gravity. The case in which both quantities vanish corresponds to Einstein-Gauss-Bonnet gravity, and \begin{eqnarray} \label{ratio} v^{(GB)}_B(\alpha)=\left[1-{1\over 2}\alpha (d-1)(d-2) \right]\,v^{(GB)}_B(0),~~~~~~~v^{(GB)}_B(0)=\sqrt{d+1\over 2 d} \end{eqnarray} which was first found in \cite {Roberts1409}. The factor $\left[1- {1\over 2} \alpha (d-1)(d-2) \right]$ comes from the constant $N_\sharp$ defined in \eqref{metricS}, while the other factor $\sqrt{d+1\over 2 d}$ comes from the shock wave equation in Einstein gravity. 4. When d=2, i.e. D=4, the butterfly velocities are functions of $(\beta+4\alpha)$, which is zero in Gauss-Bonnet gravity. This reflects the fact that the D=4 Gauss-Bonnet term is topological. 5. In the case of $R+\gamma R^2$ gravity the second velocity is zero and the first velocity becomes \begin{eqnarray} v^{(R^2)}_B&=&\sqrt{d+1\over 2 d}\Big[1-{(d-2)(3d+1)\gamma\over2}\Big] \end{eqnarray} This means that for the D=4 planar black hole the $R^2$ gravity does not give any correction to the butterfly velocity. Otherwise the correction may be positive or negative, depending on the value of $\gamma$. 6. In Einstein-conformal gravity, in which $\beta=-2\alpha$ and $\gamma={1\over 3}\alpha$, the first butterfly velocity becomes \begin{eqnarray} v^{(1)}_B&=&\sqrt{d+1\over 2 d}\Big[1-\Big(16\pi^2 T^2+{(d-2)(3d+1)\over6}\Big)\,\alpha\Big] \end{eqnarray} which shows a different behavior from that in $R^2$ gravity, since for the D=4 planar black hole conformal gravity does correct the butterfly velocity. 7. At high temperature we have a simple relation \begin{eqnarray} \label{HTV} v_B^{(1)}&\approx&\sqrt{d+1\over 2 d}\left[1-8\pi^2(\beta+4\alpha) T^2+{\cal O}(T^{0})\right] \end{eqnarray} which is independent of the value of $\gamma$. This means that at high temperature the leading correction to the butterfly velocity is independent of the $R^2$ term; the $R^2$ term corrects the butterfly velocity only at order ${\cal O}(T^0)$. 8. At low temperature we have a simple relation \begin{eqnarray} \label{LTV} v_B^{(1)}&\approx&\sqrt{d+1\over 2 d}\left[1-{1\over 2} (d-2) \Big((d-1) \alpha + ( d+1)(\beta+4\alpha)+ (3d+1)(\gamma-\alpha) \Big)\right] \end{eqnarray} Comparing the above equation with the high-temperature expansion, we see that, depending on the values of $\alpha$, $\beta$, $\gamma$ and $d$, the velocity correction from quadratic gravity may change sign, from positive to negative or from negative to positive, as the temperature increases. 
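The sign change discussed in point 8 above can be exhibited numerically. The following sketch evaluates the leading-order formula \eqref{finalVB1} for placeholder couplings, chosen small and with $\beta+4\alpha<0$ so that the correction to the Einstein value changes sign as the temperature increases.
\begin{verbatim}
import math

# Leading-order first butterfly velocity of Eq. (finalVB1):
#   v1(T) = sqrt((d+1)/(2d)) * [ 1 - 8 pi^2 (beta+4 alpha) T^2
#           - (1/2)(d-2)((d-1) alpha + (d+1)(beta+4 alpha)
#                        + (3d+1)(gamma-alpha)) ].
# alpha, beta, gamma are PLACEHOLDER couplings, chosen so that the
# correction to the Einstein value v0 changes sign with temperature.

d = 3
alpha, beta, gamma = 0.002, -0.010, 0.004   # beta + 4*alpha < 0 here

def v1(T):
    const = 0.5 * (d - 2) * ((d - 1) * alpha
                             + (d + 1) * (beta + 4 * alpha)
                             + (3 * d + 1) * (gamma - alpha))
    return math.sqrt((d + 1) / (2 * d)) * (
        1 - 8 * math.pi**2 * (beta + 4 * alpha) * T**2 - const)

v0 = math.sqrt((d + 1) / (2 * d))           # Einstein value
for T in (0.0, 0.1, 0.2, 0.3):
    print(f"T = {T:4.2f}:  v1 - v0 = {v1(T) - v0:+.5f}")
# With these numbers the difference is negative at small T and
# turns positive between T = 0.2 and T = 0.3.
\end{verbatim}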
\\ Finally, we can directly apply the general formulas derived in the previous subsection to a simple case in which the metric is the Schwarzschild-AdS black hole solution \begin{eqnarray} ds^2=-r^2\left(1-{r_H^3\over r^3}\right)dt^2+{dr^2\over r^2\left(1-{r_H^3\over r^3}\right)}+r^2(dx^2+dy^2) \end{eqnarray} since any solution of the pure Einstein theory continues to be a solution of the theory with the quadratic modifications. This is the spacetime considered by Alishahiha {\it et al.} in \cite{Alishahiha1610}. The fourth-order shock wave equation becomes \begin{eqnarray} &&{4\alpha+\beta\over r_H^4}\,\bar\Delta\bar\Delta\alpha(x)+{1-3(8\alpha+4\beta+3r_H^2(4\alpha+\beta)+8\gamma)\over r_H^2}\,\bar\Delta\alpha(x)\nonumber\\ &&~~~~~~~~~~~+(-3+36\alpha+27\beta+72\gamma)\,\alpha(x)=E\,e^{2\pi t/\beta}~a(x) \end{eqnarray} After the calculations the butterfly velocities become \begin{eqnarray} v_B^{(1)}&=&{\sqrt3\over 2}\left(1-{8\pi^2(4\alpha+\beta)\over |1-6\beta-24\gamma|}\,T^2+{\cal O}(T^4)\right)\\ v_B^{(2)}&=&{3\over 2} \sqrt{(4\alpha+\beta)\over 12\alpha+9\beta+24\gamma-1}\,\left(1+{8\pi^2(4\alpha+\beta)\over |1-6\beta-24\gamma|}\,T^2+{\cal O}(T^4)\right) \end{eqnarray} which is consistent with eq. (22) of \cite{Alishahiha1610} when $\alpha=T=0$. The second velocity is real if $ {(4\alpha+\beta)\over 12\alpha+9\beta+24\gamma-1}>0$. The condition reduces to $4\alpha+\beta<0$ for small values of $\alpha,\beta,\gamma$. \subsection{Example: Butterfly Velocity in Gauss-Bonnet Massive Gravity} Since the ${\cal U}_i$ in \eqref{MG} are functions of the metric $g_{\mu\nu}$ and the reference metric $f_{\mu\nu}$ and do not depend on the Riemann curvature, we can regard them as a kind of extra matter field. Thus the formulas derived in the previous section, which involve variations with respect to the Riemann curvature, can be directly applied, and we conclude that: {\it In the D-dimensional planar, spherical or hyperbolic black hole spacetime the Einstein-Gauss-Bonnet massive gravity has the same shock wave equation as that in Einstein gravity if and only if the space is isotropic.} Therefore, in the case of isotropic space we can quickly calculate the butterfly velocity in the Gauss-Bonnet massive gravity theories, following the method described in our previous paper \cite{Huang1710}, i.e. \eqref{mainvelocity}. We consider the (d+2)-dimensional Maxwell-Gauss-Bonnet massive gravity. The Lagrangian is described by \eqref{MG}, where ${\cal L}_{\rm matters}$ is the Maxwell term and we add the Gauss-Bonnet curvature with coefficient $\alpha$. The charged black hole solution found in \cite{Hendi1507} is \begin{eqnarray} ds^2&=& -N_\sharp^2f(r)dt^2+{dr^2\over f(r)}+r^2 \delta_{ij}dx^idx^j,~~~~~~~~i,j=1,2,\ldots,d\\ F_{tr}&=&{Q\over r^{d}}\\ f(r)&=&k+{r^2\over 2\alpha\,d_3d_4}\,\left\{1-\sqrt{1+{ 8\alpha\,d_3d_4\over d_1d_2}\left[\Lambda+{ d_1d_2m_0\over 2r^{d_1}}-{Q^2\,d_1\over d_3r^{2d_2}}+\Upsilon\right]}\right\}\\ \Upsilon&=&-m^2\,d_1d_2\left[{d_3 d_4c^4c_4\over 2r^4}+{ d_3c^3c_3\over 2r^3}+{ c^2c_2\over 2r^2}+{cc_1\over 2d_2r} \right] \end{eqnarray} The reference metric is chosen to be $f_{\mu\nu}=(0,0,c^2\delta_{ij})$. The notation $d_i=d+2-i$ is used. The constant $N_\sharp^2$ is \begin{eqnarray} N_\sharp^2 &=&{1\over2}\,\left(1+\sqrt{1-2\alpha(d-1)(d-2)}\right) \end{eqnarray} after substituting the conventional value $\Lambda=-{d(d+1)\over 4}$. 
The horizon defined by $f(r_H)=0$ leads to the relation \begin{eqnarray} 1+{2k\alpha\,d_3d_4\over r_H^2}&=&\sqrt{1+{ 8\alpha\,d_3d_4\over d_1d_2}\left[\Lambda+{ d_1d_2m_0\over 2r_H^{d_1}}-{Q^2\,d_1\over d_3r_H^{2d_2}}+\Upsilon_H\right]} \end{eqnarray} and the black hole temperature is \begin{eqnarray} 4\pi T&=&-{2k\over r_H }+{ d_1m_0\over r_H^{d_1-1}}-{4Q^2\over d_3r_H^{2d_2-1}}-m^2\left[{4d_3 d_4c^4c_4\over r_H^3}+{ 3d_3c^3c_3\over r_H^2}+{2 c^2c_2\over r_H}+{cc_1\over d_2} \right] \end{eqnarray} The above relation can be used to express $r_H$ as a function of the temperature; it does not explicitly depend on $\alpha$. Therefore, using the basic formula $v_B=N_\sharp\,\sqrt{4\pi T\over 2 d r_H}$, the butterfly velocity obeys the exact relation \begin{eqnarray} v_B^m(\alpha)&=&\left[{1\over2}\,\left(1+\sqrt{1-2\alpha(d-1)(d-2)}\right)\right]^{1/2}\,v_B^m(0) \end{eqnarray} The ratio between $v_B^m(\alpha)$ and $v_B^m(0)$ has appeared in the previous literature \cite{Roberts1412, Huang1710} and in \eqref{ratio}\footnote{Eq. \eqref{ratio} is the leading order in $\alpha$.}. Since the aim of this paper is to see how the quadratic curvature affects the butterfly velocity, the properties of $v_B^m(0)$ are left to the reader to analyze. Note that the above relation also appears in Gauss-Bonnet massive gravity with Born-Infeld electrodynamics \cite{Hendi1510} or in the presence of a power-Maxwell field \cite{Hendi1708}, since $r_H$ in these cases is still a function of the temperature that does not explicitly depend on $\alpha$. \section{Conclusions} In this paper we continue our previous work \cite{Huang1710} and study the butterfly velocity in general quadratic gravity with Lagrangian ${\cal L}= \alpha R_{\mu\nu\sigma\rho} R^{\mu\nu\sigma\rho}+\beta R_{\mu\nu}R^{\mu\nu}+\gamma R^2+{\cal L}_{\rm matter}$. In contrast to the case of Gauss-Bonnet theory, in which $\alpha=\gamma=-{\beta\over 4}$, generic quadratic gravity corrects the shock wave equation. After detailed tensor calculations, the general formula for the shock wave equation in a general anisotropic spacetime is derived. We use the formula to prove that in the D-dimensional planar, spherical or hyperbolic black hole spacetime the shock wave equation in Einstein-Gauss-Bonnet gravity has the same form as that in Einstein gravity only if the space is isotropic. We consider the example of a simple spacetime, which is the solution to leading order in $\alpha$, $\beta$ and $\gamma$. We obtain simple formulas for the butterfly velocities in eqs. \eqref{finalVB1} and \eqref{finalVB2}. Using these formulas we find that the fourth-order shock wave equation can lead to two butterfly velocities if and only if $ 4\alpha+\beta<0 $. We also see that for the D=4 planar black hole the butterfly velocity receives no correction in quadratic gravity with $ \beta +4\alpha=0$, which includes $ R^2$ gravity. We also see that, depending on the values of $\alpha$, $\beta$, $\gamma$ and the black hole shape, the velocity correction from quadratic gravity may change sign, from positive to negative or from negative to positive, as the temperature increases. The butterfly velocity in the theory of Gauss-Bonnet massive gravity is also studied. Since the formulas collected in Section 4 are very general, they can be applied to general anisotropic spaces with arbitrary matter fields. While the application in Section 5 is to a simple isotropic space, it would be interesting to apply them to more complex spaces with matter and to see how the butterfly velocity behaves there. 
We hope that our formulas will be helpful in studying the butterfly velocity in quadratic gravity. We conclude this paper with four comments. 1. As mentioned in Section 1, the value of $\alpha$ is related to ${1\over 2N}$, and the values of $\beta$ and $\gamma$ are related to the R-charges of the theory. Thus the calculations in Section 5 tell us that the correction to the butterfly velocity, which describes how a perturbation spreads, may be positive or negative depending on the values of ${1\over 2N}$, the R-charges, and the temperature of the theory. 2. It is known that there is a simple relation between the diffusion constant and the butterfly velocity: $D_c\sim v_B^2/T$ \cite{Blake1603}. Thus, using this relation we can read off how the diffusion constant depends on the values of ${1\over 2N}$, the R-charges, and the temperature of the theory. 3. It remains to find a simple explanation of why the shock wave equation of Einstein gravity is not modified in Gauss-Bonnet gravity with arbitrary matter for the planar, spherical or hyperbolic black hole spacetimes in the case of isotropic space. 4. It is known that the violation of the viscosity (KSS) bound in higher-derivative gravity has been explained in terms of the Weyl anomaly and the central charges \cite{Kats0712,Brigante0712, Myers0812}. Does the correction to the butterfly velocity in higher-derivative gravity have a similarly simple explanation? The investigations in \cite{Alishahiha1610} and \cite{Qaemmaqami1707} have related it to the conformal dimension and the central charge. The more general property is worth studying in detail. The answers to these problems could help us understand the intrinsic properties of quantum chaos.
{ "timestamp": "2018-08-06T02:04:56", "yymm": "1804", "arxiv_id": "1804.05527", "language": "en", "url": "https://arxiv.org/abs/1804.05527" }
\section{Introduction} \label{introduction} Measurements of vector boson production in association with jets provide fundamental tests of quantum chromodynamics (QCD). The high centre-of-mass energy at the CERN LHC allows the production of an electroweak boson along with a large number of jets with large transverse momenta. A precise knowledge of the kinematic distributions in processes with large jet multiplicity is essential to exploit the potential of the LHC experiments. Comparison of the measurements with predictions motivates additional Monte Carlo (MC) generator development and improves our understanding of the prediction uncertainties. Furthermore, the production of a massive vector boson together with jets is an important background to a number of standard model (SM) processes (production of a single top quark, $\ttbar$, and Higgs boson as well as vector boson fusion and WW scattering) as well as to searches for physics beyond the SM, \eg supersymmetry. Leptonic decay modes of the vector bosons are often used in the measurement of SM processes and searches for physics beyond the SM since they have a sufficiently high branching fraction and clean signatures that provide a strong rejection of backgrounds. Differential cross sections for the associated production of a $\cPZ$ boson with hadronic jets have been previously measured by the ATLAS, CMS, and LHCb Collaborations in proton-proton collisions at centre-of-mass energies of 7 \cite{Aad:2013ysa,Aad:2011qv,Chatrchyan:2011ne,Khachatryan:2014zya}, 8~\cite{Khachatryan:2015ira,Khachatryan:2016crw,AbellanBeteta:2016ugk} and 13 \cite{Aaboud:2017hbk} \TeV, and by the CDF and D0 Collaborations in proton-antiproton collisions at 1.96\TeV \cite{Aaltonen:2007ae,Abazov:2008ez}. In this paper, we present measurements of the cross section multiplied by the branching fraction for the production of a $\cPZ/\gamma^*$ boson in association with jets and its subsequent decay into a pair of oppositely charged leptons ($\ell^+\ell^-$) in proton-proton collisions at a centre-of-mass energy of 13\TeV. The measurements from the two final states, with an electron--positron pair (electron channel) and with a muon--antimuon pair (muon channel), are combined. The measurements are performed with data from the CMS detector recorded in 2015 at the LHC corresponding to 2.19\fbinv of integrated luminosity. For convenience, $\cPZ/\gamma^*$ is denoted as $\cPZ$. In this paper a $\cPZ$ boson is defined as a pair of oppositely charged muons or electrons with invariant mass in the range $91\pm20\GeV$. This range is chosen to have a good balance between the signal acceptance, the rejection of background processes, and the ratio of $\cPZ$ boson to $\gamma^*$ event yields. It is also consistent with previous measurements~\cite{Khachatryan:2014zya,Khachatryan:2015ira,Khachatryan:2016crw} and eases comparisons. The cross section is measured as a function of the jet multiplicity ($\ensuremath{N_{\text{jets}}}$), transverse momentum (\pt) of the $\cPZ$ boson, and of the jet transverse momentum and rapidity ($y$) of the first, second, and third jets, where the jets are ordered by decreasing \pt. Furthermore, the cross section is measured as a function of the scalar sum of the jet transverse momenta (\HT) for event samples with at least one, two, and three jets. These observables have been studied in previous measurements. 
In addition, we study the balance in transverse momentum between the reconstructed jet recoil and the $\cPZ$ boson for the different jet multiplicities and two $\cPZ$ boson \pt regions ($\pt(\cPZ) < 50\GeV$ and $\pt(\cPZ) > 50\GeV$). \section{The CMS detector} \label{cms} The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors up to $\abs{\eta}=5$. The electron momentum is estimated by combining the energy measurement in the ECAL with the momentum measurement in the tracker. The momentum resolution for electrons with $\pt \approx 45\GeV$ from $\Z \rightarrow \Pe \Pe$ decays ranges from 1.7\% for nonshowering electrons in the barrel region ($\abs{\eta}< 1.444$) to 4.5\% for showering electrons in the endcaps ($1.566 < \abs{\eta} < 3$)~\cite{Khachatryan:2015hwa}. When combining information from the entire detector, the jet energy resolution is 15\% at 10\GeV, 8\% at 100\GeV, and 4\% at 1\TeV, to be compared to about 40, 12, and 5\% obtained when only the ECAL and HCAL calorimeters are used. Muons are measured in the pseudorapidity range $\abs{\eta} < 2.4$, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum resolution for muons with $20 < \pt < 100\GeV$ of 1.3--2.0\% in the barrel and better than 6\% in the endcaps. The \pt resolution in the barrel is better than 10\% for muons with \pt up to 1\TeV~\cite{Chatrchyan:2012xi}. Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}. The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\unit{kHz} within a time interval of less than 4\mus. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage. \section{Observables} \label{mobs} The cross section is measured for jet multiplicities up to 6 and differentially as a function of the transverse momentum of the $\cPZ$ boson and as a function of several jet kinematic variables, including the jet transverse momentum, rapidity, and the scalar sum of jet transverse momenta. Jet kinematic variables are measured for event samples with at least one, two, and three jets. In the following, the jet multiplicity will be referred to as ``inclusive'' to designate events with at least $N$ jets and as ``exclusive'' for events with exactly $N$ jets. 
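The inclusive/exclusive bookkeeping can be illustrated with a short sketch (not part of the CMS analysis software) that counts jet multiplicities and computes \HT for toy events; it assumes the jet selection $\pt>30\GeV$ and $\abs{y}<2.4$ used for the balance observables defined below, and the event content is invented.
\begin{verbatim}
from collections import Counter

# Illustration (NOT the CMS analysis code) of "exclusive" vs "inclusive"
# jet multiplicities and of HT, the scalar sum of jet pt, assuming the
# jet selection pt > 30 GeV and |y| < 2.4 used below.

events = [  # toy events: list of (pt [GeV], y) per jet
    [(95.0, 0.3), (42.0, -1.1)],
    [(60.0, 2.9), (35.0, 0.2), (31.0, -2.0)],
    [(120.0, 1.5)],
]

def selected(jets):
    return [(pt, y) for (pt, y) in jets if pt > 30.0 and abs(y) < 2.4]

exclusive = Counter(len(selected(jets)) for jets in events)
max_n = max(exclusive)
inclusive = {n: sum(c for m, c in exclusive.items() if m >= n)
             for n in range(max_n + 1)}

for jets in events:
    sel = selected(jets)
    ht = sum(pt for pt, _ in sel)
    print(f"N_jets = {len(sel)}, HT = {ht:.0f} GeV")
print("exclusive:", dict(exclusive), " inclusive:", inclusive)
\end{verbatim}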
The balance between the $\cPZ$ boson and jet transverse momenta is also studied via the \pt balance observable $\ensuremath{\pt^{\text{bal}}}\xspace = \lvert \ptvec(\cPZ) + \sum_{\text{jets}} \ptvec(\text{j}_i) \rvert$, and the so-called jet-$\cPZ$ balance $\ensuremath{\text{JZB}}\xspace = \lvert \sum_{\text{jets}} \ptvec(\text{j}_i) \rvert - \lvert \ptvec(\cPZ) \rvert$, where the sum runs over jets with $\pt > 30\GeV$ and $\abs{y} < 2.4$ \cite{Dias:2011zf,Chatrchyan:2012qka}. The hadronic activity not included in the jets will lead to an imbalance that translates into $\pt^{\text{bal}}$ and \ensuremath{\text{JZB}}\xspace values different from zero. It includes the activity in the forward region ($\abs{y} > 2.4$), which is the dominant contribution according to simulation. Gluon radiation in the central region that is not clustered in a jet with $\pt>30\GeV$ will also contribute to the imbalance. Hadronic activity not included in the jets will lead to a shift of the $\pt^{\text{bal}}$ distribution peak to larger values. The \ensuremath{\text{JZB}}\xspace variable distinguishes between two configurations, one where transverse momentum due to the unaccounted hadronic activity is in the direction of the $\cPZ$ boson and another where it is in the opposite direction. Events in the first configuration that have a large imbalance will populate the positive tail of the \ensuremath{\text{JZB}}\xspace distribution, while those in the second configuration populate the negative tail. The distribution of \ensuremath{\pt^{\text{bal}}}\xspace is measured for events with minimum jet multiplicities of 1, 2, and~3. To separate low and high jet multiplicity events without \pt and y constraints on the jets, the \ensuremath{\text{JZB}}\xspace variable is also studied for $\pt(\cPZ)$ below and above 50\GeV. The $\cPZ$ boson transverse momentum $\pt(\cPZ)$ can be described via fixed-order calculations in perturbative QCD at high values, while at small transverse momentum this requires resummation of multiple soft-gluon emissions to all orders in perturbation theory~\cite{Dokshitzer:1978yd,Collins:1984kg}. The measurement of the distribution of $\pt(\cPZ)$ for events with at least one jet, due to the increased phase space for soft gluon radiation, leads to an understanding of the balance in transverse momentum between the jets and the $\cPZ$ boson, and can be used for comparing theoretical predictions that treat multiple soft-gluon emissions in different ways. \section{Phenomenological models and theoretical calculations} \label{theory} The measured $\cPZ + \text{ jets}$ cross section is compared to four different calculations: two merging matrix elements (MEs) with various final-state parton multiplicities together with parton showering; a third with a combination of next-to-next-to-leading order (NNLO) calculation with next-to-next-to-leading logarithmic (NNLL) resummation and with parton showering; and a fourth with fixed-order calculation. The first two calculations use \MGvATNLO version 2.2.2 (denoted {\textsc{MG5}\_a\textsc{MC}}\xspace)~\cite{Alwall:2014hca}, which is interfaced with {\PYTHIA}8\xspace (version 8.212)~\cite{Sjostrand:2014zea}. {\PYTHIA}8\xspace is used to include initial- and final-state parton showers and hadronisation. Its settings are defined by the CUETP8M1 tune~\cite{Khachatryan:2015pea}, in particular the NNPDF 2.3~\cite{Ball:2012cx} leading order (LO) parton distribution function (PDF) is used and the strong coupling $\alpS(m_{\cPZ})$ is set to 0.130. 
The first calculation includes MEs computed at LO for the five processes \ensuremath{\Pp\Pp \to \cPZ + N \text{ jets}}\xspace, $N=0\ldots4$ and matched to the parton shower using the \kt-MLM~\cite{Alwall:2007fs,Alwall:2008qv} scheme with the matching scale set at 19\GeV. In the ME calculation, the NNPDF~3.0 LO PDF~\cite{Ball:2014uwa} is used and $\alpS(m_{\cPZ})$ is set to 0.130 at the $\cPZ$ boson mass scale. The second calculation includes MEs computed at NLO for the three processes \ensuremath{\Pp\Pp \to \cPZ + N \text{ jets}}\xspace, $N=0\ldots2$ and merged with the parton shower using the FxFx~\cite{Frederix:2012ps} scheme with the merging scale set at 30\GeV. The NNPDF~3.0 next-to-leading order (NLO) PDF is used and $\alpS(m_{\cPZ})$ is set to 0.118. This second calculation is also employed to derive nonperturbative corrections for the fixed-order prediction discussed in the following. The third calculation uses the \textsc{geneva}\xspace 1.0-RC2 MC program (GE), where an NNLO calculation for Drell--Yan production is combined with higher-order resummation~\cite{Alioli:2015toa,Alioli:2012fc}. Logarithms of the 0-jettiness resolution variable, ${\tau}$, also known as beam thrust and defined in Ref.~\cite{Stewart:2010tn}, are resummed at NNLL including part of the next-to-NNLL (N$^{3}$LL) corrections. The accuracy refers to the $\tau$ dependence of the cross section and is denoted NNLL'$_\tau$. The PDF set PDF4LHC15 NNLO~\cite{Butterworth:2015oua} is used for this calculation and $\alpS(m_{\cPZ})$ is set to 0.118. The resulting parton-level events are further combined with parton showering and hadronisation provided by {\PYTHIA}8\xspace using the same tune as for {\textsc{MG5}\_a\textsc{MC}}\xspace. Finally, the distributions measured for $\ensuremath{N_{\text{jets}}}\ge1$ are compared with the fourth calculation performed at NNLO accuracy for $\cPZ+1$ jet using the $N$-jettiness subtraction scheme ($N_{\text{jetti}}$)~\cite{Boughezal:2016isb,Boughezal:2015ded}. The PDF set CT14~\cite{Dulat:2015mca} is used for this calculation. The nonperturbative correction obtained from {\textsc{MG5}\_a\textsc{MC}}\xspace and {\PYTHIA}8\xspace is applied. It is calculated for each bin of the measured distributions from the ratio of the cross section values obtained with and without multiple parton interactions and hadronisation. This correction is less than 7\%. Given the large uncertainty in the LO calculation for the total cross section, the prediction with LO MEs is rescaled to match the $\Pp\Pp\to \cPZ$ cross section calculated at NNLO in \alpS, including NLO quantum electrodynamics (QED) corrections, with \FEWZ~\cite{Melnikov:2006kv} (version 3.1b2). The values used to normalise the cross section of the {\textsc{MG5}\_a\textsc{MC}}\xspace predictions are given in Table~\ref{tab:theory_xsec}. All the numbers correspond to a 50\GeV dilepton mass threshold applied before QED final-state radiation (FSR). With \FEWZ, the cross section is computed in the dimuon channel, using a mass threshold applied after QED FSR, but including the photons around the lepton at a distance $R = \sqrt{(\Delta \eta)^2+(\Delta \phi)^2}$ smaller than 0.1. The number given in the table includes a correction computed with the LO sample to account for the difference in the mass definition. This correction is small, $+0.35\%$. When the mass threshold is applied before FSR, the cross section is assumed to be the same for the electron and muon channels. 
\begin{table*}[ht]
\centering
\topcaption{Values of the $\Pp\Pp \to \ell^+\ell^-$ total cross section used for the calculation in data-theory comparison plots. The cross section used, the cross section from the MC generator (``native''), and the ratio of the two ($k$) are provided. The phase space of the sample to which the cross section values correspond is indicated in the second column.}
\cmsTable{
\begin{tabular}{lccccc}
 & & Native cross & & Used cross &\\
Prediction & Phase space & section [pb] & Calculation & section [pb] & $k$\\
\hline
{\textsc{MG5}\_a\textsc{MC}}\xspace+{\PYTHIA}8\xspace, ${\le} 4$ j LO+PS & $m_{\ell^+\ell^-}>50\GeV$ & 1652 & \FEWZ NNLO & 1929 & 1.17 \\
{\textsc{MG5}\_a\textsc{MC}}\xspace+{\PYTHIA}8\xspace, ${\le} 2$ j NLO+PS & $m_{\ell^+\ell^-}>50\GeV$ & 1977 & native & 1977 & 1 \\
\textsc{geneva}\xspace & $m_{\ell^+\ell^-} \in [50, 150\GeV]$ & 1980 & native & 1980 & 1 \\
\end{tabular}
}
\label{tab:theory_xsec}
\end{table*}

Uncertainties in the ME calculation (denoted {\em theo. unc.} in the figure legends) are estimated for the NLO {\textsc{MG5}\_a\textsc{MC}}\xspace, NNLO, and \textsc{geneva}\xspace calculations following the prescriptions recommended by the authors of the respective generators. The uncertainty coming from missing terms in the fixed-order calculation is estimated by varying the renormalisation ($\mu_{\mathrm{R}}$) and factorisation ($\mu_{\mathrm{F}}$) scales by factors of 0.5 and 2. In the case of the FxFx-merged sample, the envelope of six combinations of the variations is considered; the two combinations where one scale is varied by a factor of 0.5 and the other by a factor of 2 are excluded. In the case of the NNLO and \textsc{geneva}\xspace samples, the two scales are varied by the same factor, leading to only two combinations. For \textsc{geneva}\xspace, the uncertainty is symmetrised by using the maximum of the up and down uncertainties for both cases. The uncertainty from the resummation, estimated using six profile scales~\cite{Abbate:2010xh,Ligeti:2008ac} as described in Ref.~\cite{Alioli:2015toa}, is added in quadrature. Uncertainties in the PDFs and in the value of \alpS are also estimated in the case of the FxFx-merged sample. The PDF uncertainty is estimated using the set of 100 replicas of the NNPDF~3.0 NLO PDF, and the uncertainty in the \alpS value used in the ME calculation is estimated by varying it by $\pm0.001$. These two uncertainties are added in quadrature to the ME calculation uncertainties. For \textsc{geneva}\xspace and NLO {\textsc{MG5}\_a\textsc{MC}}\xspace, all these uncertainties are obtained using the reweighting method~\cite{Frederix:2011ss,Alioli:2015toa} implemented in these generators.

\section{Simulation}
\label{samples}

MC event generators are used to simulate proton-proton interactions and produce events from signal and background processes. The response of the detector is modelled with \GEANTfour~\cite{Allison:2006ve}. The $\cPZ (\to \ell^+ \ell^-) + \text{ jets}$ process is generated with NLO {\textsc{MG5}\_a\textsc{MC}}\xspace interfaced with {\PYTHIA}8\xspace, using the FxFx merging scheme as described in Section~\ref{theory}. The sample includes the $\cPZ\to \Pgt^+\Pgt^-$ process, which is considered a background. Other processes that can give a final state with two oppositely charged same-flavour leptons and jets are $\PW\PW$, $\PW\cPZ$, $\cPZ\cPZ$, $\PQt\PAQt$ pairs, and single top quark production.
The $\PQt\PAQt$ and single top quark backgrounds are generated using \POWHEG version~2~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Frixione:2007nw} interfaced with {\PYTHIA}8\xspace. Background samples corresponding to diboson electroweak production (denoted VV in the figure legends)~\cite{Nason:2013ydw} are generated at NLO with \POWHEG interfaced with {\PYTHIA}8\xspace ($\PW\PW$), and with {\textsc{MG5}\_a\textsc{MC}}\xspace interfaced with {\PYTHIA}8\xspace or {\PYTHIA}8\xspace alone ($\PW\cPZ$ and $\cPZ\cPZ$). The background sample corresponding to $\PW + \text{ jets}$ production ($\PW$) is generated at NLO using {\textsc{MG5}\_a\textsc{MC}}\xspace interfaced with {\PYTHIA}8\xspace, using the FxFx merging scheme.

The events collected at the LHC contain multiple superimposed proton-proton collisions within a single beam crossing, an effect known as pileup. Samples of simulated pileup are generated with a distribution of proton-proton interactions per beam bunch crossing close to that observed in data. The number of pileup interactions, averaging around 20, varies with the beam conditions. The correct description of pileup is ensured by reweighting the simulated sample to match the number of interactions measured in data.

\section{Object reconstruction and event selection}
\label{eventselection}

The particle-flow (PF) algorithm~\cite{Sirunyan:2017ulk} is used to reconstruct the events. It combines the information from the various elements of the CMS detector to reconstruct and identify each particle in the event. The reconstructed particles are called PF candidates. If several primary vertices are reconstructed, we use the one with the largest quadratic sum of associated track transverse momenta as the vertex of the hard scattering, and the other vertices are attributed to pileup.

The online trigger selects events with two isolated electrons (muons) with transverse momenta of at least 17 and 12 (17 and 8) \GeV. After offline reconstruction, the leptons are required to satisfy $\pt > 20 \GeV$ and $\abs{\eta} < 2.4$. We require that the two electrons (muons) with the highest transverse momenta form a pair of oppositely charged leptons with an invariant mass in the range $91\pm20\GeV$. The transition region between the ECAL barrel and endcap ($1.444 < \abs{\eta}< 1.566$) is excluded in the reconstruction of electrons, and the missing acceptance is corrected to the full $\abs{\eta} < 2.4$ region. The reconstruction of electrons and muons is described in detail in Refs.~\cite{Khachatryan:2015hwa,Chatrchyan:2012xi}. The identification criteria applied for electrons and muons are identical to those described in Ref.~\cite{Khachatryan:2016crw}, except for the thresholds of the isolation variables, which are optimised in this analysis for the 13\TeV centre-of-mass energy. Electrons (muons) are considered isolated based on the scalar \pt sum of the nearby PF candidates within a distance $R = \sqrt{(\Delta \eta)^2+(\Delta \phi)^2} < 0.3$ (0.4). The scalar \pt sum must be less than 15 (25)\% of the electron (muon) transverse momentum. We also correct the simulation for differences with respect to data in the trigger efficiency and in the lepton identification, reconstruction, and isolation efficiencies. These corrections, which depend on the run conditions, are derived using data taken during the run period; they typically amount to 1--2\% for the reconstruction and identification efficiency and 3--5\% for the trigger efficiency.
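To make the relative-isolation requirement described above concrete, the following minimal Python sketch computes it for a single lepton. This is an illustration only, not the actual analysis code: the lepton and PF-candidate objects (with \texttt{pt}, \texttt{eta}, \texttt{phi} attributes) are hypothetical, and details such as pileup corrections and the precise removal of the lepton's own footprint are omitted.
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance R = sqrt(d_eta^2 + d_phi^2), phi wrapped to [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, pf_candidates, is_electron):
    """Scalar pt sum of nearby PF candidates, relative to the lepton pt."""
    cone = 0.3 if is_electron else 0.4          # cone size R
    threshold = 0.15 if is_electron else 0.25   # 15% (25%) of the lepton pt
    pt_sum = sum(c.pt for c in pf_candidates
                 if 0 < delta_r(lepton.eta, lepton.phi, c.eta, c.phi) < cone)
    return pt_sum < threshold * lepton.pt
\end{verbatim}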
Jets at the generator level are defined from the stable particles ($c\tau > 1\cm$), excluding neutrinos, clustered with the anti-$\kt$ algorithm~\cite{Cacciari:2008gp} using a radius parameter of 0.4. The jet four-momentum is obtained according to the E-scheme~\cite{fastjetmanual} (vector sum of the four-momenta of the constituents). In the reconstructed data, the algorithm is applied to the PF candidates. The technique of charged-hadron subtraction~\cite{Sirunyan:2017ulk} is used to reduce the pileup contribution by removing charged particles that originate from pileup vertices. The jet four-momentum is corrected for the difference observed in the simulation between jets built from PF candidates and from generator-level particles. The jet mass and direction are kept constant for the corrections, which are functions of the jet $\eta$ and \pt, as well as of the energy density and jet area quantities defined in Refs.~\cite{Cacciari:2007fd,Chatrchyan:2011ds}. The latter are used in the correction of the energy offset introduced by the pileup interactions. Further jet energy corrections are applied for differences between data and simulation in the pileup in zero-bias events and in the \pt balance in dijet, $\cPZ +\text{ jet}$, and $\gamma+\text{ jet}$ events.

Since the \pt balance in $\cPZ+\text{ jet}$ events is one of the observables measured in this paper, it is important to understand how it is used in the jet calibration. The balance is measured for events with two objects (jet, $\gamma$, or $\cPZ$ boson) back-to-back in the transverse plane ($\abs{\Delta\phi - \pi} < 0.34$), possibly accompanied by a third object, a soft jet. The measurement is made for various values of $\rho=\pt^{\text{soft jet}}/\pt^{\text{ref}}$, running from 0.1 to 0.3, and extrapolated to $\rho = 0$. In the case where the back-to-back objects are a jet and a boson, $\pt^{\text{ref}}$ is defined as the transverse momentum of the boson, while in the case of two jets it is defined as the average of their transverse momenta. All jets down to $\pt = 5$ or 10\GeV, including jets reconstructed in the forward calorimeter, are considered for the soft jet. The data-simulation adjustment is therefore done for ideal topologies with only two objects, whose transverse momenta must be balanced. The jet calibration procedure is detailed in Ref.~\cite{Khachatryan:2016kdb}. In this measurement, jets are further required to satisfy the loose identification criteria defined in Ref.~\cite{CMS:2016jetID}. Despite the vertex requirement used in the jet clustering, some jets are reconstructed from pileup candidates; these jets are suppressed using the multivariate technique described in Ref.~\cite{CMS:2013wea}. Jets with $\pt>30\GeV$ and $\abs{y}<2.4$ are used in this analysis.

\section{Background estimation}
\label{background}

The contributions from background processes are estimated using the simulation samples described in Section~\ref{samples} and are subtracted from the measured distributions. The dominant background, $\ttbar$, is also measured from data. This $\ttbar$ background contributes mainly through events with two same-flavour leptons. The production cross sections for $\Pep\Pem$ and $\Pgmp\Pgmm$ events from $\ttbar$ are identical to those for $\Pep\Pgmm$ and $\Pem\Pgmp$ events, and the former can therefore be estimated from the latter. We select events in the $\ttbar$ control sample using the same criteria as for the measurement, but requiring the two leptons to have different flavours.
This requirement rejects the signal and provides a sample enriched in $\PQt\PAQt$ events. Each of the distributions that we are measuring is derived from this sample and compared with the simulation. For events with at least one jet, this comparison reveals a discrepancy, which we correct by applying to the simulation a correction factor $\mathcal{C}$ that depends on the event jet multiplicity. These factors, together with their uncertainties, are given in Table~\ref{tab:TTSF}. After applying this correction to the simulation, all the distributions considered in this measurement agree with the data in the $\ttbar$ control sample; the agreement is demonstrated with a $\chi^2$ test. We conclude that a parametrization as a function of the jet multiplicity is sufficient to capture the dependence on the event topology. Remaining sources of uncertainty are the estimates of the lepton reconstruction and selection efficiencies and of the yield of events from processes other than \ttbar entering the control region. This yield is estimated from the simulation. Given the sizes of the statistical uncertainties and of the background contributions, both of these uncertainties are negligible. Therefore, the uncertainty in the correction factor is reduced to the statistical uncertainties in the data and simulation samples.

\begin{table}[h]
\centering
\topcaption{The correction factors ($\mathcal{C}$) applied to the simulated $\ttbar$ sample with their uncertainties, which are derived from the statistical uncertainties in the data and simulation samples.}
\begin{tabular}{cc}
$\ensuremath{N_{\text{jets}}}$ & $\mathcal{C}$ \\
\hline
$=$0 & 1 \\
$=$1 & 0.94 $\pm$ 0.04 \\
$=$2 & 0.97 $\pm$ 0.03 \\
$=$3 & 1.01 $\pm$ 0.04 \\
$=$4 & 0.86 $\pm$ 0.06 \\
$=$5 & 0.61 $\pm$ 0.09 \\
$=$6 & 0.68 $\pm$ 0.17 \\
\end{tabular}
\label{tab:TTSF}
\end{table}

\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{Figure_001-a.pdf}%
\includegraphics[width=0.45\textwidth]{Figure_001-b.pdf}\\
\includegraphics[width=0.45\textwidth]{Figure_001-c.pdf}%
\includegraphics[width=0.45\textwidth]{Figure_001-d.pdf}
\caption{Reconstructed data, simulated signal, and background distributions of the inclusive (left) and exclusive (right) jet multiplicity for the electron (upper) and muon (lower) channels. The background distributions are obtained from the simulation, except for the $\PQt\PAQt$ contribution, which is estimated from the data as explained in the text. The error bars correspond to the statistical uncertainty. In the ratio plots, they include both the uncertainties from data and from simulation. The set of generators described in Section~\ref{samples} has been used for the simulation.}
\label{fig:njet}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{Figure_002-a.pdf}%
\includegraphics[width=0.45\textwidth]{Figure_002-b.pdf}\\
\includegraphics[width=0.45\textwidth]{Figure_002-c.pdf}%
\includegraphics[width=0.45\textwidth]{Figure_002-d.pdf}
\caption{Reconstructed data, simulated signal, and background distributions of the transverse momentum balance between the $\cPZ$ boson and the sum of the jets with at least one jet (left) and three jets (right) for the electron (upper) and muon (lower) channels. The background distributions are obtained from the simulation, except for the $\PQt\PAQt$ contribution, which is estimated from the data as explained in the text. The error bars correspond to the statistical uncertainty. In the ratio plots, they include both the uncertainties from data and from simulation.
The set of generators described in Section~\ref{samples} has been used for the simulation.}
\label{fig:reco-ptbal}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Figure_003-a.pdf}\includegraphics[width=0.5\textwidth]{Figure_003-b.pdf}
\caption{Reconstructed data, simulated signal, and background distributions of the \ensuremath{\text{JZB}}\xspace variable for the electron (left) and muon (right) channels. The background distributions are obtained from the simulation, except for the $\PQt\PAQt$ contribution, which is estimated from the data as explained in the text. The error bars correspond to the statistical uncertainty. In the ratio plots, they include both the uncertainties from data and from simulation. The set of generators described in Section~\ref{samples} has been used for the simulation.}
\label{fig:reco-jzb}
\end{figure*}

The jet multiplicity distributions in data and simulation are presented in Fig.~\ref{fig:njet}. The background contamination is below 1\% for the inclusive cross section, and increases with the number of jets, reaching close to 10\% for a jet multiplicity of three and above, due to $\PQt\PAQt$ production. Multijet and $\PW$ events could pass the selection if one or two jets are misidentified as leptons. The number of multijet events is estimated from data using a control sample obtained by requiring two same-sign same-flavour lepton candidates, whereas the number of $\PW$ events is estimated from simulation. Both contributions are found to be negligible. Fig.~\ref{fig:reco-ptbal} shows the \ensuremath{\pt^{\text{bal}}}\xspace distribution separately for the electron and muon channels. The $\ttbar$ background does not peak at the same \pt balance as the signal, and has a broader spectrum. The \ensuremath{\text{JZB}}\xspace distribution is shown in Fig.~\ref{fig:reco-jzb}. The $\ttbar$ background is asymmetric, making a larger contribution to the positive side of the distribution because transverse energy is carried away by neutrinos from $\PW$ boson decays, leading to a reduction in the negative term of the \ensuremath{\text{JZB}}\xspace expression. Overall, the agreement between data and simulation before the background subtraction is good, and differences are within about 10\%.

\section{Unfolding procedure}
\label{unfolding}

The fiducial cross sections are obtained by subtracting the simulated backgrounds from the data distributions and correcting the background-subtracted data distributions back to the particle level using an unfolding procedure, which takes into account detector effects such as detection efficiency and resolution. The unfolding is performed using the D'Agostini iterative method with early stopping~\cite{D'Agostini:1994zf}, implemented in the RooUnfold toolkit~\cite{Adye:2011gm}. The response matrix describes the migration probability between the particle- and reconstructed-level quantities, including the overall reconstruction efficiencies. It is computed using a $\cPZ + \text{ jets}$ sample simulated with {\textsc{MG5}\_a\textsc{MC}}\xspace interfaced with {\PYTHIA}8\xspace, using the FxFx merging scheme as described in Section~\ref{theory}. The optimal number of iterations is determined separately for each distribution by studying the fluctuations introduced by the unfolding with toy MC experiments generated at each step of the iteration. Final unfolded results have also been checked to be consistent with data-simulation comparisons on detector-folded distributions.
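To make the unfolding step concrete, the following minimal numpy sketch implements one iteration of the D'Agostini update. This is an illustration only, not the RooUnfold implementation used in the analysis; the $3{\times}3$ response matrix, data yields, and number of iterations are hypothetical.
\begin{verbatim}
import numpy as np

def dagostini_step(response, data, prior):
    # response[j, i] = P(reconstructed in bin j | generated in bin i);
    # its column sums are the reconstruction efficiencies per true bin.
    eff = response.sum(axis=0)
    folded = response @ prior              # expected reconstructed-level yields
    ratio = data / folded                  # data-to-expectation ratio per reco bin
    return prior / eff * (response * ratio[:, None]).sum(axis=0)

R = np.array([[0.80, 0.10, 0.00],
              [0.15, 0.75, 0.10],
              [0.00, 0.10, 0.80]])         # small bin-to-bin migrations
d = np.array([120.0, 300.0, 150.0])        # background-subtracted data
t = np.full(3, d.sum() / 3.0)              # flat prior
for _ in range(4):                         # few iterations (early stopping)
    t = dagostini_step(R, d, t)
\end{verbatim}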
Because of the steep slope at the lower boundary of the jet transverse momentum distributions, and in order to improve the accuracy of the unfolding, these distributions are unfolded using histograms with two additional bins, $[20, 24]$ and $[24, 30]\GeV$, below the nominal \pt threshold. The additional bins are discarded after the unfolding.

The particle-level values refer to the stable leptons from the decay of the $\cPZ$ boson and to the jets built from the stable particles ($c\tau>1\cm$) other than neutrinos using the same algorithm as for the measurements. The momenta of all the photons whose $R$ distance to the lepton axis is smaller than 0.1 are added to the lepton momentum to account for the effects of the final-state radiation; the leptons are said to be ``dressed''. The momentum of the $\cPZ$ boson is taken to be the sum of the momenta of the two highest-\pt electrons (or muons). The phase space for the cross section measurement is restricted to events with a $\cPZ$ boson mass between 71 and 111\GeV and both leptons with $\pt > 20\GeV$ and $\abs{\eta} < 2.4$. Jets are required to have $\pt > 30\GeV$, $\abs{y} < 2.4$, and a spatial separation from the dressed leptons of $R > 0.4$.

\section{Systematic uncertainties}
\label{systematics}

The systematic uncertainties are propagated to the measurement by varying the corresponding simulation parameters by one standard deviation up and down when computing the response matrix. The uncertainty sources are independent, and the resulting uncertainties are therefore added in quadrature. Tables~\ref{tab:combZNGoodJets_Zexc} to~\ref{tab:combJZB_ptHigh} present the uncertainties for each differential cross section. The dominant uncertainty comes from the jet energy scale (JES). It typically amounts to 5\% for a jet multiplicity of one and increases with the number of reconstructed jets. The uncertainty from the jet energy resolution (JER), which causes bin-to-bin migrations that are corrected by the unfolding, is typically 1\%. The most important uncertainty after the JES arises from the measured efficiency (Eff) of trigger, lepton reconstruction, and lepton identification, which results in a measurement uncertainty of about 2\%, increasing up to 4\% for events with leptons of large transverse momenta. The uncertainty in the measurement of the integrated luminosity (Lumi) is 2.3\%~\cite{CMS:2016eto}. The resulting uncertainty in the measured distributions is 2.3\%, although the uncertainty is slightly larger in regions that contain background contributions that are estimated from simulation. The largest background contribution to the uncertainty (Bkg) comes from the reweighting procedure for the $\ttbar$ simulation, which is estimated to be less than 1\% for jet multiplicities below 4. Theoretical contributions come from the accuracy of the predicted cross sections, and include the uncertainties from PDFs, \alpS, and the fixed-order calculation. Three other small sources of uncertainty are: (1) the lepton energy scale (LES) and resolution (LER), which are below 0.3\% in every bin of the measured distributions; (2) the uncertainty in the pileup model, where the 5\% uncertainty in the average number of pileup events results in an uncertainty in the measurement smaller than 1\%; and (3) the uncertainty in the input distribution used to build the response matrix used in the unfolding, described as follows. Because of the finite binning, a different input distribution leads to a different response matrix.
This uncertainty is estimated by weighting the simulation to agree with the data in each distribution and building a new response matrix. The weighting is done using a finer binning than for the measurement. The difference between the nominal results and the results unfolded using the alternative response matrix is taken as the systematic uncertainty, denoted {\em Unf model}. An additional uncertainty comes from the finite size of the simulation sample used to build the response matrix. This source of uncertainty is denoted {\em Unf stat} in the table and is included in the systematic uncertainty of the measurement. \begin{table*} \centering \topcaption{Cross section in exclusive jet multiplicity for the combination of both decay channels and breakdown of the uncertainties.} \footnotesize{ \begin{tabular}{cccccccccccc} $\ensuremath{N_{\text{jets}}}$ & $\dd{\sigma}{\ensuremath{N_{\text{jets}}}}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ & [\text{pb]} & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $=$ 0 & 652. & 3.0 & 0.090 & 1.1 & 0.046 & 1.5 & 2.3 & $<$0.01 & 0.22 & \NA & 0.026 \\ $=$ 1 & 98.0 & 5.1 & 0.27 & 4.3 & 0.18 & 1.5 & 2.3 & 0.012 & 0.30 & \NA & 0.10 \\ $=$ 2 & 22.3 & 7.3 & 0.62 & 6.7 & 0.20 & 1.6 & 2.3 & 0.026 & 0.43 & \NA & 0.26 \\ $=$ 3 & 4.68 & 10. & 1.3 & 9.8 & 0.39 & 1.7 & 2.3 & 0.13 & 0.29 & \NA & 0.54 \\ $=$ 4 & 1.01 & 11. & 3.4 & 10. & 0.24 & 1.7 & 2.3 & 0.42 & 0.56 & \NA & 1.4 \\ $=$ 5 & 0.274 & 14. & 5.0 & 12. & 0.076 & 2.0 & 2.3 & 1.2 & 0.30 & \NA & 2.2 \\ $=$ 6 & 0.045 & 24. & 15. & 17. & 0.35 & 1.8 & 2.4 & 3.5 & 1.7 & \NA & 6.6 \\ \end{tabular}} \label{tab:combZNGoodJets_Zexc} \end{table*} \begin{table*} \centering \topcaption{Cross section in inclusive jet multiplicity for the combination of both decay channels and breakdown of the uncertainties.} \footnotesize{ \begin{tabular}{cccccccccccc} $\ensuremath{N_{\text{jets}}}$ & $\dd{\sigma}{\ensuremath{N_{\text{jets}}}}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ & [\text{pb]} & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $\geq$ 0 & 778. & 2.8 & 0.080 & 0.079 & $<$0.01 & 1.5 & 2.3 & $<$0.01 & 0.24 & \NA & 0.025 \\ $\geq$ 1 & 126.3 & 5.7 & 0.22 & 5.0 & 0.19 & 1.5 & 2.3 & $<$0.01 & 0.32 & \NA & 0.086 \\ $\geq$ 2 & 28.3 & 7.9 & 0.51 & 7.4 & 0.22 & 1.6 & 2.3 & 0.072 & 0.41 & \NA & 0.21 \\ $\geq$ 3 & 6.02 & 11. & 1.1 & 10. & 0.29 & 1.7 & 2.3 & 0.25 & 0.35 & \NA & 0.46 \\ $\geq$ 4 & 1.33 & 12. & 2.7 & 11. & 0.16 & 1.7 & 2.3 & 0.65 & 0.54 & \NA & 1.1 \\ $\geq$ 5 & 0.319 & 14. & 4.8 & 13. & 0.097 & 1.9 & 2.3 & 1.5 & 0.50 & \NA & 2.2 \\ $\geq$ 6 & 0.045 & 24. & 15. & 17. & 0.35 & 1.8 & 2.4 & 3.5 & 1.7 & \NA & 6.6 \\ \end{tabular}} \label{tab:combZNGoodJets_Zinc} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $\pt(\cPZ)$ ($\ensuremath{N_{\text{jets}}} \geq 1$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccccc} $\pt(\cPZ)$ & $\dd{\sigma}{\pt(\cPZ)}$ & Tot. & Stat & JES & JER & Eff & Lumi & Bkg & LES & LER & Pileup & Unf & Unf \vspace{-0.4em}\\ & & unc & & & & & & & & & & model & stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 1.25$ & 0.073 & 18. & 5.4 & 16. & 0.81 & 1.6 & 2.3 & $<$0.01 & 1.2 & 0.93 & 0.22 & 5.5 & 2.2 \\ $1.25 \ldots 2.5$ & 0.212 & 14. & 3.2 & 13. 
& 0.89 & 1.6 & 2.3 & $<$0.01 & 0.67 & 0.37 & 0.34 & 1.9 & 1.3 \\ $2.5 \ldots 3.75$ & 0.309 & 13. & 2.7 & 13. & 0.82 & 1.5 & 2.3 & $<$0.01 & 0.55 & 0.30 & 0.17 & 1.7 & 1.1 \\ $3.75 \ldots 5$ & 0.377 & 13. & 2.4 & 13. & 0.86 & 1.6 & 2.3 & $<$0.01 & 0.73 & 0.18 & 0.43 & 1.2 & 1.0 \\ $5 \ldots 6.25$ & 0.422 & 14. & 2.3 & 13. & 0.85 & 1.5 & 2.3 & $<$0.01 & 0.55 & 0.085 & 0.50 & 1.7 & 1.1 \\ $6.25 \ldots 7.5$ & 0.487 & 13. & 2.2 & 12. & 0.88 & 1.5 & 2.3 & $<$0.01 & 0.51 & 0.11 & 0.34 & 1.8 & 1.0 \\ $7.5 \ldots 8.75$ & 0.537 & 13. & 2.1 & 12. & 0.85 & 1.5 & 2.3 & $<$0.01 & 0.57 & 0.073 & 0.30 & 2.0 & 1.0 \\ $8.75 \ldots 10$ & 0.580 & 12. & 1.9 & 12. & 0.81 & 1.6 & 2.3 & $<$0.01 & 0.62 & 0.040 & 0.24 & 2.7 & 0.93 \\ $10 \ldots 11.25$ & 0.631 & 13. & 1.9 & 12. & 0.74 & 1.6 & 2.3 & $<$0.01 & 0.67 & 0.030 & 0.29 & 3.1 & 0.91 \\ $11.25 \ldots 12.5$ & 0.697 & 12. & 1.8 & 11. & 0.81 & 1.6 & 2.3 & $<$0.01 & 0.55 & 0.11 & 0.20 & 3.2 & 0.91 \\ $12.5 \ldots 15$ & 0.757 & 12. & 1.4 & 11. & 0.89 & 1.6 & 2.3 & $<$0.01 & 0.48 & 0.098 & 0.18 & 2.8 & 0.71 \\ $15 \ldots 17.5$ & 0.87 & 12. & 1.4 & 11. & 0.86 & 1.5 & 2.3 & $<$0.01 & 0.98 & 0.093 & 0.058 & 2.2 & 0.68 \\ $17.5 \ldots 20$ & 0.98 & 12. & 1.3 & 12. & 0.87 & 1.5 & 2.3 & $<$0.01 & 0.81 & 0.085 & 0.43 & 1.1 & 0.66 \\ $20 \ldots 25$ & 1.15 & 11. & 0.87 & 11. & 0.79 & 1.6 & 2.3 & $<$0.01 & 0.67 & 0.044 & 0.19 & 1.4 & 0.43 \\ $25 \ldots 30$ & 1.47 & 11. & 0.79 & 10. & 0.54 & 1.6 & 2.3 & $<$0.01 & 0.63 & 0.017 & 0.30 & 1.4 & 0.36 \\ $30 \ldots 35$ & 1.80 & 9.3 & 0.75 & 8.6 & 0.32 & 1.5 & 2.3 & $<$0.01 & 0.50 & 0.035 & 0.45 & 1.9 & 0.32 \\ $35 \ldots 40$ & 2.03 & 7.3 & 0.69 & 6.4 & 0.11 & 1.6 & 2.3 & $<$0.01 & 0.26 & 0.055 & 0.35 & 1.7 & 0.28 \\ $40 \ldots 45$ & 2.04 & 6.0 & 0.72 & 5.0 & 0.061 & 1.6 & 2.3 & $<$0.01 & 0.11 & 0.046 & 0.38 & 1.5 & 0.29 \\ $45 \ldots 50$ & 1.908 & 4.9 & 0.74 & 3.8 & 0.028 & 1.6 & 2.3 & $<$0.01 & 0.18 & 0.034 & 0.39 & 1.0 & 0.29 \\ $50 \ldots 60$ & 1.617 & 3.9 & 0.59 & 2.5 & 0.025 & 1.5 & 2.3 & 0.012 & 0.22 & 0.039 & 0.41 & 0.74 & 0.23 \\ $60 \ldots 70$ & 1.204 & 3.4 & 0.68 & 1.6 & 0.023 & 1.6 & 2.3 & 0.018 & 0.51 & 0.031 & 0.23 & 0.53 & 0.26 \\ $70 \ldots 80$ & 0.881 & 3.2 & 0.77 & 1.0 & 0.017 & 1.6 & 2.3 & 0.024 & 0.65 & 0.055 & 0.38 & 0.52 & 0.30 \\ $80 \ldots 90$ & 0.634 & 3.3 & 0.87 & 0.64 & 0.011 & 1.6 & 2.3 & 0.028 & 0.93 & $<$0.01 & 0.25 & 0.63 & 0.35 \\ $90 \ldots 100$ & 0.444 & 3.3 & 1.0 & 0.38 & 0.022 & 1.6 & 2.3 & 0.031 & 0.80 & 0.081 & 0.36 & 0.74 & 0.42 \\ $100 \ldots 110$ & 0.333 & 3.3 & 1.2 & 0.34 & $<$0.01 & 1.6 & 2.3 & 0.026 & 0.66 & $<$0.01 & 0.25 & 0.77 & 0.48 \\ $110 \ldots 130$ & 0.2212 & 3.3 & 1.0 & 0.22 & $<$0.01 & 1.6 & 2.3 & 0.021 & 0.87 & 0.019 & 0.20 & 0.79 & 0.41 \\ $130 \ldots 150$ & 0.1308 & 3.4 & 1.3 & 0.16 & 0.010 & 1.7 & 2.3 & 0.021 & 0.88 & 0.023 & 0.073 & 0.88 & 0.54 \\ $150 \ldots 170$ & 0.0813 & 3.6 & 1.6 & 0.18 & 0.013 & 1.7 & 2.3 & 0.016 & 0.75 & 0.027 & 0.11 & 1.0 & 0.67 \\ $170 \ldots 190$ & 0.0516 & 3.9 & 2.0 & 0.13 & 0.015 & 1.8 & 2.3 & 0.022 & 0.87 & 0.017 & 0.17 & 1.1 & 0.84 \\ $190 \ldots 220$ & 0.0317 & 4.0 & 2.1 & 0.11 & $<$0.01 & 1.8 & 2.3 & 0.034 & 0.69 & 0.033 & 0.10 & 1.1 & 0.90 \\ $220 \ldots 250$ & 0.01835 & 4.5 & 2.8 & 0.028 & $<$0.01 & 1.8 & 2.3 & 0.041 & 0.82 & 0.020 & 0.11 & 1.4 & 1.2 \\ $250 \ldots 400$ & 0.00508 & 4.5 & 2.5 & 0.055 & $<$0.01 & 2.0 & 2.3 & 0.065 & 0.80 & $<$0.01 & 0.12 & 1.4 & 1.1 \\ $400 \ldots 1000$ & 0.000187 & 7.8 & 6.1 & $<$0.01 & $<$0.01 & 1.7 & 2.4 & 0.11 & 1.7 & 0.062 & 0.58 & 2.6 & 2.4 \\ \end{tabular}} \label{tab:combZPt_Zinc1jet} \end{table*} 
\begin{table*} \centering \topcaption{Differential cross section in $1^{\text{st}}$ jet \pt ($\ensuremath{N_{\text{jets}}} \geq 1$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\pt(j_1)$ & $\dd{\sigma}{\pt(j_1)}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $30 \ldots 41$ & 3.99 & 5.9 & 0.28 & 5.1 & 0.17 & 1.5 & 2.3 & $<$0.01 & 0.39 & 0.34 & 0.11 \\ $41 \ldots 59$ & 2.07 & 5.4 & 0.35 & 4.5 & 0.18 & 1.5 & 2.3 & 0.011 & 0.33 & 0.35 & 0.13 \\ $59 \ldots 83$ & 0.933 & 5.1 & 0.45 & 4.2 & 0.17 & 1.6 & 2.3 & 0.015 & 0.25 & 0.26 & 0.18 \\ $83 \ldots 118$ & 0.377 & 5.1 & 0.59 & 4.1 & 0.20 & 1.6 & 2.3 & 0.051 & 0.28 & 0.24 & 0.24 \\ $118 \ldots 168$ & 0.1300 & 5.1 & 0.92 & 4.1 & 0.22 & 1.6 & 2.3 & 0.070 & 0.057 & 0.30 & 0.38 \\ $168 \ldots 220$ & 0.0448 & 4.9 & 1.4 & 3.8 & 0.21 & 1.6 & 2.3 & 0.077 & 0.21 & 0.30 & 0.59 \\ $220 \ldots 300$ & 0.01477 & 6.4 & 2.0 & 5.3 & 0.32 & 1.6 & 2.3 & 0.065 & 0.30 & 0.37 & 0.86 \\ $300 \ldots 400$ & 0.00390 & 7.0 & 3.4 & 5.2 & 0.24 & 1.7 & 2.3 & 0.096 & 0.28 & 0.72 & 1.4 \\ \end{tabular}} \label{tab:combFirstJetPt_Zinc1jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $2^{\text{nd}}$ jet \pt ($\ensuremath{N_{\text{jets}}} \geq 2$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\pt(j_2)$ & $\dd{\sigma}{\pt(j_2)}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $30 \ldots 41$ & 1.125 & 8.5 & 0.56 & 7.9 & 0.22 & 1.6 & 2.3 & 0.020 & 0.51 & 0.38 & 0.24 \\ $41 \ldots 59$ & 0.457 & 7.4 & 0.73 & 6.8 & 0.13 & 1.6 & 2.3 & 0.049 & 0.33 & 0.34 & 0.31 \\ $59 \ldots 83$ & 0.173 & 6.5 & 1.1 & 5.7 & 0.16 & 1.6 & 2.3 & 0.15 & 0.31 & 0.39 & 0.44 \\ $83 \ldots 118$ & 0.0590 & 5.6 & 1.7 & 4.4 & 0.16 & 1.6 & 2.3 & 0.22 & 0.48 & 0.21 & 0.66 \\ $118 \ldots 168$ & 0.0187 & 6.0 & 2.3 & 4.7 & 0.20 & 1.7 & 2.3 & 0.25 & 0.19 & 0.13 & 0.89 \\ $168 \ldots 250$ & 0.00518 & 6.6 & 3.4 & 4.6 & 0.33 & 1.7 & 2.3 & 0.22 & 0.21 & 0.19 & 1.3 \\ \end{tabular}} \label{tab:combSecondJetPt_Zinc2jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $3^{\text{rd}}$ jet \pt ($\ensuremath{N_{\text{jets}}} \geq 3$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\pt(j_3)$ & $\dd{\sigma}{\pt(j_3)}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $30 \ldots 41$ & 0.289 & 11. & 1.2 & 10. & 0.26 & 1.6 & 2.3 & 0.12 & 0.42 & 0.93 & 0.50 \\ $41 \ldots 59$ & 0.0972 & 9.3 & 1.8 & 8.6 & 0.14 & 1.7 & 2.3 & 0.28 & 0.41 & 1.0 & 0.72 \\ $59 \ldots 83$ & 0.0306 & 7.9 & 2.9 & 6.5 & 0.31 & 1.7 & 2.3 & 0.48 & 0.69 & 1.2 & 1.1 \\ $83 \ldots 118$ & 0.00756 & 11. & 4.7 & 8.7 & 0.46 & 1.9 & 2.3 & 0.83 & 0.74 & 0.83 & 1.7 \\ $118 \ldots 168$ & 0.00180 & 10. & 8.1 & 3.7 & 0.40 & 1.8 & 2.4 & 0.82 & 0.50 & 1.3 & 3.0 \\ $168 \ldots 250$ & 0.000342 & 17. & 14. 
& 6.1 & 0.20 & 1.8 & 2.3 & 0.71 & 1.5 & 2.2 & 5.3 \\ \end{tabular}} \label{tab:combThirdJetPt_Zinc3jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $1^{\text{st}}$ jet $\vert y \vert$ ($\ensuremath{N_{\text{jets}}} \geq 1$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\abs{y(j_1)}$ & $\dd{\sigma}{\abs{y(j_1)}}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ & [\text{pb]} & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 0.2$ & 70.4 & 4.9 & 0.62 & 4.0 & 0.089 & 1.5 & 2.3 & 0.015 & 0.23 & 0.11 & 0.25 \\ $0.2 \ldots 0.4$ & 69.5 & 5.0 & 0.63 & 4.1 & 0.097 & 1.5 & 2.3 & 0.015 & 0.29 & 0.14 & 0.26 \\ $0.4 \ldots 0.6$ & 66.7 & 5.0 & 0.65 & 4.1 & 0.12 & 1.5 & 2.3 & 0.014 & 0.20 & 0.14 & 0.26 \\ $0.6 \ldots 0.8$ & 64.7 & 5.2 & 0.64 & 4.3 & 0.18 & 1.6 & 2.3 & 0.014 & 0.30 & 0.15 & 0.26 \\ $0.8 \ldots 1$ & 62.3 & 5.2 & 0.68 & 4.3 & 0.087 & 1.5 & 2.3 & 0.013 & 0.20 & 0.17 & 0.28 \\ $1 \ldots 1.2$ & 57.3 & 5.1 & 0.71 & 4.2 & 0.19 & 1.5 & 2.3 & 0.012 & 0.28 & 0.24 & 0.29 \\ $1.2 \ldots 1.4$ & 52.0 & 5.4 & 0.75 & 4.6 & 0.16 & 1.5 & 2.3 & $<$0.01 & 0.29 & 0.25 & 0.31 \\ $1.4 \ldots 1.6$ & 47.8 & 6.1 & 0.77 & 5.4 & 0.087 & 1.5 & 2.3 & $<$0.01 & 0.32 & 0.31 & 0.32 \\ $1.6 \ldots 1.8$ & 43.5 & 6.3 & 0.80 & 5.6 & 0.21 & 1.5 & 2.3 & $<$0.01 & 0.34 & 0.21 & 0.34 \\ $1.8 \ldots 2$ & 38.9 & 6.7 & 0.84 & 6.0 & 0.38 & 1.5 & 2.3 & $<$0.01 & 0.41 & 0.32 & 0.36 \\ $2 \ldots 2.2$ & 34.3 & 7.2 & 0.90 & 6.5 & 0.44 & 1.5 & 2.3 & $<$0.01 & 0.62 & 0.40 & 0.39 \\ $2.2 \ldots 2.4$ & 29.5 & 7.2 & 1.0 & 6.4 & 0.66 & 1.5 & 2.3 & $<$0.01 & 0.66 & 0.36 & 0.44 \\ \end{tabular}} \label{tab:combFirstJetAbsRapidity_Zinc1jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $2^{\text{nd}}$ jet $\vert y \vert$ ($\ensuremath{N_{\text{jets}}} \geq 2$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\abs{y(j_2)}$ & $\dd{\sigma}{\abs{y(j_2)}}$ & Tot. 
unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ & [\text{pb]} & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 0.2$ & 15.1 & 7.2 & 1.4 & 6.4 & 0.11 & 1.6 & 2.3 & 0.078 & 0.30 & 0.26 & 0.62 \\ $0.2 \ldots 0.4$ & 14.4 & 7.3 & 1.5 & 6.6 & 0.041 & 1.6 & 2.3 & 0.082 & 0.15 & 0.33 & 0.64 \\ $0.4 \ldots 0.6$ & 14.4 & 7.4 & 1.4 & 6.6 & 0.13 & 1.6 & 2.3 & 0.074 & 0.49 & 0.35 & 0.64 \\ $0.6 \ldots 0.8$ & 13.7 & 7.5 & 1.5 & 6.7 & 0.25 & 1.6 & 2.3 & 0.071 & 0.35 & 0.27 & 0.68 \\ $0.8 \ldots 1$ & 13.9 & 7.5 & 1.5 & 6.7 & 0.17 & 1.6 & 2.3 & 0.065 & 0.17 & 0.093 & 0.70 \\ $1 \ldots 1.2$ & 12.43 & 7.4 & 1.6 & 6.6 & 0.11 & 1.6 & 2.3 & 0.065 & 0.42 & 0.13 & 0.70 \\ $1.2 \ldots 1.4$ & 11.89 & 8.1 & 1.5 & 7.4 & 0.082 & 1.6 & 2.3 & 0.062 & 0.23 & 0.10 & 0.68 \\ $1.4 \ldots 1.6$ & 11.00 & 7.7 & 1.7 & 6.9 & 0.15 & 1.6 & 2.3 & 0.052 & 0.51 & 0.11 & 0.76 \\ $1.6 \ldots 1.8$ & 10.09 & 8.6 & 1.7 & 7.8 & 0.25 & 1.6 & 2.3 & 0.049 & 0.48 & 0.19 & 0.78 \\ $1.8 \ldots 2$ & 9.35 & 8.2 & 1.8 & 7.4 & 0.33 & 1.6 & 2.3 & 0.043 & 0.65 & 0.44 & 0.84 \\ $2 \ldots 2.2$ & 8.48 & 8.6 & 1.8 & 7.8 & 0.48 & 1.6 & 2.3 & 0.035 & 0.50 & 0.67 & 0.85 \\ $2.2 \ldots 2.4$ & 7.04 & 9.3 & 2.0 & 8.4 & 0.37 & 1.6 & 2.3 & 0.037 & 0.93 & 1.2 & 0.96 \\ \end{tabular}} \label{tab:combSecondJetAbsRapidity_Zinc2jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in $3^{\text{rd}}$ jet $\abs{y}$ ($\ensuremath{N_{\text{jets}}} \geq 3$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} $\abs{y(j_3)}$ & $\dd{\sigma}{\abs{y(j_3)}}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ & [\text{pb]} & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 0.3$ & 3.14 & 9.9 & 2.5 & 9.0 & 0.26 & 1.7 & 2.3 & 0.27 & 0.28 & 0.15 & 1.1 \\ $0.3 \ldots 0.6$ & 3.02 & 10. & 2.6 & 9.4 & 0.13 & 1.7 & 2.3 & 0.27 & 0.31 & 0.088 & 1.1 \\ $0.6 \ldots 0.9$ & 3.06 & 9.6 & 2.6 & 8.7 & 0.20 & 1.6 & 2.3 & 0.25 & 0.20 & 0.012 & 1.2 \\ $0.9 \ldots 1.2$ & 2.70 & 9.5 & 2.7 & 8.5 & 0.22 & 1.7 & 2.3 & 0.25 & 0.22 & 0.34 & 1.2 \\ $1.2 \ldots 1.5$ & 2.51 & 12. & 2.8 & 11. & 0.14 & 1.6 & 2.3 & 0.23 & 0.59 & 0.78 & 1.3 \\ $1.5 \ldots 1.8$ & 2.21 & 11. & 3.1 & 10. & 0.17 & 1.6 & 2.3 & 0.22 & 0.13 & 0.62 & 1.4 \\ $1.8 \ldots 2.1$ & 1.89 & 13. & 3.1 & 12. & 0.13 & 1.7 & 2.3 & 0.22 & 0.57 & 1.8 & 1.4 \\ $2.1 \ldots 2.4$ & 1.70 & 11. & 3.4 & 10. & 0.66 & 1.7 & 2.3 & 0.21 & 0.87 & 2.4 & 1.6 \\ \end{tabular}} \label{tab:combThirdJetAbsRapidity_Zinc3jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \HT ($\ensuremath{N_{\text{jets}}} \geq 1$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \HT & $\dd{\sigma}{\HT}$ & Tot. 
unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $30 \ldots 41$ & 3.71 & 5.9 & 0.41 & 5.1 & 0.18 & 1.5 & 2.3 & $<$0.01 & 0.38 & 0.92 & 0.19 \\ $41 \ldots 59$ & 1.678 & 4.7 & 0.53 & 3.6 & 0.16 & 1.5 & 2.3 & $<$0.01 & 0.26 & 1.1 & 0.21 \\ $59 \ldots 83$ & 0.852 & 5.3 & 0.66 & 4.4 & 0.23 & 1.5 & 2.3 & $<$0.01 & 0.30 & 0.62 & 0.26 \\ $83 \ldots 118$ & 0.449 & 6.0 & 0.74 & 5.3 & 0.13 & 1.6 & 2.3 & 0.015 & 0.34 & 0.54 & 0.30 \\ $118 \ldots 168$ & 0.199 & 5.9 & 0.92 & 5.1 & 0.20 & 1.6 & 2.3 & 0.040 & 0.18 & 0.41 & 0.38 \\ $168 \ldots 220$ & 0.0886 & 6.3 & 1.5 & 5.4 & 0.36 & 1.6 & 2.3 & 0.078 & 0.35 & 0.33 & 0.61 \\ $220 \ldots 300$ & 0.0373 & 6.9 & 1.6 & 6.0 & 0.10 & 1.7 & 2.3 & 0.14 & 0.20 & 0.17 & 0.66 \\ $300 \ldots 400$ & 0.0148 & 6.8 & 2.3 & 5.6 & 0.21 & 1.6 & 2.3 & 0.20 & 0.18 & 0.21 & 0.98 \\ $400 \ldots 550$ & 0.00449 & 7.3 & 3.2 & 5.7 & 0.20 & 1.8 & 2.3 & 0.36 & 0.63 & 0.28 & 1.3 \\ $550 \ldots 780$ & 0.00133 & 8.1 & 5.3 & 4.8 & 0.13 & 1.6 & 2.3 & 0.40 & 1.2 & 0.24 & 2.1 \\ $780 \ldots 1100$ & 0.000306 & 12. & 8.2 & 7.5 & 0.22 & 1.8 & 2.3 & 0.59 & 0.69 & 0.56 & 3.2 \\ \end{tabular}} \label{tab:combJetsHT_Zinc1jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \HT ($\ensuremath{N_{\text{jets}}} \geq 2$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \HT & $\dd{\sigma}{\HT}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $60 \ldots 83$ & 0.208 & 9.5 & 1.1 & 8.9 & 0.25 & 1.5 & 2.3 & 0.023 & 0.63 & 1.0 & 0.67 \\ $83 \ldots 118$ & 0.228 & 7.9 & 0.89 & 7.3 & 0.15 & 1.6 & 2.3 & 0.027 & 0.45 & 0.59 & 0.42 \\ $118 \ldots 168$ & 0.1371 & 6.8 & 0.96 & 6.0 & 0.18 & 1.6 & 2.3 & 0.030 & 0.32 & 0.58 & 0.42 \\ $168 \ldots 220$ & 0.0705 & 7.3 & 1.4 & 6.6 & 0.29 & 1.6 & 2.3 & 0.10 & 0.36 & 0.31 & 0.57 \\ $220 \ldots 300$ & 0.0329 & 7.1 & 1.6 & 6.2 & 0.11 & 1.7 & 2.3 & 0.16 & 0.18 & 0.29 & 0.64 \\ $300 \ldots 400$ & 0.01360 & 6.8 & 2.2 & 5.7 & 0.20 & 1.6 & 2.3 & 0.22 & 0.33 & 0.29 & 0.90 \\ $400 \ldots 550$ & 0.00436 & 7.3 & 3.1 & 5.8 & 0.18 & 1.8 & 2.3 & 0.36 & 0.56 & 0.28 & 1.2 \\ $550 \ldots 780$ & 0.00129 & 8.1 & 5.0 & 5.1 & 0.17 & 1.6 & 2.3 & 0.41 & 1.1 & 0.21 & 1.9 \\ $780 \ldots 1100$ & 0.000304 & 12. & 7.9 & 7.2 & 0.25 & 1.7 & 2.3 & 0.58 & 0.65 & 0.41 & 3.1 \\ \end{tabular}} \label{tab:combJetsHT_Zinc2jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \HT ($\ensuremath{N_{\text{jets}}} \geq 3$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \HT & $\dd{\sigma}{\HT}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $90 \ldots 130$ & 0.0166 & 17. & 3.5 & 15. & 0.64 & 1.6 & 2.3 & 0.013 & 0.61 & 5.4 & 2.3 \\ $130 \ldots 168$ & 0.0300 & 12. & 2.5 & 11. & 0.10 & 1.7 & 2.3 & 0.097 & 0.35 & 1.8 & 1.2 \\ $168 \ldots 220$ & 0.0254 & 11. 
& 2.8 & 9.7 & 0.088 & 1.7 & 2.3 & 0.20 & 0.46 & 0.75 & 1.2 \\ $220 \ldots 300$ & 0.0163 & 9.3 & 2.4 & 8.4 & 0.27 & 1.7 & 2.3 & 0.28 & 0.21 & 0.73 & 1.0 \\ $300 \ldots 400$ & 0.00841 & 8.4 & 3.1 & 7.2 & 0.13 & 1.7 & 2.3 & 0.36 & 0.26 & 0.43 & 1.3 \\ $400 \ldots 550$ & 0.00307 & 8.9 & 3.9 & 7.2 & 0.22 & 1.8 & 2.3 & 0.53 & 0.72 & 0.40 & 1.5 \\ $550 \ldots 780$ & 0.00103 & 10. & 6.3 & 6.8 & 0.33 & 1.7 & 2.3 & 0.53 & 1.1 & 0.22 & 2.5 \\ $780 \ldots 1100$ & 0.000246 & 12. & 9.1 & 6.5 & 0.17 & 1.7 & 2.3 & 0.67 & 0.88 & 2.7 & 3.5 \\ \end{tabular}} \label{tab:combJetsHT_Zinc3jet} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\pt^{\text{bal}}}\xspace ($\ensuremath{N_{\text{jets}}} \geq 1$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\pt^{\text{bal}}}\xspace & $\dd{\sigma}{\ensuremath{\pt^{\text{bal}}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 10$ & 2.65 & 6.0 & 0.45 & 5.2 & 0.42 & 1.5 & 2.3 & $<$0.01 & 0.45 & 1.1 & 0.18 \\ $10 \ldots 20$ & 3.53 & 6.1 & 0.36 & 5.3 & 0.28 & 1.5 & 2.3 & $<$0.01 & 0.40 & 1.2 & 0.14 \\ $20 \ldots 35$ & 2.35 & 6.3 & 0.37 & 5.1 & 0.38 & 1.6 & 2.3 & $<$0.01 & 0.31 & 2.2 & 0.15 \\ $35 \ldots 50$ & 1.116 & 6.0 & 0.53 & 4.1 & 0.69 & 1.6 & 2.3 & 0.023 & 0.30 & 3.2 & 0.23 \\ $50 \ldots 65$ & 0.467 & 4.4 & 0.87 & 2.2 & 0.77 & 1.6 & 2.3 & 0.053 & 0.092 & 2.0 & 0.39 \\ $65 \ldots 80$ & 0.208 & 5.0 & 1.2 & 1.0 & 0.85 & 1.9 & 2.3 & 0.17 & 0.33 & 3.5 & 0.54 \\ $80 \ldots 100$ & 0.0883 & 5.1 & 1.8 & 1.6 & 0.81 & 2.0 & 2.4 & 0.37 & 0.62 & 2.9 & 0.75 \\ $100 \ldots 125$ & 0.0344 & 6.9 & 2.7 & 2.9 & 0.66 & 2.2 & 2.4 & 0.62 & 0.42 & 4.2 & 1.1 \\ $125 \ldots 150$ & 0.0154 & 7.5 & 4.1 & 4.3 & 0.57 & 2.1 & 2.4 & 0.69 & 0.54 & 2.6 & 1.6 \\ $150 \ldots 175$ & 0.00686 & 12. & 6.1 & 7.7 & 0.23 & 2.2 & 2.4 & 0.76 & 0.67 & 4.5 & 2.3 \\ $175 \ldots 200$ & 0.00357 & 12. & 8.0 & 5.2 & 0.82 & 2.3 & 2.5 & 0.71 & 0.51 & 4.7 & 2.9 \\ \end{tabular}} \label{tab:combVisPt_Zinc1jetQun} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\pt^{\text{bal}}}\xspace ($\ensuremath{N_{\text{jets}}} \geq 2$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\pt^{\text{bal}}}\xspace & $\dd{\sigma}{\ensuremath{\pt^{\text{bal}}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 15$ & 0.522 & 8.7 & 0.70 & 8.2 & 0.38 & 1.5 & 2.3 & 0.027 & 0.56 & 0.40 & 0.32 \\ $15 \ldots 30$ & 0.635 & 8.1 & 0.56 & 7.5 & 0.29 & 1.6 & 2.3 & 0.023 & 0.48 & 0.97 & 0.26 \\ $30 \ldots 45$ & 0.372 & 6.6 & 0.75 & 5.7 & 0.48 & 1.6 & 2.3 & 0.040 & 0.38 & 1.4 & 0.35 \\ $45 \ldots 60$ & 0.178 & 6.3 & 1.0 & 5.4 & 0.94 & 1.6 & 2.3 & 0.14 & 0.24 & 0.87 & 0.47 \\ $60 \ldots 80$ & 0.0738 & 6.7 & 1.4 & 5.0 & 1.2 & 1.9 & 2.3 & 0.35 & 0.29 & 2.6 & 0.60 \\ $80 \ldots 100$ & 0.0308 & 7.3 & 2.3 & 5.2 & 1.3 & 2.2 & 2.4 & 0.75 & 0.34 & 2.7 & 0.91 \\ $100 \ldots 125$ & 0.0133 & 8.7 & 3.7 & 5.4 & 1.3 & 2.2 & 2.4 & 1.1 & 0.60 & 4.0 & 1.4 \\ $125 \ldots 150$ & 0.00682 & 12. 
& 5.1 & 9.0 & 0.98 & 2.5 & 2.4 & 1.3 & 0.59 & 4.1 & 1.9 \\ $150 \ldots 175$ & 0.00352 & 14. & 7.3 & 10. & 0.15 & 2.6 & 2.4 & 1.4 & 0.22 & 5.1 & 2.6 \\ $175 \ldots 200$ & 0.00182 & 15. & 9.5 & 10. & 0.33 & 2.3 & 2.5 & 1.2 & 0.78 & 4.3 & 3.0 \\ \end{tabular}} \label{tab:combVisPt_Zinc2jetQun} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\pt^{\text{bal}}}\xspace ($\ensuremath{N_{\text{jets}}} \geq 3$) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\pt^{\text{bal}}}\xspace & $\dd{\sigma}{\ensuremath{\pt^{\text{bal}}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $0 \ldots 20$ & 0.102 & 12. & 1.8 & 11. & 0.71 & 1.5 & 2.3 & 0.044 & 0.57 & 4.4 & 0.78 \\ $20 \ldots 40$ & 0.106 & 11. & 1.4 & 9.9 & 0.61 & 1.6 & 2.3 & 0.095 & 0.29 & 2.8 & 0.66 \\ $40 \ldots 65$ & 0.0483 & 9.3 & 2.2 & 7.8 & 1.2 & 1.7 & 2.3 & 0.33 & 0.32 & 3.0 & 1.0 \\ $65 \ldots 90$ & 0.0160 & 8.5 & 4.0 & 4.8 & 1.4 & 2.1 & 2.4 & 1.1 & 0.16 & 4.1 & 1.7 \\ $90 \ldots 120$ & 0.00580 & 13. & 7.1 & 8.3 & 1.9 & 2.3 & 2.4 & 2.0 & 0.61 & 4.6 & 2.9 \\ $120 \ldots 150$ & 0.00243 & 23. & 13. & 16. & 0.81 & 2.6 & 2.4 & 2.8 & 1.5 & 6.8 & 5.0 \\ $150 \ldots 175$ & 0.00127 & 26. & 18. & 16. & 1.3 & 2.6 & 2.4 & 2.9 & 0.96 & 4.3 & 6.7 \\ $175 \ldots 200$ & 0.00079 & 26. & 20. & 9.9 & 1.8 & 2.8 & 2.5 & 3.1 & 0.41 & 8.5 & 7.4 \\ \end{tabular}} \label{tab:combVisPt_Zinc3jetQun} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\text{JZB}}\xspace (full phase space) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\text{JZB}}\xspace & $\dd{\sigma}{\ensuremath{\text{JZB}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $-140 \ldots -105$ & 0.00274 & 17. & 11. & 9.8 & 1.3 & 1.6 & 2.4 & 0.10 & 1.6 & 6.4 & 4.8 \\ $-105 \ldots -80$ & 0.0115 & 11. & 6.3 & 7.0 & 0.66 & 1.7 & 2.4 & 0.12 & 0.64 & 2.5 & 2.9 \\ $-80 \ldots -60$ & 0.0388 & 15. & 3.7 & 11. & 0.73 & 1.7 & 2.4 & 0.061 & 0.82 & 5.7 & 1.7 \\ $-60 \ldots -40$ & 0.153 & 14. & 2.0 & 11. & 0.73 & 1.7 & 2.3 & 0.047 & 0.59 & 7.0 & 0.90 \\ $-40 \ldots -20$ & 0.658 & 9.0 & 0.96 & 6.7 & 1.3 & 1.7 & 2.3 & 0.012 & 0.53 & 4.7 & 0.40 \\ $-20 \ldots 0$ & 2.45 & 8.0 & 0.43 & 6.9 & 0.54 & 1.6 & 2.3 & $<$0.01 & 0.46 & 2.8 & 0.17 \\ $0 \ldots 20$ & 2.16 & 5.1 & 0.58 & 3.6 & 0.64 & 2.1 & 2.3 & $<$0.01 & 0.17 & 1.3 & 0.24 \\ $20 \ldots 40$ & 0.69 & 15. & 0.89 & 14. & 1.5 & 1.6 & 2.3 & 0.027 & 0.41 & 5.4 & 0.38 \\ $40 \ldots 60$ & 0.142 & 11. & 2.1 & 9.5 & 1.4 & 1.7 & 2.3 & 0.18 & 0.34 & 3.9 & 0.92 \\ $60 \ldots 85$ & 0.0356 & 13. & 3.9 & 11. & 1.9 & 1.9 & 2.4 & 0.55 & 1.0 & 2.6 & 1.6 \\ $85 \ldots 110$ & 0.0114 & 14. & 7.3 & 9.1 & 0.83 & 2.1 & 2.4 & 0.93 & 2.0 & 5.7 & 3.0 \\ $110 \ldots 140$ & 0.0053 & 19. & 11. & 12. 
& 0.66 & 2.4 & 2.5 & 1.1 & 1.5 & 8.0 & 4.4 \\ \end{tabular}} \label{tab:combJZB} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\text{JZB}}\xspace ($\pt(\cPZ)<50$ GeV) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\text{JZB}}\xspace & $\dd{\sigma}{\ensuremath{\text{JZB}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $-50 \ldots -30$ & 0.00859 & 7.8 & 5.8 & 1.9 & 1.2 & 1.8 & 2.3 & 0.042 & 0.92 & 2.5 & 2.6 \\ $-30 \ldots -15$ & 0.1212 & 5.1 & 2.1 & 1.5 & 2.6 & 2.3 & 2.3 & 0.042 & 0.19 & 0.61 & 1.0 \\ $-15 \ldots 0$ & 1.30 & 8.0 & 0.52 & 6.9 & 0.26 & 1.7 & 2.3 & $<$0.01 & 0.55 & 2.9 & 0.23 \\ $0 \ldots 15$ & 1.63 & 12. & 0.44 & 11. & 0.51 & 1.6 & 2.3 & $<$0.01 & 0.32 & 3.1 & 0.19 \\ $15 \ldots 30$ & 0.83 & 14. & 0.65 & 13. & 1.3 & 1.6 & 2.3 & 0.013 & 0.34 & 3.4 & 0.29 \\ $30 \ldots 50$ & 0.219 & 11. & 1.2 & 11. & 1.4 & 1.6 & 2.3 & 0.036 & 0.15 & 1.2 & 0.50 \\ $50 \ldots 75$ & 0.0410 & 11. & 2.6 & 9.2 & 1.4 & 1.8 & 2.3 & 0.29 & 0.39 & 4.6 & 1.1 \\ $75 \ldots 105$ & 0.0097 & 13. & 5.4 & 9.6 & 0.63 & 2.3 & 2.4 & 0.89 & 1.0 & 6.1 & 2.2 \\ $105 \ldots 150$ & 0.00241 & 14. & 10. & 6.3 & 1.4 & 2.4 & 2.4 & 1.3 & 0.87 & 5.1 & 3.8 \\ \end{tabular}} \label{tab:combJZB_ptLow} \end{table*} \begin{table*} \centering \topcaption{Differential cross section in \ensuremath{\text{JZB}}\xspace ($\pt(\cPZ)>50$ GeV) for the combination of both decay channels and breakdown of the uncertainties.} \cmsTable{ \begin{tabular}{cccccccccccc} \ensuremath{\text{JZB}}\xspace & $\dd{\sigma}{\ensuremath{\text{JZB}}\xspace}$ & Tot. unc & Stat & JES & JER & Eff & Lumi & Bkg & Pileup & Unf model & Unf stat\\ {[{\GeV}]} & ${\scriptstyle [\frac{\text{pb}}{{\GeV}}]}$ & [\%]& [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%] & [\%]\\ \hline $-165 \ldots -125$ & 0.00165 & 11. & 8.8 & 1.6 & 0.32 & 1.7 & 2.4 & 0.15 & 0.87 & 3.3 & 5.0 \\ $-125 \ldots -95$ & 0.00475 & 8.8 & 6.2 & 2.8 & 1.3 & 1.9 & 2.4 & 0.14 & 0.46 & 2.0 & 3.4 \\ $-95 \ldots -70$ & 0.0182 & 19. & 3.6 & 16. & 0.64 & 1.8 & 2.4 & 0.12 & 0.30 & 5.2 & 2.0 \\ $-70 \ldots -45$ & 0.091 & 14. & 1.4 & 13. & 0.36 & 1.6 & 2.3 & 0.052 & 0.58 & 3.5 & 0.78 \\ $-45 \ldots -20$ & 0.551 & 6.1 & 0.63 & 3.8 & 0.71 & 1.6 & 2.3 & 0.011 & 0.28 & 1.0 & 0.33 \\ $-20 \ldots 0$ & 1.404 & 5.3 & 0.38 & 4.4 & 0.13 & 1.5 & 2.3 & $<$0.01 & 0.43 & 0.33 & 0.18 \\ $0 \ldots 25$ & 0.607 & 4.9 & 0.62 & 3.4 & 0.92 & 2.1 & 2.3 & 0.021 & 0.30 & 1.1 & 0.30 \\ $25 \ldots 55$ & 0.090 & 19. & 1.3 & 18. & 2.3 & 1.7 & 2.3 & 0.14 & 0.43 & 3.5 & 0.68 \\ $55 \ldots 85$ & 0.0162 & 19. & 3.5 & 14. & 2.4 & 2.0 & 2.4 & 0.52 & 0.93 & 11. & 1.8 \\ $85 \ldots 120$ & 0.00454 & 18. & 6.9 & 14. & 3.2 & 2.0 & 2.4 & 0.79 & 1.8 & 8.1 & 3.3 \\ $120 \ldots 150$ & 0.00195 & 21. & 11. & 14. & 1.2 & 2.3 & 2.6 & 1.3 & 1.8 & 9.4 & 5.0 \\ \end{tabular}} \label{tab:combJZB_ptHigh} \end{table*} \section{Results} \label{results} The measurements from the electron and muon channels are found to be consistent and are combined using a weighted average as described in Ref.~\cite{Khachatryan:2016crw}. For each bin of the measured differential cross sections, the results of each of the two measurements are weighted by the inverse of the squared total uncertainty. 
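Schematically, the central value of the combination in each bin is the usual inverse-variance weighted mean; a minimal sketch with hypothetical inputs is given below (the uncertainty in the combined result itself is extracted from the covariance matrix described next, not from this naive weighting).
\begin{verbatim}
def combine_bin(x_e, sigma_e, x_mu, sigma_mu):
    # Weight each channel by the inverse of its squared total uncertainty.
    w_e, w_mu = 1.0 / sigma_e**2, 1.0 / sigma_mu**2
    return (w_e * x_e + w_mu * x_mu) / (w_e + w_mu)

# example: hypothetical values of 98.2 +- 5.1 pb (ee) and 97.6 +- 4.9 pb (mumu)
print(combine_bin(98.2, 5.1, 97.6, 4.9))
\end{verbatim}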
The covariance matrix of the combination, the diagonal elements of which are used to extract the measurement uncertainties, is computed assuming full correlation between the two channels for all the sources of uncertainty except the statistical uncertainties and those associated with lepton reconstruction and identification, which are taken to be uncorrelated. The integrated cross section is measured for different exclusive and inclusive multiplicities, and the results are shown in Tables~\ref{tab:combZNGoodJets_Zexc} and~\ref{tab:combZNGoodJets_Zinc}.

The results for the differential cross sections are shown in Figs.~\ref{fig:sigNjet} to~\ref{fig:JZBmumu_b} and are compared to the predictions described in Section~\ref{theory}. For the two predictions obtained from {\textsc{MG5}\_a\textsc{MC}}\xspace and {\PYTHIA}8\xspace, the number of partons included in the ME calculation and the order of the calculation are indicated by distinctive labels (``${\leq} 4$j LO'' for up to four partons at LO and ``${\leq} 2$j NLO'' for up to two partons at NLO). The prediction of \textsc{geneva}\xspace is denoted as ``GE''. The label ``PY8'' indicates that {\PYTHIA}8\xspace is used in these calculations for the parton showering and the hadronisation. The NNLO $\cPZ + 1 \text{ jet}$ calculation is denoted as N$_{\text{jetti}}$\xspace NNLO in the legends. The measured cross section values, along with the uncertainties discussed in Section~\ref{systematics}, are given in Tables~\ref{tab:combZNGoodJets_Zexc} to~\ref{tab:combJZB_ptHigh}.

Fig.~\ref{fig:sigNjet} shows the measured cross section as a function of the exclusive (Table~\ref{tab:combZNGoodJets_Zexc}) and the inclusive (Table~\ref{tab:combZNGoodJets_Zinc}) jet multiplicities. Agreement between the measurement and the {\textsc{MG5}\_a\textsc{MC}}\xspace predictions is observed. The cross section obtained from LO {\textsc{MG5}\_a\textsc{MC}}\xspace tends to be lower than that from NLO {\textsc{MG5}\_a\textsc{MC}}\xspace up to a jet multiplicity of 3. The total cross section for $\cPZ (\to \ell^+\ell^-)+\ge 0 \text{ jet}, m_{\ell^+\ell^-}>50\GeV$, computed at NNLO and used to normalise the cross section of the LO prediction, is similar to the NLO cross section, as seen in Table~\ref{tab:theory_xsec}. The smaller cross section seen when requiring at least one jet is explained by a steeply falling \pt spectrum of the leading jet in the LO prediction. The \textsc{geneva}\xspace prediction describes the measured cross section up to a jet multiplicity of 2, but fails to describe the data for higher jet multiplicities, where one or more jets arise from the parton shower. This effect is not seen in the NLO (LO) {\textsc{MG5}\_a\textsc{MC}}\xspace predictions, which give a fair description of the data for multiplicities above three (four).

The measured cross section as a function of the transverse momentum of the $\cPZ$ boson for events with at least one jet is presented in Fig.~\ref{fig:sigZPt1j} and Table~\ref{tab:combZPt_Zinc1jet}. The best model for describing the measurement at low \pt, below the peak, is NLO {\textsc{MG5}\_a\textsc{MC}}\xspace, showing a better agreement than the NNLL'$_\tau$ calculation from \textsc{geneva}\xspace. The shape of the distribution in the region below 10\GeV is better described by \textsc{geneva}\xspace than by the other predictions, as shown by the flat ratio plot. This kinematic region is populated by events with extra hadronic activity in addition to the jet required by the event selection.
The uncertainty in the shape in this region is dominated by the statistical component, represented by the error bars on the plot, since the systematic uncertainties are negligible there. In the intermediate region, \textsc{geneva}\xspace predicts a steeper rise of the distribution than the other two predictions and than observed in the measurement. The high-\pt region, where \textsc{geneva}\xspace and NLO {\textsc{MG5}\_a\textsc{MC}}\xspace are expected to have similar accuracy (NLO), is equally well described by the two. The LO prediction undershoots the measurement in this region, despite the normalisation of the total $\cPZ + \ge 0 \text{ jet}$ cross section to its NNLO value.

The jet transverse momenta for the 1$^{\text{st}}$, 2$^{\text{nd}}$, and~3$^{\text{rd}}$ leading jets can be seen in Figs.~\ref{fig:sigPtjet_a} and~\ref{fig:sigPtjet_b} (Tables~\ref{tab:combFirstJetPt_Zinc1jet}--\ref{tab:combThirdJetPt_Zinc3jet}). The spectrum predicted by LO {\textsc{MG5}\_a\textsc{MC}}\xspace differs from the measurement, showing a steeper slope in the low-\pt region. The same feature was observed in the previous measurements~\cite{Chatrchyan:2011ne,Khachatryan:2014zya}. The comparison with the NLO {\textsc{MG5}\_a\textsc{MC}}\xspace and N$_{\text{jetti}}$\xspace NNLO calculations shows that adding the NLO terms removes this discrepancy. The \textsc{geneva}\xspace prediction shows good agreement with the measured \pt of the first jet, while it undershoots the data at low \pt for the second jet. The jet rapidities for the first three leading jets have also been measured, and the distributions are shown in Figs.~\ref{fig:sigEtajet_a} and~\ref{fig:sigEtajet_b} (Tables~\ref{tab:combFirstJetAbsRapidity_Zinc1jet}--\ref{tab:combThirdJetAbsRapidity_Zinc3jet}). All the predictions are in agreement with the data.

\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{Figure_004-a.pdf}
\includegraphics[width=0.48\textwidth]{Figure_004-b.pdf}
\caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the jet exclusive (left) and inclusive (right) multiplicity. The error bars represent the statistical uncertainty and the grey hatched bands represent the total uncertainty, including the systematic and statistical components. The measurement is compared with different predictions, which are described in the text. The ratio of each prediction to the measurement is shown together with the measurement statistical (black bars) and total (black hatched bands) uncertainties and the prediction (coloured bands) uncertainties. Different uncertainties were considered for the predictions: statistical (stat), ME calculation (theo), and PDF together with the strong coupling constant (\alpS). The complete set was computed for one of the predictions. These uncertainties were added together in quadrature (represented by the $\oplus$ sign in the legend).}
\label{fig:sigNjet}
\end{figure*}

\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figure_005.pdf}
\caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the transverse momentum of the $\cPZ$ boson for events with at least one jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.}
\label{fig:sigZPt1j}
\end{figure}

The total jet activity has been measured via the \HT variable, defined as the scalar sum of the transverse momenta of the jets within the acceptance.
The differential cross section as a function of this observable is presented in Figs.~\ref{fig:sigHtjet_a} and~\ref{fig:sigHtjet_b} (Tables~\ref{tab:combJetsHT_Zinc1jet}--\ref{tab:combJetsHT_Zinc3jet}) for inclusive jet multiplicities of 1, 2, and~3. The LO {\textsc{MG5}\_a\textsc{MC}}\xspace calculation predicts fewer events than found in the data in the region $\HT < 400\GeV$. For higher jet multiplicities, both LO and NLO {\textsc{MG5}\_a\textsc{MC}}\xspace are compatible with the measurement, although the contribution in the region $\HT < 400\GeV$ is smaller for LO than for NLO {\textsc{MG5}\_a\textsc{MC}}\xspace. The contribution at lower values of \HT is slightly overestimated, but the discrepancy is compatible with the theoretical and experimental uncertainties. The \textsc{geneva}\xspace generator predicts a steeper spectrum than measured. For jet multiplicities of at least one, we also compare with N$_{\text{jetti}}$\xspace NNLO, and the level of agreement is similar to that found with NLO {\textsc{MG5}\_a\textsc{MC}}\xspace. The uncertainty for N$_{\text{jetti}}$\xspace NNLO is larger than in the jet transverse momentum distribution because of the contribution from the additional jets.

\begin{figure} \centering \includegraphics[width=0.48\textwidth]{Figure_006.pdf} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the transverse momentum of the first jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigPtjet_a} \end{figure}

\begin{figure} \centering \raisebox{-\height}{\includegraphics[width=0.48\textwidth]{Figure_007-a.pdf}} \raisebox{-\height}{\includegraphics[width=0.48\textwidth]{Figure_007-b.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the transverse momentum of the second (\cmsLeft) and third (\cmsRight) jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigPtjet_b} \end{figure}

\begin{figure} \centering \raisebox{-\height}{\includegraphics[width=0.43\textwidth]{Figure_008-a.pdf}} \raisebox{-\height}{\includegraphics[width=0.43\textwidth]{Figure_008-b.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the absolute rapidity of the first (\cmsLeft) and second (\cmsRight) jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigEtajet_a} \end{figure}

\begin{figure} \centering \includegraphics[width=0.43\textwidth]{Figure_009.pdf} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the absolute rapidity of the third jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigEtajet_b} \end{figure}

\begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figure_010.pdf} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the \HT observable for events with at least one jet. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigHtjet_a} \end{figure}

\begin{figure} \centering \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_011-a.pdf}} \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_011-b.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the \HT observable of jets for events with at least two (\cmsLeft) and three (\cmsRight) jets. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:sigHtjet_b} \end{figure}
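Before turning to the momentum-balance observables, we recall for convenience the definitions assumed in this discussion; the authoritative definitions are given earlier in the paper, and the expressions below follow the standard conventions, with the sums running over the selected jets ($\pt>30\GeV$, $\abs{y}<2.4$):
\begin{equation}
\HT=\sum_{\text{jets}}p_{\mathrm{T}}^{\text{jet}},\qquad
\ensuremath{\pt^{\text{bal}}}\xspace=\Bigl|\vec{p}_{\mathrm{T}}^{\;\cPZ}+\sum_{\text{jets}}\vec{p}_{\mathrm{T}}^{\;\text{jet}}\Bigr|,\qquad
\ensuremath{\text{JZB}}\xspace=\Bigl|\sum_{\text{jets}}\vec{p}_{\mathrm{T}}^{\;\text{jet}}\Bigr|-\bigl|\vec{p}_{\mathrm{T}}^{\;\cPZ}\bigr|.
\end{equation}
With these conventions, \ensuremath{\pt^{\text{bal}}}\xspace vanishes for a perfectly balanced event, while the sign of \ensuremath{\text{JZB}}\xspace distinguishes events whose hadronic recoil is harder ($\ensuremath{\text{JZB}}\xspace>0$) or softer ($\ensuremath{\text{JZB}}\xspace<0$) than the boson \pt.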
The balance in transverse momentum between the jets and the $\cPZ$ boson, \ensuremath{\pt^{\text{bal}}}\xspace, is shown in Figs.~\ref{fig:ptbalmumu_a} and~\ref{fig:ptbalmumu_b} (Tables~\ref{tab:combVisPt_Zinc1jetQun}--\ref{tab:combVisPt_Zinc3jetQun}) for inclusive jet multiplicities of 1, 2, and 3. When more jets are included, the peak of the \ensuremath{\pt^{\text{bal}}}\xspace distribution shifts to larger values. The measurement is in good agreement with the NLO {\textsc{MG5}\_a\textsc{MC}}\xspace predictions. The slopes of the distributions for the first two jet multiplicities predicted by LO {\textsc{MG5}\_a\textsc{MC}}\xspace do not fully describe the data. This observation indicates that the NLO correction is important for the description of hadronic activity beyond the jet acceptance used in this analysis, $\pt>30\GeV$ and $\abs{y}<2.4$. An imbalance in the event, \ie \ensuremath{\pt^{\text{bal}}}\xspace different from zero, requires at least two partons in the final state, with one of them outside the acceptance. Such events are described with NLO accuracy in the NLO {\textsc{MG5}\_a\textsc{MC}}\xspace sample and with LO accuracy in the two other samples. In the case of the \textsc{geneva}\xspace simulation, when at least two jets are required, as in the second plot of Fig.~\ref{fig:ptbalmumu_a}, the additional jet must come from the parton shower, and this leads to an underestimation of the cross section, as in the case of the jet multiplicity distribution. When requiring two jets within the acceptance, the NLO {\textsc{MG5}\_a\textsc{MC}}\xspace prediction, which has an effective LO accuracy for this observable, starts to show discrepancies with the measurement. The estimated theoretical uncertainties cover the observed discrepancies.

\begin{figure} \centering \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_012-a.pdf}} \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_012-b.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the transverse momentum balance between the $\cPZ$ boson and the accompanying jets for events with at least one (\cmsLeft) and two (\cmsRight) jets. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:ptbalmumu_a} \end{figure}

\begin{figure} \centering \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_013.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the transverse momentum balance between the $\cPZ$ boson and the accompanying jets for events with at least three jets. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:ptbalmumu_b} \end{figure}

The \ensuremath{\text{JZB}}\xspace distribution is shown in Figs.~\ref{fig:JZBmumu_a} and~\ref{fig:JZBmumu_b} (Tables~\ref{tab:combJZB}--\ref{tab:combJZB_ptHigh}) for the inclusive one-jet events, in the full phase space, and separately for $\pt(\cPZ)$ below and above 50\GeV. As expected, in the high-$\pt(\cPZ)$ region, \ie in the high jet multiplicity sample, the distribution is more symmetric. The NLO {\textsc{MG5}\_a\textsc{MC}}\xspace prediction provides a better description of the \ensuremath{\text{JZB}}\xspace distribution than \textsc{geneva}\xspace and LO {\textsc{MG5}\_a\textsc{MC}}\xspace. This applies to both the $\ensuremath{\text{JZB}}\xspace<0$ and $\ensuremath{\text{JZB}}\xspace>0$ regions. As for \ensuremath{\pt^{\text{bal}}}\xspace, this observation indicates that the NLO correction is important for the description of hadronic activity beyond the jet acceptance used in this analysis.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figure_014.pdf} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the \ensuremath{\text{JZB}}\xspace variable (see text), with no restriction on $\pt(\cPZ)$. Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:JZBmumu_a} \end{figure}

\begin{figure*} \centering \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_015-a.pdf}} \raisebox{-\height}{\includegraphics[width=0.4\textwidth]{Figure_015-b.pdf}} \caption{Measured cross section for $\cPZ+\text{ jets}$ as a function of the \ensuremath{\text{JZB}}\xspace variable (see text), for $\pt(\cPZ)<50\GeV$ (left) and $\pt(\cPZ)>50\GeV$ (right). Other details are as mentioned in the Fig.~\ref{fig:sigNjet} caption.} \label{fig:JZBmumu_b} \end{figure*}

\FloatBarrier

\section{Summary} \label{summary} We have measured differential cross sections for the production of a $\cPZ$ boson in association with jets, where the $\cPZ$ boson decays into two charged leptons with $\pt > 20\GeV$ and $\abs{\eta}<2.4$. The data sample corresponds to an integrated luminosity of 2.19\fbinv collected with the CMS detector during the 2015 proton-proton LHC run at a centre-of-mass energy of 13\TeV. The cross section has been measured as a function of the exclusive and inclusive jet multiplicities up to 6, of the transverse momentum of the $\cPZ$ boson, and of jet kinematic variables, including the jet transverse momentum (\pt), the scalar sum of the jet transverse momenta (\HT), and the jet rapidity ($y$), for inclusive jet multiplicities of 1, 2, and~3. The balance in transverse momentum between the reconstructed jet recoil and the $\cPZ$ boson has been measured for different jet multiplicities. This balance has also been measured separately for events with a recoil smaller or larger than the boson \pt using the \ensuremath{\text{JZB}}\xspace variable. Jets with $\pt>30\GeV$ and $\abs{y} < 2.4$ are used in the definition of the different jet quantities.

The results are compared to the predictions of four different calculations. The first two merge matrix elements with different final-state parton multiplicities: the first is LO for multiplicities up to 4, the second is NLO for multiplicities up to 2 and LO for a jet multiplicity of 3, and both are based on {\textsc{MG5}\_a\textsc{MC}}\xspace. The third combines an NNLO calculation with NNLL resummation and is based on \textsc{geneva}\xspace. The fourth is a fixed-order NNLO calculation for the production of a $\cPZ$ boson and one jet. The first three calculations include parton showering, based on {\PYTHIA}8\xspace.

The measurements are in good agreement with the results of the NLO multiparton calculation. Even the measurements for events with more than 2 jets agree within the ${\approx}10\%$ measurement and ${\approx}10\%$ theoretical uncertainties, although this part of the calculation is only LO. The multiparton LO prediction does not agree as well as the NLO multiparton one: it exhibits significant discrepancies with the data in the jet multiplicity and in both the transverse momentum and rapidity distributions of the leading jet.

The transverse momentum balance between the $\cPZ$ boson and the hadronic recoil, which is expected to be sensitive to soft-gluon radiation, has been measured for the first time at the LHC.
The multiparton LO prediction fails to describe the measurement, while the multiparton NLO prediction provides a very good description for jet multiplicities computed with NLO accuracy. Inclusive measurements for events with at least one jet are compared with the NNLO $\cPZ+\ge 1 \text{ jet}$ fixed-order calculation. The agreement is good, even for the \HT observable, which is sensitive to events of different jet multiplicities. The NNLO+NNLL predictions provide similar agreement for the measurements of the kinematic variables of the two leading jets, but fail to describe observables sensitive to extra jets. At low transverse momentum of the $\cPZ$ boson, the NLO multiparton calculation provides a better description than the NNLO+NNLL calculation, whereas both calculations provide a similar description at high transverse momentum. The results suggest using multiparton NLO predictions, together with their associated uncertainties, for the estimation of the $\cPZ+\text{ jets}$ contribution in measurements and searches at the LHC.

\begin{acknowledgments} \hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Research, Development and Innovation Fund, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of
Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, the Russian Foundation for Basic Research and the Russian Competitiveness Program of NRNU ``MEPhI''; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on, Programa Consolider-Ingenio 2010, Plan Estatal de Investigaci\'on Cient\'{\i}fica y T\'ecnica y de Innovaci\'on 2013-2016, Plan de Ciencia, Tecnolog\'{i}a e Innovaci\'on 2013-2017 del Principado de Asturias and Fondo Europeo de Desarrollo Regional, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.

Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science - EOS'' - be.h project n.
30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum'') Programme and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Scientific and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa de Excelencia Mar\'{i}a de Maeztu and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments} \clearpage
{ "timestamp": "2018-12-03T02:04:15", "yymm": "1804", "arxiv_id": "1804.05252", "language": "en", "url": "https://arxiv.org/abs/1804.05252" }